diff --git a/.gitignore b/.gitignore
index 9210dbd5..3e9c3f11 100644
--- a/.gitignore
+++ b/.gitignore
@@ -10,7 +10,7 @@
 venv*
 *.egg-info/
 .idea/
-
+.vscode/
 
 # File-based project format
 *.iws
diff --git a/README.md b/README.md
index de596452..3d6e6ec9 100644
--- a/README.md
+++ b/README.md
@@ -272,6 +272,7 @@
 as ClickHouse needs to rewrite TTL information for all involved partitions.
 This operation is synchronous.
 
 ## Local running and development
+For more detailed instructions on how to set up a development environment (already partially covered in the [usage](#usage) section), please refer to [this guide](deploy/README.md).
 To develop the project, it may be useful to run each component locally,
 see relevant README in each service
 - [webapp (backend and frontend)](src/gprofiler/README.md)
diff --git a/deploy/.env b/deploy/.env
index 186f9f8a..4cee5c26 100644
--- a/deploy/.env
+++ b/deploy/.env
@@ -4,7 +4,8 @@
 DOMAIN=http://localhost
 
 # AWS:
-AWS_REGION=us-east-1
+# The AWS region is required for the service
+AWS_REGION=
 
 # AWS credentials, can be empty if running on EC2 with IAM role
 AWS_ACCESS_KEY_ID=
@@ -32,9 +33,9 @@
 REST_USERNAME=user
 REST_PASSWORD=pass
 
 # webapp:
-
-BUCKET_NAME=performance_studio_bucket
-SQS_INDEXER_QUEUE_URL=performance_studio_queue
+# BUCKET_NAME and SQS_INDEXER_QUEUE_URL are required for the service to run
+BUCKET_NAME=
+SQS_INDEXER_QUEUE_URL=
 WEBAPP_APP_LOG_FILE_PATH="webapp.log"
 
 # agents-logs:
diff --git a/deploy/README.md b/deploy/README.md
new file mode 100644
index 00000000..8a93f401
--- /dev/null
+++ b/deploy/README.md
@@ -0,0 +1,223 @@
+# Info
+This guide aims to help users set up a local development environment for **Gprofiler Performance Studio**.
+
+The end goals are to:
+* have all services running locally as containers;
+* be able to orchestrate dev containers using **Docker Compose**;
+* be able to correctly expose and forward service ports to local development tools (in case the test environment needs to run on a remote host machine).
+
+Some of the steps presented here are already part of the [general guides of the project](../README.md#usage).
+
+## 1. Pre-requisites
+Before using the Continuous Profiler, ensure the following:
+- You have an AWS account and have configured your credentials, as the project uses AWS SQS and S3.
+- You have created an SQS queue and an S3 bucket.
+- You have Docker and docker-compose installed on your machine.
+
+
+### 1.1 Security
+By default, the system requires a basic auth username and password;
+you can generate the credentials file by running the following command:
+```shell
+# assuming you are in the deploy directory
+htpasswd -B -C 12 -c .htpasswd <username>
+# the prompt will ask you to set a password
+```
+This file is required to run the stack.
+
+Also, a TLS certificate is required to run the stack;
+see [Securing Connections with SSL/TLS](#securing-connections-with-ssltls) for more details.
+
+### 1.2 Securing Connections with SSL/TLS
+When accessing the Continuous Profiler UI through the web,
+it is important to set up HTTPS to ensure the communication between the Continuous Profiler and the end user is encrypted.
+Communication between the webapp and the ch-rest-service is likewise expected to be encrypted.
+
+Besides the security aspect, this is also required
+for the browser to allow the use of some UI features that are blocked by browsers for non-HTTPS connections.
+
+
+TLS is enabled by default, but it requires you to provide certificates:
+
+Main nginx certificate locations:
+- `deploy/tls/cert.pem` - TLS certificate
+- `deploy/tls/key.pem` - TLS key
+
+CH REST service certificate locations:
+- `deploy/tls/ch_rest_cert.pem` - TLS certificate
+- `deploy/tls/ch_rest_key.pem` - TLS key
+
+_See [Self-signed certificate](#self-signed-certificate) for more details._
+
+#### 1.2.1 Self-signed certificate
+If you don't have a certificate, you can generate a self-signed certificate using the following commands:
+```shell
+cd deploy
+mkdir -p tls
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls/key.pem -out tls/cert.pem
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls/ch_rest_key.pem -out tls/ch_rest_cert.pem
+```
+Note that self-signed certificates are not trusted by browsers and will require you to add an exception.
+
+:bangbang: IMPORTANT: If you are using a self-signed certificate,
+you need the agent to trust it,
+or you can disable TLS verification by adding the `--no-verify` flag to the agent configuration.
+
+For example, the following runs a docker installation of the agent with a self-signed certificate
+(communicating from the docker network to the host network):
+```shell
+docker run --name granulate-gprofiler --restart=always -d --pid=host --userns=host --privileged intel/gprofiler:latest -cu --token="<token>" --service-name="my-super-service" --server-host "https://host.docker.internal" --glogger-server "https://host.docker.internal" --no-verify
+```
+
+### 1.3 Port exposure for local development tools
+Make sure to uncomment the "ports" definition for each service you want to expose in [docker-compose.yml](./docker-compose.yml).
+
+Also, make sure to correctly configure the port on your host machine that you want to bind to the port on the container. Ex:
+```yaml
+#...
+    ports:
+      - "<host_port>:<container_port>"
+#...
+```
+
+### 1.4 [Optional] Port forwarding via SSH for remote dev environment
+**This step should only be completed if your dev environment is running on a remote machine, but your dev tools (i.e. postman, db client, browser) are running on your local machine.**
+
+This technique is also particularly useful for cases where you don't want to (or can't) open ports on your remote machine for debugging.
+
+#### 1.4.1 [Optional] Configure SSH client to forward specific ports for a specific host
+1. Open the configuration file for your SSH client:
+```sh
+vim ~/.ssh/config
+```
+2. Modify or include port forwarding options for the remote host:
+```configfile
+Host <host_alias>
+    User <user>
+    ForwardAgent yes
+    HostName <remote_host>
+    # This port forwarding config forwards port 443, used by the load balancer service, to the local host
+    LocalForward 443 127.0.0.1:443
+    # This port forwarding config forwards port 5432, used by the postgres service, to the local host
+    LocalForward 5432 127.0.0.1:5432
+```
+3. Reconnect to the remote host with the new config:
+```sh
+ssh <host_alias>
+```
+
+### 1.5 Environment Configuration
+Before running the stack, you need to configure the environment variables in the `.env` file located in the `deploy` directory. This file contains all the necessary configuration for the services to communicate with each other and with external dependencies.
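+
+If you still need to create the S3 bucket and SQS queue mentioned in the pre-requisites, the sketch below shows one way to do it with the AWS CLI; the bucket name, queue name, and region are placeholders, so substitute your own:
+```shell
+# create the S3 bucket that profiles will be uploaded to
+aws s3 mb s3://my-performance-studio-bucket --region us-east-1
+# create the SQS queue used by the indexer; the QueueUrl printed by this
+# command is the value to put in SQS_INDEXER_QUEUE_URL
+aws sqs create-queue --queue-name my-performance-studio-queue --region us-east-1
+```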
+
+#### 1.5.1 Required AWS Configuration
+```bash
+AWS_REGION=us-east-1
+AWS_ACCESS_KEY_ID=
+AWS_SECRET_ACCESS_KEY=
+AWS_SESSION_TOKEN=  # Optional, only if using temporary credentials
+
+# S3 and SQS resources
+BUCKET_NAME=
+SQS_INDEXER_QUEUE_URL=
+```
+
+#### 1.5.2 Database Configuration
+```bash
+# PostgreSQL (main application database)
+POSTGRES_USER=
+POSTGRES_PASSWORD=
+POSTGRES_DB=
+POSTGRES_PORT=5432
+POSTGRES_HOST=db_postgres  # Docker service name for local development
+
+# ClickHouse (analytics database)
+CLICKHOUSE_USER=
+CLICKHOUSE_PASSWORD=
+CLICKHOUSE_HOST=db_clickhouse  # Docker service name for local development
+```
+
+#### 1.5.3 Service Authentication
+```bash
+# REST API service credentials
+REST_USERNAME=
+REST_PASSWORD=
+
+# Domain configuration
+DOMAIN=http://localhost  # Used in agent installation templates
+```
+
+#### 1.5.4 Slack Integration (New)
+The gProfiler service now supports Slack notifications for profiling activities:
+
+```bash
+# Slack Bot Token - obtain from your Slack app configuration
+SLACK_BOT_TOKEN=
+
+# Slack Channels - comma-separated list of channels for notifications
+SLACK_CHANNELS="<#channel1>,<#channel2>"
+```
+
+**Setting up Slack Integration:**
+1. Create a Slack App in your workspace or use an existing one
+2. Add the following OAuth scopes to your bot: `chat:write`, `channels:read`, `groups:read`
+3. Install the app to your workspace and copy the Bot User OAuth Token
+4. Set `SLACK_BOT_TOKEN` to your token (it starts with `xoxb-`)
+5. Configure `SLACK_CHANNELS` with the channels where you want notifications (use comma-separated format)
+
+#### 1.5.5 Logging Configuration
+```bash
+# Common logging directory for all services
+COMMON_LOGS_DIR=/logs
+
+# Service-specific log file paths
+WEBAPP_APP_LOG_FILE_PATH="webapp.log"
+AGENTS_LOGS_APP_LOG_FILE_PATH="${COMMON_LOGS_DIR}/agents-logs-app.log"
+AGENTS_LOGS_LOG_FILE_PATH="${COMMON_LOGS_DIR}/agents-logs.log"
+```
+
+**Note:** The `.env` file comes with sensible defaults for local development. You primarily need to:
+1. Configure your AWS credentials and resources
+2. Set up Slack integration (if desired)
+3. Adjust database passwords for production environments
+
+## 2. Running the stack
+To run the entire stack built from source, use the docker-compose project located in the `deploy` directory.
+
+The `deploy` directory contains:
+- `docker-compose.yml` - The Docker compose file.
+- `.env` - The environment file where you set your AWS credentials, SQS/S3 names, and AWS region.
+- `https_nginx.conf` - Nginx configuration file used as an entrypoint load balancer.
+- `diagnostics.sh` - A script for testing connectivity between services and printing useful information.
+- `tls` - A directory for storing TLS certificates (see [Securing Connections with SSL/TLS](#securing-connections-with-ssltls)).
+- `.htpasswd` - A file for storing basic auth credentials (see above).
+
+To launch the stack, run the following commands in the `deploy` directory:
+```shell
+cd deploy
+docker-compose --profile with-clickhouse up -d --build
+```
+
+Check that all services are running:
+```shell
+docker-compose ps
+```
+
+You should see the following containers with the prefix 'gprofiler-ps*':
+* gprofiler-ps-agents-logs-backend
+* gprofiler-ps-ch-indexer
+* gprofiler-ps-ch-rest-service
+* gprofiler-ps-clickhouse
+* gprofiler-ps-nginx-load-balancer
+* gprofiler-ps-periodic-tasks
+* gprofiler-ps-postgres
+* gprofiler-ps-webapp
+
+Now you can access the UI by navigating to https://localhost:4433 in your browser
+(4433 is the default port, configurable in the docker-compose.yml file).
+
+## 3. Destroying the stack
+```shell
+docker-compose --profile with-clickhouse down -v
+```
+The `-v` option also deletes the volumes, which means that all data will be wiped.
diff --git a/deploy/docker-compose.yml b/deploy/docker-compose.yml
index 85fb8972..5cc8f500 100644
--- a/deploy/docker-compose.yml
+++ b/deploy/docker-compose.yml
@@ -48,9 +48,6 @@ services:
       - POSTGRES_USER=$POSTGRES_USER
       - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
       - POSTGRES_DB=$POSTGRES_DB
-# for debug
-#    ports:
-#      - "54321:5432"
     volumes:
       - db_postgres:/var/lib/postgresql/data
       - ../scripts/setup/postgres/gprofiler_recreate.sql:/docker-entrypoint-initdb.d/create_scheme.sql
diff --git a/deploy/https_nginx.conf b/deploy/https_nginx.conf
index 8fb02bd1..04e4fa7c 100644
--- a/deploy/https_nginx.conf
+++ b/deploy/https_nginx.conf
@@ -8,8 +8,20 @@ http {
         listen 80;
         server_name your_domain.com www.your_domain.com; # Replace with your domain
 
+        location /api/v2/profiles {
+            proxy_pass http://webapp;
+        }
+
+        location ~ ^/api/v(1|2)/health_check$ {
+            proxy_pass http://webapp;
+        }
+
+        location /api/v1/logs {
+            proxy_pass http://agents-logs-backend;
+        }
+
         location / {
-            return 301 https://$host$request_uri;
+            proxy_pass http://webapp;
         }
     }
diff --git a/docs/DYNAMIC_PROFILING.md b/docs/DYNAMIC_PROFILING.md
new file mode 100644
index 00000000..1bebccdf
--- /dev/null
+++ b/docs/DYNAMIC_PROFILING.md
@@ -0,0 +1,389 @@
+# Dynamic Profiling Data Model
+
+## Overview
+
+The Dynamic Profiling feature enables profiling requests at various hierarchy levels (service, job, namespace) to be mapped to specific host-level commands while maintaining sub-second heartbeat response times at 165k QPM (Queries Per Minute).
+
+## Architecture
+
+The dynamic profiling system consists of:
+
+1. **Profiling Requests** - API-level requests specifying targets at various hierarchy levels
+2. **Profiling Commands** - Host-specific commands sent to agents
+3. **Host Heartbeats** - Real-time host availability tracking (optimized for 165k QPM)
+4. **Profiling Executions** - Audit trail of profiling executions
+5. **Hierarchical Mappings** - Denormalized tables for fast query performance
+
+## Data Model
+
+```
+┌─────────────────────────────────────────────────────────────────┐
+│                  DYNAMIC PROFILING DATA MODEL                   │
+├─────────────────────────────────────────────────────────────────┤
+│                                                                 │
+│  ┌──────────────────┐          ┌──────────────────────┐        │
+│  │ ProfilingRequest │────────▶ │ ProfilingCommand     │        │
+│  │                  │          │                      │        │
+│  │ - Request ID     │          │ - Command ID         │        │
+│  │ - Service/Job/   │          │ - Host ID (indexed)  │        │
+│  │   Namespace/Pod  │          │ - Target Containers  │        │
+│  │ - Duration       │          │ - Target Processes   │        │
+│  │ - Sample Rate    │          │ - Command Type       │        │
+│  │ - Status         │          │ - Status             │        │
+│  └──────────────────┘          └──────────────────────┘        │
+│           │                               │                     │
+│           │                               │                     │
+│           ▼                               ▼                     │
+│  ┌──────────────────────┐       ┌─────────────────┐            │
+│  │ ProfilingExecutions  │       │ HostHeartbeats  │            │
+│  │ (Audit Trail)        │       │                 │            │
+│  │                      │       │ - Host ID       │            │
+│  │ - Execution ID       │       │ - Service       │            │
+│  │ - Request/Command    │       │ - Containers    │            │
+│  │ - Host Name          │       │ - Workloads     │            │
+│  │ - Status             │       │ - Last Seen ⚡  │            │
+│  └──────────────────────┘       └─────────────────┘            │
+│                                                                 │
+│  ┌────────────────────────────────────────────────────────┐    │
+│  │             HIERARCHICAL MAPPING TABLES                │    │
+│  │        (Denormalized for Fast Query Performance)       │    │
+│  ├────────────────────────────────────────────────────────┤    │
+│  │                                                        │    │
+│  │  • NamespaceServices  - Namespace → Service            │    │
+│  │  • ServiceContainers  - Service → Container            │    │
+│  │  • JobContainers      - Job → Container                │    │
+│  │  • ContainerProcesses - Container → Process            │    │
+│  │  • ContainersHosts    - Container → Host               │    │
+│  │  • ProcessesHosts     - Process → Host                 │    │
+│  │                                                        │    │
+│  └────────────────────────────────────────────────────────┘    │
+│                                                                 │
+└─────────────────────────────────────────────────────────────────┘
+```
+
+## Key Features
+
+### 1. Hierarchical Request Mapping
+
+Requests can target any level of the hierarchy:
+- **Namespace Level**: Profile all services in a namespace
+- **Service Level**: Profile all containers in a service
+- **Job Level**: Profile specific job workloads
+- **Container Level**: Profile specific containers
+- **Process Level**: Profile specific processes
+- **Host Level**: Profile specific hosts
+
+### 2. Sub-Second Heartbeat Performance
+
+The `HostHeartbeats` table is optimized for 165k QPM (see the index sketch below):
+- Indexed on `host_id` for O(1) lookups
+- Indexed on `timestamp_last_seen` for quick staleness checks
+- Partial indexes on `service_name` and `namespace` for filtered queries
+- Lightweight updates with minimal locking
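+
+As a rough illustration of that indexing strategy, the DDL below sketches what such indexes could look like; the authoritative names and columns live in `scripts/setup/postgres/dynamic_profiling_schema.sql`, so treat this as an assumption-laden sketch rather than the shipped schema:
+
+```sql
+-- fast point lookups by host
+CREATE INDEX idx_hostheartbeats_host_id ON HostHeartbeats (host_id);
+-- fast staleness checks ("hosts seen in the last N minutes")
+CREATE INDEX idx_hostheartbeats_last_seen ON HostHeartbeats (timestamp_last_seen);
+-- partial indexes: only index rows where the filtered column is set
+CREATE INDEX idx_hostheartbeats_service ON HostHeartbeats (service_name)
+    WHERE service_name IS NOT NULL;
+CREATE INDEX idx_hostheartbeats_namespace ON HostHeartbeats (namespace)
+    WHERE namespace IS NOT NULL;
+```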
+
+### 3. Audit Trail
+
+The `ProfilingExecutions` table maintains complete audit history:
+- Links to original requests and commands
+- Tracks execution lifecycle
+- Enables troubleshooting and compliance
+
+### 4. Denormalized Mappings
+
+Hierarchical mapping tables trade storage for query speed:
+- Pre-computed relationships
+- Eliminates complex JOINs
+- Enables fast request-to-host resolution
+
+## Database Tables
+
+### Core Tables
+
+#### ProfilingRequest
+Stores profiling requests from API calls.
+
+**Key Fields:**
+- `request_id` (UUID) - Unique identifier
+- `service_name`, `job_name`, `namespace`, etc. - Target specification
+- `profiling_mode` - CPU, memory, allocation, native
+- `duration_seconds` - How long to profile
+- `sample_rate` - Sampling frequency (1-1000)
+- `status` - pending, in_progress, completed, failed, cancelled
+
+**Constraints:**
+- At least one target specification must be provided
+- Duration must be positive
+- Sample rate must be between 1 and 1000
+
+#### ProfilingCommand
+Commands sent to agents on specific hosts.
+
+**Key Fields:**
+- `command_id` (UUID) - Unique identifier
+- `profiling_request_id` - Links to the original request
+- `host_id` - Target host (indexed for fast lookup)
+- `target_containers`, `target_processes` - Specific targets
+- `command_type` - start, stop, reconfigure
+- `command_json` - Serialized command for the agent
+
+#### HostHeartbeats
+Tracks host availability and status.
+
+**Key Fields:**
+- `host_id` - Unique host identifier (indexed)
+- `host_name`, `host_ip` - Host details
+- `service_name`, `namespace` - Contextual info
+- `containers`, `jobs`, `workloads` - Current state
+- `timestamp_last_seen` - Last heartbeat (indexed)
+- `last_command_id` - Last command received
+
+**Performance:**
+- Designed for 165k QPM
+- Sub-second response times
+- Optimized indexes for common queries
+
+#### ProfilingExecutions
+Audit trail of profiling executions.
+
+**Key Fields:**
+- `execution_id` (UUID) - Unique identifier
+- `profiling_request_id` - Original request
+- `profiling_command_id` - Executed command
+- `host_name` - Where it executed
+- `started_at`, `completed_at` - Execution timeline
+- `status` - Execution status
+
+### Mapping Tables
+
+All mapping tables follow a similar pattern:
+- Primary key (ID)
+- Mapping fields (indexed)
+- Timestamps (created_at, updated_at)
+- Unique constraint on the mapping
+
+#### NamespaceServices
+Maps namespaces → services
+
+#### ServiceContainers
+Maps services → containers
+
+#### JobContainers
+Maps jobs → containers
+
+#### ContainerProcesses
+Maps containers → processes (includes process name)
+
+#### ContainersHosts
+Maps containers → hosts
+
+#### ProcessesHosts
+Maps processes → hosts
+
+## Setup and Installation
+
+### 1. Apply Database Schema
+
+```bash
+# Connect to PostgreSQL
+psql -U postgres -d gprofiler
+
+# Run the schema
+\i scripts/setup/postgres/dynamic_profiling_schema.sql
+```
+
+### 2. Verify Tables Created
+
+```sql
+SELECT table_name
+FROM information_schema.tables
+WHERE table_schema = 'public'
+  AND (table_name LIKE '%profiling%'
+       OR table_name LIKE '%heartbeat%');
+```
+
+Expected tables:
+- profilingrequest
+- profilingcommand
+- hostheartbeats
+- profilingexecutions
+- namespaceservices
+- servicecontainers
+- jobcontainers
+- containerprocesses
+- containershosts
+- processeshosts
+
+### 3. Verify Indexes
+
+```sql
+SELECT tablename, indexname
+FROM pg_indexes
+WHERE tablename IN ('profilingrequest', 'profilingcommand', 'hostheartbeats', 'profilingexecutions')
+ORDER BY tablename, indexname;
+```
+
+## API Models
+
+Python Pydantic models are available in:
+```
+src/gprofiler/backend/models/dynamic_profiling_models.py
+```
+
+### Key Models
+
+#### Request Models
+- `ProfilingRequestCreate` - Create a new profiling request
+- `ProfilingRequestResponse` - Response with all fields
+- `ProfilingRequestUpdate` - Update request status
+
+#### Command Models
+- `ProfilingCommandCreate` - Create a command for an agent
+- `ProfilingCommandResponse` - Command details
+- `ProfilingCommandUpdate` - Update command status
+
+#### Heartbeat Models
+- `HostHeartbeatCreate` - Register/update a host heartbeat
+- `HostHeartbeatResponse` - Heartbeat details
+- `HostHeartbeatUpdate` - Update heartbeat timestamp
+
+#### Execution Models
+- `ProfilingExecutionCreate` - Create an audit entry
+- `ProfilingExecutionResponse` - Execution details
+- `ProfilingExecutionUpdate` - Update execution status
+
+#### Query Models
+- `ProfilingRequestQuery` - Filter profiling requests
+- `HostHeartbeatQuery` - Filter heartbeats
+- `ProfilingExecutionQuery` - Filter executions
+
+## Usage Examples
+
+### Creating a Profiling Request
+
+```python
+from dynamic_profiling_models import ProfilingRequestCreate, ProfilingMode
+from datetime import datetime, timedelta
+
+# Profile all containers in a service
+request = ProfilingRequestCreate(
+    service_name="web-api",
+    profiling_mode=ProfilingMode.CPU,
+    duration_seconds=60,
+    sample_rate=100,
+    start_time=datetime.utcnow(),
+    stop_time=datetime.utcnow() + timedelta(seconds=60)
+)
+```
+
+### Recording a Host Heartbeat
+
+```python
+from dynamic_profiling_models import HostHeartbeatCreate
+
+heartbeat = HostHeartbeatCreate(
+    host_id="host-12345",
+    host_name="worker-node-01",
+    host_ip="10.0.1.42",
+    service_name="web-api",
+    namespace="production",
+    containers=["web-api-container-1", "web-api-container-2"],
+    jobs=["data-processing-job"],
+    executors=["pyspy", "perf"]
+)
+```
+
+### Querying Active Hosts
+
+```python
+from dynamic_profiling_models import HostHeartbeatQuery
+from datetime import datetime, timedelta
+
+# Find hosts seen in the last 5 minutes in the production namespace
+query = HostHeartbeatQuery(
+    namespace="production",
+    last_seen_after=datetime.utcnow() - timedelta(minutes=5),
+    limit=100
+)
+```
+
+## Performance Considerations
+
+### Heartbeat Optimization (165k QPM Target)
+
+1. **Use Connection Pooling**: Minimize connection overhead
+2. **Batch Updates**: Update multiple heartbeats in a single transaction (see the sketch after the Maintenance list below)
+3. **Index Usage**: Queries should use the `host_id` or `timestamp_last_seen` indexes
+4. **Avoid Full Scans**: Always filter on indexed columns
+
+### Query Performance
+
+1. **Use Denormalized Tables**: Mapping tables eliminate expensive JOINs
+2. **Limit Result Sets**: Always use pagination with `limit` and `offset`
+3. **Index Coverage**: Ensure queries can use existing indexes
+4. **Monitor Query Plans**: Use `EXPLAIN ANALYZE` to verify performance
+
+### Maintenance
+
+1. **Regular VACUUM**: Prevent table bloat on frequently updated tables
+2. **Analyze Statistics**: Keep query planner statistics up to date
+3. **Monitor Index Usage**: Check `pg_stat_user_indexes`
+4. **Archive Old Data**: Consider partitioning or archiving old executions
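+
+To make the batch-update advice concrete, here is one way heartbeat upserts could be batched in PostgreSQL; the column list is assumed from the field descriptions above, and it assumes a unique constraint on `host_id`, so adjust it to the real schema:
+
+```sql
+-- upsert several heartbeats in one round trip / one transaction
+INSERT INTO HostHeartbeats (host_id, host_name, service_name, timestamp_last_seen)
+VALUES
+    ('host-1', 'worker-01', 'web-api', NOW()),
+    ('host-2', 'worker-02', 'web-api', NOW())
+ON CONFLICT (host_id) DO UPDATE
+    SET timestamp_last_seen = EXCLUDED.timestamp_last_seen;
+```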
+
+## Migration Path
+
+### Phase 1: Schema Deployment (Current)
+✅ Create database tables
+✅ Create Python models
+✅ Add indexes and constraints
+
+### Phase 2: API Endpoints (Next)
+- POST /api/profiling/requests
+- GET /api/profiling/requests
+- POST /api/profiling/heartbeats
+- GET /api/profiling/heartbeats
+- GET /api/profiling/executions
+
+### Phase 3: Agent Integration
+- Agent heartbeat loop
+- Command polling/pulling
+- Execution reporting
+
+### Phase 4: Request Resolution
+- Map requests to hosts
+- Generate host commands
+- Track execution lifecycle
+
+## References
+
+- Google Doc: [Dynamic Profiling Design](https://docs.google.com/document/d/1iwA_NN1YKDBqfig95Qevw0HcSCqgu7_ya8PGuCksCPc/edit)
+- SQL Schema: `scripts/setup/postgres/dynamic_profiling_schema.sql`
+- Python Models: `src/gprofiler/backend/models/dynamic_profiling_models.py`
+
+## Contributing to Intel Open Source
+
+This implementation is part of gProfiler Performance Studio's contribution to Intel's open source initiative. The dynamic profiling capability will enable:
+
+1. **Hierarchical Profiling**: Profile at any level (namespace, service, job, container, process, host)
+2. **Scalability**: Support 165k QPM with sub-second response times
+3. **Auditability**: Complete execution history for compliance and troubleshooting
+4. **Flexibility**: Extensible architecture for future profiling modes
+
+## License
+
+Copyright (C) 2023 Intel Corporation
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/docs/MIXED_CONTENT_ERROR_RESOLUTION.md b/docs/MIXED_CONTENT_ERROR_RESOLUTION.md
new file mode 100644
index 00000000..ff477040
--- /dev/null
+++ b/docs/MIXED_CONTENT_ERROR_RESOLUTION.md
@@ -0,0 +1,215 @@
+# Mixed Content Error Resolution
+
+## 🚨 Issue Description
+
+**Error Message:**
+```
+Mixed Content: The site at 'https://gprofiler.example.com/' was loaded over a secure connection,
+but the file at 'https://gprofiler.example.com/api/flamegraph/download_svg?...' was redirected
+through an insecure connection. This file should be served over HTTPS.
+```
+
+**Symptoms:**
+- ✅ JSON flamegraph data loads successfully via HTTPS
+- ❌ SVG download fails with a mixed content error
+- ❌ Browser blocks the download completely
+
+## 🔍 Root Cause Analysis
+
+### The Problem Chain
+
+```
+Browser → HTTPS → Envoy (Load Balancer) → HTTP → Nginx → HTTP → FastAPI
+                       ↓
+             Sets: X-Forwarded-Proto: https
+                       ↓
+Nginx → WRONG: proxy_set_header X-Forwarded-Proto $scheme;  (= "http")
+                       ↓
+FastAPI → Thinks request was HTTP → Generates HTTP redirect URLs
+                       ↓
+Browser → Receives HTTP redirect → Mixed Content Error!
+```
+
+### Technical Details
+
+1. **SSL Termination**: Envoy (external load balancer) terminates HTTPS and forwards HTTP to Nginx
+2. **Header Forwarding**: Envoy correctly sets `X-Forwarded-Proto: https`
+3. **Nginx Misconfiguration**: Nginx overwrote this with `X-Forwarded-Proto: http` (using `$scheme`)
+4. **FastAPI Unawareness**: FastAPI didn't know the original request was HTTPS
+5. **URL Generation**: FastAPI generated HTTP redirect URLs for downloads
+6. **Browser Security**: The browser blocked HTTP redirects on HTTPS pages
+
+## 🤔 Why JSON Worked But SVG Downloads Failed
+
+### JSON Endpoint Behavior
+```python
+@router.get("")
+def get_flamegraph(fg_params: FGParamsModel = Depends(flamegraph_request_params)):
+    response = get_flamegraph_response(fg_params)
+    json_file = BytesIO(response.content)
+    return StreamingResponse(json_file, media_type="text/plain")
+```
+- **Direct response**: No redirects involved
+- **Content-based**: Returns data directly without URL generation
+- **No mixed content**: Browser receives an HTTPS response with data
+
+### SVG Download Endpoint Behavior
+```python
+@router.get("/download_svg")
+def get_flamegraph_svg(fg_params: FGParamsModel = Depends(flamegraph_request_params)):
+    mimetype = "application/octet-stream"
+    svg_file_name = get_file_name(fg_params.start_time, fg_params.end_time, fg_params.service_name)
+    response = get_flamegraph_response(fg_params, file_type="collapsed_file")
+    svg_flamegraph = get_svg_file(response.text)
+    return StreamingResponse(svg_flamegraph, media_type=mimetype,
+                             headers={"Content-Disposition": f"attachment; filename={svg_file_name}"})
+```
+- **FastAPI URL normalization**: FastAPI may redirect to normalize URLs
+- **HTTP redirect generation**: FastAPI generated `Location: http://...` headers
+- **Mixed content error**: The browser blocked HTTP redirects on HTTPS pages
+
+## 🌐 Where the Error Originates
+
+### Browser Security Policy
+- **Origin**: The mixed content error is **generated by the browser**, not FastAPI
+- **Security Feature**: Browsers block insecure (HTTP) content on secure (HTTPS) pages
+- **Detection**: The browser analyzes response headers, specifically `Location` headers in redirects
+- **Enforcement**: The browser prevents the download and shows the error in the console
+
+### Error Flow
+```
+1. Browser makes HTTPS request to download SVG
+2. FastAPI responds with HTTP redirect (Location: http://...)
+3. Browser detects mixed content violation
+4. Browser blocks the request and shows error
+5. Download fails
+```
+
+## 🔧 Solution Implementation
+
+### Problem Identification
+The issue was in the request flow: FastAPI was unaware of the original HTTPS protocol due to incorrect proxy header handling.
+
+### Fix 1: Nginx Configuration
+**File**: `/src/gprofiler/nginx/nginx.conf`
+
+**Before (Incorrect):**
+```nginx
+location /api {
+    proxy_set_header X-Forwarded-Proto $scheme;  # Always "http" since Nginx receives HTTP
+    # ... other headers
+}
+```
+
+**After (Correct):**
+```nginx
+location /api {
+    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;  # Forwards Envoy's "https"
+    # ... other headers
+}
+```
+
+**Explanation:**
+- `$scheme` = protocol of the current request to Nginx (always "http" from Envoy)
+- `$http_x_forwarded_proto` = value of the `X-Forwarded-Proto` header from Envoy ("https")
+
+### Fix 2: Gunicorn Configuration
+**File**: `/src/gprofiler/run.sh`
+
+**Before:**
+```bash
+gunicorn_cmd_line="gunicorn --bind 0.0.0.0:5000 --workers $workers_count ..."
+```
+
+**After:**
+```bash
+gunicorn_cmd_line="gunicorn --bind 0.0.0.0:5000 --workers $workers_count --forwarded-allow-ips=* ..."
+```
+
+**Explanation:**
+- `--forwarded-allow-ips=*` tells Gunicorn to trust `X-Forwarded-*` headers
+- Without this, FastAPI ignores proxy headers for security reasons
+- This allows FastAPI to correctly identify the original HTTPS protocol
+
+## 🎯 Architecture Overview
+
+### Request Flow (After Fix)
+```
+Browser (HTTPS) → Envoy → Nginx → FastAPI
+                    ↓        ↓        ↓
+                  Sets:    Forwards:  Reads:
+                  X-Forwarded-  X-Forwarded-  X-Forwarded-
+                  Proto: https  Proto: https  Proto: https
+                                               ↓
+                                     Generates HTTPS URLs ✅
+```
+
+### Why This Architecture?
+1. **SSL Termination**: Envoy handles SSL certificates and HTTPS complexity
+2. **Load Balancing**: Envoy distributes traffic across multiple instances
+3. **Reverse Proxy**: Nginx handles static files and API routing
+4. **Application Server**: FastAPI focuses on business logic
+5. **Header Chain**: Each layer forwards protocol information downstream
+
+## 🚫 Alternative Solutions Considered
+
+### Could Envoy Fix This by Modifying Response Headers?
+**Technically possible but not recommended:**
+
+```yaml
+# Hypothetical Envoy config (NOT RECOMMENDED)
+response_headers_to_add:
+  - header:
+      key: "location"
+      value: "https://gprofiler.example.com%{REQ(location)}%"
+```
+
+**Why this approach is wrong:**
+- ❌ **Complex**: Requires parsing and rewriting every Location header
+- ❌ **Fragile**: Breaks when FastAPI changes URL generation
+- ❌ **Incomplete**: Doesn't fix URLs in response bodies or JSON
+- ❌ **Performance**: Additional processing overhead
+- ❌ **Non-standard**: Goes against industry best practices
+
+### The Correct Approach (What We Did)
+- ✅ **Standard**: Industry-standard proxy header forwarding
+- ✅ **Simple**: One configuration change in each layer
+- ✅ **Complete**: Fixes all URL generation, not just redirects
+- ✅ **Maintainable**: Follows established patterns
+- ✅ **Performant**: No additional processing required
+
+## 📋 Verification Steps
+
+### Before Fix
+```bash
+curl -v "https://gprofiler.example.com/api/flamegraph/download_svg?..."
+# Returns: Location: http://gprofiler.example.com/... (HTTP redirect)
+```
+
+### After Fix
+```bash
+curl -v "https://gprofiler.example.com/api/flamegraph/download_svg?..."
+# Returns: Location: https://gprofiler.example.com/... (HTTPS redirect)
+```
+
+### Browser Verification
+1. Open the browser developer tools
+2. Navigate to the Network tab
+3. Trigger an SVG download
+4. Verify there are no mixed content errors in the Console
+5. Confirm the download succeeds
+
+## 🔑 Key Takeaways
+
+1. **Mixed content errors originate from browser security policies**, not the application
+2. **Proxy header forwarding is critical** in multi-tier architectures with SSL termination
+3. **Both Nginx and Gunicorn configuration** changes were required for the complete fix
+4. **The JSON vs SVG behavior difference** was due to redirects vs direct responses
+5. **Industry-standard solutions** (header forwarding) are preferred over custom workarounds
+
+## 📚 References
+
+- [FastAPI Behind a Proxy Documentation](https://fastapi.tiangolo.com/advanced/behind-a-proxy/)
+- [Nginx Proxy Module Documentation](http://nginx.org/en/docs/http/ngx_http_proxy_module.html)
+- [Gunicorn Proxy Settings](https://docs.gunicorn.org/en/stable/settings.html#forwarded-allow-ips)
+- [Mozilla Mixed Content Documentation](https://developer.mozilla.org/en-US/docs/Web/Security/Mixed_content)
\ No newline at end of file
diff --git a/heartbeat_doc/PERFSPECT_DYNAMIC_PROFILING.md b/heartbeat_doc/PERFSPECT_DYNAMIC_PROFILING.md
new file mode 100644
index 00000000..7aebd515
--- /dev/null
+++ b/heartbeat_doc/PERFSPECT_DYNAMIC_PROFILING.md
@@ -0,0 +1,324 @@
+# PerfSpect Integration with Dynamic Profiling
+
+## Description
+
+This feature integrates Intel PerfSpect hardware metrics collection with the gProfiler dynamic profiling system. It enables users to collect detailed hardware performance metrics (CPU utilization, memory bandwidth, cache statistics, etc.) alongside traditional CPU profiling data through a simple UI checkbox.
+
+The integration provides:
+- **UI Control**: A "PerfSpect HW Metrics" checkbox in the dynamic profiling interface
+- **Auto-Installation**: Automatic PerfSpect binary download and setup on target agents
+- **Seamless Integration**: Hardware metrics collection runs alongside CPU profiling
+- **Centralized Management**: Control PerfSpect across multiple hosts from a single interface
+
+## Related Issue
+
+This feature addresses the need for comprehensive performance analysis by combining CPU profiling with hardware-level metrics, enabling deeper insights into application performance bottlenecks.
+
+## Motivation and Context
+
+### Problem Solved
+- **Limited Visibility**: Traditional CPU profiling only shows software-level performance, missing hardware bottlenecks
+- **Manual Setup**: Previously required manual PerfSpect installation and configuration on each host
+- **Fragmented Workflow**: Hardware metrics and CPU profiles were collected separately
+- **Scalability Issues**: Difficult to enable hardware metrics collection across large deployments
+
+### Improvements Delivered
+- **Unified Interface**: A single UI to control both CPU profiling and hardware metrics
+- **Zero-Touch Deployment**: Automatic PerfSpect installation eliminates manual setup
+- **Enhanced Analysis**: Combined hardware and software metrics provide a complete performance picture
+- **Enterprise Scale**: Easily enable hardware metrics across hundreds of hosts simultaneously
+
+## How Has This Been Tested?
+
+### Testing Environment
+- **Local Development**: Docker Compose setup with PostgreSQL and ClickHouse
+- **Test Services**: `test-service-1`, `test-service-2`
+- **Test Hosts**: `test-host-001`, `test-host-202`, `test-host-1`
+
+### Test Cases Executed
+
+#### 1. UI Integration Testing
+- ✅ **Checkbox Visibility**: Verified the PerfSpect checkbox appears in the dynamic profiling interface
+- ✅ **Tooltip Functionality**: Confirmed the tooltip displays helpful information about PerfSpect
+- ✅ **State Management**: Tested checkbox state persistence and reset behavior
+- ✅ **UX Improvement**: Verified the checkbox auto-unchecks after a profiling request is submitted
+
+#### 2. API Integration Testing
+- ✅ **Request Handling**: Tested that the API accepts `additional_args` with `enable_perfspect: true`
+- ✅ **Data Persistence**: Verified profiling requests save PerfSpect settings to the database
+- ✅ **Error Handling**: Confirmed graceful handling of malformed requests
+
+#### 3. Database Integration Testing
+- ✅ **Schema Compatibility**: Verified the `additional_args` JSONB field stores the PerfSpect configuration
+- ✅ **Query Performance**: Tested retrieval of PerfSpect settings from profiling requests
+- ✅ **Data Integrity**: Confirmed proper JSON serialization/deserialization
+
+#### 4. Command Generation Testing
+- ✅ **Config Merging**: Verified `additional_args` merges into the top-level `combined_config`
+- ✅ **Agent Compatibility**: Confirmed `enable_perfspect: true` appears at the correct config level
+- ✅ **Command Persistence**: Tested that profiling commands are saved with the proper PerfSpect configuration
+
+#### 5. End-to-End Integration Testing
+```bash
+# Test Case: Create profiling request with PerfSpect enabled
+curl -X POST http://localhost:8080/api/metrics/profile_request \
+  -u "username:password" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "service_name": "test-service-2",
+    "request_type": "start",
+    "continuous": true,
+    "duration": 60,
+    "frequency": 11,
+    "profiling_mode": "cpu",
+    "target_hosts": {
+        "test-host-202": null
+    },
+    "additional_args": {
+        "enable_perfspect": true
+    }
+  }'
+
+# Verification: Check database entries
+SELECT combined_config FROM ProfilingCommands
+WHERE service_name = 'test-service-2' AND hostname = 'test-host-202';
+
+# Result: {"enable_perfspect": true, ...} ✅
+```
+
+#### 6. Agent Integration Testing
+- ✅ **Heartbeat Protocol**: Verified agents receive the PerfSpect configuration via heartbeat
+- ✅ **Auto-Installation**: Confirmed the PerfSpect binary downloads and extracts correctly
+- ✅ **Command Generation**: Tested that gProfiler starts with `--enable-hw-metrics-collection`
+- ✅ **Path Configuration**: Verified `--perfspect-path` points to the auto-installed binary
+
+## Screenshots
+
+### Dynamic Profiling Interface with PerfSpect Checkbox
+```
+┌─────────────────────────────────────────────────────────────┐
+│ [Start (2)] [Stop (0)] [Refresh]  ☑ PerfSpect HW Metrics    │
+│                                   ℹ Enable Intel PerfSpect  │
+│                                     hardware metrics        │
+│                                     collection              │
+└─────────────────────────────────────────────────────────────┘
+```
+
+### Database Schema - ProfilingRequests Table
+```sql
+CREATE TABLE ProfilingRequests (
+    request_id uuid NOT NULL,
+    service_name text NOT NULL,
+    request_type text NOT NULL,
+    continuous boolean NOT NULL DEFAULT false,
+    duration integer NULL DEFAULT 60,
+    frequency integer NULL DEFAULT 11,
+    profiling_mode ProfilingMode NOT NULL DEFAULT 'cpu',
+    target_hostnames text[] NOT NULL,
+    pids integer[] NULL,
+    stop_level text NULL DEFAULT 'process',
+    additional_args jsonb NULL,  -- ← PerfSpect config stored here
+    status ProfilingRequestStatus NOT NULL DEFAULT 'pending',
+    created_at timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP
+);
+```
+
+### Combined Config Example
+```json
+{
+    "duration": 60,
+    "frequency": 11,
+    "continuous": true,
+    "command_type": "start",
+    "profiling_mode": "cpu",
+    "enable_perfspect": true  // ← Agent reads this flag
+}
+```
+
+## Architecture Overview
+
+### Component Flow
+```
+┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
+│  Frontend   │───▶│   Backend   │───▶│  Database   │───▶│    Agent    │
+│     UI      │    │     API     │    │ PostgreSQL  │    │  gProfiler  │
+└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
+       │                  │                   │                  │
+  ☑ PerfSpect      additional_args     combined_config     --enable-hw-
+    Checkbox       {"enable_perfspect" {"enable_perfspect"   metrics-collection
+                    : true}             : true}              --perfspect-path=...
+```
+
+### Data Flow
+1. **User Action**: Checks the PerfSpect checkbox and clicks "Start"
+2. **API Request**: Frontend sends `additional_args: {"enable_perfspect": true}`
+3. **Database Storage**: Backend saves the request with the PerfSpect configuration
+4. **Command Generation**: System creates a profiling command with the merged config
+5. **Agent Heartbeat**: Agent receives the command with `enable_perfspect: true`
+6. **Auto-Installation**: Agent downloads and installs the PerfSpect binary
+7. **Metrics Collection**: gProfiler starts with hardware metrics enabled
+
+## Implementation Details
+
+### Frontend Changes
+- **File**: `src/gprofiler/frontend/src/components/console/header/ProfilingTopPanel.jsx`
+  - Added the PerfSpect checkbox with tooltip
+  - Integrated with Material-UI components
+- **File**: `src/gprofiler/frontend/src/components/console/ProfilingStatusPage.jsx`
+  - Added state management for the PerfSpect checkbox
+  - Implemented the auto-reset UX improvement
+  - Integrated the PerfSpect setting into API requests
+
+### Backend Changes
+- **File**: `src/gprofiler/backend/routers/metrics_routes.py`
+  - Enhanced the `/profile_request` endpoint to handle `additional_args`
+  - Maintained backward compatibility with the existing API
+- **File**: `src/gprofiler-dev/gprofiler_dev/postgres/db_manager.py`
+  - Updated `_get_profiling_request_details` to include `additional_args`
+  - Enhanced `_build_combined_config` to merge `additional_args` into the top-level config
+  - Fixed database schema compatibility issues
+
+### Agent Changes
+- **File**: `gprofiler/gprofiler/heartbeat.py`
+  - Enhanced `_create_profiler_args` to read `enable_perfspect` from the config
+  - Integrated the PerfSpect auto-installation workflow
+  - Added proper error handling for installation failures
+- **File**: `gprofiler/gprofiler/perfspect_installer.py`
+  - Implemented automatic PerfSpect download and extraction
+  - Added binary verification and path resolution
+  - Included timeout and error handling
+
+## Configuration
+
+### Environment Variables
+```bash
+# PerfSpect download URL (default: GitHub releases)
+PERFSPECT_DOWNLOAD_URL="https://github.com/intel/PerfSpect/releases/latest/download/perfspect.tgz"
+
+# Installation timeout in seconds (default: 300)
+PERFSPECT_INSTALL_TIMEOUT=300
+
+# Temporary directory for PerfSpect installation
+TMPDIR="/tmp"
+```
+
+### Agent Command Line
+When PerfSpect is enabled, the agent starts with:
+```bash
+gprofiler \
+  --enable-hw-metrics-collection \
+  --perfspect-path="/tmp/perfspect/perfspect -o /tmp" \
+  [other profiling options...]
+```
+
+## Troubleshooting
+
+### Common Issues
+
+#### 1. PerfSpect Checkbox Not Visible
+- **Cause**: Frontend build cache or an outdated container
+- **Solution**: Rebuild the webapp container with the `--no-cache` flag
+```bash
+docker-compose stop webapp
+docker rmi -f deploy_webapp
+docker-compose build webapp --no-cache
+docker-compose up -d webapp
+```
+
+#### 2. Permission Denied During PerfSpect Installation
+- **Cause**: Insufficient permissions to write to the `/tmp` directory
+- **Solution**: Ensure the agent runs with appropriate user permissions
+```bash
+# Check directory permissions
+ls -la /tmp/
+# Fix permissions if needed
+chmod 755 /tmp/
+```
+
+#### 3. PerfSpect Binary Not Found
+- **Cause**: Download failure or network connectivity issues
+- **Solution**: Check the agent logs and network connectivity
+```bash
+# Check agent logs
+tail -f /var/log/gprofiler.log | grep -i perfspect
+# Test connectivity
+curl -I https://github.com/intel/PerfSpect/releases/latest/download/perfspect.tgz
+```
+
+#### 4. Database Schema Mismatch
+- **Cause**: Missing `continuous` column or `additional_args` field
+- **Solution**: Update the database schema
+```sql
+-- Add missing continuous column
+ALTER TABLE ProfilingRequests ADD COLUMN continuous boolean NOT NULL DEFAULT false;
+-- Verify additional_args exists
+\d ProfilingRequests;
+```
+
+### Debug Commands
+
+#### Check PerfSpect Configuration in the Database
+```sql
+-- Check profiling requests with PerfSpect
+SELECT request_id, service_name, additional_args, created_at
+FROM ProfilingRequests
+WHERE additional_args::text LIKE '%perfspect%'
+ORDER BY created_at DESC;
+
+-- Check profiling commands with PerfSpect
+SELECT command_id, hostname, service_name, combined_config, created_at
+FROM ProfilingCommands
+WHERE combined_config::text LIKE '%perfspect%'
+ORDER BY created_at DESC;
+```
+
+#### Verify Agent Configuration
+```bash
+# Check if PerfSpect is installed
+ls -la /tmp/perfspect/
+# Check agent process arguments
+ps aux | grep gprofiler | grep perfspect
+# Check hardware metrics collection
+tail -f /var/log/gprofiler.log | grep -i "hw.*metrics"
+```
+
+## Checklist:
+
+- [x] I have updated the relevant documentation.
+- [x] I have added tests for new logic.
+- [x] Frontend UI components are properly integrated.
+- [x] Backend API handles the PerfSpect configuration correctly.
+- [x] Database schema supports PerfSpect settings storage.
+- [x] Agent auto-installation workflow is implemented.
+- [x] Error handling and logging are comprehensive.
+- [x] UX improvements enhance the user experience.
+- [x] End-to-end integration testing is complete.
+- [x] Backward compatibility is maintained.
+
+## Future Enhancements
+
+### Planned Features
+- **Custom PerfSpect Arguments**: Allow users to specify custom PerfSpect command-line options
+- **Metrics Visualization**: Integrate PerfSpect HTML reports into the UI
+- **Performance Thresholds**: Set alerts based on hardware metrics
+- **Historical Analysis**: Compare hardware metrics across profiling sessions
+- **Multi-Architecture Support**: Extend support beyond x86_64 systems
+
+### Performance Optimizations
+- **Caching**: Cache PerfSpect binaries to reduce download overhead
+- **Batch Operations**: Optimize database queries for large-scale deployments
+- **Async Processing**: Implement asynchronous PerfSpect installation
+- **Resource Management**: Add CPU and memory limits for PerfSpect processes
diff --git a/heartbeat_doc/QUICKSTART.md b/heartbeat_doc/QUICKSTART.md
new file mode 100644
index 00000000..89774e9f
--- /dev/null
+++ b/heartbeat_doc/QUICKSTART.md
@@ -0,0 +1,133 @@
+# Quick Start Guide - Heartbeat-Based Profiling
+
+This guide helps you quickly test the heartbeat-based profiling control system.
+
+## Prerequisites
+
+1. **Performance Studio Backend** running on `http://localhost:5000`
+2. **PostgreSQL database** with the heartbeat tables created
+3. **Python dependencies** installed (`requests`, `fastapi`, etc.)
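+
+If any of these are missing, the sketch below covers the setup steps; the schema path comes from `docs/DYNAMIC_PROFILING.md`, and the package list is an assumption based on the scripts in this directory:
+
+```bash
+# install the Python dependencies used by the test scripts
+pip install requests fastapi
+
+# create the heartbeat tables (adjust user/database to your environment)
+psql -U postgres -d gprofiler -f scripts/setup/postgres/dynamic_profiling_schema.sql
+```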
+
+## Step 1: Start the Backend
+
+```bash
+# Start the Performance Studio backend
+cd gprofiler-performance-studio
+./src/run_local.sh  # This starts on port 5000
+```
+
+## Step 2: Validate the API
+
+```bash
+# From the heartbeat_doc directory
+cd heartbeat_doc
+python3 validate_api.py
+```
+
+Expected output:
+```
+🧪 Testing Heartbeat API Endpoints
+==================================================
+
+1️⃣ Testing valid profiling request...
+✅ Valid request successful
+   Request ID: abc-123
+   Command ID: def-456
+
+2️⃣ Testing invalid request (missing target_hostnames)...
+✅ Invalid request correctly rejected with 422
+
+3️⃣ Testing heartbeat request...
+✅ Heartbeat successful
+   Message: Heartbeat received. New profiling command available.
+   Command received: def-456
+```
+
+## Step 3: Test the Full System
+
+```bash
+# Run the comprehensive test from the heartbeat_doc directory
+cd heartbeat_doc
+python3 test_heartbeat_system.py
+```
+
+This will:
+- Create profiling requests via the API
+- Simulate agent heartbeats
+- Verify command delivery and idempotency
+- Test both start and stop commands
+
+## Step 4: Run a Real Agent (Optional)
+
+```bash
+# Run an actual gProfiler agent in heartbeat mode from the heartbeat_doc directory
+cd heartbeat_doc
+python3 run_heartbeat_agent.py
+```
+
+## Troubleshooting
+
+### "Connection refused" errors
+- Ensure the backend is running on `http://localhost:5000`
+- Check firewall settings
+
+### "target_hostnames required" errors
+- ✅ This is expected for invalid requests
+- Ensure you include `target_hostnames` in valid requests
+
+### "No pending commands" in heartbeat
+- ✅ This is normal when no profiling requests are active
+- Create a profiling request first, then send a heartbeat
+
+### Database errors
+- Ensure PostgreSQL is running
+- Verify the heartbeat tables exist (run the DDL script)
+- Check database connection settings
+
+## Example Commands
+
+### Create a START request
+```bash
+curl -X POST http://localhost:5000/api/metrics/profile_request \
+  -H "Content-Type: application/json" \
+  -d '{
+    "service_name": "my-service",
+    "request_type": "start",
+    "duration": 60,
+    "target_hostnames": ["host1"]
+  }'
+```
+
+### Send a heartbeat
+```bash
+curl -X POST http://localhost:5000/api/metrics/heartbeat \
+  -H "Content-Type: application/json" \
+  -d '{
+    "hostname": "host1",
+    "ip_address": "127.0.0.1",
+    "service_name": "my-service",
+    "status": "active"
+  }'
+```
+
+### Create a STOP request
+```bash
+curl -X POST http://localhost:5000/api/metrics/profile_request \
+  -H "Content-Type: application/json" \
+  -d '{
+    "service_name": "my-service",
+    "request_type": "stop",
+    "target_hostnames": ["host1"],
+    "stop_level": "host"
+  }'
+```
+
+## Files Updated
+
+- ✅ **metrics_routes.py** - Fixed field validation and imports
+- ✅ **test_heartbeat_system.py** - Updated to use `request_type`
+- ✅ **validate_api.py** - Quick API validation script
+- ✅ **README_HEARTBEAT.md** - Comprehensive documentation
+- ✅ **Database DDL** - Simplified schema without stored procedures
+
+All test files are now consistent with the simplified heartbeat implementation!
diff --git a/heartbeat_doc/README_HEARTBEAT.md b/heartbeat_doc/README_HEARTBEAT.md
new file mode 100644
index 00000000..baf73e98
--- /dev/null
+++ b/heartbeat_doc/README_HEARTBEAT.md
@@ -0,0 +1,345 @@
+# gProfiler Performance Studio - Heartbeat-Based Profiling Control
+
+This document describes the heartbeat-based profiling control system, which allows dynamic start/stop of profiling sessions through API commands.
+
+## Overview
+
+The heartbeat system enables remote control of gProfiler agents through a simple yet robust mechanism:
+
+1. **Agents send periodic heartbeats** to the Performance Studio backend
+2. **Backend responds with profiling commands** (start/stop) when available
+3. **Agents execute commands with built-in idempotency** to prevent duplicate execution
+4. **Commands are tracked and logged** for audit and debugging
+
+## Architecture
+
+```
+┌─────────────────┐    heartbeat     ┌──────────────────────┐
+│   gProfiler     │ ──────────────►  │  Performance Studio  │
+│     Agent       │                  │       Backend        │
+│                 │ ◄──────────────  │                      │
+└─────────────────┘    commands      └──────────────────────┘
+        │                                       │
+        │                                       │
+        ▼                                       ▼
+┌─────────────────┐                  ┌──────────────────────┐
+│  Profile Data   │                  │    PostgreSQL DB     │
+│   (S3/Local)    │                  │ - Host Heartbeats    │
+└─────────────────┘                  │ - Profiling Cmds     │
+                                     └──────────────────────┘
+```
+
+## Database Schema
+
+### Core Tables
+
+1. **HostHeartbeats** - Tracks agent status and last-seen information
+2. **ProfilingRequests** - Stores profiling requests from API calls
+3. **ProfilingCommands** - Commands sent to agents (merged from multiple requests)
+4. **ProfilingExecutions** - Execution history for the audit trail
+
+### Key Features
+- **Simple DDL** with essential indexes only
+- **No stored procedures** - all logic lives in application code
+- **No triggers** - timestamps are handled by the application
+- **Consistent naming** with an `idx_` prefix for all indexes
+
+## API Endpoints
+
+### 1. Create Profiling Request
+```http
+POST /api/metrics/profile_request
+Content-Type: application/json
+
+{
+    "service_name": "my-service",
+    "request_type": "start",
+    "duration": 60,
+    "frequency": 11,
+    "profiling_mode": "cpu",
+    "target_hosts": {
+        "host1": [1234, 5678],
+        "host2": null
+    },
+    "stop_level": "process",
+    "additional_args": {}
+}
+```
+
+**Response:**
+```json
+{
+    "success": true,
+    "message": "Start profiling request submitted successfully",
+    "request_id": "uuid",
+    "command_id": "uuid",
+    "estimated_completion_time": "2025-01-15T10:30:00Z"
+}
+```
+
+### 2. Agent Heartbeat
+```http
+POST /api/metrics/heartbeat
+Content-Type: application/json
+
+{
+    "hostname": "host1",
+    "ip_address": "10.0.1.100",
+    "service_name": "my-service",
+    "last_command_id": "previous-command-uuid",
+    "status": "active"
+}
+```
+
+**Response (with command):**
+```json
+{
+    "success": true,
+    "message": "Heartbeat received. New profiling command available.",
+    "command_id": "new-command-uuid",
+    "profiling_command": {
+        "command_type": "start",
+        "combined_config": {
+            "duration": 60,
+            "frequency": 11,
+            "profiling_mode": "cpu",
+            "pids": "1234,5678"
+        }
+    }
+}
+```
+
+### 3. Command Completion
+```http
+POST /api/metrics/command_completion
+Content-Type: application/json
+
+{
+    "command_id": "command-uuid",
+    "hostname": "host1",
+    "status": "completed",
+    "execution_time": 65,
+    "results_path": "/path/to/results"
+}
+```
+
+## Agent Integration
+
+### Heartbeat Configuration
+```bash
+python3 gprofiler/main.py \
+    --enable-heartbeat-server \
+    --api-server "https://perf-studio.example.com" \
+    --heartbeat-interval 30 \
+    --service-name "my-service" \
+    --token "api-token"
+```
+
+### Heartbeat Flow
+1. **Agent sends a heartbeat** every 30 seconds (configurable)
+2. **Backend checks for pending commands** for this hostname/service
+3. **If a command is available**, the backend responds with the command details
+4. **Agent executes the command** and reports completion
+5. **Idempotency is ensured** by tracking `last_command_id`, as sketched below
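+
+A minimal sketch of that loop from the agent's side (the endpoint and field names match the API section above; everything else, including the `execute_profiling_command` helper, is illustrative and not the actual agent code):
+
+```python
+import time
+import requests
+
+BACKEND = "https://perf-studio.example.com"  # assumed backend URL
+last_command_id = None
+
+while True:
+    resp = requests.post(f"{BACKEND}/api/metrics/heartbeat", json={
+        "hostname": "host1",
+        "ip_address": "10.0.1.100",
+        "service_name": "my-service",
+        "last_command_id": last_command_id,
+        "status": "active",
+    }).json()
+    command_id = resp.get("command_id")
+    # idempotency guard: skip commands that were already executed
+    if command_id and command_id != last_command_id:
+        execute_profiling_command(resp["profiling_command"])  # hypothetical helper
+        last_command_id = command_id
+    time.sleep(30)  # heartbeat interval
+```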
+
+## Command Types
+
+### START Commands
+- Create new profiling sessions
+- Merge multiple requests for the same host
+- Include combined configuration (duration, frequency, PIDs)
+
+### STOP Commands
+- **Process-level**: Stop specific PIDs
+- **Host-level**: Stop the entire profiling session
+- Automatic conversion when only one PID remains
+
+## Data Flow Example
+
+### 1. Create a Profiling Request
+```bash
+curl -X POST http://localhost:8000/api/metrics/profile_request \
+  -H "Content-Type: application/json" \
+  -d '{
+    "service_name": "web-service",
+    "request_type": "start",
+    "duration": 120,
+    "target_hostnames": ["web-01", "web-02"],
+    "profiling_mode": "cpu"
+  }'
+```
+
+### 2. Agent Heartbeat
+```bash
+# The agent automatically sends:
+curl -X POST http://localhost:8000/api/metrics/heartbeat \
+  -H "Content-Type: application/json" \
+  -d '{
+    "hostname": "web-01",
+    "ip_address": "10.0.1.10",
+    "service_name": "web-service",
+    "status": "active"
+  }'
+```
+
+### 3. Agent Receives Command
+```json
+{
+    "success": true,
+    "command_id": "cmd-12345",
+    "profiling_command": {
+        "command_type": "start",
+        "combined_config": {
+            "duration": 120,
+            "frequency": 11,
+            "profiling_mode": "cpu"
+        }
+    }
+}
+```
+
+### 4. Agent Reports Completion
+```bash
+curl -X POST http://localhost:8000/api/metrics/command_completion \
+  -H "Content-Type: application/json" \
+  -d '{
+    "command_id": "cmd-12345",
+    "hostname": "web-01",
+    "status": "completed",
+    "execution_time": 122
+  }'
+```
+
+## Testing
+
+### 1. Test the Heartbeat System
+```bash
+cd gprofiler-performance-studio
+python3 test_heartbeat_system.py
+```
+
+This script:
+- Simulates agent heartbeat behavior
+- Creates test profiling requests
+- Verifies command delivery and idempotency
+- Tests both start and stop commands
+
+### 2. Run the Test Agent
+```bash
+python3 run_heartbeat_agent.py
+```
+
+This script:
+- Starts a real gProfiler agent in heartbeat mode
+- Connects to the Performance Studio backend
+- Receives and executes actual profiling commands
+
+## Configuration
+
+### Backend Configuration
+```yaml
+# Backend settings
+database:
+  host: localhost
+  port: 5432
+  database: gprofiler
+
+heartbeat:
+  max_age_minutes: 10    # Consider hosts offline after 10 minutes
+  cleanup_interval: 300  # Clean up old records every 5 minutes
+```
+
+### Agent Configuration
+```bash
+# Required parameters
+--enable-heartbeat-server      # Enable heartbeat mode
+--api-server URL               # Performance Studio backend URL
+--service-name NAME            # Service identifier
+--heartbeat-interval SECONDS   # Heartbeat frequency (default: 30)
+
+# Optional parameters
+--token TOKEN                  # Authentication token
+--server-host URL              # Profile upload server (can be the same as api-server)
+--no-verify                    # Skip SSL verification (testing only)
+```
+
+## Monitoring and Debugging
+
+### Database Queries
+```sql
+-- Check active hosts
+SELECT hostname, service_name, status, heartbeat_timestamp
+FROM HostHeartbeats
+WHERE status = 'active' AND heartbeat_timestamp > NOW() - INTERVAL '10 minutes';
+
+-- Check pending commands
+SELECT hostname, service_name, command_type, status, created_at
+FROM ProfilingCommands
+WHERE status = 'pending';
+
+-- Check command execution history
+SELECT pe.hostname, pr.request_type, pe.status, pe.execution_time
+FROM ProfilingExecutions pe
+JOIN ProfilingRequests pr ON pe.profiling_request_id = pr.ID
+ORDER BY pe.created_at DESC;
+```
+
+### Log Monitoring
+```bash
+# Backend logs
+tail -f /var/log/gprofiler-studio/backend.log | grep -E "(heartbeat|command)"
+
+# Agent logs
+tail -f /tmp/gprofiler-heartbeat.log | grep -E "(heartbeat|command)"
+```
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Agents not receiving commands**
+   - Check heartbeat connectivity to the backend
+   - Verify service_name matches between the request and the agent
+   - Check agent authentication (token)
+
+2. **Commands executing multiple times**
+   - Verify the agent is tracking `last_command_id` correctly
+   - Check for agent restarts that reset command tracking
+
+3. **Commands not being created**
+   - Verify `target_hostnames` includes the agent's hostname
+   - Check database constraints and foreign key relationships
+
+### Debug Commands
+```bash
+# Test backend connectivity
+curl -v http://localhost:8000/api/metrics/heartbeat \
+  -H "Content-Type: application/json" \
+  -d '{"hostname":"test","ip_address":"127.0.0.1","service_name":"test","status":"active"}'
+
+# Check database state
+psql -d gprofiler -c "SELECT * FROM HostHeartbeats ORDER BY heartbeat_timestamp DESC LIMIT 5;"
+```
+
+## Security Considerations
+
+1. **Authentication**: Use API tokens for agent authentication
+2. **Network**: Secure communication with HTTPS/TLS
+3. **Authorization**: Validate service permissions before creating commands
+4. **Rate Limiting**: Implement rate limits on heartbeat endpoints
+5. **Input Validation**: Sanitize all input parameters
+
+## Performance Considerations
+
+1. **Database Indexes**: Essential indexes are created for all lookup patterns
+2. **Heartbeat Frequency**: Balance between responsiveness and load (default: 30s)
+3. **Command Cleanup**: Implement periodic cleanup of old commands/executions
+4. **Connection Pooling**: Use connection pooling for database access
+
+## Future Enhancements
+
+1. **Agent Discovery**: Automatic service registration
**Command Queuing**: Support for command queues per host +3. **Conditional Commands**: Commands based on host metrics or state +4. **Command Templates**: Predefined command templates for common scenarios +5. **Real-time Dashboard**: Web UI for monitoring active agents and commands diff --git a/heartbeat_doc/run_heartbeat_agent.py b/heartbeat_doc/run_heartbeat_agent.py new file mode 100755 index 00000000..31bcdcad --- /dev/null +++ b/heartbeat_doc/run_heartbeat_agent.py @@ -0,0 +1,137 @@ +#!/usr/bin/env python3 +""" +Test runner for the gProfiler agent with heartbeat mode enabled. + +This script demonstrates how to run the gProfiler agent in heartbeat mode +to receive dynamic profiling commands from the Performance Studio backend. +""" + +import subprocess +import sys +import os +import signal +import time +from pathlib import Path + +def run_gprofiler_heartbeat_mode(): + """Run gProfiler in heartbeat mode""" + + # Configuration - adjust these values for your environment + config = { + "server_token": "test-token", + "service_name": "test-service", + "api_server": "http://localhost:5000", # Performance Studio backend URL (port 5000) + "server_host": "http://localhost:5000", # Profile upload server URL (can be same) + "output_dir": "/tmp/gprofiler-test", + "log_file": "/tmp/gprofiler-heartbeat.log", + "heartbeat_interval": "10", # seconds + "verbose": True + } + + # Ensure output directory exists + os.makedirs(config["output_dir"], exist_ok=True) + + # Build the command + # Path from heartbeat_doc to gprofiler main.py + gprofiler_path = Path(__file__).parent.parent / "src" / "gprofiler" / "main.py" + + cmd = [ + sys.executable, + str(gprofiler_path), + "--enable-heartbeat-server", + "--upload-results", + "--token", config["server_token"], + "--service-name", config["service_name"], + "--api-server", config["api_server"], + "--server-host", config["server_host"], + "--output-dir", config["output_dir"], + "--log-file", config["log_file"], + "--heartbeat-interval", config["heartbeat_interval"], + "--no-verify", # For testing with localhost + ] + + if config["verbose"]: + cmd.append("--verbose") + + print("🤖 Starting gProfiler in heartbeat mode...") + print(f"📝 Command: {' '.join(cmd)}") + print("="*60) + print("The agent will:") + print("1. Send heartbeats to the backend every 10 seconds") + print("2. Wait for profiling commands from the server") + print("3. Execute start/stop commands as received") + print("4. Maintain idempotency for duplicate commands") + print("="*60) + print("💡 To test the system:") + print("1. Start the Performance Studio backend") + print("2. Run this script to start the agent") + print("3. Use the backend API to send profiling requests") + print("4. Watch the agent logs to see command execution") + print("="*60) + print("\n🚀 Starting agent... 
(Press Ctrl+C to stop)")

    process = None  # defined before the try block so the interrupt handler can check it
    try:
        # Start the process
        process = subprocess.Popen(
            cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            universal_newlines=True,
            bufsize=1
        )

        # Monitor output
        for line in iter(process.stdout.readline, ''):
            print(f"[AGENT] {line.rstrip()}")

        process.wait()

    except KeyboardInterrupt:
        print("\n🛑 Received interrupt signal, stopping agent...")
        if process is not None:
            process.send_signal(signal.SIGINT)
            try:
                process.wait(timeout=10)
            except subprocess.TimeoutExpired:
                print("⚠️ Process didn't stop gracefully, forcing termination...")
                process.kill()
                process.wait()

    except Exception as e:
        print(f"❌ Error running gProfiler: {e}")
        return 1

    print("✅ Agent stopped")
    return 0

def print_usage():
    """Print usage instructions"""
    print("📖 gProfiler Heartbeat Mode Test Runner")
    print("="*50)
    print("\nThis script runs gProfiler in heartbeat mode for testing.")
    print("\nPrerequisites:")
    print("1. Performance Studio backend running on http://localhost:5000")
    print("2. gProfiler agent code in the expected location")
    print("3. Python dependencies installed")
    print("\nUsage:")
    print(f"  {sys.argv[0]}")
    print("\nConfiguration:")
    print("- Edit the 'config' dictionary in this script to customize settings")
    print("- Logs will be written to /tmp/gprofiler-heartbeat.log")
    print("- Profiles will be saved to /tmp/gprofiler-test/")
    print("\nTesting flow:")
    print("1. Start the backend server")
    print("2. Run this script to start the agent")
    print("3. Use test_heartbeat_system.py to send commands")
    print("4. Watch the agent respond to commands")

def main():
    """Main function"""
    if len(sys.argv) > 1 and sys.argv[1] in ["-h", "--help"]:
        print_usage()
        return 0

    return run_gprofiler_heartbeat_mode()

if __name__ == "__main__":
    sys.exit(main())

diff --git a/heartbeat_doc/test_heartbeat_system.py b/heartbeat_doc/test_heartbeat_system.py
new file mode 100755
index 00000000..5e844f6c
--- /dev/null
+++ b/heartbeat_doc/test_heartbeat_system.py
@@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
Test script to verify the heartbeat-based profiling control system.

This script demonstrates:
1. Agent sending heartbeat to backend
2. Backend responding with start/stop commands
3. 
Agent acting on commands with idempotency +""" + +import json +import requests +import time +from datetime import datetime +from typing import Dict, Any, Optional, Set + +# Configuration +BACKEND_URL = "http://localhost:5000" # Updated to port 5000 +SERVICE_NAME = "test-service" +HOSTNAME = "test-host" +IP_ADDRESS = "127.0.0.1" + +class TestHeartbeatClient: + """Test client to simulate agent heartbeat behavior""" + + def __init__(self, backend_url: str, service_name: str, hostname: str, ip_address: str): + self.backend_url = backend_url.rstrip('/') + self.service_name = service_name + self.hostname = hostname + self.ip_address = ip_address + self.last_command_id: Optional[str] = None + self.executed_commands: Set[str] = set() + + def send_heartbeat(self) -> Optional[Dict[str, Any]]: + """Send heartbeat to backend and return response""" + heartbeat_data = { + "ip_address": self.ip_address, + "hostname": self.hostname, + "service_name": self.service_name, + "last_command_id": self.last_command_id, + "status": "active", + "timestamp": datetime.now().isoformat() + } + + try: + response = requests.post( + f"{self.backend_url}/api/metrics/heartbeat", + json=heartbeat_data, + timeout=10 + ) + + if response.status_code == 200: + result = response.json() + print(f"✓ Heartbeat successful: {result.get('message')}") + + if result.get("profiling_command") and result.get("command_id"): + command_id = result["command_id"] + profiling_command = result["profiling_command"] + command_type = profiling_command.get("command_type", "unknown") + + print(f"📋 Received command: {command_type} (ID: {command_id})") + + # Check idempotency + if command_id in self.executed_commands: + print(f"⚠️ Command {command_id} already executed, skipping...") + return None + + # Mark as executed + self.executed_commands.add(command_id) + self.last_command_id = command_id + + return { + "command_type": command_type, + "command_id": command_id, + "profiling_command": profiling_command + } + else: + print("📭 No pending commands") + return None + else: + print(f"❌ Heartbeat failed: {response.status_code} - {response.text}") + return None + + except Exception as e: + print(f"❌ Heartbeat error: {e}") + return None + + def simulate_profiling_action(self, command_type: str, command_id: str): + """Simulate profiling action (start/stop)""" + if command_type == "start": + print(f"🚀 Starting profiler for command {command_id}") + # Simulate profiling work + time.sleep(2) + print(f"✅ Profiler started successfully") + elif command_type == "stop": + print(f"🛑 Stopping profiler for command {command_id}") + # Simulate stopping + time.sleep(1) + print(f"✅ Profiler stopped successfully") + else: + print(f"⚠️ Unknown command type: {command_type}") + +def create_test_profiling_request(backend_url: str, service_name: str, request_type: str = "start") -> bool: + """Create a test profiling request""" + request_data = { + "service_name": service_name, + "request_type": request_type, # Updated to use request_type instead of command_type + "duration": 60, + "frequency": 11, + "profiling_mode": "cpu", + "target_hosts": {HOSTNAME: [1234, 5678]}, # Required field + "additional_args": {"test": True} + } + + try: + response = requests.post( + f"{backend_url}/api/metrics/profile_request", + json=request_data, + timeout=10 + ) + + if response.status_code == 200: + result = response.json() + print(f"✅ Profiling request created: {result.get('message')}") + print(f" Request ID: {result.get('request_id')}") + print(f" Command ID: {result.get('command_id')}") + return 
True + else: + print(f"❌ Failed to create profiling request: {response.status_code} - {response.text}") + return False + + except Exception as e: + print(f"❌ Error creating profiling request: {e}") + return False + +def main(): + """Main test function""" + print("🧪 Testing Heartbeat-Based Profiling Control System") + print("=" * 60) + + # Initialize test client + client = TestHeartbeatClient(BACKEND_URL, SERVICE_NAME, HOSTNAME, IP_ADDRESS) + + # Test 1: Send initial heartbeat (should have no commands) + print("\n1️⃣ Test: Initial heartbeat (no commands expected)") + client.send_heartbeat() + + # Test 2: Create a START profiling request + print("\n2️⃣ Test: Create START profiling request") + if create_test_profiling_request(BACKEND_URL, SERVICE_NAME, "start"): + time.sleep(1) # Give backend time to process + + # Send heartbeat to receive the command + print("\n 📡 Sending heartbeat to receive command...") + command = client.send_heartbeat() + + if command: + client.simulate_profiling_action(command["command_type"], command["command_id"]) + + # Test idempotency - send heartbeat again + print("\n 🔄 Testing idempotency - sending heartbeat again...") + command = client.send_heartbeat() + if command is None: + print("✅ Idempotency working - no duplicate command received") + + # Test 3: Create a STOP profiling request + print("\n3️⃣ Test: Create STOP profiling request") + if create_test_profiling_request(BACKEND_URL, SERVICE_NAME, "stop"): + time.sleep(1) # Give backend time to process + + # Send heartbeat to receive the stop command + print("\n 📡 Sending heartbeat to receive stop command...") + command = client.send_heartbeat() + + if command: + client.simulate_profiling_action(command["command_type"], command["command_id"]) + + # Test 4: Multiple heartbeats with no commands + print("\n4️⃣ Test: Multiple heartbeats with no pending commands") + for i in range(3): + print(f"\n Heartbeat {i+1}/3:") + client.send_heartbeat() + time.sleep(1) + + print("\n✅ Test completed!") + print("\nTest Summary:") + print(f" - Executed commands: {len(client.executed_commands)}") + print(f" - Last command ID: {client.last_command_id}") + print(f" - Commands executed: {list(client.executed_commands)}") + +if __name__ == "__main__": + main() diff --git a/heartbeat_doc/validate_api.py b/heartbeat_doc/validate_api.py new file mode 100755 index 00000000..e6cbe5a2 --- /dev/null +++ b/heartbeat_doc/validate_api.py @@ -0,0 +1,112 @@ +#!/usr/bin/env python3 +""" +Quick validation script to test the heartbeat API endpoints. 
+""" + +import requests +import json +import sys + +def test_heartbeat_api(): + """Test the heartbeat API endpoints""" + base_url = "http://localhost:5000" # Updated to port 5000 + + print("🧪 Testing Heartbeat API Endpoints") + print("=" * 50) + + # Test 1: Valid profiling request + print("\n1️⃣ Testing valid profiling request...") + + valid_request = { + "service_name": "test-service", + "request_type": "start", + "duration": 60, + "frequency": 11, + "profiling_mode": "cpu", + "target_hosts": {"test-host": [1234, 5678]}, + "additional_args": {"test": True} + } + + try: + response = requests.post( + f"{base_url}/api/metrics/profile_request", + json=valid_request, + timeout=10 + ) + + if response.status_code == 200: + result = response.json() + print(f"✅ Valid request successful") + print(f" Request ID: {result.get('request_id')}") + print(f" Command ID: {result.get('command_id')}") + else: + print(f"❌ Valid request failed: {response.status_code}") + print(f" Response: {response.text}") + except Exception as e: + print(f"❌ Valid request error: {e}") + + # Test 2: Invalid request (missing target_hosts) + print("\n2️⃣ Testing invalid request (missing target_hosts)...") + + invalid_request = { + "service_name": "test-service", + "request_type": "start", + "duration": 60, + "frequency": 11, + "profiling_mode": "cpu" + # target_hosts missing + } + + try: + response = requests.post( + f"{base_url}/api/metrics/profile_request", + json=invalid_request, + timeout=10 + ) + + if response.status_code == 422: # Pydantic validation error + print(f"✅ Invalid request correctly rejected with 422") + elif response.status_code == 400: # Our validation error + print(f"✅ Invalid request correctly rejected with 400") + else: + print(f"⚠️ Unexpected response: {response.status_code}") + + print(f" Response: {response.text}") + except Exception as e: + print(f"❌ Invalid request test error: {e}") + + # Test 3: Heartbeat request + print("\n3️⃣ Testing heartbeat request...") + + heartbeat_request = { + "hostname": "test-host", + "ip_address": "127.0.0.1", + "service_name": "test-service", + "status": "active" + } + + try: + response = requests.post( + f"{base_url}/api/metrics/heartbeat", + json=heartbeat_request, + timeout=10 + ) + + if response.status_code == 200: + result = response.json() + print(f"✅ Heartbeat successful") + print(f" Message: {result.get('message')}") + if result.get('profiling_command'): + print(f" Command received: {result.get('command_id')}") + else: + print(f" No pending commands") + else: + print(f"❌ Heartbeat failed: {response.status_code}") + print(f" Response: {response.text}") + except Exception as e: + print(f"❌ Heartbeat error: {e}") + + print("\n✅ API validation complete!") + +if __name__ == "__main__": + test_heartbeat_api() diff --git a/scripts/setup/migrate_profiling_tables.py b/scripts/setup/migrate_profiling_tables.py new file mode 100644 index 00000000..f6059d24 --- /dev/null +++ b/scripts/setup/migrate_profiling_tables.py @@ -0,0 +1,98 @@ +#!/usr/bin/env python3 +# +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +""" +Script to apply the profiling tables migration to an existing PostgreSQL database. +""" + +import os +import sys +import argparse +from pathlib import Path + +# Add the src directory to the Python path +sys.path.insert(0, str(Path(__file__).parent.parent.parent / "src")) + +from gprofiler_dev.postgres import get_postgres_db + + +def apply_migration(migration_file: str, dry_run: bool = False): + """Apply the migration SQL file to the database""" + + # Read the migration file + migration_path = Path(__file__).parent / "postgres" / migration_file + if not migration_path.exists(): + print(f"Error: Migration file {migration_path} not found!") + sys.exit(1) + + with open(migration_path, 'r') as f: + migration_sql = f.read() + + if dry_run: + print("DRY RUN MODE - SQL that would be executed:") + print("=" * 50) + print(migration_sql) + print("=" * 50) + return + + try: + # Get database connection + db = get_postgres_db() + + print(f"Applying migration: {migration_file}") + + # Execute the migration SQL + # Split by semicolon and execute each statement separately to handle complex SQL + statements = [stmt.strip() for stmt in migration_sql.split(';') if stmt.strip()] + + for i, statement in enumerate(statements): + if statement: + print(f"Executing statement {i+1}/{len(statements)}") + try: + db.execute(statement, {}, has_value=False) + except Exception as e: + print(f"Error executing statement {i+1}: {e}") + print(f"Statement: {statement[:100]}...") + raise + + print("Migration applied successfully!") + + except Exception as e: + print(f"Error applying migration: {e}") + sys.exit(1) + + +def main(): + parser = argparse.ArgumentParser(description="Apply profiling tables migration") + parser.add_argument( + "--migration-file", + default="add_profiling_tables.sql", + help="Migration file to apply (default: add_profiling_tables.sql)" + ) + parser.add_argument( + "--dry-run", + action="store_true", + help="Show what would be executed without actually running the migration" + ) + + args = parser.parse_args() + + apply_migration(args.migration_file, args.dry_run) + + +if __name__ == "__main__": + main() diff --git a/scripts/setup/postgres/README.md b/scripts/setup/postgres/README.md new file mode 100644 index 00000000..c5991875 --- /dev/null +++ b/scripts/setup/postgres/README.md @@ -0,0 +1,80 @@ +# Profiling Database Schema + +This document explains the database schema for the profiling request and command system. + +## Overview + +The system uses a four-table approach to manage profiling requests, commands, executions, and host status: + +1. **ProfilingRequests** - Individual requests that can target multiple hosts +2. **ProfilingCommands** - Combined/batched commands sent to specific hosts +3. **ProfilingExecutions** - Execution records for audit trail and status tracking +4. **HostHeartbeats** - Track host status and last executed commands + +## Why Multiple Tables? + +**Problem**: When you have multiple profiling requests for the same host at different times (e.g., profile PID 1, then profile PID 2), you want to combine them into a single command to avoid running separate profiling sessions. 
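To make that concrete, here is a minimal sketch of how two such requests might be merged (a hypothetical helper for illustration only; the real combining logic lives in the backend, and the field names simply mirror the tables described below):

```python
from typing import Any, Dict, List

def combine_requests(requests: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Merge several pending 'start' requests for one host into a single
    combined command config (illustrative sketch, not the backend code)."""
    pids = set()
    profile_all = False  # a request with pids=None means "all processes"
    for req in requests:
        if req.get("pids") is None:
            profile_all = True
        else:
            pids.update(req["pids"])
    return {
        "command_type": "start",
        "duration": max(req["duration"] for req in requests),  # longest request wins
        "frequency": max(req["frequency"] for req in requests),
        "pids": None if profile_all else sorted(pids),
        "request_ids": [req["request_id"] for req in requests],  # mirrors ProfilingCommands.request_ids
    }

# Two requests arriving at different times for the same host...
combined = combine_requests([
    {"request_id": "req-1", "duration": 60, "frequency": 11, "pids": [1]},
    {"request_id": "req-2", "duration": 120, "frequency": 11, "pids": [2]},
])
print(combined)  # ...yield one command profiling PIDs [1, 2] for 120 seconds
```

Here the longer duration wins and the PID lists are unioned, which is exactly the kind of merge the tables below are designed to record.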
+ +**Solution**: +- Store individual requests in `ProfilingRequests` +- Combine multiple requests targeting the same host into `ProfilingCommands` +- Track actual execution in `ProfilingExecutions` for audit trail +- Send the combined command to the host via heartbeat response + +## Table Descriptions + +### ProfilingRequests +Stores individual profiling requests as submitted via the API. + +**Key fields:** +- `request_id` - Unique identifier for tracking +- `service_name` - Service this request belongs to +- `request_type` - 'start' or 'stop' command type +- `target_hostnames` - Array of hostnames to target (NULL = all hosts for service) +- `pids` - Array of process IDs to profile (NULL = all processes) +- `status` - pending, assigned, completed, failed, cancelled +- `additional_args` - JSON configuration options + +### ProfilingCommands +Stores combined commands ready to be sent to hosts. + +**Key fields:** +- `command_id` - Unique identifier sent to host +- `hostname` - Specific host this command targets +- `command_type` - 'start' or 'stop' +- `combined_config` - Merged configuration from multiple requests +- `request_ids` - Array of ProfilingRequests IDs that were combined +- `status` - pending, sent, completed, failed + +### ProfilingExecutions +Tracks actual execution status for each command/host combination. + +**Key fields:** +- `command_id` + `hostname` - Unique execution identifier +- `profiling_request_id` - Reference back to original request +- `status` - pending, assigned, completed, failed, cancelled +- `started_at`, `completed_at` - Execution timeline +- `results_path` - Location of profiling output + +### HostHeartbeats +Tracks host status and last executed command for idempotency. + +**Key fields:** +- `hostname` + `service_name` - Unique combination +- `last_command_id` - Last command executed (for idempotency) +- `status` - active, idle, error, offline + +## Workflow + +1. **Submit Request**: API creates entry in `ProfilingRequests` +2. **Create Commands**: System combines requests targeting same host into `ProfilingCommands` +3. **Heartbeat**: Host checks in, gets pending command if available +4. **Execute**: Host runs profiling and reports completion +5. **Update**: Both command and related requests marked as completed + +## Benefits + +- **Efficiency**: Multiple requests combined into single profiling session +- **Idempotency**: Hosts won't re-execute completed commands +- **Tracking**: Clear audit trail from request to execution +- **Scalability**: Works with multiple hosts and services diff --git a/scripts/setup/postgres/README_DYNAMIC_PROFILING.md b/scripts/setup/postgres/README_DYNAMIC_PROFILING.md new file mode 100644 index 00000000..2a44def3 --- /dev/null +++ b/scripts/setup/postgres/README_DYNAMIC_PROFILING.md @@ -0,0 +1,354 @@ +# Dynamic Profiling Database Setup + +This directory contains SQL scripts for setting up and testing the Dynamic Profiling database schema. + +## Files + +1. **`dynamic_profiling_schema.sql`** - Main schema with all tables, indexes, and constraints +2. **`test_dynamic_profiling.sql`** - Test data and verification queries + +## Quick Setup + +### 1. Create Schema + +```bash +# Option A: Using psql command line +psql -U postgres -d gprofiler -f dynamic_profiling_schema.sql + +# Option B: Using psql interactive +psql -U postgres -d gprofiler +\i dynamic_profiling_schema.sql +\q +``` + +### 2. Load Test Data (Optional) + +```bash +# Load test data +psql -U postgres -d gprofiler -f test_dynamic_profiling.sql +``` + +### 3. 
Verify Installation + +```sql +-- Connect to database +psql -U postgres -d gprofiler + +-- List all dynamic profiling tables +SELECT table_name +FROM information_schema.tables +WHERE table_schema = 'public' + AND (table_name LIKE '%profiling%' OR table_name LIKE '%heartbeat%') +ORDER BY table_name; + +-- Expected output: 10 tables +-- containershosts +-- containerprocesses +-- hostheartbeats +-- jobcontainers +-- namespaceservices +-- processeshosts +-- profilingcommand +-- profilingexecutions +-- profilingrequest +-- servicecontainers +``` + +## Schema Overview + +### Core Tables + +#### ProfilingRequest +Stores profiling requests from API calls. +- Supports hierarchical targeting (service/job/namespace/pod/host/process) +- Configurable profiling modes (CPU, memory, allocation, native) +- Status tracking (pending, in_progress, completed, failed, cancelled) + +#### ProfilingCommand +Commands sent to agents on specific hosts. +- Maps requests to host-specific commands +- Tracks command lifecycle +- Indexed for fast lookups (165k QPM target) + +#### HostHeartbeats +Real-time host availability tracking. +- Optimized for high-throughput (165k QPM) +- Sub-second response times +- Tracks containers, jobs, workloads per host + +#### ProfilingExecutions +Audit trail of all profiling executions. +- Links to original requests and commands +- Complete execution history +- Status tracking for troubleshooting + +### Mapping Tables + +Denormalized tables for fast query performance: + +- **NamespaceServices** - Namespace → Service +- **ServiceContainers** - Service → Container +- **JobContainers** - Job → Container +- **ContainerProcesses** - Container → Process +- **ContainersHosts** - Container → Host +- **ProcessesHosts** - Process → Host + +## Performance Features + +### Indexes (25+ total) + +```sql +-- Check indexes +SELECT + tablename, + indexname, + indexdef +FROM pg_indexes +WHERE schemaname = 'public' + AND (tablename LIKE '%profiling%' OR tablename LIKE '%heartbeat%') +ORDER BY tablename, indexname; +``` + +### Triggers (10 total) + +All tables have auto-updating `updated_at` timestamps: + +```sql +-- Check triggers +SELECT + trigger_name, + event_object_table, + action_statement +FROM information_schema.triggers +WHERE trigger_schema = 'public' + AND trigger_name LIKE '%updated_at%' +ORDER BY event_object_table; +``` + +## Common Queries + +### Find Active Hosts + +```sql +-- Hosts seen in last 5 minutes +SELECT + host_id, + host_name, + service_name, + namespace, + timestamp_last_seen, + EXTRACT(EPOCH FROM (NOW() - timestamp_last_seen)) as seconds_ago +FROM HostHeartbeats +WHERE timestamp_last_seen > NOW() - INTERVAL '5 minutes' +ORDER BY timestamp_last_seen DESC; +``` + +### Query Profiling Requests by Status + +```sql +-- Active profiling requests +SELECT + request_id, + service_name, + namespace, + profiling_mode, + duration_seconds, + status, + created_at +FROM ProfilingRequest +WHERE status IN ('pending', 'in_progress') +ORDER BY created_at DESC; +``` + +### Hierarchical Mapping: Namespace to Hosts + +```sql +-- Map namespace to all hosts +SELECT DISTINCT + ns.namespace, + ns.service_name, + ch.host_name, + ch.host_id +FROM NamespaceServices ns +JOIN ServiceContainers sc ON sc.service_name = ns.service_name +JOIN ContainersHosts ch ON ch.container_name = sc.container_name +WHERE ns.namespace = 'production' +ORDER BY ns.service_name, ch.host_name; +``` + +### Audit Trail: Request Execution History + +```sql +-- Complete execution history for a request +SELECT + pr.request_id, + 
pr.service_name, + pr.status as request_status, + pe.host_name, + pe.command_type, + pe.status as execution_status, + pe.started_at, + pe.completed_at, + EXTRACT(EPOCH FROM (pe.completed_at - pe.started_at)) as duration_seconds +FROM ProfilingRequest pr +JOIN ProfilingExecutions pe ON pe.profiling_request_id = pr.id +WHERE pr.request_id = 'YOUR-REQUEST-UUID-HERE' +ORDER BY pe.started_at; +``` + +## Maintenance + +### Regular Maintenance Tasks + +```sql +-- Vacuum and analyze for performance +VACUUM ANALYZE ProfilingRequest; +VACUUM ANALYZE ProfilingCommand; +VACUUM ANALYZE HostHeartbeats; +VACUUM ANALYZE ProfilingExecutions; + +-- Check table sizes +SELECT + schemaname, + tablename, + pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size +FROM pg_tables +WHERE schemaname = 'public' + AND (tablename LIKE '%profiling%' OR tablename LIKE '%heartbeat%') +ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC; +``` + +### Monitor Index Usage + +```sql +-- Check which indexes are being used +SELECT + schemaname, + tablename, + indexname, + idx_scan as index_scans, + idx_tup_read as tuples_read, + idx_tup_fetch as tuples_fetched +FROM pg_stat_user_indexes +WHERE schemaname = 'public' + AND (tablename LIKE '%profiling%' OR tablename LIKE '%heartbeat%') +ORDER BY idx_scan DESC; +``` + +### Clean Up Old Data + +```sql +-- Archive executions older than 90 days +DELETE FROM ProfilingExecutions +WHERE created_at < NOW() - INTERVAL '90 days'; + +-- Archive completed requests older than 30 days +DELETE FROM ProfilingRequest +WHERE status = 'completed' + AND created_at < NOW() - INTERVAL '30 days'; +``` + +## Troubleshooting + +### Issue: Slow Heartbeat Queries + +```sql +-- Check if indexes are being used +EXPLAIN ANALYZE +SELECT * FROM HostHeartbeats +WHERE host_id = 'your-host-id'; + +-- Should show "Index Scan using host_heartbeats_host_id_idx" +``` + +### Issue: Slow Hierarchical Queries + +```sql +-- Check query plan for hierarchical lookup +EXPLAIN ANALYZE +SELECT ch.host_id +FROM NamespaceServices ns +JOIN ServiceContainers sc ON sc.service_name = ns.service_name +JOIN ContainersHosts ch ON ch.container_name = sc.container_name +WHERE ns.namespace = 'production'; + +-- Should use indexes on all join columns +``` + +### Issue: Foreign Key Violations + +```sql +-- Check orphaned records +SELECT 'ProfilingCommand' as table_name, COUNT(*) as orphaned +FROM ProfilingCommand pc +LEFT JOIN ProfilingRequest pr ON pr.id = pc.profiling_request_id +WHERE pr.id IS NULL +UNION ALL +SELECT 'ProfilingExecutions', COUNT(*) +FROM ProfilingExecutions pe +LEFT JOIN ProfilingRequest pr ON pr.id = pe.profiling_request_id +WHERE pr.id IS NULL; +``` + +## Drop Schema (Caution!) + +```sql +-- WARNING: This will delete all dynamic profiling data! 
+-- Uncomment and run only if you need to completely remove the schema + +-- DROP TABLE IF EXISTS ProcessesHosts CASCADE; +-- DROP TABLE IF EXISTS ContainersHosts CASCADE; +-- DROP TABLE IF EXISTS ContainerProcesses CASCADE; +-- DROP TABLE IF EXISTS JobContainers CASCADE; +-- DROP TABLE IF EXISTS ServiceContainers CASCADE; +-- DROP TABLE IF EXISTS NamespaceServices CASCADE; +-- DROP TABLE IF EXISTS ProfilingExecutions CASCADE; +-- DROP TABLE IF EXISTS ProfilingCommand CASCADE; +-- DROP TABLE IF EXISTS HostHeartbeats CASCADE; +-- DROP TABLE IF EXISTS ProfilingRequest CASCADE; +-- DROP TYPE IF EXISTS ProfilingMode CASCADE; +-- DROP TYPE IF EXISTS ProfilingStatus CASCADE; +-- DROP TYPE IF EXISTS CommandType CASCADE; +-- DROP FUNCTION IF EXISTS update_updated_at_column() CASCADE; +``` + +## Docker Setup + +If using Docker PostgreSQL: + +```bash +# Start PostgreSQL container +docker run --name gprofiler-postgres \ + -e POSTGRES_PASSWORD=postgres \ + -e POSTGRES_DB=gprofiler \ + -p 5432:5432 \ + -d postgres:14 + +# Wait for PostgreSQL to start +sleep 5 + +# Apply schema +docker exec -i gprofiler-postgres \ + psql -U postgres -d gprofiler < dynamic_profiling_schema.sql + +# Load test data (optional) +docker exec -i gprofiler-postgres \ + psql -U postgres -d gprofiler < test_dynamic_profiling.sql +``` + +## References + +- **Main Documentation:** `docs/DYNAMIC_PROFILING.md` +- **Python Models:** `src/gprofiler/backend/models/dynamic_profiling_models.py` +- **Design Doc:** [Google Doc](https://docs.google.com/document/d/1iwA_NN1YKDBqfig95Qevw0HcSCqgu7_ya8PGuCksCPc/edit) + +## Support + +For issues or questions: +1. Check the main documentation: `docs/DYNAMIC_PROFILING.md` +2. Review test data script: `test_dynamic_profiling.sql` +3. Verify indexes are being used: `EXPLAIN ANALYZE your_query` + + + + diff --git a/scripts/setup/postgres/dynamic_profiling_schema.sql b/scripts/setup/postgres/dynamic_profiling_schema.sql new file mode 100644 index 00000000..17840d72 --- /dev/null +++ b/scripts/setup/postgres/dynamic_profiling_schema.sql @@ -0,0 +1,401 @@ +-- +-- Copyright (C) 2023 Intel Corporation +-- +-- Licensed under the Apache License, Version 2.0 (the "License"); +-- you may not use this file except in compliance with the License. +-- You may obtain a copy of the License at +-- +-- http://www.apache.org/licenses/LICENSE-2.0 +-- +-- Unless required by applicable law or agreed to in writing, software +-- distributed under the License is distributed on an "AS IS" BASIS, +-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +-- See the License for the specific language governing permissions and +-- limitations under the License. +-- + +-- ============================================================ +-- DYNAMIC PROFILING DATA MODEL +-- ============================================================ +-- This schema supports dynamic profiling capabilities that allow +-- profiling requests at various hierarchy levels (service, job, namespace) +-- to be mapped to specific host-level commands while maintaining +-- sub-second heartbeat response times for 165k QPM. 
+-- +-- References: +-- https://docs.google.com/document/d/1iwA_NN1YKDBqfig95Qevw0HcSCqgu7_ya8PGuCksCPc/edit +-- ============================================================ + + +-- ============================================================ +-- ENUMS AND TYPES +-- ============================================================ + +-- Command types for profiling operations +CREATE TYPE CommandType AS ENUM ( + 'start', + 'stop', + 'reconfigure' +); + +-- Status for profiling requests +CREATE TYPE ProfilingStatus AS ENUM ( + 'pending', + 'in_progress', + 'completed', + 'failed', + 'cancelled' +); + +-- Profiling modes +CREATE TYPE ProfilingMode AS ENUM ( + 'cpu', + 'memory', + 'allocation', + 'native' +); + + +-- ============================================================ +-- CORE TABLES +-- ============================================================ + +-- ProfilingRequest: Stores profiling requests from API calls +CREATE TABLE ProfilingRequest ( + ID bigserial PRIMARY KEY, + request_id uuid NOT NULL UNIQUE DEFAULT gen_random_uuid(), + + -- Target specification + service_name text, + job_name text, + namespace text, + pod_name text, + host_name text, + process_id bigint, + + -- Profiling configuration + profiling_mode ProfilingMode NOT NULL DEFAULT 'cpu', + duration_seconds integer NOT NULL CONSTRAINT "positive duration" CHECK (duration_seconds > 0), + sample_rate integer NOT NULL DEFAULT 100 CONSTRAINT "valid sample rate" CHECK (sample_rate > 0 AND sample_rate <= 1000), + + -- Execution configuration + executors text[] DEFAULT ARRAY[]::text[], + + -- Request metadata + start_time timestamp NOT NULL, + stop_time timestamp, + mode text, + + -- Status tracking + status ProfilingStatus NOT NULL DEFAULT 'pending', + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + -- Foreign keys + profiler_token_id bigint CONSTRAINT "request must have valid token" REFERENCES ProfilerTokens, + + -- Constraints + CONSTRAINT "at_least_one_target" CHECK ( + service_name IS NOT NULL OR + job_name IS NOT NULL OR + namespace IS NOT NULL OR + pod_name IS NOT NULL OR + host_name IS NOT NULL OR + process_id IS NOT NULL + ) +); + +-- Indexes for ProfilingRequest +CREATE INDEX profiling_request_status_idx ON ProfilingRequest(status); +CREATE INDEX profiling_request_created_at_idx ON ProfilingRequest(created_at); +CREATE INDEX profiling_request_service_idx ON ProfilingRequest(service_name) WHERE service_name IS NOT NULL; +CREATE INDEX profiling_request_namespace_idx ON ProfilingRequest(namespace) WHERE namespace IS NOT NULL; +CREATE INDEX profiling_request_host_idx ON ProfilingRequest(host_name) WHERE host_name IS NOT NULL; + + +-- ProfilingCommand: Profiling commands sent to agents (scale in future) +CREATE TABLE ProfilingCommand ( + ID bigserial PRIMARY KEY, + command_id uuid NOT NULL UNIQUE DEFAULT gen_random_uuid(), + + -- Link to original request + profiling_request_id bigint NOT NULL CONSTRAINT "command must belong to request" REFERENCES ProfilingRequest, + + -- Host targeting + host_id text NOT NULL, -- Index key for fast lookup + target_containers text[] DEFAULT ARRAY[]::text[], + target_processes bigint[] DEFAULT ARRAY[]::bigint[], + + -- Command details + command_type CommandType NOT NULL, + command_args jsonb NOT NULL DEFAULT '{}'::jsonb, + + -- Command lifecycle + command_json text, -- Full command serialized for agent + sent_at timestamp, + completed_at timestamp, + status ProfilingStatus NOT NULL DEFAULT 'pending', + + -- Timestamps + 
created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP +); + +-- Indexes for ProfilingCommand +CREATE INDEX profiling_command_request_idx ON ProfilingCommand(profiling_request_id); +CREATE INDEX profiling_command_host_idx ON ProfilingCommand(host_id); +CREATE INDEX profiling_command_status_idx ON ProfilingCommand(status); +CREATE INDEX profiling_command_sent_at_idx ON ProfilingCommand(sent_at); + + +-- HostHeartbeats: Tracks available host status, details and last seen info +CREATE TABLE HostHeartbeats ( + ID bigserial PRIMARY KEY, + + -- Host identification + host_id text NOT NULL UNIQUE, + service_name text, + host_name text NOT NULL, + host_ip inet, + + -- Environment details + namespace text, + pod_name text, + containers text[] DEFAULT ARRAY[]::text[], + + -- Resource tracking + workloads jsonb DEFAULT '{}'::jsonb, -- Running workloads/jobs + jobs text[] DEFAULT ARRAY[]::text[], + + -- Agent info + executors text[] DEFAULT ARRAY[]::text[], + + -- Heartbeat tracking (critical for 165k QPM with sub-second response) + timestamp_first_seen timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + timestamp_last_seen timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + last_command_id uuid, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP +); + +-- Critical indexes for heartbeat performance (165k QPM requirement) +CREATE INDEX host_heartbeats_host_id_idx ON HostHeartbeats(host_id); +CREATE INDEX host_heartbeats_last_seen_idx ON HostHeartbeats(timestamp_last_seen); +CREATE INDEX host_heartbeats_service_idx ON HostHeartbeats(service_name) WHERE service_name IS NOT NULL; +CREATE INDEX host_heartbeats_namespace_idx ON HostHeartbeats(namespace) WHERE namespace IS NOT NULL; + + +-- ProfilingExecutions: Execution history for audit trail +CREATE TABLE ProfilingExecutions ( + ID bigserial PRIMARY KEY, + execution_id uuid NOT NULL UNIQUE DEFAULT gen_random_uuid(), + + -- Links + profiling_request_id bigint NOT NULL CONSTRAINT "execution must belong to request" REFERENCES ProfilingRequest, + profiling_command_id bigint CONSTRAINT "execution may link to command" REFERENCES ProfilingCommand, + + -- Execution details + host_name text NOT NULL, + target_containers text[] DEFAULT ARRAY[]::text[], + target_processes bigint[] DEFAULT ARRAY[]::bigint[], + + -- Command tracking + command_type CommandType NOT NULL, + + -- Execution lifecycle + started_at timestamp NOT NULL, + completed_at timestamp, + status ProfilingStatus NOT NULL, + + -- Timestamps + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP +); + +-- Indexes for ProfilingExecutions +CREATE INDEX profiling_executions_request_idx ON ProfilingExecutions(profiling_request_id); +CREATE INDEX profiling_executions_command_idx ON ProfilingExecutions(profiling_command_id) WHERE profiling_command_id IS NOT NULL; +CREATE INDEX profiling_executions_host_idx ON ProfilingExecutions(host_name); +CREATE INDEX profiling_executions_status_idx ON ProfilingExecutions(status); +CREATE INDEX profiling_executions_started_at_idx ON ProfilingExecutions(started_at); + + +-- ============================================================ +-- HIERARCHICAL MAPPING TABLES +-- ============================================================ +-- These tables denormalize hierarchical mappings for faster +-- query performance when mapping requests to hosts. 
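-- For example, a service-level request can be resolved to the hosts that
-- must receive commands with one indexed join (illustrative query only;
-- 'my-service' is a placeholder):
--
--   SELECT DISTINCT ch.host_id
--   FROM ServiceContainers sc
--   JOIN ContainersHosts ch ON ch.container_name = sc.container_name
--   WHERE sc.service_name = 'my-service';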
+-- ============================================================ + +-- NamespaceServices: Maps namespaces to services +CREATE TABLE NamespaceServices ( + ID bigserial PRIMARY KEY, + namespace text NOT NULL, + service_name text NOT NULL, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "unique namespace service mapping" UNIQUE (namespace, service_name) +); + +CREATE INDEX namespace_services_namespace_idx ON NamespaceServices(namespace); +CREATE INDEX namespace_services_service_idx ON NamespaceServices(service_name); + + +-- ServiceContainers: Maps services to containers +CREATE TABLE ServiceContainers ( + ID bigserial PRIMARY KEY, + service_name text NOT NULL, + container_name text NOT NULL, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "unique service container mapping" UNIQUE (service_name, container_name) +); + +CREATE INDEX service_containers_service_idx ON ServiceContainers(service_name); +CREATE INDEX service_containers_container_idx ON ServiceContainers(container_name); + + +-- JobContainers: Maps jobs to containers +CREATE TABLE JobContainers ( + ID bigserial PRIMARY KEY, + job_name text NOT NULL, + container_name text NOT NULL, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "unique job container mapping" UNIQUE (job_name, container_name) +); + +CREATE INDEX job_containers_job_idx ON JobContainers(job_name); +CREATE INDEX job_containers_container_idx ON JobContainers(container_name); + + +-- ContainerProcesses: Maps containers to processes +CREATE TABLE ContainerProcesses ( + ID bigserial PRIMARY KEY, + container_name text NOT NULL, + process_id bigint NOT NULL, + process_name text, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "unique container process mapping" UNIQUE (container_name, process_id) +); + +CREATE INDEX container_processes_container_idx ON ContainerProcesses(container_name); +CREATE INDEX container_processes_process_idx ON ContainerProcesses(process_id); + + +-- ContainersHosts: Maps containers to hosts +CREATE TABLE ContainersHosts ( + ID bigserial PRIMARY KEY, + container_name text NOT NULL, + host_id text NOT NULL, + host_name text NOT NULL, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "unique container host mapping" UNIQUE (container_name, host_id) +); + +CREATE INDEX containers_hosts_container_idx ON ContainersHosts(container_name); +CREATE INDEX containers_hosts_host_id_idx ON ContainersHosts(host_id); +CREATE INDEX containers_hosts_host_name_idx ON ContainersHosts(host_name); + + +-- ProcessesHosts: Maps processes to hosts +CREATE TABLE ProcessesHosts ( + ID bigserial PRIMARY KEY, + process_id bigint NOT NULL, + host_id text NOT NULL, + host_name text NOT NULL, + + -- Metadata + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + CONSTRAINT "unique process host mapping" UNIQUE (process_id, host_id) +); + +CREATE INDEX processes_hosts_process_idx ON ProcessesHosts(process_id); +CREATE INDEX processes_hosts_host_id_idx ON ProcessesHosts(host_id); +CREATE INDEX processes_hosts_host_name_idx ON 
ProcessesHosts(host_name); + + +-- ============================================================ +-- TRIGGERS FOR UPDATED_AT +-- ============================================================ + +-- Function to update the updated_at timestamp +CREATE OR REPLACE FUNCTION update_updated_at_column() +RETURNS TRIGGER AS $$ +BEGIN + NEW.updated_at = CURRENT_TIMESTAMP; + RETURN NEW; +END; +$$ language 'plpgsql'; + +-- Apply trigger to all tables with updated_at column +CREATE TRIGGER update_profiling_request_updated_at BEFORE UPDATE ON ProfilingRequest + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_profiling_command_updated_at BEFORE UPDATE ON ProfilingCommand + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_host_heartbeats_updated_at BEFORE UPDATE ON HostHeartbeats + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_profiling_executions_updated_at BEFORE UPDATE ON ProfilingExecutions + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_namespace_services_updated_at BEFORE UPDATE ON NamespaceServices + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_service_containers_updated_at BEFORE UPDATE ON ServiceContainers + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_job_containers_updated_at BEFORE UPDATE ON JobContainers + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_container_processes_updated_at BEFORE UPDATE ON ContainerProcesses + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_containers_hosts_updated_at BEFORE UPDATE ON ContainersHosts + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + +CREATE TRIGGER update_processes_hosts_updated_at BEFORE UPDATE ON ProcessesHosts + FOR EACH ROW EXECUTE FUNCTION update_updated_at_column(); + + +-- ============================================================ +-- COMMENTS +-- ============================================================ + +COMMENT ON TABLE ProfilingRequest IS 'Stores profiling requests from API calls with target specification at various hierarchy levels (service, job, namespace)'; +COMMENT ON TABLE ProfilingCommand IS 'Profiling commands sent to agents/hosts (scale in future). Maps high-level requests to specific host-level commands.'; +COMMENT ON TABLE HostHeartbeats IS 'Tracks available host status, details and last seen info. Optimized for 165k QPM with sub-second response times.'; +COMMENT ON TABLE ProfilingExecutions IS 'Execution history for audit trail. 
Tracks actual execution of profiling commands on hosts.'; +COMMENT ON TABLE NamespaceServices IS 'Denormalized mapping of namespaces to services for faster query performance'; +COMMENT ON TABLE ServiceContainers IS 'Denormalized mapping of services to containers for faster query performance'; +COMMENT ON TABLE JobContainers IS 'Denormalized mapping of jobs to containers for faster query performance'; +COMMENT ON TABLE ContainerProcesses IS 'Denormalized mapping of containers to processes for faster query performance'; +COMMENT ON TABLE ContainersHosts IS 'Denormalized mapping of containers to hosts for faster query performance'; +COMMENT ON TABLE ProcessesHosts IS 'Denormalized mapping of processes to hosts for faster query performance'; + + + + diff --git a/scripts/setup/postgres/gprofiler_recreate.sql b/scripts/setup/postgres/gprofiler_recreate.sql index 434579fa..755d8c23 100644 --- a/scripts/setup/postgres/gprofiler_recreate.sql +++ b/scripts/setup/postgres/gprofiler_recreate.sql @@ -243,6 +243,117 @@ CREATE TABLE MinesweeperFrames ( ); +-- Additional Types for Profiling System +CREATE TYPE ProfilingMode AS ENUM ('cpu', 'allocation', 'none'); +CREATE TYPE ProfilingRequestStatus AS ENUM ('pending', 'assigned', 'completed', 'failed', 'cancelled'); +CREATE TYPE CommandStatus AS ENUM ('pending', 'sent', 'completed', 'failed'); +CREATE TYPE HostStatus AS ENUM ('active', 'idle', 'error', 'offline'); + +-- Host Heartbeat Table (simplified) +CREATE TABLE HostHeartbeats ( + ID bigserial PRIMARY KEY, + hostname text NOT NULL, + ip_address inet NOT NULL, + service_name text NOT NULL, + last_command_id uuid NULL, + status HostStatus NOT NULL DEFAULT 'active', + heartbeat_timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT "unique_host_heartbeat" UNIQUE (hostname, service_name) +); + +-- Essential indexes for heartbeats +CREATE INDEX idx_hostheartbeats_hostname ON HostHeartbeats (hostname); +CREATE INDEX idx_hostheartbeats_service_name ON HostHeartbeats (service_name); +CREATE INDEX idx_hostheartbeats_status ON HostHeartbeats (status); +CREATE INDEX idx_hostheartbeats_heartbeat_timestamp ON HostHeartbeats (heartbeat_timestamp); + +-- Profiling Requests Table (simplified) +CREATE TABLE ProfilingRequests ( + ID bigserial PRIMARY KEY, + request_id uuid NOT NULL UNIQUE, + service_name text NOT NULL, + request_type text NOT NULL CHECK (request_type IN ('start', 'stop')), + continuous boolean NOT NULL DEFAULT false, + duration integer NULL DEFAULT 60, + frequency integer NULL DEFAULT 11, + profiling_mode ProfilingMode NOT NULL DEFAULT 'cpu', + target_hostnames text[] NOT NULL, + pids integer[] NULL, + stop_level text NULL DEFAULT 'process' CHECK (stop_level IN ('process', 'host')), + additional_args jsonb NULL, + status ProfilingRequestStatus NOT NULL DEFAULT 'pending', + assigned_to_hostname text NULL, + assigned_at timestamp NULL, + completed_at timestamp NULL, + estimated_completion_time timestamp NULL, + service_id bigint NULL CONSTRAINT "fk_profiling_request_service" REFERENCES Services(ID), + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP +); + +-- Essential indexes for profiling requests +CREATE INDEX idx_profilingrequests_request_id ON ProfilingRequests (request_id); +CREATE INDEX idx_profilingrequests_service_name ON ProfilingRequests (service_name); +CREATE INDEX idx_profilingrequests_status ON 
ProfilingRequests (status); +CREATE INDEX idx_profilingrequests_request_type ON ProfilingRequests (request_type); +CREATE INDEX idx_profilingrequests_created_at ON ProfilingRequests (created_at); + +-- Profiling Commands Table (simplified) +CREATE TABLE ProfilingCommands ( + ID bigserial PRIMARY KEY, + command_id uuid NOT NULL, + hostname text NOT NULL, + service_name text NOT NULL, + command_type text NOT NULL CHECK (command_type IN ('start', 'stop')), + request_ids uuid[] NOT NULL, + combined_config jsonb NULL, + status CommandStatus NOT NULL DEFAULT 'pending', + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + sent_at timestamp NULL, + completed_at timestamp NULL, + execution_time integer NULL, + error_message text NULL, + results_path text NULL, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT "unique_profiling_command_per_host" UNIQUE (hostname, service_name) +); + +-- Essential indexes for profiling commands +CREATE INDEX idx_profilingcommands_command_id ON ProfilingCommands (command_id); +CREATE INDEX idx_profilingcommands_hostname ON ProfilingCommands (hostname); +CREATE INDEX idx_profilingcommands_service_name ON ProfilingCommands (service_name); +CREATE INDEX idx_profilingcommands_status ON ProfilingCommands (status); +CREATE INDEX idx_profilingcommands_hostname_service ON ProfilingCommands (hostname, service_name); + +-- Profiling Executions Table (optional - for audit trail) +CREATE TABLE ProfilingExecutions ( + ID bigserial PRIMARY KEY, + command_id uuid NOT NULL, + hostname text NOT NULL, + profiling_request_id uuid NOT NULL CONSTRAINT "fk_profiling_execution_request" REFERENCES ProfilingRequests(request_id), + status ProfilingRequestStatus NOT NULL DEFAULT 'pending', + started_at timestamp NULL, + completed_at timestamp NULL, + execution_time integer NULL, + error_message text NULL, + results_path text NULL, + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + + -- Adding the constraint that db_manager.py expects for ON CONFLICT + CONSTRAINT "unique_profiling_execution" UNIQUE (command_id, hostname) +); + +-- Essential indexes for profiling executions +CREATE INDEX idx_profilingexecutions_command_id ON ProfilingExecutions (command_id); +CREATE INDEX idx_profilingexecutions_hostname ON ProfilingExecutions (hostname); +CREATE INDEX idx_profilingexecutions_profiling_request_id ON ProfilingExecutions (profiling_request_id); +CREATE INDEX idx_profilingexecutions_status ON ProfilingExecutions (status); + +-- FUNCTIONS + CREATE OR REPLACE FUNCTION calc_profiler_usage_history(start_date timestamp without time zone, end_date timestamp without time zone, interval_s bigint, max_iterations bigint DEFAULT 3) RETURNS TABLE(start_time timestamp without time zone, end_time timestamp without time zone, service bigint, running_hours double precision, core_hours double precision, lowest_agent_version bigint) LANGUAGE plpgsql @@ -671,3 +782,4 @@ create aggregate zz_hashagg(text) ( sfunc = zz_concat, stype = text, initcond = ''); + diff --git a/scripts/setup/postgres/migrations/add_dynamic_profiling.down.sql b/scripts/setup/postgres/migrations/add_dynamic_profiling.down.sql new file mode 100644 index 00000000..8364224a --- /dev/null +++ b/scripts/setup/postgres/migrations/add_dynamic_profiling.down.sql @@ -0,0 +1,166 @@ +-- +-- Copyright (C) 2025 Intel Corporation +-- +-- Licensed under the Apache License, Version 2.0 (the "License"); +-- you may not use this file except in compliance with the 
License. +-- You may obtain a copy of the License at +-- +-- http://www.apache.org/licenses/LICENSE-2.0 +-- +-- Unless required by applicable law or agreed to in writing, software +-- distributed under the License is distributed on an "AS IS" BASIS, +-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +-- See the License for the specific language governing permissions and +-- limitations under the License. +-- + +-- ============================================================ +-- ROLLBACK MIGRATION: Remove Dynamic Profiling Feature +-- ============================================================ +-- This rollback script removes the dynamic profiling feature +-- and restores the database to its pre-migration state. +-- +-- WARNING: This will permanently delete all dynamic profiling data! +-- +-- Usage: +-- psql -U performance_studio -d performance_studio -f add_dynamic_profiling.down.sql +-- +-- To restore the feature: +-- psql -U performance_studio -d performance_studio -f add_dynamic_profiling.up.sql +-- ============================================================ + +BEGIN; + +-- ============================================================ +-- STEP 1: Backup Warning +-- ============================================================ + +DO $$ +BEGIN + RAISE NOTICE '========================================'; + RAISE NOTICE 'ROLLBACK: Removing Dynamic Profiling Feature'; + RAISE NOTICE '========================================'; + RAISE NOTICE 'This will permanently delete:'; + RAISE NOTICE ' - All profiling requests'; + RAISE NOTICE ' - All profiling commands'; + RAISE NOTICE ' - All profiling executions'; + RAISE NOTICE ' - All host heartbeat data'; + RAISE NOTICE ''; + RAISE NOTICE 'Ensure you have a backup if you need this data!'; + RAISE NOTICE '========================================'; +END $$; + + +-- ============================================================ +-- STEP 2: Drop Indexes +-- ============================================================ +-- Drop indexes first for cleaner table drops + +-- ProfilingExecutions indexes +DROP INDEX IF EXISTS idx_profilingexecutions_status; +DROP INDEX IF EXISTS idx_profilingexecutions_profiling_request_id; +DROP INDEX IF EXISTS idx_profilingexecutions_hostname; +DROP INDEX IF EXISTS idx_profilingexecutions_command_id; + +-- ProfilingCommands indexes +DROP INDEX IF EXISTS idx_profilingcommands_hostname_service; +DROP INDEX IF EXISTS idx_profilingcommands_status; +DROP INDEX IF EXISTS idx_profilingcommands_service_name; +DROP INDEX IF EXISTS idx_profilingcommands_hostname; +DROP INDEX IF EXISTS idx_profilingcommands_command_id; + +-- ProfilingRequests indexes +DROP INDEX IF EXISTS idx_profilingrequests_created_at; +DROP INDEX IF EXISTS idx_profilingrequests_request_type; +DROP INDEX IF EXISTS idx_profilingrequests_status; +DROP INDEX IF EXISTS idx_profilingrequests_service_name; +DROP INDEX IF EXISTS idx_profilingrequests_request_id; + +-- HostHeartbeats indexes +DROP INDEX IF EXISTS idx_hostheartbeats_heartbeat_timestamp; +DROP INDEX IF EXISTS idx_hostheartbeats_status; +DROP INDEX IF EXISTS idx_hostheartbeats_service_name; +DROP INDEX IF EXISTS idx_hostheartbeats_hostname; + + +-- ============================================================ +-- STEP 3: Drop Tables (in correct order for foreign keys) +-- ============================================================ + +-- Drop ProfilingExecutions first (has FK to ProfilingRequests) +DROP TABLE IF EXISTS ProfilingExecutions CASCADE; + +-- Drop ProfilingCommands (no 
dependencies)
DROP TABLE IF EXISTS ProfilingCommands CASCADE;

-- Drop ProfilingRequests (has FK to Services, referenced by ProfilingExecutions)
DROP TABLE IF EXISTS ProfilingRequests CASCADE;

-- Drop HostHeartbeats (no dependencies)
DROP TABLE IF EXISTS HostHeartbeats CASCADE;


-- ============================================================
-- STEP 4: Drop ENUM Types
-- ============================================================
-- Drop types after tables that use them

DROP TYPE IF EXISTS HostStatus CASCADE;
DROP TYPE IF EXISTS CommandStatus CASCADE;
DROP TYPE IF EXISTS ProfilingRequestStatus CASCADE;
DROP TYPE IF EXISTS ProfilingMode CASCADE;


-- ============================================================
-- STEP 5: Verify Rollback Success
-- ============================================================

DO $$
DECLARE
    remaining_tables integer;
    remaining_types integer;
BEGIN
    -- Count remaining tables
    SELECT COUNT(*) INTO remaining_tables
    FROM information_schema.tables
    WHERE table_schema = 'public'
      AND table_name IN ('hostheartbeats', 'profilingrequests', 'profilingcommands', 'profilingexecutions');

    -- Count remaining types
    SELECT COUNT(*) INTO remaining_types
    FROM pg_type
    WHERE typname IN ('profilingmode', 'profilingrequeststatus', 'commandstatus', 'hoststatus');

    IF remaining_tables > 0 THEN
        RAISE WARNING 'Some tables still exist: %', remaining_tables;
    END IF;

    IF remaining_types > 0 THEN
        RAISE WARNING 'Some types still exist: %', remaining_types;
    END IF;

    IF remaining_tables = 0 AND remaining_types = 0 THEN
        RAISE NOTICE '========================================';
        RAISE NOTICE 'Rollback completed successfully!';
        RAISE NOTICE '========================================';
        RAISE NOTICE 'Removed:';
        RAISE NOTICE '  - 4 tables (HostHeartbeats, ProfilingRequests, ProfilingCommands, ProfilingExecutions)';
        RAISE NOTICE '  - 4 ENUM types (ProfilingMode, ProfilingRequestStatus, CommandStatus, HostStatus)';
        RAISE NOTICE '  - 18 indexes';
        RAISE NOTICE '';
        RAISE NOTICE 'Database restored to pre-dynamic-profiling state.';
        RAISE NOTICE '========================================';
    ELSE
        RAISE EXCEPTION 'Rollback incomplete: % tables, % types remain', remaining_tables, remaining_types;
    END IF;
END $$;

COMMIT;

-- ============================================================
-- Rollback Complete
-- ============================================================
-- The dynamic profiling feature has been successfully removed.
-- All related tables, types, and indexes have been dropped.
-- ============================================================

diff --git a/scripts/setup/postgres/migrations/add_dynamic_profiling.up.sql b/scripts/setup/postgres/migrations/add_dynamic_profiling.up.sql
new file mode 100644
index 00000000..4fa678ef
--- /dev/null
+++ b/scripts/setup/postgres/migrations/add_dynamic_profiling.up.sql
@@ -0,0 +1,246 @@
--
-- Copyright (C) 2025 Intel Corporation
--
-- Licensed under the Apache License, Version 2.0 (the "License");
-- you may not use this file except in compliance with the License.
-- You may obtain a copy of the License at
--
-- http://www.apache.org/licenses/LICENSE-2.0
--
-- Unless required by applicable law or agreed to in writing, software
-- distributed under the License is distributed on an "AS IS" BASIS,
-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+-- See the License for the specific language governing permissions and +-- limitations under the License. +-- + +-- ============================================================ +-- MIGRATION: Add Dynamic Profiling Feature +-- ============================================================ +-- This migration adds the dynamic profiling feature to existing +-- gProfiler Performance Studio deployments. +-- +-- Features Added: +-- - Host heartbeat tracking +-- - Profiling request management +-- - Profiling command orchestration +-- - Profiling execution audit trail +-- +-- Usage: +-- psql -U performance_studio -d performance_studio -f add_dynamic_profiling.up.sql +-- +-- Rollback: +-- psql -U performance_studio -d performance_studio -f add_dynamic_profiling.down.sql +-- ============================================================ + +BEGIN; + +-- ============================================================ +-- STEP 1: Create ENUM Types +-- ============================================================ + +-- Profiling mode for different profiling types +DO $$ BEGIN + CREATE TYPE ProfilingMode AS ENUM ('cpu', 'allocation', 'none'); +EXCEPTION + WHEN duplicate_object THEN NULL; +END $$; + +-- Status for profiling requests +DO $$ BEGIN + CREATE TYPE ProfilingRequestStatus AS ENUM ('pending', 'assigned', 'completed', 'failed', 'cancelled'); +EXCEPTION + WHEN duplicate_object THEN NULL; +END $$; + +-- Status for profiling commands +DO $$ BEGIN + CREATE TYPE CommandStatus AS ENUM ('pending', 'sent', 'completed', 'failed'); +EXCEPTION + WHEN duplicate_object THEN NULL; +END $$; + +-- Status for host health +DO $$ BEGIN + CREATE TYPE HostStatus AS ENUM ('active', 'idle', 'error', 'offline'); +EXCEPTION + WHEN duplicate_object THEN NULL; +END $$; + + +-- ============================================================ +-- STEP 2: Create Tables +-- ============================================================ + +-- Host Heartbeat Table - Tracks agent heartbeats and status +CREATE TABLE IF NOT EXISTS HostHeartbeats ( + ID bigserial PRIMARY KEY, + hostname text NOT NULL, + ip_address inet NOT NULL, + service_name text NOT NULL, + last_command_id uuid NULL, + status HostStatus NOT NULL DEFAULT 'active', + heartbeat_timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT "unique_host_heartbeat" UNIQUE (hostname, service_name) +); + +COMMENT ON TABLE HostHeartbeats IS 'Tracks agent heartbeats and availability for dynamic profiling'; +COMMENT ON COLUMN HostHeartbeats.hostname IS 'Host identifier from agent'; +COMMENT ON COLUMN HostHeartbeats.last_command_id IS 'Last command ID received by this host'; +COMMENT ON COLUMN HostHeartbeats.heartbeat_timestamp IS 'Timestamp of last heartbeat from agent'; + + +-- Profiling Requests Table - Stores user profiling requests +CREATE TABLE IF NOT EXISTS ProfilingRequests ( + ID bigserial PRIMARY KEY, + request_id uuid NOT NULL UNIQUE, + service_name text NOT NULL, + request_type text NOT NULL CHECK (request_type IN ('start', 'stop')), + continuous boolean NOT NULL DEFAULT false, + duration integer NULL DEFAULT 60, + frequency integer NULL DEFAULT 11, + profiling_mode ProfilingMode NOT NULL DEFAULT 'cpu', + target_hostnames text[] NOT NULL, + pids integer[] NULL, + stop_level text NULL DEFAULT 'process' CHECK (stop_level IN ('process', 'host')), + additional_args jsonb NULL, + status ProfilingRequestStatus NOT NULL DEFAULT 'pending', + assigned_to_hostname 
text NULL, + assigned_at timestamp NULL, + completed_at timestamp NULL, + estimated_completion_time timestamp NULL, + service_id bigint NULL CONSTRAINT "fk_profiling_request_service" REFERENCES Services(ID), + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP +); + +COMMENT ON TABLE ProfilingRequests IS 'Stores profiling requests from users/API'; +COMMENT ON COLUMN ProfilingRequests.request_id IS 'Unique identifier for this profiling request'; +COMMENT ON COLUMN ProfilingRequests.request_type IS 'Type of profiling request: start or stop'; +COMMENT ON COLUMN ProfilingRequests.continuous IS 'Whether profiling should run continuously'; +COMMENT ON COLUMN ProfilingRequests.target_hostnames IS 'Array of target hostnames for profiling'; + + +-- Profiling Commands Table - Stores commands sent to agents +CREATE TABLE IF NOT EXISTS ProfilingCommands ( + ID bigserial PRIMARY KEY, + command_id uuid NOT NULL, + hostname text NOT NULL, + service_name text NOT NULL, + command_type text NOT NULL CHECK (command_type IN ('start', 'stop')), + request_ids uuid[] NOT NULL, + combined_config jsonb NULL, + status CommandStatus NOT NULL DEFAULT 'pending', + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + sent_at timestamp NULL, + completed_at timestamp NULL, + execution_time integer NULL, + error_message text NULL, + results_path text NULL, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT "unique_profiling_command_per_host" UNIQUE (hostname, service_name) +); + +COMMENT ON TABLE ProfilingCommands IS 'Stores profiling commands to be executed by agents'; +COMMENT ON COLUMN ProfilingCommands.command_id IS 'Unique identifier for this command'; +COMMENT ON COLUMN ProfilingCommands.hostname IS 'Target hostname for command execution'; +COMMENT ON COLUMN ProfilingCommands.request_ids IS 'Array of request IDs that generated this command'; +COMMENT ON COLUMN ProfilingCommands.combined_config IS 'Merged configuration for multiple requests'; + + +-- Profiling Executions Table - Audit trail for profiling executions +CREATE TABLE IF NOT EXISTS ProfilingExecutions ( + ID bigserial PRIMARY KEY, + command_id uuid NOT NULL, + hostname text NOT NULL, + profiling_request_id uuid NOT NULL CONSTRAINT "fk_profiling_execution_request" REFERENCES ProfilingRequests(request_id), + status ProfilingRequestStatus NOT NULL DEFAULT 'pending', + started_at timestamp NULL, + completed_at timestamp NULL, + execution_time integer NULL, + error_message text NULL, + results_path text NULL, + created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + updated_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT "unique_profiling_execution" UNIQUE (command_id, hostname) +); + +COMMENT ON TABLE ProfilingExecutions IS 'Audit trail for profiling command executions'; +COMMENT ON COLUMN ProfilingExecutions.command_id IS 'Reference to the command that was executed'; +COMMENT ON COLUMN ProfilingExecutions.profiling_request_id IS 'Reference to the original profiling request'; +COMMENT ON COLUMN ProfilingExecutions.results_path IS 'Path to profiling results (S3 or local)'; + + +-- ============================================================ +-- STEP 3: Create Indexes for Performance +-- ============================================================ + +-- Indexes for HostHeartbeats +CREATE INDEX IF NOT EXISTS idx_hostheartbeats_hostname ON HostHeartbeats (hostname); +CREATE INDEX IF NOT EXISTS idx_hostheartbeats_service_name ON HostHeartbeats (service_name); 
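+-- A typical liveness lookup these HostHeartbeats indexes are meant to serve
+-- (a sketch; the real queries live in the webapp's DB manager, and
+-- 'my-service' is a hypothetical service name):
+--
+--   SELECT hostname, ip_address
+--   FROM HostHeartbeats
+--   WHERE service_name = 'my-service'
+--     AND status = 'active'
+--     AND heartbeat_timestamp > NOW() - INTERVAL '10 minutes';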
+CREATE INDEX IF NOT EXISTS idx_hostheartbeats_status ON HostHeartbeats (status);
+CREATE INDEX IF NOT EXISTS idx_hostheartbeats_heartbeat_timestamp ON HostHeartbeats (heartbeat_timestamp);
+
+-- Indexes for ProfilingRequests
+CREATE INDEX IF NOT EXISTS idx_profilingrequests_request_id ON ProfilingRequests (request_id);
+CREATE INDEX IF NOT EXISTS idx_profilingrequests_service_name ON ProfilingRequests (service_name);
+CREATE INDEX IF NOT EXISTS idx_profilingrequests_status ON ProfilingRequests (status);
+CREATE INDEX IF NOT EXISTS idx_profilingrequests_request_type ON ProfilingRequests (request_type);
+CREATE INDEX IF NOT EXISTS idx_profilingrequests_created_at ON ProfilingRequests (created_at);
+
+-- Indexes for ProfilingCommands
+CREATE INDEX IF NOT EXISTS idx_profilingcommands_command_id ON ProfilingCommands (command_id);
+CREATE INDEX IF NOT EXISTS idx_profilingcommands_hostname ON ProfilingCommands (hostname);
+CREATE INDEX IF NOT EXISTS idx_profilingcommands_service_name ON ProfilingCommands (service_name);
+CREATE INDEX IF NOT EXISTS idx_profilingcommands_status ON ProfilingCommands (status);
+CREATE INDEX IF NOT EXISTS idx_profilingcommands_hostname_service ON ProfilingCommands (hostname, service_name);
+
+-- Indexes for ProfilingExecutions
+CREATE INDEX IF NOT EXISTS idx_profilingexecutions_command_id ON ProfilingExecutions (command_id);
+CREATE INDEX IF NOT EXISTS idx_profilingexecutions_hostname ON ProfilingExecutions (hostname);
+CREATE INDEX IF NOT EXISTS idx_profilingexecutions_profiling_request_id ON ProfilingExecutions (profiling_request_id);
+CREATE INDEX IF NOT EXISTS idx_profilingexecutions_status ON ProfilingExecutions (status);
+
+
+-- ============================================================
+-- STEP 4: Verify Migration Success
+-- ============================================================
+
+DO $$
+DECLARE
+    table_count integer;
+    index_count integer;
+BEGIN
+    -- Count new tables
+    SELECT COUNT(*) INTO table_count
+    FROM information_schema.tables
+    WHERE table_schema = 'public'
+    AND table_name IN ('hostheartbeats', 'profilingrequests', 'profilingcommands', 'profilingexecutions');
+
+    -- Count new indexes
+    SELECT COUNT(*) INTO index_count
+    FROM pg_indexes
+    WHERE schemaname = 'public'
+    AND (indexname LIKE 'idx_%heartbeat%' OR indexname LIKE 'idx_%profiling%');
+
+    IF table_count < 4 THEN
+        RAISE EXCEPTION 'Migration failed: Expected 4 tables, found %', table_count;
+    END IF;
+
+    RAISE NOTICE 'Migration completed successfully!';
+    RAISE NOTICE 'Tables created: %', table_count;
+    RAISE NOTICE 'Indexes created: %', index_count;
+END $$;
+
+COMMIT;
+
+-- ============================================================
+-- Migration Complete
+-- ============================================================
+-- The dynamic profiling feature has been successfully added.
+-- New tables: HostHeartbeats, ProfilingRequests, ProfilingCommands, ProfilingExecutions
+-- New types: ProfilingMode, ProfilingRequestStatus, CommandStatus, HostStatus
+-- ============================================================
+
diff --git a/scripts/setup/postgres/test_dynamic_profiling.sql b/scripts/setup/postgres/test_dynamic_profiling.sql
new file mode 100644
index 00000000..29f3387f
--- /dev/null
+++ b/scripts/setup/postgres/test_dynamic_profiling.sql
@@ -0,0 +1,513 @@
+--
+-- Copyright (C) 2023 Intel Corporation
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+--     http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+-- ============================================================
+-- TEST DATA FOR DYNAMIC PROFILING SCHEMA
+-- ============================================================
+-- This script creates sample data to verify the dynamic profiling
+-- schema is working correctly.
+-- ============================================================
+
+-- Clean up any existing test data (optional: uncomment if you need a clean slate)
+-- DELETE FROM ProfilingExecutions;
+-- DELETE FROM ProfilingCommand;
+-- DELETE FROM ProfilingRequest;
+-- DELETE FROM HostHeartbeats;
+-- DELETE FROM ProcessesHosts;
+-- DELETE FROM ContainersHosts;
+-- DELETE FROM ContainerProcesses;
+-- DELETE FROM JobContainers;
+-- DELETE FROM ServiceContainers;
+-- DELETE FROM NamespaceServices;
+
+\echo 'Creating test data for Dynamic Profiling...'
+
+-- ============================================================
+-- 1. HIERARCHICAL MAPPINGS
+-- ============================================================
+
+\echo '1. Creating hierarchical mappings...'
+
+-- Namespace -> Service mappings
+INSERT INTO NamespaceServices (namespace, service_name) VALUES
+    ('production', 'web-api'),
+    ('production', 'data-processor'),
+    ('staging', 'web-api'),
+    ('development', 'test-service')
+ON CONFLICT DO NOTHING;
+
+-- Service -> Container mappings
+INSERT INTO ServiceContainers (service_name, container_name) VALUES
+    ('web-api', 'web-api-container-1'),
+    ('web-api', 'web-api-container-2'),
+    ('data-processor', 'processor-container-1'),
+    ('test-service', 'test-container-1')
+ON CONFLICT DO NOTHING;
+
+-- Job -> Container mappings
+INSERT INTO JobContainers (job_name, container_name) VALUES
+    ('batch-processing-job', 'processor-container-1'),
+    ('data-import-job', 'importer-container-1'),
+    ('etl-pipeline', 'etl-container-1')
+ON CONFLICT DO NOTHING;
+
+-- Container -> Process mappings
+INSERT INTO ContainerProcesses (container_name, process_id, process_name) VALUES
+    ('web-api-container-1', 1001, 'python3'),
+    ('web-api-container-1', 1002, 'gunicorn'),
+    ('web-api-container-2', 2001, 'python3'),
+    ('web-api-container-2', 2002, 'gunicorn'),
+    ('processor-container-1', 3001, 'python3'),
+    ('processor-container-1', 3002, 'celery')
+ON CONFLICT DO NOTHING;
+
+-- Container -> Host mappings
+INSERT INTO ContainersHosts (container_name, host_id, host_name) VALUES
+    ('web-api-container-1', 'host-001', 'worker-node-01'),
+    ('web-api-container-2', 'host-002', 'worker-node-02'),
+    ('processor-container-1', 'host-003', 'worker-node-03'),
+    ('test-container-1', 'host-004', 'dev-node-01')
+ON CONFLICT DO NOTHING;
+
+-- Process -> Host mappings
+INSERT INTO ProcessesHosts (process_id, host_id, host_name) VALUES
+    (1001, 'host-001', 'worker-node-01'),
+    (1002, 'host-001', 'worker-node-01'),
+    (2001, 'host-002', 'worker-node-02'),
+    (2002, 'host-002', 'worker-node-02'),
+    (3001, 'host-003', 'worker-node-03'),
+    (3002, 'host-003', 'worker-node-03')
+ON CONFLICT DO NOTHING;
+
+\echo '  ✓ Hierarchical mappings created'
+
+-- ============================================================
+-- 2.
HOST HEARTBEATS +-- ============================================================ + +\echo '2. Creating host heartbeats...' + +INSERT INTO HostHeartbeats ( + host_id, + service_name, + host_name, + host_ip, + namespace, + pod_name, + containers, + workloads, + jobs, + executors, + timestamp_first_seen, + timestamp_last_seen +) VALUES + ( + 'host-001', + 'web-api', + 'worker-node-01', + '10.0.1.101', + 'production', + 'web-api-pod-1', + ARRAY['web-api-container-1'], + '{"cpu_usage": 45.2, "memory_usage": 60.5}'::jsonb, + ARRAY['background-task-1'], + ARRAY['pyspy', 'perf'], + NOW() - INTERVAL '1 hour', + NOW() - INTERVAL '5 seconds' + ), + ( + 'host-002', + 'web-api', + 'worker-node-02', + '10.0.1.102', + 'production', + 'web-api-pod-2', + ARRAY['web-api-container-2'], + '{"cpu_usage": 38.7, "memory_usage": 55.3}'::jsonb, + ARRAY['background-task-2'], + ARRAY['pyspy', 'perf'], + NOW() - INTERVAL '1 hour', + NOW() - INTERVAL '3 seconds' + ), + ( + 'host-003', + 'data-processor', + 'worker-node-03', + '10.0.1.103', + 'production', + 'processor-pod-1', + ARRAY['processor-container-1'], + '{"cpu_usage": 78.4, "memory_usage": 82.1}'::jsonb, + ARRAY['batch-processing-job', 'etl-pipeline'], + ARRAY['pyspy', 'perf', 'async-profiler'], + NOW() - INTERVAL '2 hours', + NOW() - INTERVAL '2 seconds' + ), + ( + 'host-004', + 'test-service', + 'dev-node-01', + '10.0.2.101', + 'development', + 'test-pod-1', + ARRAY['test-container-1'], + '{"cpu_usage": 15.2, "memory_usage": 25.8}'::jsonb, + ARRAY[]::text[], + ARRAY['pyspy'], + NOW() - INTERVAL '30 minutes', + NOW() - INTERVAL '10 seconds' + ) +ON CONFLICT (host_id) DO UPDATE SET + timestamp_last_seen = EXCLUDED.timestamp_last_seen; + +\echo ' ✓ Host heartbeats created' + +-- ============================================================ +-- 3. PROFILING REQUESTS +-- ============================================================ + +\echo '3. Creating profiling requests...' + +-- Request 1: Service-level profiling +INSERT INTO ProfilingRequest ( + service_name, + profiling_mode, + duration_seconds, + sample_rate, + executors, + start_time, + stop_time, + status +) VALUES + ( + 'web-api', + 'cpu', + 60, + 100, + ARRAY['pyspy'], + NOW() - INTERVAL '10 minutes', + NOW() - INTERVAL '9 minutes', + 'completed' + ); + +-- Request 2: Namespace-level profiling +INSERT INTO ProfilingRequest ( + namespace, + profiling_mode, + duration_seconds, + sample_rate, + executors, + start_time, + status +) VALUES + ( + 'production', + 'memory', + 120, + 50, + ARRAY['pyspy', 'perf'], + NOW() - INTERVAL '5 minutes', + 'in_progress' + ); + +-- Request 3: Host-level profiling +INSERT INTO ProfilingRequest ( + host_name, + profiling_mode, + duration_seconds, + sample_rate, + start_time, + status +) VALUES + ( + 'worker-node-03', + 'cpu', + 30, + 200, + NOW(), + 'pending' + ); + +-- Request 4: Job-level profiling +INSERT INTO ProfilingRequest ( + job_name, + profiling_mode, + duration_seconds, + sample_rate, + executors, + start_time, + status +) VALUES + ( + 'batch-processing-job', + 'allocation', + 300, + 100, + ARRAY['async-profiler'], + NOW() - INTERVAL '15 minutes', + 'completed' + ); + +\echo ' ✓ Profiling requests created' + +-- ============================================================ +-- 4. PROFILING COMMANDS +-- ============================================================ + +\echo '4. Creating profiling commands...' 
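+-- ProfilingCommand rows reference the requests that produced them, so the DO
+-- block below first captures the generated request IDs with SELECT ... INTO
+-- and then reuses those variables in each INSERT.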
+ +-- Get request IDs for reference +DO $$ +DECLARE + req1_id bigint; + req2_id bigint; + req4_id bigint; +BEGIN + -- Get the request IDs + SELECT ID INTO req1_id FROM ProfilingRequest WHERE service_name = 'web-api' LIMIT 1; + SELECT ID INTO req2_id FROM ProfilingRequest WHERE namespace = 'production' LIMIT 1; + SELECT ID INTO req4_id FROM ProfilingRequest WHERE job_name = 'batch-processing-job' LIMIT 1; + + -- Commands for request 1 (completed) + INSERT INTO ProfilingCommand ( + profiling_request_id, + host_id, + target_containers, + target_processes, + command_type, + command_args, + command_json, + sent_at, + completed_at, + status + ) VALUES + ( + req1_id, + 'host-001', + ARRAY['web-api-container-1'], + ARRAY[1001, 1002], + 'start', + '{"mode": "cpu", "duration": 60, "sample_rate": 100}'::jsonb, + '{"command": "pyspy", "args": ["--rate", "100", "--duration", "60"]}', + NOW() - INTERVAL '10 minutes', + NOW() - INTERVAL '9 minutes', + 'completed' + ), + ( + req1_id, + 'host-002', + ARRAY['web-api-container-2'], + ARRAY[2001, 2002], + 'start', + '{"mode": "cpu", "duration": 60, "sample_rate": 100}'::jsonb, + '{"command": "pyspy", "args": ["--rate", "100", "--duration", "60"]}', + NOW() - INTERVAL '10 minutes', + NOW() - INTERVAL '9 minutes', + 'completed' + ); + + -- Commands for request 2 (in progress) + INSERT INTO ProfilingCommand ( + profiling_request_id, + host_id, + target_containers, + command_type, + command_args, + command_json, + sent_at, + status + ) VALUES + ( + req2_id, + 'host-001', + ARRAY['web-api-container-1'], + 'start', + '{"mode": "memory", "duration": 120, "sample_rate": 50}'::jsonb, + '{"command": "pyspy", "args": ["--rate", "50", "--duration", "120", "--memory"]}', + NOW() - INTERVAL '5 minutes', + 'in_progress' + ); + + -- Commands for request 4 (completed) + INSERT INTO ProfilingCommand ( + profiling_request_id, + host_id, + target_containers, + command_type, + command_args, + command_json, + sent_at, + completed_at, + status + ) VALUES + ( + req4_id, + 'host-003', + ARRAY['processor-container-1'], + 'start', + '{"mode": "allocation", "duration": 300, "sample_rate": 100}'::jsonb, + '{"command": "async-profiler", "args": ["--alloc", "--duration", "300"]}', + NOW() - INTERVAL '15 minutes', + NOW() - INTERVAL '10 minutes', + 'completed' + ); + + RAISE NOTICE 'Created commands for requests: %, %, %', req1_id, req2_id, req4_id; +END $$; + +\echo ' ✓ Profiling commands created' + +-- ============================================================ +-- 5. PROFILING EXECUTIONS (AUDIT) +-- ============================================================ + +\echo '5. Creating profiling executions...' 
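+-- Each execution row is the audit record tying one (request, command) pair to
+-- the host it ran on; the IDs are looked up the same way as in step 4.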
+ +DO $$ +DECLARE + req1_id bigint; + cmd1_id bigint; + cmd2_id bigint; +BEGIN + -- Get IDs + SELECT ID INTO req1_id FROM ProfilingRequest WHERE service_name = 'web-api' LIMIT 1; + SELECT ID INTO cmd1_id FROM ProfilingCommand WHERE host_id = 'host-001' AND status = 'completed' LIMIT 1; + SELECT ID INTO cmd2_id FROM ProfilingCommand WHERE host_id = 'host-002' AND status = 'completed' LIMIT 1; + + -- Execution records + INSERT INTO ProfilingExecutions ( + profiling_request_id, + profiling_command_id, + host_name, + target_containers, + target_processes, + command_type, + started_at, + completed_at, + status + ) VALUES + ( + req1_id, + cmd1_id, + 'worker-node-01', + ARRAY['web-api-container-1'], + ARRAY[1001, 1002], + 'start', + NOW() - INTERVAL '10 minutes', + NOW() - INTERVAL '9 minutes', + 'completed' + ), + ( + req1_id, + cmd2_id, + 'worker-node-02', + ARRAY['web-api-container-2'], + ARRAY[2001, 2002], + 'start', + NOW() - INTERVAL '10 minutes', + NOW() - INTERVAL '9 minutes', + 'completed' + ); + + RAISE NOTICE 'Created execution records for request: %', req1_id; +END $$; + +\echo ' ✓ Profiling executions created' + +-- ============================================================ +-- 6. VERIFICATION QUERIES +-- ============================================================ + +\echo '' +\echo '==========================================' +\echo 'VERIFICATION QUERIES' +\echo '==========================================' +\echo '' + +\echo 'Table Row Counts:' +SELECT + 'NamespaceServices' as table_name, COUNT(*) as row_count FROM NamespaceServices +UNION ALL SELECT 'ServiceContainers', COUNT(*) FROM ServiceContainers +UNION ALL SELECT 'JobContainers', COUNT(*) FROM JobContainers +UNION ALL SELECT 'ContainerProcesses', COUNT(*) FROM ContainerProcesses +UNION ALL SELECT 'ContainersHosts', COUNT(*) FROM ContainersHosts +UNION ALL SELECT 'ProcessesHosts', COUNT(*) FROM ProcessesHosts +UNION ALL SELECT 'HostHeartbeats', COUNT(*) FROM HostHeartbeats +UNION ALL SELECT 'ProfilingRequest', COUNT(*) FROM ProfilingRequest +UNION ALL SELECT 'ProfilingCommand', COUNT(*) FROM ProfilingCommand +UNION ALL SELECT 'ProfilingExecutions', COUNT(*) FROM ProfilingExecutions +ORDER BY table_name; + +\echo '' +\echo 'Sample Profiling Request with Commands:' +SELECT + pr.request_id, + pr.service_name, + pr.profiling_mode, + pr.status as request_status, + COUNT(pc.id) as command_count, + COUNT(CASE WHEN pc.status = 'completed' THEN 1 END) as completed_commands +FROM ProfilingRequest pr +LEFT JOIN ProfilingCommand pc ON pc.profiling_request_id = pr.id +GROUP BY pr.id, pr.request_id, pr.service_name, pr.profiling_mode, pr.status +ORDER BY pr.created_at DESC +LIMIT 5; + +\echo '' +\echo 'Active Hosts (Last Seen < 1 minute ago):' +SELECT + host_id, + host_name, + service_name, + namespace, + ARRAY_LENGTH(containers, 1) as container_count, + ARRAY_LENGTH(jobs, 1) as job_count, + EXTRACT(EPOCH FROM (NOW() - timestamp_last_seen))::integer as seconds_since_last_seen +FROM HostHeartbeats +WHERE timestamp_last_seen > NOW() - INTERVAL '1 minute' +ORDER BY timestamp_last_seen DESC; + +\echo '' +\echo 'Hierarchical Mapping: Namespace → Service → Container → Host:' +SELECT + ns.namespace, + ns.service_name, + sc.container_name, + ch.host_name, + ch.host_id +FROM NamespaceServices ns +JOIN ServiceContainers sc ON sc.service_name = ns.service_name +JOIN ContainersHosts ch ON ch.container_name = sc.container_name +ORDER BY ns.namespace, ns.service_name, ch.host_name; + +\echo '' +\echo '==========================================' +\echo 
'TEST DATA CREATION COMPLETE ✓' +\echo '==========================================' +\echo '' +\echo 'Summary:' +\echo ' • Hierarchical mappings: Created' +\echo ' • Host heartbeats: 4 active hosts' +\echo ' • Profiling requests: 4 requests (pending, in_progress, completed)' +\echo ' • Profiling commands: Multiple commands' +\echo ' • Profiling executions: Audit trail created' +\echo '' +\echo 'Ready to test Dynamic Profiling queries!' +\echo '' + + + + diff --git a/scripts/test_dynamic_profiling_models.py b/scripts/test_dynamic_profiling_models.py new file mode 100644 index 00000000..fca358f0 --- /dev/null +++ b/scripts/test_dynamic_profiling_models.py @@ -0,0 +1,378 @@ +#!/usr/bin/env python3 +# +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +""" +Test script for Dynamic Profiling Pydantic models. + +This script demonstrates how to use the dynamic profiling models +and validates that they work correctly. +""" + +import sys +from datetime import datetime, timedelta +from pathlib import Path + +# Add src directory to path +repo_root = Path(__file__).parent.parent +sys.path.insert(0, str(repo_root / "src")) + +try: + from gprofiler.backend.models.dynamic_profiling_models import ( + # Enums + CommandType, + ProfilingMode, + ProfilingStatus, + # Request models + ProfilingRequestCreate, + ProfilingRequestResponse, + ProfilingRequestUpdate, + # Command models + ProfilingCommandCreate, + ProfilingCommandResponse, + # Heartbeat models + HostHeartbeatCreate, + HostHeartbeatResponse, + # Execution models + ProfilingExecutionCreate, + ProfilingExecutionResponse, + # Mapping models + NamespaceServiceMapping, + ServiceContainerMapping, + JobContainerMapping, + ContainerProcessMapping, + ContainerHostMapping, + ProcessHostMapping, + # Query models + ProfilingRequestQuery, + HostHeartbeatQuery, + ) + print("✅ Successfully imported all dynamic profiling models") +except ImportError as e: + print(f"❌ Failed to import models: {e}") + sys.exit(1) + + +def test_enums(): + """Test enum definitions""" + print("\n" + "="*60) + print("Testing Enums") + print("="*60) + + # Test CommandType + assert CommandType.START == "start" + assert CommandType.STOP == "stop" + assert CommandType.RECONFIGURE == "reconfigure" + print("✅ CommandType enum works") + + # Test ProfilingMode + assert ProfilingMode.CPU == "cpu" + assert ProfilingMode.MEMORY == "memory" + assert ProfilingMode.ALLOCATION == "allocation" + assert ProfilingMode.NATIVE == "native" + print("✅ ProfilingMode enum works") + + # Test ProfilingStatus + assert ProfilingStatus.PENDING == "pending" + assert ProfilingStatus.IN_PROGRESS == "in_progress" + assert ProfilingStatus.COMPLETED == "completed" + print("✅ ProfilingStatus enum works") + + +def test_profiling_request_models(): + """Test profiling request models""" + print("\n" + "="*60) + print("Testing Profiling Request Models") + print("="*60) + + # Test ProfilingRequestCreate - service level + request = ProfilingRequestCreate( + service_name="web-api", + 
profiling_mode=ProfilingMode.CPU, + duration_seconds=60, + sample_rate=100, + start_time=datetime.utcnow(), + executors=["pyspy", "perf"] + ) + assert request.service_name == "web-api" + assert request.profiling_mode == ProfilingMode.CPU + assert request.duration_seconds == 60 + print("✅ ProfilingRequestCreate works (service level)") + + # Test namespace level + request_ns = ProfilingRequestCreate( + namespace="production", + profiling_mode=ProfilingMode.MEMORY, + duration_seconds=120, + sample_rate=50, + start_time=datetime.utcnow() + ) + assert request_ns.namespace == "production" + print("✅ ProfilingRequestCreate works (namespace level)") + + # Test validation - at least one target required + try: + invalid_request = ProfilingRequestCreate( + profiling_mode=ProfilingMode.CPU, + duration_seconds=60, + sample_rate=100, + start_time=datetime.utcnow() + ) + print("❌ Validation should have failed for request with no targets") + except ValueError: + print("✅ Validation works: requires at least one target") + + # Test validation - positive duration + try: + invalid_request = ProfilingRequestCreate( + service_name="test", + profiling_mode=ProfilingMode.CPU, + duration_seconds=-10, + sample_rate=100, + start_time=datetime.utcnow() + ) + print("❌ Validation should have failed for negative duration") + except ValueError: + print("✅ Validation works: duration must be positive") + + +def test_heartbeat_models(): + """Test host heartbeat models""" + print("\n" + "="*60) + print("Testing Host Heartbeat Models") + print("="*60) + + heartbeat = HostHeartbeatCreate( + host_id="host-12345", + host_name="worker-node-01", + host_ip="10.0.1.42", + service_name="web-api", + namespace="production", + containers=["web-api-container-1", "web-api-container-2"], + jobs=["data-processing-job"], + workloads={"cpu_usage": 45.2, "memory_usage": 60.5}, + executors=["pyspy", "perf"] + ) + assert heartbeat.host_id == "host-12345" + assert heartbeat.host_name == "worker-node-01" + assert len(heartbeat.containers) == 2 + assert heartbeat.workloads["cpu_usage"] == 45.2 + print("✅ HostHeartbeatCreate works") + + +def test_command_models(): + """Test profiling command models""" + print("\n" + "="*60) + print("Testing Profiling Command Models") + print("="*60) + + command = ProfilingCommandCreate( + profiling_request_id=1, + host_id="host-12345", + target_containers=["web-api-container-1"], + target_processes=[1001, 1002], + command_type=CommandType.START, + command_args={"mode": "cpu", "duration": 60, "sample_rate": 100}, + command_json='{"command": "pyspy", "args": ["--rate", "100"]}' + ) + assert command.host_id == "host-12345" + assert command.command_type == CommandType.START + assert len(command.target_processes) == 2 + print("✅ ProfilingCommandCreate works") + + +def test_execution_models(): + """Test profiling execution models""" + print("\n" + "="*60) + print("Testing Profiling Execution Models") + print("="*60) + + execution = ProfilingExecutionCreate( + profiling_request_id=1, + profiling_command_id=1, + host_name="worker-node-01", + target_containers=["web-api-container-1"], + target_processes=[1001, 1002], + command_type=CommandType.START, + started_at=datetime.utcnow(), + status=ProfilingStatus.IN_PROGRESS + ) + assert execution.host_name == "worker-node-01" + assert execution.status == ProfilingStatus.IN_PROGRESS + print("✅ ProfilingExecutionCreate works") + + +def test_mapping_models(): + """Test hierarchical mapping models""" + print("\n" + "="*60) + print("Testing Mapping Models") + print("="*60) + + # Test 
NamespaceServiceMapping + ns_mapping = NamespaceServiceMapping( + namespace="production", + service_name="web-api" + ) + assert ns_mapping.namespace == "production" + print("✅ NamespaceServiceMapping works") + + # Test ServiceContainerMapping + sc_mapping = ServiceContainerMapping( + service_name="web-api", + container_name="web-api-container-1" + ) + assert sc_mapping.container_name == "web-api-container-1" + print("✅ ServiceContainerMapping works") + + # Test JobContainerMapping + jc_mapping = JobContainerMapping( + job_name="batch-processing-job", + container_name="processor-container-1" + ) + assert jc_mapping.job_name == "batch-processing-job" + print("✅ JobContainerMapping works") + + # Test ContainerProcessMapping + cp_mapping = ContainerProcessMapping( + container_name="web-api-container-1", + process_id=1001, + process_name="python3" + ) + assert cp_mapping.process_id == 1001 + print("✅ ContainerProcessMapping works") + + # Test ContainerHostMapping + ch_mapping = ContainerHostMapping( + container_name="web-api-container-1", + host_id="host-12345", + host_name="worker-node-01" + ) + assert ch_mapping.host_id == "host-12345" + print("✅ ContainerHostMapping works") + + # Test ProcessHostMapping + ph_mapping = ProcessHostMapping( + process_id=1001, + host_id="host-12345", + host_name="worker-node-01" + ) + assert ph_mapping.process_id == 1001 + print("✅ ProcessHostMapping works") + + +def test_query_models(): + """Test query models""" + print("\n" + "="*60) + print("Testing Query Models") + print("="*60) + + # Test ProfilingRequestQuery + request_query = ProfilingRequestQuery( + status=ProfilingStatus.IN_PROGRESS, + service_name="web-api", + limit=50, + offset=0 + ) + assert request_query.status == ProfilingStatus.IN_PROGRESS + assert request_query.limit == 50 + print("✅ ProfilingRequestQuery works") + + # Test HostHeartbeatQuery + heartbeat_query = HostHeartbeatQuery( + service_name="web-api", + namespace="production", + last_seen_after=datetime.utcnow() - timedelta(minutes=5), + limit=100 + ) + assert heartbeat_query.namespace == "production" + assert heartbeat_query.limit == 100 + print("✅ HostHeartbeatQuery works") + + +def test_json_serialization(): + """Test JSON serialization of models""" + print("\n" + "="*60) + print("Testing JSON Serialization") + print("="*60) + + # Create a request + request = ProfilingRequestCreate( + service_name="web-api", + profiling_mode=ProfilingMode.CPU, + duration_seconds=60, + sample_rate=100, + start_time=datetime.utcnow(), + executors=["pyspy"] + ) + + # Serialize to dict + request_dict = request.model_dump() + assert request_dict["service_name"] == "web-api" + assert request_dict["profiling_mode"] == "cpu" + print("✅ Model serialization to dict works") + + # Serialize to JSON + request_json = request.model_dump_json() + assert "web-api" in request_json + assert "cpu" in request_json + print("✅ Model serialization to JSON works") + + +def main(): + """Run all tests""" + print("\n" + "="*60) + print("Dynamic Profiling Models Test Suite") + print("="*60) + + try: + test_enums() + test_profiling_request_models() + test_heartbeat_models() + test_command_models() + test_execution_models() + test_mapping_models() + test_query_models() + test_json_serialization() + + print("\n" + "="*60) + print("✅ ALL TESTS PASSED") + print("="*60) + print("\nDynamic Profiling models are working correctly!") + print("\nYou can now:") + print(" 1. Apply the database schema (dynamic_profiling_schema.sql)") + print(" 2. 
Create API endpoints using these models") + print(" 3. Build request resolution logic") + print(" 4. Integrate with agents for command execution") + print("") + + return 0 + + except AssertionError as e: + print(f"\n❌ TEST FAILED: {e}") + return 1 + except Exception as e: + print(f"\n❌ UNEXPECTED ERROR: {e}") + import traceback + traceback.print_exc() + return 1 + + +if __name__ == "__main__": + sys.exit(main()) + + + + diff --git a/src/gprofiler-dev/gprofiler_dev/config.py b/src/gprofiler-dev/gprofiler_dev/config.py index 459ae5e1..e84eb8f7 100644 --- a/src/gprofiler-dev/gprofiler_dev/config.py +++ b/src/gprofiler-dev/gprofiler_dev/config.py @@ -38,3 +38,6 @@ BUCKET_NAME = os.getenv("BUCKET_NAME", "gprofiler") BASE_DIRECTORY = "products" +# Optional: Custom S3 endpoint for local testing (e.g., LocalStack) or S3-compatible services +# In production, leave unset to use default AWS S3 endpoints +S3_ENDPOINT_URL = os.getenv("S3_ENDPOINT_URL") diff --git a/src/gprofiler-dev/gprofiler_dev/postgres/db_manager.py b/src/gprofiler-dev/gprofiler_dev/postgres/db_manager.py index 07ce0e6c..484b1eb0 100644 --- a/src/gprofiler-dev/gprofiler_dev/postgres/db_manager.py +++ b/src/gprofiler-dev/gprofiler_dev/postgres/db_manager.py @@ -21,7 +21,7 @@ from collections import defaultdict from datetime import datetime, timedelta from secrets import token_urlsafe -from typing import Dict, List, Optional, Set, Tuple, Union +from typing import Any, Dict, List, Optional, Set, Tuple, Union from gprofiler_dev.config import INSTANCE_RUNS_LRU_CACHE_LIMIT, PROFILER_PROCESSES_LRU_CACHE_LIMIT from gprofiler_dev.lru_cache_impl import LRUCache @@ -103,6 +103,9 @@ def __init__(self): self.instance_runs = LRUCache(INSTANCE_RUNS_LRU_CACHE_LIMIT) self.profiler_processes = LRUCache(PROFILER_PROCESSES_LRU_CACHE_LIMIT) + # Cache for host-pid mappings (temporary solution) + self.request_host_pid_mappings: Dict[str, Dict[str, List[int]]] = {} + self.last_seen_updates: Dict[str : time.time] = defaultdict( lambda: time.time() - (LAST_SEEN_UPDATES_INTERVAL_MINUTES + 1) * 60 ) @@ -357,7 +360,7 @@ def get_metadata_id(self, meta: dict) -> Union[None, int]: return None meta_json = json.dumps(meta) - hash_meta = hashlib.new('md5', meta_json.encode("utf-8"), usedforsecurity=False).hexdigest() + hash_meta = hashlib.new("md5", meta_json.encode("utf-8"), usedforsecurity=False).hexdigest() key = (meta_json, hash_meta) return self.db.add_or_fetch( SQLQueries.SELECT_INSTANCE_CLOUD_METADATA, key, SQLQueries.INSERT_INSTANCE_CLOUD_METADATA @@ -585,3 +588,1197 @@ def get_services_overview_summary(self) -> List[Dict]: return self.db.execute( AggregationSQLQueries.SERVICES_SUMMARY, values, one_value=False, return_dict=True, fetch_all=True ) + + # Profiling Request Management Methods (Simplified) + + def save_profiling_request( + self, + request_id: str, + request_type: str, + service_name: str, + continuous: Optional[bool] = False, + duration: Optional[int] = 60, + frequency: Optional[int] = 11, + profiling_mode: Optional[str] = "cpu", + target_hostnames: Optional[List[str]] = None, + pids: Optional[List[int]] = None, + host_pid_mapping: Optional[Dict[str, List[int]]] = None, + additional_args: Optional[Dict] = None, + ) -> bool: + """Save a profiling request with support for host-to-PID mapping""" + # Store additional_args WITHOUT host_pid_mapping (keep that separate) + clean_additional_args = additional_args.copy() if additional_args else {} + + # Store host_pid_mapping separately in a dedicated field if we add one, + # for now, we'll handle it during 
command creation to avoid polluting additional_args + + query = """ + INSERT INTO ProfilingRequests ( + request_id, request_type, service_name, continuous, duration, frequency, profiling_mode, + target_hostnames, pids, additional_args + ) VALUES ( + %(request_id)s::uuid, %(request_type)s, %(service_name)s, %(continuous)s, %(duration)s, %(frequency)s, + %(profiling_mode)s::ProfilingMode, %(target_hostnames)s, %(pids)s, %(additional_args)s + ) + """ + + values = { + "request_id": request_id, + "request_type": request_type, + "service_name": service_name, + "continuous": continuous, + "duration": duration, + "frequency": frequency, + "profiling_mode": profiling_mode, + "target_hostnames": target_hostnames, + "pids": pids, + "additional_args": json.dumps(clean_additional_args) if clean_additional_args else None, + } + + self.db.execute(query, values, has_value=False) + + # Store host_pid_mapping in a separate table or handle it during command creation + if host_pid_mapping: + self._store_host_pid_mapping(request_id, host_pid_mapping) + + return True + + def _store_host_pid_mapping(self, request_id: str, host_pid_mapping: Dict[str, List[int]]) -> None: + """Store host-to-PID mapping separately from additional_args""" + # Store in memory cache for this session + self.request_host_pid_mappings[request_id] = host_pid_mapping + + def _get_host_pid_mapping(self, request_id: str) -> Dict[str, List[int]]: + """Get host-to-PID mapping for a request""" + return self.request_host_pid_mappings.get(request_id, {}) + + def get_pending_profiling_request( + self, hostname: str, service_name: str, exclude_command_id: Optional[str] = None + ) -> Optional[Dict]: + """Get pending profiling request for a specific host/service using pure SQL""" + query = """ + SELECT + pr.request_id, + pr.service_name, + pr.continuous, + pr.duration, + pr.frequency, + pr.profiling_mode, + pr.target_hostnames, + pr.pids, + pr.additional_args, + pr.status, + pr.created_at, + pr.estimated_completion_time + FROM ProfilingRequests pr + WHERE pr.service_name = %(service_name)s + AND pr.status = 'pending' + AND ( + pr.target_hostnames IS NULL + OR %(hostname)s = ANY(pr.target_hostnames) + ) + """ + + values = {"hostname": hostname, "service_name": service_name} + + if exclude_command_id: + query += " AND pr.request_id != %(exclude_command_id)s::uuid" + values["exclude_command_id"] = exclude_command_id + + query += " ORDER BY pr.created_at ASC LIMIT 1" + + result = self.db.execute(query, values, one_value=True, return_dict=True) + return result if result else None + + def mark_profiling_request_assigned(self, request_id: str, command_id: str, hostname: str) -> bool: + """ + Create execution record for the command assignment. + We don't need to update ProfilingRequests status since: + 1. ProfilingCommands already tracks the actual commands via request_ids array + 2. ProfilingExecutions tracks the actual execution status + 3. 
We can trace back from command to requests via request_ids + """ + + # Just create the execution record - this is what really matters + exec_query = """ + INSERT INTO ProfilingExecutions ( + command_id, hostname, profiling_request_id, status, started_at + ) VALUES ( + %(command_id)s::uuid, %(hostname)s, %(request_id)s::uuid, 'assigned', CURRENT_TIMESTAMP + ) + ON CONFLICT (command_id, hostname) DO UPDATE SET + profiling_request_id = %(request_id)s::uuid, + status = 'assigned', + started_at = CURRENT_TIMESTAMP + """ + exec_values = {"command_id": command_id, "hostname": hostname, "request_id": request_id} + + try: + self.db.execute(exec_query, exec_values, has_value=False) + return True + except Exception as e: + self.db.logger.error(f"Error creating profiling execution record: {e}") + return False + + def update_profiling_request_status( + self, request_id: str, status: str, completed_at: Optional[datetime] = None, error_message: Optional[str] = None + ) -> bool: + """ + Update the status of a profiling request (DEPRECATED - kept for compatibility) + + NOTE: This method is largely unnecessary since: + - ProfilingCommands tracks the actual command status + - ProfilingExecutions tracks execution status + - Request status can be inferred from command/execution status + + Consider using command/execution status instead. + """ + # For now, just return True to avoid breaking existing code + # In the future, this method should be removed + return True + + def auto_update_profiling_request_status_by_request_ids( + self, + request_ids: List[str], + ) -> bool: + """ + Automatically update the status of profiling requests based on the status of their profiling commands. + This method checks the status of all commands associated with each request ID, + and updates the request status accordingly. + The resulting status is determined by the highest "priority" / "criticality" status from the associated commands. 
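+        Example (sketch): a request whose commands are {completed, completed}
+        resolves to 'completed'; {completed, sent} resolves to 'sent'; any
+        'failed' command marks the whole request 'failed' (highest value wins).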
+        """
+        if not request_ids:
+            return True
+
+        exec_query = """
+            WITH
+            status_priority AS (
+                SELECT
+                    status,
+                    status_value
+                FROM (
+                    VALUES
+                        ('completed', 0),
+                        ('pending', 1),
+                        ('sent', 2),
+                        ('failed', 3)
+                ) AS t(status, status_value)
+            ),
+            profiling_request_with_command_status AS (
+                SELECT
+                    pr.request_id,
+                    pc.status::text AS command_status
+                FROM
+                    ProfilingRequests pr
+                    LEFT JOIN ProfilingCommands pc ON pr.request_id = ANY(pc.request_ids)
+                WHERE
+                    pr.request_id = ANY(%(request_ids)s::uuid[])
+            ),
+            max_status AS (
+                SELECT
+                    prcs.request_id,
+                    MAX(sp.status_value) AS max_status_value
+                FROM
+                    profiling_request_with_command_status prcs
+                    JOIN status_priority sp ON prcs.command_status = sp.status
+                GROUP BY
+                    prcs.request_id
+            ),
+            final_status AS (
+                SELECT
+                    ms.request_id,
+                    sp.status
+                FROM
+                    max_status ms
+                    JOIN status_priority sp ON ms.max_status_value = sp.status_value
+            )
+            -- Correlate each targeted request with its resolved status so that
+            -- only the rows present in final_status are updated.
+            UPDATE ProfilingRequests pr
+            SET status = fs.status::profilingrequeststatus,
+                completed_at = CASE
+                    WHEN fs.status IN ('completed', 'failed') THEN CURRENT_TIMESTAMP
+                    ELSE pr.completed_at
+                END
+            FROM final_status fs
+            WHERE pr.request_id = fs.request_id
+        """
+
+        exec_values = {"request_ids": request_ids}
+
+        self.db.execute(exec_query, exec_values, has_value=False)
+        return True
+
+    def update_profiling_execution_status(
+        self,
+        command_id: str,
+        hostname: str,
+        status: str,
+        completed_at: Optional[datetime] = None,
+        error_message: Optional[str] = None,
+        execution_time: Optional[int] = None,
+        results_path: Optional[str] = None,
+    ) -> bool:
+        """Update the status of a specific profiling execution by command_id and hostname"""
+        exec_query = """
+            UPDATE ProfilingExecutions
+            SET status = %(status)s::ProfilingRequestStatus,
+                completed_at = %(completed_at)s,
+                error_message = %(error_message)s,
+                execution_time = %(execution_time)s,
+                results_path = %(results_path)s
+            WHERE command_id = %(command_id)s::uuid
+            AND hostname = %(hostname)s
+        """
+
+        exec_values = {
+            "command_id": command_id,
+            "hostname": hostname,
+            "status": status,
+            "completed_at": completed_at,
+            "error_message": error_message,
+            "execution_time": execution_time,
+            "results_path": results_path,
+        }
+
+        self.db.execute(exec_query, exec_values, has_value=False)
+        return True
+
+    def upsert_host_heartbeat(
+        self,
+        hostname: str,
+        ip_address: str,
+        service_name: str,
+        last_command_id: Optional[str] = None,
+        status: str = "active",
+        heartbeat_timestamp: Optional[datetime] = None,
+    ) -> bool:
+        """Update or insert host heartbeat information using pure SQL"""
+        if heartbeat_timestamp is None:
+            heartbeat_timestamp = datetime.now()
+
+        query = """
+            INSERT INTO HostHeartbeats (
+                hostname, ip_address, service_name, last_command_id,
+                status, heartbeat_timestamp, created_at, updated_at
+            ) VALUES (
+                %(hostname)s, %(ip_address)s::inet, %(service_name)s,
+                %(last_command_id)s::uuid, %(status)s::HostStatus,
+                %(heartbeat_timestamp)s, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP
+            )
+            ON CONFLICT (hostname, service_name)
+            DO UPDATE SET
+                ip_address = EXCLUDED.ip_address,
+                last_command_id = EXCLUDED.last_command_id,
+                status = EXCLUDED.status,
+                heartbeat_timestamp = EXCLUDED.heartbeat_timestamp,
+                updated_at = CURRENT_TIMESTAMP
+        """
+
+        values = {
+            "hostname": hostname,
+            "ip_address": ip_address,
+            "service_name": service_name,
+            "last_command_id": last_command_id,
+            "status": status,
+            "heartbeat_timestamp": heartbeat_timestamp,
+        }
+
self.db.execute(query, values, has_value=False) + return True + + def get_host_heartbeat(self, hostname: str) -> Optional[Dict]: + """Get the latest heartbeat information for a host""" + query = """ + SELECT + hostname, ip_address, service_name, last_command_id, + status, heartbeat_timestamp, created_at, updated_at + FROM HostHeartbeats + WHERE hostname = %(hostname)s + """ + + values = {"hostname": hostname} + result = self.db.execute(query, values, one_value=True, return_dict=True) + return result if result else None + + def get_active_hosts(self, service_name: Optional[str] = None) -> List[Dict]: + """Get list of active hosts, optionally filtered by service""" + query = """ + SELECT + hostname, ip_address, service_name, last_command_id, + status, heartbeat_timestamp + FROM HostHeartbeats + WHERE status = 'active' + AND heartbeat_timestamp > NOW() - INTERVAL '10 minutes' + """ + + values = {} + if service_name: + query += " AND service_name = %(service_name)s" + values["service_name"] = service_name + + query += " ORDER BY heartbeat_timestamp DESC" + + return self.db.execute(query, values, one_value=False, return_dict=True, fetch_all=True) + + def get_all_host_heartbeats(self, limit: Optional[int] = None, offset: Optional[int] = None) -> List[Dict]: + """Get all host heartbeat records with optional pagination""" + query = """ + SELECT + ID, hostname, ip_address, service_name, last_command_id, + status, heartbeat_timestamp, created_at, updated_at + FROM HostHeartbeats + ORDER BY heartbeat_timestamp DESC + """ + + values = {} + if limit is not None: + query += " LIMIT %(limit)s" + values["limit"] = limit + if offset is not None: + query += " OFFSET %(offset)s" + values["offset"] = offset + + return self.db.execute(query, values, one_value=False, return_dict=True, fetch_all=True) + + def get_host_heartbeats_by_service(self, service_name: str, limit: Optional[int] = None, exact_match: bool = False) -> List[Dict]: + """Get all host heartbeat records for a specific service with optional partial matching""" + if exact_match: + # Use exact match for backward compatibility + where_clause = "WHERE service_name = %(service_name)s" + service_param = service_name + else: + # Use partial, case-insensitive matching + where_clause = "WHERE service_name ILIKE %(service_name)s" + service_param = f"%{service_name}%" + + query = f""" + SELECT + ID, hostname, ip_address, service_name, last_command_id, + status, heartbeat_timestamp, created_at, updated_at + FROM HostHeartbeats + {where_clause} + ORDER BY heartbeat_timestamp DESC + """ + + values: dict[str, Any] = {"service_name": service_param} + if limit is not None: + query += " LIMIT %(limit)s" + values["limit"] = limit + + return self.db.execute(query, values, one_value=False, return_dict=True, fetch_all=True) + + def get_host_heartbeats_by_status(self, status: str, limit: Optional[int] = None) -> List[Dict]: + """Get all host heartbeat records by status""" + query = """ + SELECT + ID, hostname, ip_address, service_name, last_command_id, + status, heartbeat_timestamp, created_at, updated_at + FROM HostHeartbeats + WHERE status = %(status)s + ORDER BY heartbeat_timestamp DESC + """ + + values: dict[str, Any] = {"status": status} + if limit is not None: + query += " LIMIT %(limit)s" + values["limit"] = limit + + return self.db.execute(query, values, one_value=False, return_dict=True, fetch_all=True) + + def get_profiler_request_status(self, request_id: str) -> Optional[Dict]: + """Get the current status of a profiling request by looking at associated 
commands and executions""" + query = """ + SELECT + pr.request_id, pr.service_name, pr.created_at, pr.estimated_completion_time, + pc.command_id, pc.hostname, pc.status as command_status, pc.created_at as command_created_at, + pe.status as execution_status, pe.started_at, pe.completed_at, pe.error_message + FROM ProfilingRequests pr + LEFT JOIN ProfilingCommands pc ON pr.request_id = ANY(pc.request_ids) + LEFT JOIN ProfilingExecutions pe ON pc.command_id = pe.command_id + WHERE pr.request_id = %(request_id)s::uuid + ORDER BY pc.created_at DESC, pe.started_at DESC + LIMIT 1 + """ + + values = {"request_id": request_id} + result = self.db.execute(query, values, one_value=True, return_dict=True) + + if result: + # Infer overall request status from command/execution status + if result.get("execution_status"): + result["inferred_status"] = result["execution_status"] + elif result.get("command_status"): + result["inferred_status"] = result["command_status"] + else: + result["inferred_status"] = "pending" + + return result if result else None + + def create_or_update_profiling_command( + self, + command_id: str, + hostname: Optional[str], + service_name: str, + command_type: str, + new_request_id: str, + stop_level: Optional[str] = None, + ) -> bool: + """Create or update a profiling command for a host with command_type support""" + if hostname is None: + active_hosts = self.get_active_hosts(service_name) + success = True + for host in active_hosts: + result = self.create_or_update_profiling_command( + command_id, host["hostname"], service_name, command_type, new_request_id, stop_level + ) + success = success and result + return success + + # Get the request details to build combined_config + request_query = """ + SELECT continuous, duration, frequency, profiling_mode, pids, additional_args + FROM ProfilingRequests + WHERE request_id = %(request_id)s::uuid + """ + request_result = self.db.execute( + request_query, {"request_id": new_request_id}, one_value=True, return_dict=True + ) + + if not request_result: + return False + + # Get host-specific PIDs from our dedicated storage + host_pid_mapping = self._get_host_pid_mapping(new_request_id) + host_specific_pids = host_pid_mapping.get(hostname, []) if host_pid_mapping else [] + + # Build base configuration from new request + new_config = { + "command_type": command_type, + "continuous": request_result["continuous"], + "duration": request_result["duration"], + "frequency": request_result["frequency"], + "profiling_mode": request_result["profiling_mode"], + } + + # Merge additional_args directly into new_config + if request_result["additional_args"]: + additional_args = request_result["additional_args"] + if isinstance(additional_args, str): + try: + additional_args = json.loads(additional_args) + except json.JSONDecodeError: + additional_args = {} + if isinstance(additional_args, dict): + new_config.update(additional_args) + + # Add stop_level if provided + if stop_level: + new_config["stop_level"] = stop_level + + # Use host-specific PIDs if available, otherwise fall back to global PIDs + if host_specific_pids: + new_config["pids"] = host_specific_pids + elif request_result["pids"]: + new_config["pids"] = request_result["pids"] + + # Use proper upsert with ON CONFLICT to handle race conditions + # First, check if there's an existing command to merge with + existing_command_query = """ + SELECT command_id, combined_config, request_ids + FROM ProfilingCommands + WHERE hostname = %(hostname)s + AND service_name = %(service_name)s + AND status = 
ANY (ARRAY['pending', 'sent']::CommandStatus[])
+        """
+
+        existing_command = self.db.execute(
+            existing_command_query,
+            {"hostname": hostname, "service_name": service_name},
+            one_value=True,
+            return_dict=True,
+        )
+
+        # Only merge when there is an existing command still in a mergeable
+        # state; the query above already restricts to 'pending'/'sent'.
+        if existing_command:
+            # Merge with existing command
+            existing_config = existing_command["combined_config"]
+            if isinstance(existing_config, str):
+                try:
+                    existing_config = json.loads(existing_config)
+                except json.JSONDecodeError:
+                    existing_config = {}
+            elif existing_config is None:
+                existing_config = {}
+
+            # Merge configurations
+            merged_config = self._merge_profiling_configs(existing_config, new_config)
+            final_config = merged_config
+            final_request_ids = existing_command["request_ids"] + [new_request_id]
+        else:
+            # No existing command, use new config as-is
+            final_config = new_config
+            final_request_ids = [new_request_id]
+
+        # Use INSERT ... ON CONFLICT for atomic upsert
+        upsert_query = """
+            INSERT INTO ProfilingCommands (
+                command_id, hostname, service_name, command_type, request_ids,
+                combined_config, status, created_at
+            ) VALUES (
+                %(command_id)s::uuid, %(hostname)s, %(service_name)s, %(command_type)s,
+                %(final_request_ids)s::uuid[], %(final_config)s::jsonb,
+                'pending', CURRENT_TIMESTAMP
+            )
+            ON CONFLICT (hostname, service_name)
+            DO UPDATE SET
+                command_id = %(command_id)s::uuid,
+                command_type = %(command_type)s,
+                request_ids = %(final_request_ids)s::uuid[],
+                combined_config = %(final_config)s::jsonb,
+                status = 'pending',
+                created_at = CURRENT_TIMESTAMP
+        """
+
+        values = {
+            "command_id": command_id,
+            "hostname": hostname,
+            "service_name": service_name,
+            "command_type": command_type,
+            "final_request_ids": final_request_ids,
+            "final_config": json.dumps(final_config),
+        }
+
+        self.db.execute(upsert_query, values, has_value=False)
+        return True
+
+    def _merge_profiling_configs(self, existing_config: Dict, new_config: Dict) -> Dict:
+        """Merge two profiling configurations, combining parameters appropriately"""
+        # Handle case where existing_config might be None or empty
+        if not existing_config:
+            existing_config = {}
+
+        merged = existing_config.copy()
+
+        # Always use the latest command_type
+        merged["command_type"] = new_config["command_type"]
+
+        # For continuous, always make it true if either is true
+        merged["continuous"] = existing_config.get("continuous", False) or new_config.get("continuous", False)
+
+        # For duration, use the maximum (longer duration wins)
+        if new_config.get("duration") and existing_config.get("duration"):
+            merged["duration"] = max(new_config["duration"], existing_config["duration"])
+        elif new_config.get("duration"):
+            merged["duration"] = new_config["duration"]
+
+        # For frequency, use the maximum (higher frequency wins)
+        if new_config.get("frequency") and existing_config.get("frequency"):
+            merged["frequency"] = max(new_config["frequency"], existing_config["frequency"])
+        elif new_config.get("frequency"):
+            merged["frequency"] = new_config["frequency"]
+
+        # For profiling mode, use the latest one
+        if new_config.get("profiling_mode"):
+            merged["profiling_mode"] = new_config["profiling_mode"]
+
+        # For PIDs, combine them (remove duplicates)
+        existing_pids = set(existing_config.get("pids", []))
+        new_pids = set(new_config.get("pids", []))
+        combined_pids = list(existing_pids | new_pids)
+        if combined_pids:
+            merged["pids"] = combined_pids
+
+        # For additional_args, merge the
dictionaries (they should be clean now) + if new_config.get("additional_args"): + if existing_config.get("additional_args"): + merged["additional_args"] = {**existing_config["additional_args"], **new_config["additional_args"]} + else: + merged["additional_args"] = new_config["additional_args"] + + # For stop_level, use the latest one + if new_config.get("stop_level"): + merged["stop_level"] = new_config["stop_level"] + + return merged + + def create_stop_command_for_host( + self, command_id: str, hostname: str, service_name: str, request_id: str, stop_level: str = "host" + ) -> bool: + """Create a stop command for an entire host""" + query = """ + INSERT INTO ProfilingCommands ( + command_id, hostname, service_name, command_type, request_ids, + combined_config, status, created_at + ) VALUES ( + %(command_id)s::uuid, %(hostname)s, %(service_name)s, 'stop', + ARRAY[%(request_id)s::uuid], + %(combined_config)s::jsonb, + 'pending', CURRENT_TIMESTAMP + ) + ON CONFLICT (hostname, service_name) + DO UPDATE SET + command_id = %(command_id)s::uuid, + command_type = 'stop', + request_ids = array_append(ProfilingCommands.request_ids, %(request_id)s::uuid), + combined_config = %(combined_config)s::jsonb, + status = 'pending', + created_at = CURRENT_TIMESTAMP + """ + + combined_config = {"stop_level": stop_level} + + values = { + "command_id": command_id, + "hostname": hostname, + "service_name": service_name, + "request_id": request_id, + "combined_config": json.dumps(combined_config), + } + + self.db.execute(query, values, has_value=False) + return True + + def handle_process_level_stop( + self, + command_id: str, + hostname: str, + service_name: str, + pids_to_stop: Optional[List[int]], + request_id: str, + stop_level: str = "process", + ) -> bool: + # Get current command for this host to check existing PIDs + current_command = self.get_current_profiling_command(hostname, service_name) + + if current_command and current_command.get("command_type") == "start": + current_pids = current_command.get("combined_config", {}).get("pids", []) + + if current_pids: + # Remove specified PIDs from current command + remaining_pids = [pid for pid in current_pids if pid not in pids_to_stop] if pids_to_stop else [] + + if len(remaining_pids) < 1: + # Convert to host-level stop if no PIDs remain + return self.create_stop_command_for_host(command_id, hostname, service_name, request_id) + else: + # Update command with remaining PIDs + query = """ + UPDATE ProfilingCommands + SET command_id = %(command_id)s::uuid, + combined_config = jsonb_set( + jsonb_set(combined_config, '{pids}', %(remaining_pids)s::jsonb), + '{stop_level}', %(stop_level)s::jsonb + ), + request_ids = array_append(request_ids, %(request_id)s::uuid), + status = 'pending', + created_at = CURRENT_TIMESTAMP + WHERE hostname = %(hostname)s AND service_name = %(service_name)s + """ + + values = { + "command_id": command_id, + "hostname": hostname, + "service_name": service_name, + "request_id": request_id, + "remaining_pids": json.dumps(remaining_pids), + "stop_level": json.dumps(stop_level), + } + + self.db.execute(query, values, has_value=False) + return True + + # Default: create stop command with specific PIDs + query = """ + INSERT INTO ProfilingCommands ( + command_id, hostname, service_name, command_type, request_ids, + combined_config, status, created_at + ) VALUES ( + %(command_id)s::uuid, %(hostname)s, %(service_name)s, 'stop', + ARRAY[%(request_id)s::uuid], + %(combined_config)s::jsonb, + 'pending', CURRENT_TIMESTAMP + ) + ON CONFLICT (hostname, 
service_name) + DO UPDATE SET + command_id = %(command_id)s::uuid, + command_type = 'stop', + request_ids = array_append(ProfilingCommands.request_ids, %(request_id)s::uuid), + combined_config = %(combined_config)s::jsonb, + status = 'pending', + created_at = CURRENT_TIMESTAMP + """ + + combined_config = {"stop_level": stop_level, "pids": pids_to_stop} + + values = { + "command_id": command_id, + "hostname": hostname, + "service_name": service_name, + "request_id": request_id, + "combined_config": json.dumps(combined_config), + } + + self.db.execute(query, values, has_value=False) + return True + + def get_current_profiling_command(self, hostname: str, service_name: str) -> Optional[Dict]: + """Get the current profiling command for a host/service""" + query = """ + SELECT command_id, command_type, combined_config, request_ids, status, created_at + FROM ProfilingCommands + WHERE hostname = %(hostname)s AND service_name = %(service_name)s + ORDER BY created_at DESC + LIMIT 1 + """ + + values = {"hostname": hostname, "service_name": service_name} + + result = self.db.execute(query, values, one_value=True, return_dict=True) + return result if result else None + + def get_profiling_host_status_optimized( + self, + service_names: Optional[List[str]] = None, + hostnames: Optional[List[str]] = None, + ip_addresses: Optional[List[str]] = None, + profiling_statuses: Optional[List[str]] = None, + command_types: Optional[List[str]] = None, + pids: Optional[List[int]] = None, + exact_match: bool = False + ) -> List[Dict]: + """ + Get profiling host status with all filters applied in a single optimized query. + Uses JOIN to combine HostHeartbeats and ProfilingCommands data efficiently. + + This method solves the N+1 query problem by using a single SQL query with: + - Common Table Expressions (CTEs) for readability + - LEFT JOIN to combine HostHeartbeats and ProfilingCommands + - Window functions (ROW_NUMBER()) to get latest command per host + - Database-side filtering for all parameters + + Args: + service_names: List of service names to filter by + hostnames: List of hostnames to filter by (partial match) + ip_addresses: List of IP addresses to filter by (partial match) + profiling_statuses: List of profiling statuses to filter by + command_types: List of command types to filter by + pids: List of PIDs to filter by + exact_match: Whether to use exact match for service names + + Returns: + List of dictionaries with host status information + """ + # Build the query with CTEs for better readability and performance + query = """ + WITH latest_commands AS ( + SELECT + pc.hostname, + pc.service_name, + pc.command_type, + pc.status, + pc.combined_config, + pc.created_at, + ROW_NUMBER() OVER (PARTITION BY pc.hostname, pc.service_name ORDER BY pc.created_at DESC) as rn + FROM ProfilingCommands pc + ), + current_commands AS ( + SELECT + hostname, + service_name, + command_type, + status, + combined_config + FROM latest_commands + WHERE rn = 1 + ) + SELECT + h.id, + h.hostname, + h.ip_address, + h.service_name, + h.heartbeat_timestamp, + c.command_type, + c.status, + c.combined_config + FROM HostHeartbeats h + LEFT JOIN current_commands c + ON h.hostname = c.hostname AND h.service_name = c.service_name + WHERE 1=1 + """ + + values: Dict[str, Any] = {} + + # Apply service_name filter + if service_names: + if exact_match: + query += " AND h.service_name = ANY(%(service_names)s)" + values["service_names"] = service_names + else: + # Use ILIKE with OR for partial matching across multiple service names + 
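+                # Illustration (not executed): for service_names=["web", "api"] the
+                # loop below should produce a SQL fragment like
+                #     AND (h.service_name ILIKE %(service_name_0)s OR h.service_name ILIKE %(service_name_1)s)
+                # with values {"service_name_0": "%web%", "service_name_1": "%api%"}.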
service_conditions = [] + for idx, service_name in enumerate(service_names): + param_name = f"service_name_{idx}" + service_conditions.append(f"h.service_name ILIKE %({param_name})s") + values[param_name] = f"%{service_name}%" + query += f" AND ({' OR '.join(service_conditions)})" + + # Apply hostname filter (partial match with ILIKE) + if hostnames: + hostname_conditions = [] + for idx, hostname in enumerate(hostnames): + param_name = f"hostname_{idx}" + hostname_conditions.append(f"h.hostname ILIKE %({param_name})s") + values[param_name] = f"%{hostname}%" + query += f" AND ({' OR '.join(hostname_conditions)})" + + # Apply IP address filter (partial match) + # Note: ip_address is inet type, so we cast to text for LIKE matching + if ip_addresses: + ip_conditions = [] + for idx, ip_addr in enumerate(ip_addresses): + param_name = f"ip_address_{idx}" + ip_conditions.append(f"h.ip_address::text LIKE %({param_name})s") + values[param_name] = f"%{ip_addr}%" + query += f" AND ({' OR '.join(ip_conditions)})" + + # Apply profiling status filter + if profiling_statuses: + # Convert to lowercase for case-insensitive comparison + # Handle 'stopped' as NULL status (no command exists) + status_conditions = [] + has_stopped = False + for idx, status in enumerate(profiling_statuses): + if status.lower() == 'stopped': + has_stopped = True + else: + param_name = f"status_{idx}" + status_conditions.append(f"LOWER(c.status::text) = LOWER(%({param_name})s)") + values[param_name] = status + + if has_stopped: + status_conditions.append("c.status IS NULL") + + if status_conditions: + query += f" AND ({' OR '.join(status_conditions)})" + + # Apply command type filter + if command_types: + command_type_conditions = [] + has_na = False + for idx, cmd_type in enumerate(command_types): + if cmd_type.lower() == 'n/a': + has_na = True + else: + param_name = f"command_type_{idx}" + command_type_conditions.append(f"LOWER(c.command_type) = LOWER(%({param_name})s)") + values[param_name] = cmd_type + + if has_na: + command_type_conditions.append("c.command_type IS NULL") + + if command_type_conditions: + query += f" AND ({' OR '.join(command_type_conditions)})" + + # For PIDs filter, we need to check inside the JSONB combined_config + # This is more complex and might still need some post-processing + if pids: + # Check if any of the requested PIDs exist in the combined_config->pids array + query += " AND c.combined_config IS NOT NULL" + query += " AND c.combined_config ? 
'pids'" + + query += " ORDER BY h.heartbeat_timestamp DESC" + + results = self.db.execute(query, values, one_value=False, return_dict=True, fetch_all=True) + + # Post-process for PID filtering if needed (this is still more efficient than N queries) + if pids and results: + filtered_results = [] + for row in results: + combined_config = row.get("combined_config") + if combined_config: + if isinstance(combined_config, str): + try: + combined_config = json.loads(combined_config) + except json.JSONDecodeError: + combined_config = {} + + if isinstance(combined_config, dict): + pids_in_config = combined_config.get("pids", []) + if isinstance(pids_in_config, list): + command_pids = [int(pid) for pid in pids_in_config if str(pid).isdigit()] + # Check if any filter PID matches command PIDs + if any(filter_pid in command_pids for filter_pid in pids): + filtered_results.append(row) + else: + # No PIDs in config, skip + continue + else: + # Invalid config, skip + continue + else: + # No config, skip when filtering by PIDs + continue + return filtered_results + + return results + + def get_pending_profiling_command( + self, hostname: str, service_name: str, exclude_command_id: Optional[str] = None + ) -> Optional[Dict]: + """Get pending profiling command for a specific host/service""" + query = """ + SELECT command_id, command_type, combined_config, request_ids, status, created_at + FROM ProfilingCommands + WHERE hostname = %(hostname)s + AND service_name = %(service_name)s + AND status = 'pending' + """ + + values = {"hostname": hostname, "service_name": service_name} + + if exclude_command_id: + query += " AND command_id != %(exclude_command_id)s::uuid" + values["exclude_command_id"] = exclude_command_id + + query += " ORDER BY created_at DESC LIMIT 1" + + result = self.db.execute(query, values, one_value=True, return_dict=True) + + # Parse the combined_config JSON if it exists + if result and result.get("combined_config"): + try: + if isinstance(result["combined_config"], str): + result["combined_config"] = json.loads(result["combined_config"]) + except json.JSONDecodeError: + self.db.logger.warning(f"Failed to parse combined_config for command {result.get('command_id')}") + result["combined_config"] = {} + + # Parse the request_ids array if it exists + if result and result.get("request_ids"): + try: + if isinstance(result["request_ids"], str): + # PostgreSQL array format: {uuid1,uuid2,uuid3} + # Remove braces and split by comma + request_ids_str = result["request_ids"].strip("{}") + if request_ids_str: + result["request_ids"] = [uuid.strip() for uuid in request_ids_str.split(",")] + else: + result["request_ids"] = [] + except Exception: + self.db.logger.warning(f"Failed to parse request_ids for command {result.get('command_id')}") + result["request_ids"] = [] + + return result if result else None + + def mark_profiling_command_sent(self, command_id: str, hostname: str) -> bool: + """Mark a profiling command as sent to a host""" + query = """ + UPDATE ProfilingCommands + SET status = 'sent', sent_at = CURRENT_TIMESTAMP + WHERE command_id = %(command_id)s::uuid AND hostname = %(hostname)s + """ + + values = {"command_id": command_id, "hostname": hostname} + + self.db.execute(query, values, has_value=False) + return True + + def update_profiling_command_status( + self, + command_id: str, + hostname: str, + status: str, + execution_time: Optional[int] = None, + error_message: Optional[str] = None, + results_path: Optional[str] = None, + ) -> bool: + """Update the status of a profiling command""" + 
query = """ + UPDATE ProfilingCommands + SET status = %(status)s, + completed_at = CASE WHEN %(status)s IN ('completed', 'failed') THEN CURRENT_TIMESTAMP ELSE completed_at END, + execution_time = %(execution_time)s, + error_message = %(error_message)s, + results_path = %(results_path)s + WHERE command_id = %(command_id)s::uuid AND hostname = %(hostname)s + """ + + values = { + "command_id": command_id, + "hostname": hostname, + "status": status, + "execution_time": execution_time, + "error_message": error_message, + "results_path": results_path, + } + + self.db.execute(query, values, has_value=False) + return True + + def get_profiling_command_by_hostname( + self, + hostname: str, + ) -> Optional[Dict]: + """Get the latest profiling command for a specific hostname""" + query = """ + SELECT command_id, hostname, service_name, command_type, combined_config, + request_ids, status, created_at, sent_at, completed_at + FROM ProfilingCommands + WHERE hostname = %(hostname)s + ORDER BY created_at DESC + LIMIT 1 + """ + + values = {"hostname": hostname} + result = self.db.execute(query, values, one_value=True, return_dict=True) + + # Parse the combined_config JSON if it exists + if result and result.get("combined_config"): + try: + if isinstance(result["combined_config"], str): + result["combined_config"] = json.loads(result["combined_config"]) + except json.JSONDecodeError: + self.db.logger.warning(f"Failed to parse combined_config for command {result.get('command_id')}") + result["combined_config"] = {} + + if result and result.get("request_ids"): + try: + if isinstance(result["request_ids"], str): + # PostgreSQL array format: {uuid1,uuid2,uuid3} + # Remove braces and split by comma + request_ids_str = result["request_ids"].strip("{}") + if request_ids_str: + result["request_ids"] = [uuid.strip() for uuid in request_ids_str.split(",")] + else: + result["request_ids"] = [] + except Exception: + self.db.logger.warning(f"Failed to parse request_ids for command {result.get('command_id')}") + result["request_ids"] = [] + + return result if result else None + + def validate_command_completion_eligibility(self, command_id: str, hostname: str) -> tuple[bool, str]: + """ + Validate if a command can be completed for a specific hostname. + The logic joins ProfilingCommands and ProfilingExecutions to guarantee the command id existed at some point. + Returns (is_valid: bool, error_message: str). 
+ """ + query = """ + SELECT + COALESCE(pc.command_id, pe.command_id) as command_id, + pe.status as execution_status + FROM + ProfilingCommands pc + FULL OUTER JOIN ProfilingExecutions pe ON pc.command_id = pe.command_id + AND pc.hostname = pe.hostname + WHERE + COALESCE(pc.command_id, pe.command_id) = %(command_id)s::uuid + AND pe.hostname = %(hostname)s + """ + + values = {"command_id": command_id, "hostname": hostname} + + result = self.db.execute(query, values, one_value=True, return_dict=True) + + if result is None: + return False, f"Command {command_id} not found for host {hostname}" + + execution_status = result.get("execution_status") + if execution_status is None: + return False, f"No execution record found for command {command_id} on host {hostname}" + + if execution_status != "assigned": + return ( + False, + f"Command {command_id} for host {hostname} is in status '{execution_status}', expected 'assigned'", + ) + + return True, "" + + def update_host_heartbeat( + self, + hostname: str, + ip_address: str, + service_name: str, + status: str, + last_command_id: Optional[str] = None, + timestamp: Optional[datetime] = None, + ) -> None: + """Update host heartbeat information (wrapper around upsert_host_heartbeat)""" + self.upsert_host_heartbeat( + hostname=hostname, + ip_address=ip_address, + service_name=service_name, + last_command_id=last_command_id, + status=status, + ) + + def _get_profiling_request_details(self, request_id: str) -> Optional[Dict]: + """Get details of a specific profiling request""" + query = """ + SELECT request_id, continuous, duration, frequency, profiling_mode, pids, target_hostnames, additional_args + FROM ProfilingRequests + WHERE request_id = %(request_id)s::uuid + """ + + values = {"request_id": request_id} + result = self.db.execute(query, values, one_value=True, return_dict=True) + return result if result else None + + def _build_combined_config(self, request_ids: List[str], hostname: str, service_name: str) -> Dict: + """Build combined configuration from multiple profiling requests""" + if not request_ids: + return {} + + # Get all request details + request_details = [] + for req_id in request_ids: + details = self._get_profiling_request_details(req_id) + if details: + request_details.append(details) + + if not request_details: + return {} + + # Use the most recent request's basic settings + latest_request = request_details[-1] + combined_config = { + "continuous": latest_request.get("continuous", False), + "duration": latest_request.get("duration", 60), + "frequency": latest_request.get("frequency", 11), + "profiling_mode": latest_request.get("profiling_mode", "cpu"), + } + + # Merge PIDs from all requests that target this hostname + all_pids = set() + for req in request_details: + if req.get("pids"): + # Check if this request targets this hostname or all hostnames + target_hostnames = req.get("target_hostnames") + if not target_hostnames or hostname in target_hostnames: + all_pids.update(req["pids"]) + + if all_pids: + combined_config["pids"] = ",".join(map(str, sorted(all_pids))) + + # Merge additional_args from all requests + merged_additional_args = {} + for req in request_details: + if req.get("additional_args"): + # Parse JSON string if needed + additional_args = req["additional_args"] + if isinstance(additional_args, str): + try: + additional_args = json.loads(additional_args) + except json.JSONDecodeError: + continue + if isinstance(additional_args, dict): + merged_additional_args.update(additional_args) + + if merged_additional_args: + 
combined_config.update(merged_additional_args) # Merge directly into combined_config + + return combined_config diff --git a/src/gprofiler-dev/gprofiler_dev/postgres/profiling_db_methods.py b/src/gprofiler-dev/gprofiler_dev/postgres/profiling_db_methods.py new file mode 100644 index 00000000..33f89f10 --- /dev/null +++ b/src/gprofiler-dev/gprofiler_dev/postgres/profiling_db_methods.py @@ -0,0 +1,445 @@ +# +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +""" +Database methods for profiling requests and commands management. +These methods should be added to the DBManager class or used as a mixin. +""" + +import json +import uuid +from datetime import datetime +from logging import getLogger +from typing import Any, Dict, List, Optional + +logger = getLogger(__name__) + + +class ProfilingDBMethods: + """Mixin class for profiling-related database operations""" + + def get_connection(self): + """Return a database connection. Override or implement in subclass.""" + raise NotImplementedError("get_connection() must be implemented in the DBManager or subclass.") + + def save_profiling_request( + self, + request_id: str, + service_name: str, + duration: Optional[int] = None, + frequency: Optional[int] = None, + profiling_mode: Optional[str] = None, + target_hostnames: Optional[List[str]] = None, + pids: Optional[List[int]] = None, + additional_args: Optional[Dict[str, Any]] = None, + ) -> str: + """Save a profiling request to the database""" + query = """ + INSERT INTO ProfilingRequests ( + request_id, service_name, duration, frequency, profiling_mode, + target_hostnames, pids, additional_args + ) VALUES (%s, %s, %s, %s, %s, %s, %s, %s) + RETURNING request_id + """ + + params = ( + request_id, + service_name, + duration, + frequency, + profiling_mode, + target_hostnames, + pids, + json.dumps(additional_args) if additional_args else None, + ) + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, params) + result = cursor.fetchone() + conn.commit() + logger.info(f"Saved profiling request {request_id}") + return result[0] + except Exception as e: + logger.error(f"Failed to save profiling request {request_id}: {e}") + raise + + def create_or_update_profiling_command(self, hostname: Optional[str], service_name: str, new_request_id: str): + """Create or update a profiling command for a host, combining multiple requests""" + + if hostname is None: + # Get all active hosts for this service + hosts = self._get_active_hosts_for_service(service_name) + else: + hosts = [hostname] + + for host in hosts: + # Check if there's already a pending command for this host + existing_command = self._get_pending_command_for_host(host, service_name) + + if existing_command: + # Update existing command by adding the new request + self._add_request_to_command(existing_command["command_id"], new_request_id) + else: + # Create new command + self._create_new_profiling_command(host, service_name, [new_request_id]) + + def 
_get_active_hosts_for_service(self, service_name: str) -> List[str]: + """Get list of active hosts for a service""" + query = """ + SELECT DISTINCT hostname + FROM HostHeartbeats + WHERE service_name = %s + AND status = 'active' + AND heartbeat_timestamp > CURRENT_TIMESTAMP - INTERVAL '5 minutes' + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (service_name,)) + results = cursor.fetchall() + return [row[0] for row in results] + except Exception as e: + logger.error(f"Failed to get active hosts for service {service_name}: {e}") + return [] + + def _get_pending_command_for_host(self, hostname: str, service_name: str) -> Optional[Dict]: + """Get pending command for a specific host""" + query = """ + SELECT command_id, combined_config, request_ids + FROM ProfilingCommands + WHERE hostname = %s AND service_name = %s AND status = 'pending' + ORDER BY created_at DESC + LIMIT 1 + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (hostname, service_name)) + result = cursor.fetchone() + if result: + return {"command_id": result[0], "combined_config": result[1], "request_ids": result[2]} + return None + except Exception as e: + logger.error(f"Failed to get pending command for host {hostname}: {e}") + return None + + def _add_request_to_command(self, command_id: str, new_request_id: str): + """Add a new request to an existing command""" + # First, get the current request details + request_query = """ + SELECT duration, frequency, profiling_mode, pids, additional_args + FROM ProfilingRequests + WHERE request_id = %s + """ + + # Update command with new request + update_query = """ + UPDATE ProfilingCommands + SET request_ids = array_append(request_ids, %s), + combined_config = %s, + updated_at = CURRENT_TIMESTAMP + WHERE command_id = %s + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + # Get new request details + cursor.execute(request_query, (new_request_id,)) + request_data = cursor.fetchone() + + if request_data: + # Get current command config + cursor.execute( + "SELECT combined_config FROM ProfilingCommands WHERE command_id = %s", (command_id,) + ) + current_config = cursor.fetchone()[0] + + # Combine configurations + new_config = self._combine_configs(current_config, request_data) + + # Update command + cursor.execute(update_query, (new_request_id, json.dumps(new_config), command_id)) + conn.commit() + + logger.info(f"Added request {new_request_id} to command {command_id}") + + except Exception as e: + logger.error(f"Failed to add request {new_request_id} to command {command_id}: {e}") + raise + + def _create_new_profiling_command(self, hostname: str, service_name: str, request_ids: List[str]): + """Create a new profiling command""" + command_id = str(uuid.uuid4()) + + # Get request details to create combined config + config = self._create_combined_config(request_ids) + + query = """ + INSERT INTO ProfilingCommands ( + command_id, hostname, service_name, combined_config, request_ids + ) VALUES (%s, %s, %s, %s, %s) + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (command_id, hostname, service_name, json.dumps(config), request_ids)) + conn.commit() + logger.info(f"Created new profiling command {command_id} for host {hostname}") + + except Exception as e: + logger.error(f"Failed to create profiling command for host {hostname}: {e}") + raise + + def _create_combined_config(self, 
request_ids: List[str]) -> Dict[str, Any]: + """Create combined configuration from multiple requests""" + query = """ + SELECT duration, frequency, profiling_mode, pids, additional_args + FROM ProfilingRequests + WHERE request_id = ANY(%s) + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (request_ids,)) + requests = cursor.fetchall() + + return self._combine_multiple_configs(requests) + + except Exception as e: + logger.error(f"Failed to create combined config for requests {request_ids}: {e}") + raise + + def _combine_configs(self, current_config: Dict, new_request_data: tuple) -> Dict[str, Any]: + """Combine current config with new request data""" + duration, frequency, profiling_mode, pids, additional_args = new_request_data + + # Use the most restrictive/specific values + combined = current_config.copy() + + # Duration: use the maximum + if duration and duration > combined.get("duration", 0): + combined["duration"] = duration + + # Frequency: use the maximum + if frequency and frequency > combined.get("frequency", 0): + combined["frequency"] = frequency + + # PIDs: combine lists + if pids: + existing_pids = combined.get("pids", []) + combined["pids"] = list(set(existing_pids + pids)) + + # Additional args: merge + if additional_args: + combined["additional_args"] = {**combined.get("additional_args", {}), **additional_args} + + return combined + + def _combine_multiple_configs(self, requests) -> Dict[str, Any]: + """Combine multiple request configurations""" + combined = {"duration": 60, "frequency": 11, "profiling_mode": "cpu", "pids": [], "additional_args": {}} + + for duration, frequency, profiling_mode, pids, additional_args in requests: + if duration and duration > combined["duration"]: + combined["duration"] = duration + + if frequency and frequency > combined["frequency"]: + combined["frequency"] = frequency + + if pids: + existing_pids = combined.get("pids", []) + combined["pids"] = existing_pids + pids + + if additional_args: + if "additional_args" not in combined: + combined["additional_args"] = {} + combined["additional_args"].update(additional_args) # type: ignore[attr-defined] + + # Remove duplicates from PIDs + combined["pids"] = list(set(combined["pids"])) # type: ignore[arg-type] + + return combined + + def get_pending_profiling_command( + self, hostname: str, service_name: str, exclude_command_id: Optional[str] = None + ) -> Optional[Dict[str, Any]]: + """Get pending profiling command for a host""" + query = """ + SELECT command_id, combined_config, request_ids + FROM ProfilingCommands + WHERE hostname = %s + AND service_name = %s + AND status = 'pending' + """ + params = [hostname, service_name] + + if exclude_command_id: + query += " AND command_id != %s" + params.append(exclude_command_id) + + query += " ORDER BY created_at ASC LIMIT 1" + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, params) + result = cursor.fetchone() + + if result: + return {"command_id": result[0], "combined_config": result[1], "request_ids": result[2]} + return None + + except Exception as e: + logger.error(f"Failed to get pending command for {hostname}: {e}") + return None + + def mark_profiling_command_sent(self, command_id: str, hostname: str): + """Mark a profiling command as sent""" + query = """ + UPDATE ProfilingCommands + SET status = 'sent', sent_at = CURRENT_TIMESTAMP + WHERE command_id = %s AND hostname = %s + """ + + try: + with self.get_connection() as conn: + with 
conn.cursor() as cursor: + cursor.execute(query, (command_id, hostname)) + conn.commit() + logger.info(f"Marked command {command_id} as sent to {hostname}") + + except Exception as e: + logger.error(f"Failed to mark command {command_id} as sent: {e}") + raise + + def update_host_heartbeat( + self, + hostname: str, + ip_address: str, + service_name: str, + status: str, + last_command_id: Optional[str] = None, + timestamp: Optional[datetime] = None, + ): + """Update host heartbeat information""" + query = """ + INSERT INTO HostHeartbeats (hostname, ip_address, service_name, status, last_command_id, heartbeat_timestamp) + VALUES (%s, %s, %s, %s, %s, %s) + ON CONFLICT (hostname, service_name) + DO UPDATE SET + ip_address = EXCLUDED.ip_address, + status = EXCLUDED.status, + last_command_id = EXCLUDED.last_command_id, + heartbeat_timestamp = EXCLUDED.heartbeat_timestamp, + updated_at = CURRENT_TIMESTAMP + """ + + if timestamp is None: + timestamp = datetime.now() + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (hostname, ip_address, service_name, status, last_command_id, timestamp)) + conn.commit() + logger.debug(f"Updated heartbeat for {hostname}") + + except Exception as e: + logger.error(f"Failed to update heartbeat for {hostname}: {e}") + raise + + def update_profiling_command_status( + self, + command_id: str, + hostname: str, + status: str, + execution_time: Optional[int] = None, + error_message: Optional[str] = None, + results_path: Optional[str] = None, + ): + """Update profiling command completion status""" + query = """ + UPDATE ProfilingCommands + SET status = %s, + completed_at = CURRENT_TIMESTAMP, + execution_time = %s, + error_message = %s, + results_path = %s + WHERE command_id = %s AND hostname = %s + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (status, execution_time, error_message, results_path, command_id, hostname)) + + # Also update related profiling requests + if status == "completed": + self._mark_requests_completed(command_id) + elif status == "failed": + self._mark_requests_failed(command_id, error_message) + + conn.commit() + logger.info(f"Updated command {command_id} status to {status}") + + except Exception as e: + logger.error(f"Failed to update command {command_id} status: {e}") + raise + + def _mark_requests_completed(self, command_id: str): + """Mark all requests in a command as completed""" + query = """ + UPDATE ProfilingRequests + SET status = 'completed', completed_at = CURRENT_TIMESTAMP + WHERE request_id = ANY( + SELECT unnest(request_ids) FROM ProfilingCommands WHERE command_id = %s + ) + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (command_id,)) + conn.commit() + + except Exception as e: + logger.error(f"Failed to mark requests completed for command {command_id}: {e}") + + def _mark_requests_failed(self, command_id: str, error_message: Optional[str]): + """Mark all requests in a command as failed""" + query = """ + UPDATE ProfilingRequests + SET status = 'failed', error_message = %s, completed_at = CURRENT_TIMESTAMP + WHERE request_id = ANY( + SELECT unnest(request_ids) FROM ProfilingCommands WHERE command_id = %s + ) + """ + + try: + with self.get_connection() as conn: + with conn.cursor() as cursor: + cursor.execute(query, (error_message, command_id)) + conn.commit() + + except Exception as e: + logger.error(f"Failed to mark requests failed for command {command_id}: {e}") diff 
--git a/src/gprofiler-dev/gprofiler_dev/s3_profile_dal.py b/src/gprofiler-dev/gprofiler_dev/s3_profile_dal.py index c994af0e..5ccb6d33 100644 --- a/src/gprofiler-dev/gprofiler_dev/s3_profile_dal.py +++ b/src/gprofiler-dev/gprofiler_dev/s3_profile_dal.py @@ -47,8 +47,10 @@ def __init__( aws_secret_access_key=config.AWS_SECRET_ACCESS_KEY, aws_session_token=config.AWS_SESSION_TOKEN, ) - self._s3_client = session.client("s3", config=Config(max_pool_connections=50)) - self._s3_resource = session.resource("s3") + # endpoint_url allows connecting to LocalStack or S3-compatible services for testing + # When None (default), uses standard AWS S3 endpoints + self._s3_client = session.client("s3", config=Config(max_pool_connections=50), endpoint_url=config.S3_ENDPOINT_URL) + self._s3_resource = session.resource("s3", endpoint_url=config.S3_ENDPOINT_URL) @staticmethod def join_path(*parts: str) -> str: diff --git a/src/gprofiler-dev/gprofiler_dev/tags.py b/src/gprofiler-dev/gprofiler_dev/tags.py index 951ce36b..47a9c2de 100644 --- a/src/gprofiler-dev/gprofiler_dev/tags.py +++ b/src/gprofiler-dev/gprofiler_dev/tags.py @@ -60,7 +60,7 @@ def remove(self, service: str, filter_tag: str, ui_filter: str = ""): def get_hash_filter_tag(filter_tag): - return hashlib.new('md5', filter_tag.encode("utf-8"), usedforsecurity=False).hexdigest() + return hashlib.new("md5", filter_tag.encode("utf-8"), usedforsecurity=False).hexdigest() def is_base(input_str: str, base: int) -> bool: diff --git a/src/gprofiler/backend/config.py b/src/gprofiler/backend/config.py index 04bcdc78..8510ad10 100644 --- a/src/gprofiler/backend/config.py +++ b/src/gprofiler/backend/config.py @@ -40,4 +40,19 @@ REST_USERNAME = os.getenv("REST_USERNAME", "") REST_PASSWORD = os.getenv("REST_PASSWORD", "") +SLACK_BOT_TOKEN = os.getenv("SLACK_BOT_TOKEN") + +# Default Slack channels - can be overridden via SLACK_CHANNELS environment variable +# Format: comma-separated list of channel names (e.g., "#general,#alerts,#notifications") +DEFAULT_SLACK_CHANNELS = ["#gprofiler-notifications"] +SLACK_CHANNELS = os.getenv("SLACK_CHANNELS", ",".join(DEFAULT_SLACK_CHANNELS)).split(",") + +# Metrics Publisher Configuration +# Enable/disable metrics publishing to metrics agent (similar to gprofiler agent metrics) +# Metrics agent handles batching/queuing on its end, so we send synchronously +METRICS_ENABLED = os.getenv("METRICS_ENABLED", "false").lower() in ["true", "1", "yes"] +METRICS_AGENT_URL = os.getenv("METRICS_AGENT_URL", "tcp://localhost:18126") +METRICS_SERVICE_NAME = os.getenv("METRICS_SERVICE_NAME", "gprofiler-webapp") +METRICS_SLI_UUID = os.getenv("METRICS_SLI_UUID", None) + BACKEND_ROOT = os.path.dirname(os.path.realpath(__file__)) diff --git a/src/gprofiler/backend/main.py b/src/gprofiler/backend/main.py index 990fe185..beef14d8 100644 --- a/src/gprofiler/backend/main.py +++ b/src/gprofiler/backend/main.py @@ -19,7 +19,6 @@ from backend import routers from fastapi import FastAPI from fastapi.responses import JSONResponse, Response -from pydantic.json import ENCODERS_BY_TYPE from starlette.exceptions import HTTPException as StarletteHTTPException @@ -30,7 +29,6 @@ def format_time(dt: datetime): return iso_format + "Z" -ENCODERS_BY_TYPE[datetime] = lambda dt: format_time(dt) app = FastAPI(openapi_url="/api/v1/openapi.json", docs_url="/api/v1/docs") diff --git a/src/gprofiler/backend/models/dynamic_profiling_models.py b/src/gprofiler/backend/models/dynamic_profiling_models.py new file mode 100644 index 00000000..7fc13a2c --- /dev/null +++ 
b/src/gprofiler/backend/models/dynamic_profiling_models.py @@ -0,0 +1,429 @@ +# +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +""" +Dynamic Profiling Data Models + +This module contains Pydantic models for the dynamic profiling feature, which allows +profiling requests at various hierarchy levels (service, job, namespace) to be mapped +to specific host-level commands while maintaining sub-second heartbeat response times. + +References: +https://docs.google.com/document/d/1iwA_NN1YKDBqfig95Qevw0HcSCqgu7_ya8PGuCksCPc/edit +""" + +from datetime import datetime +from enum import Enum +from typing import Any, Dict, List, Optional +from uuid import UUID + +from pydantic import BaseModel, Field, field_validator + + +# ============================================================ +# ENUMS +# ============================================================ + +class CommandType(str, Enum): + """Command types for profiling operations""" + START = "start" + STOP = "stop" + RECONFIGURE = "reconfigure" + + +class ProfilingStatus(str, Enum): + """Status for profiling requests and executions""" + PENDING = "pending" + IN_PROGRESS = "in_progress" + COMPLETED = "completed" + FAILED = "failed" + CANCELLED = "cancelled" + + +class ProfilingMode(str, Enum): + """Profiling modes supported by the system""" + CPU = "cpu" + MEMORY = "memory" + ALLOCATION = "allocation" + NATIVE = "native" + + +# ============================================================ +# REQUEST MODELS +# ============================================================ + +class ProfilingRequestCreate(BaseModel): + """ + Request model for creating a new profiling request via API. + At least one target specification must be provided. 
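+
+    A minimal illustrative construction (values are made up):
+        ProfilingRequestCreate(
+            service_name="checkout",
+            duration_seconds=120,
+            start_time=datetime.utcnow(),
+        )
+    This relies on the profiling_mode=CPU and sample_rate=100 defaults below.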
+ """ + # Target specification (at least one required) + service_name: Optional[str] = None + job_name: Optional[str] = None + namespace: Optional[str] = None + pod_name: Optional[str] = None + host_name: Optional[str] = None + process_id: Optional[int] = None + + # Profiling configuration + profiling_mode: ProfilingMode = ProfilingMode.CPU + duration_seconds: int = Field(gt=0, description="Duration in seconds, must be positive") + sample_rate: int = Field(default=100, ge=1, le=1000, description="Sample rate (1-1000)") + + # Execution configuration + executors: List[str] = Field(default_factory=list) + + # Request metadata + start_time: datetime + stop_time: Optional[datetime] = None + mode: Optional[str] = None + + @field_validator('duration_seconds') + @classmethod + def validate_duration(cls, v: int) -> int: + if v <= 0: + raise ValueError("duration_seconds must be positive") + return v + + def model_post_init(self, __context: Any) -> None: + """Validate that at least one target is specified""" + targets = [ + self.service_name, + self.job_name, + self.namespace, + self.pod_name, + self.host_name, + self.process_id, + ] + if not any(targets): + raise ValueError("At least one target specification must be provided") + + +class ProfilingRequestResponse(BaseModel): + """Response model for profiling request""" + id: int + request_id: UUID + service_name: Optional[str] = None + job_name: Optional[str] = None + namespace: Optional[str] = None + pod_name: Optional[str] = None + host_name: Optional[str] = None + process_id: Optional[int] = None + profiling_mode: ProfilingMode + duration_seconds: int + sample_rate: int + executors: List[str] + start_time: datetime + stop_time: Optional[datetime] = None + mode: Optional[str] = None + status: ProfilingStatus + created_at: datetime + updated_at: datetime + profiler_token_id: Optional[int] = None + + class Config: + from_attributes = True + + +class ProfilingRequestUpdate(BaseModel): + """Model for updating profiling request status""" + status: Optional[ProfilingStatus] = None + stop_time: Optional[datetime] = None + + +# ============================================================ +# COMMAND MODELS +# ============================================================ + +class ProfilingCommandCreate(BaseModel): + """Model for creating a profiling command to be sent to agents""" + profiling_request_id: int + host_id: str + target_containers: List[str] = Field(default_factory=list) + target_processes: List[int] = Field(default_factory=list) + command_type: CommandType + command_args: Dict[str, Any] = Field(default_factory=dict) + command_json: Optional[str] = None + + +class ProfilingCommandResponse(BaseModel): + """Response model for profiling command""" + id: int + command_id: UUID + profiling_request_id: int + host_id: str + target_containers: List[str] + target_processes: List[int] + command_type: CommandType + command_args: Dict[str, Any] + command_json: Optional[str] = None + sent_at: Optional[datetime] = None + completed_at: Optional[datetime] = None + status: ProfilingStatus + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class ProfilingCommandUpdate(BaseModel): + """Model for updating profiling command""" + status: Optional[ProfilingStatus] = None + sent_at: Optional[datetime] = None + completed_at: Optional[datetime] = None + command_json: Optional[str] = None + + +# ============================================================ +# HEARTBEAT MODELS +# 
============================================================ + +class HostHeartbeatCreate(BaseModel): + """ + Model for creating/updating host heartbeat. + Optimized for 165k QPM with sub-second response times. + """ + host_id: str + service_name: Optional[str] = None + host_name: str + host_ip: Optional[str] = None + namespace: Optional[str] = None + pod_name: Optional[str] = None + containers: List[str] = Field(default_factory=list) + workloads: Dict[str, Any] = Field(default_factory=dict) + jobs: List[str] = Field(default_factory=list) + executors: List[str] = Field(default_factory=list) + last_command_id: Optional[UUID] = None + + +class HostHeartbeatResponse(BaseModel): + """Response model for host heartbeat""" + id: int + host_id: str + service_name: Optional[str] = None + host_name: str + host_ip: Optional[str] = None + namespace: Optional[str] = None + pod_name: Optional[str] = None + containers: List[str] + workloads: Dict[str, Any] + jobs: List[str] + executors: List[str] + timestamp_first_seen: datetime + timestamp_last_seen: datetime + last_command_id: Optional[UUID] = None + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class HostHeartbeatUpdate(BaseModel): + """Model for updating host heartbeat (typically just timestamp)""" + timestamp_last_seen: datetime + last_command_id: Optional[UUID] = None + containers: Optional[List[str]] = None + workloads: Optional[Dict[str, Any]] = None + jobs: Optional[List[str]] = None + executors: Optional[List[str]] = None + + +# ============================================================ +# EXECUTION MODELS +# ============================================================ + +class ProfilingExecutionCreate(BaseModel): + """Model for creating profiling execution audit entry""" + profiling_request_id: int + profiling_command_id: Optional[int] = None + host_name: str + target_containers: List[str] = Field(default_factory=list) + target_processes: List[int] = Field(default_factory=list) + command_type: CommandType + started_at: datetime + status: ProfilingStatus + + +class ProfilingExecutionResponse(BaseModel): + """Response model for profiling execution""" + id: int + execution_id: UUID + profiling_request_id: int + profiling_command_id: Optional[int] = None + host_name: str + target_containers: List[str] + target_processes: List[int] + command_type: CommandType + started_at: datetime + completed_at: Optional[datetime] = None + status: ProfilingStatus + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class ProfilingExecutionUpdate(BaseModel): + """Model for updating profiling execution""" + status: Optional[ProfilingStatus] = None + completed_at: Optional[datetime] = None + + +# ============================================================ +# MAPPING TABLE MODELS +# ============================================================ + +class NamespaceServiceMapping(BaseModel): + """Model for namespace to service mapping""" + namespace: str + service_name: str + + +class NamespaceServiceMappingResponse(NamespaceServiceMapping): + """Response model for namespace service mapping""" + id: int + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class ServiceContainerMapping(BaseModel): + """Model for service to container mapping""" + service_name: str + container_name: str + + +class ServiceContainerMappingResponse(ServiceContainerMapping): + """Response model for service container mapping""" + id: int + created_at: datetime + 
updated_at: datetime + + class Config: + from_attributes = True + + +class JobContainerMapping(BaseModel): + """Model for job to container mapping""" + job_name: str + container_name: str + + +class JobContainerMappingResponse(JobContainerMapping): + """Response model for job container mapping""" + id: int + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class ContainerProcessMapping(BaseModel): + """Model for container to process mapping""" + container_name: str + process_id: int + process_name: Optional[str] = None + + +class ContainerProcessMappingResponse(ContainerProcessMapping): + """Response model for container process mapping""" + id: int + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class ContainerHostMapping(BaseModel): + """Model for container to host mapping""" + container_name: str + host_id: str + host_name: str + + +class ContainerHostMappingResponse(ContainerHostMapping): + """Response model for container host mapping""" + id: int + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +class ProcessHostMapping(BaseModel): + """Model for process to host mapping""" + process_id: int + host_id: str + host_name: str + + +class ProcessHostMappingResponse(ProcessHostMapping): + """Response model for process host mapping""" + id: int + created_at: datetime + updated_at: datetime + + class Config: + from_attributes = True + + +# ============================================================ +# QUERY MODELS +# ============================================================ + +class ProfilingRequestQuery(BaseModel): + """Query parameters for listing profiling requests""" + status: Optional[ProfilingStatus] = None + service_name: Optional[str] = None + namespace: Optional[str] = None + host_name: Optional[str] = None + limit: int = Field(default=100, ge=1, le=1000) + offset: int = Field(default=0, ge=0) + + +class HostHeartbeatQuery(BaseModel): + """ + Query parameters for listing host heartbeats. + Optimized for fast queries to support 165k QPM. 
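+
+    Illustrative query for recently-seen hosts of one service (values made up):
+        HostHeartbeatQuery(
+            service_name="checkout",
+            last_seen_after=datetime.utcnow() - timedelta(minutes=5),
+            limit=500,
+        )
+    limit and offset are bounded by the Field constraints below (limit <= 1000).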
+ """ + service_name: Optional[str] = None + namespace: Optional[str] = None + host_id: Optional[str] = None + last_seen_after: Optional[datetime] = None + limit: int = Field(default=100, ge=1, le=1000) + offset: int = Field(default=0, ge=0) + + +class ProfilingExecutionQuery(BaseModel): + """Query parameters for listing profiling executions""" + profiling_request_id: Optional[int] = None + host_name: Optional[str] = None + status: Optional[ProfilingStatus] = None + started_after: Optional[datetime] = None + limit: int = Field(default=100, ge=1, le=1000) + offset: int = Field(default=0, ge=0) + + + + diff --git a/src/gprofiler/backend/models/metrics_models.py b/src/gprofiler/backend/models/metrics_models.py index 550ae735..9e820233 100644 --- a/src/gprofiler/backend/models/metrics_models.py +++ b/src/gprofiler/backend/models/metrics_models.py @@ -15,10 +15,10 @@ # from datetime import datetime -from typing import Optional +from typing import Any, Dict, List, Optional from backend.models import CamelModel -from pydantic import BaseModel +from pydantic import BaseModel, root_validator, validator class SampleCount(BaseModel): @@ -87,3 +87,140 @@ class MetricK8s(CamelModel): class HTMLMetadata(CamelModel): content: str + + +class ProfilingRequest(BaseModel): + """Model for profiling request parameters""" + + service_name: str + request_type: str # "start" or "stop" + continuous: Optional[bool] = False + duration: Optional[int] = 60 + frequency: Optional[int] = 11 + profiling_mode: Optional[str] = "cpu" # "cpu", "allocation", "none" + target_hosts: Optional[Dict[str, Optional[List[int]]]] = None + stop_level: Optional[str] = "process" # "process" or "host" + additional_args: Optional[Dict[str, Any]] = None + + @validator("request_type") + def validate_request_type(cls, v): + if v not in ["start", "stop"]: + raise ValueError('request_type must be "start" or "stop"') + return v + + @validator("profiling_mode") + def validate_profiling_mode(cls, v): + if v not in ["cpu", "allocation", "none"]: + raise ValueError('profiling_mode must be "cpu", "allocation", or "none"') + return v + + @validator("stop_level") + def validate_stop_level(cls, v): + if v not in ["process", "host"]: + raise ValueError('stop_level must be "process" or "host"') + return v + + @validator("duration") + def validate_duration(cls, v): + if v is not None and v <= 0: + raise ValueError("Duration must be a positive integer (seconds)") + return v + + @validator("frequency") + def validate_frequency(cls, v): + if v is not None and v <= 0: + raise ValueError("Frequency must be a positive integer (Hz)") + return v + + @root_validator + def validate_profile_request(cls, values): + """Validate that PIDs are provided when request_type is stop and stop_level is process""" + request_type = values.get("request_type") + stop_level = values.get("stop_level") + target_hosts = values.get("target_hosts") + + if request_type == "stop" and stop_level == "process": + # Check if PIDs are provided in target_hosts mapping + has_pids = target_hosts and any(pids for pids in target_hosts.values() if pids) + if not has_pids: + raise ValueError( + 'At least one PID must be provided when request_type is "stop" and stop_level is "process"' + ) + + # Validate if a process id is provided when request_type is stop and stop_level is host, if so raises + if request_type == "stop" and stop_level == "host": + has_pids = target_hosts and any(pids for pids in target_hosts.values() if pids is not None) + if has_pids: + raise ValueError('No PIDs should be provided 
when request_type is "stop" and stop_level is "host"') + + return values + + +class ProfilingResponse(BaseModel): + """Response model for profiling requests""" + + success: bool + message: str + request_id: Optional[str] = None + command_ids: Optional[List[str]] = None + estimated_completion_time: Optional[datetime] = None + + +class HeartbeatRequest(BaseModel): + """Model for host heartbeat request""" + + ip_address: str + hostname: str + service_name: str + last_command_id: Optional[str] = None + status: str = "active" # active, idle, error + timestamp: Optional[datetime] = None + + +class HeartbeatResponse(BaseModel): + """Response model for heartbeat requests""" + + success: bool + message: str + profiling_command: Optional[Dict[str, Any]] = None + command_id: Optional[str] = None + + +class CommandCompletionRequest(BaseModel): + """Model for reporting command completion""" + + command_id: str + hostname: str + status: str + execution_time: Optional[int] = None + error_message: Optional[str] = None + results_path: Optional[str] = None + + @validator("status") + def validate_status(cls, v): + if v not in ["completed", "failed"]: + raise ValueError(f"invalid status: {v}. Must be 'completed' or 'failed'.") + return v + + +class ProfilingHostStatusRequest(BaseModel): + """Model for profiling host status request parameters""" + + service_name: Optional[List[str]] = None + exact_match: bool = False + hostname: Optional[List[str]] = None + ip_address: Optional[List[str]] = None + profiling_status: Optional[List[str]] = None + command_type: Optional[List[str]] = None + pids: Optional[List[int]] = None + + +class ProfilingHostStatus(BaseModel): + id: int + service_name: str + hostname: str + ip_address: str + pids: List[int] + command_type: str + profiling_status: str + heartbeat_timestamp: datetime diff --git a/src/gprofiler/backend/routers/__init__.py b/src/gprofiler/backend/routers/__init__.py index 2c811f52..9bddeebe 100644 --- a/src/gprofiler/backend/routers/__init__.py +++ b/src/gprofiler/backend/routers/__init__.py @@ -28,7 +28,6 @@ ) from fastapi import APIRouter - router = APIRouter() router.include_router(healthcheck_routes.router, prefix="/v1/health_check", tags=["agent"]) router.include_router(healthcheck_routes.router, prefix="/v2/health_check", tags=["agent"]) diff --git a/src/gprofiler/backend/routers/metrics_routes.py b/src/gprofiler/backend/routers/metrics_routes.py index bec1e850..c64a9fca 100644 --- a/src/gprofiler/backend/routers/metrics_routes.py +++ b/src/gprofiler/backend/routers/metrics_routes.py @@ -14,31 +14,39 @@ # limitations under the License. 
# +import json import math +import uuid from datetime import datetime, timedelta from logging import getLogger from typing import List, Optional -from botocore.exceptions import ClientError - from backend.models.filters_models import FilterTypes from backend.models.flamegraph_models import FGParamsBaseModel from backend.models.metrics_models import ( + CommandCompletionRequest, CpuMetric, CpuTrend, + HeartbeatRequest, + HeartbeatResponse, + HTMLMetadata, InstanceTypeCount, MetricGraph, MetricNodesAndCores, MetricNodesCoresSummary, MetricSummary, + ProfilingHostStatus, + ProfilingHostStatusRequest, + ProfilingRequest, + ProfilingResponse, SampleCount, - HTMLMetadata, ) from backend.utils.filters_utils import get_rql_first_eq_key, get_rql_only_for_one_key +from backend.utils.notifications import SlackNotifier from backend.utils.request_utils import flamegraph_base_request_params, get_metrics_response, get_query_response -from fastapi import APIRouter, Depends, Query, HTTPException +from botocore.exceptions import ClientError +from fastapi import APIRouter, Depends, HTTPException, Query from fastapi.responses import Response - from gprofiler_dev import S3ProfileDal from gprofiler_dev.postgres.db_manager import DBManager @@ -74,6 +82,26 @@ def get_time_interval_value(start_time: datetime, end_time: datetime, interval: return "24 hours" +def profiling_host_status_params( + service_name: Optional[List[str]] = Query(None, description="Filter by service name(s)"), + exact_match: bool = Query(False, description="Use exact match for service name (default: false for partial matching)"), + hostname: Optional[List[str]] = Query(None, description="Filter by hostname(s)"), + ip_address: Optional[List[str]] = Query(None, description="Filter by IP address(es)"), + profiling_status: Optional[List[str]] = Query(None, description="Filter by profiling status(es) (e.g., pending, completed, stopped)"), + command_type: Optional[List[str]] = Query(None, description="Filter by command type(s) (e.g., start, stop)"), + pids: Optional[List[int]] = Query(None, description="Filter by PIDs"), +) -> ProfilingHostStatusRequest: + return ProfilingHostStatusRequest( + service_name=service_name, + exact_match=exact_match, + hostname=hostname, + ip_address=ip_address, + profiling_status=profiling_status, + command_type=command_type, + pids=pids, + ) + + @router.get("/instance_type_count", response_model=List[InstanceTypeCount]) def get_instance_type_count(fg_params: FGParamsBaseModel = Depends(flamegraph_base_request_params)): response = get_query_response(fg_params, lookup_for="instance_type_count") @@ -235,3 +263,531 @@ def get_html_metadata( except ClientError: raise HTTPException(status_code=404, detail="The html metadata file not found in S3") return HTMLMetadata(content=html_content) + + +@router.post("/profile_request", response_model=ProfilingResponse) +def create_profiling_request(profiling_request: ProfilingRequest) -> ProfilingResponse: + """ + Create a new profiling request with the specified parameters. + + This endpoint accepts profiling arguments in JSON format and handles both + start and stop profiling commands. Each request generates a unique command_id + that agents use for idempotency - agents will only execute commands with + new command IDs they haven't seen before. 
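+
+    Example request body (illustrative values):
+        {
+            "service_name": "checkout",
+            "request_type": "start",
+            "duration": 120,
+            "frequency": 11,
+            "profiling_mode": "cpu",
+            "target_hosts": {"host-a": [1234, 5678], "host-b": null}
+        }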
+ + START commands: + - Create new profiling sessions with specified parameters + - Merge multiple requests for the same host into single commands + + STOP commands: + - Process-level stop: Remove specific PIDs from existing commands + - If only one PID remains, convert to host-level stop + - If multiple PIDs remain, update command with remaining PIDs + - Host-level stop: Stop entire profiling session for the host + """ + try: + # Log the profiling request + logger.info( + f"Received {profiling_request.request_type} profiling request for service: {profiling_request.service_name}", + extra={ + "request_type": profiling_request.request_type, + "service_name": profiling_request.service_name, + "continuous": profiling_request.continuous, + "duration": profiling_request.duration, + "frequency": profiling_request.frequency, + "mode": profiling_request.profiling_mode, + "target_hosts": profiling_request.target_hosts, + "stop_level": profiling_request.stop_level, + }, + ) + + db_manager = DBManager() + request_id = str(uuid.uuid4()) + command_ids = [] # Track all command IDs created + + try: + # Convert target_hosts to legacy format for database compatibility + target_hostnames = list(profiling_request.target_hosts.keys()) if profiling_request.target_hosts else None + host_pid_mapping = ( + {hostname: pids for hostname, pids in profiling_request.target_hosts.items() if pids} + if profiling_request.target_hosts + else None + ) + + # Save the profiling request to database using enhanced method + success = db_manager.save_profiling_request( + request_id=request_id, + request_type=profiling_request.request_type, + service_name=profiling_request.service_name, + continuous=profiling_request.continuous, + duration=profiling_request.duration, + frequency=profiling_request.frequency, + profiling_mode=profiling_request.profiling_mode, + target_hostnames=target_hostnames, + pids=None, # Deprecated field, always None + host_pid_mapping=host_pid_mapping, + additional_args=profiling_request.additional_args, + ) + + if not success: + raise Exception("Failed to save profiling request to database") + + # Handle start vs stop commands differently + if profiling_request.request_type == "start": + # Create profiling commands for target hosts + target_hosts = [] + + # Determine target hosts from target_hosts mapping + if profiling_request.target_hosts: + target_hosts = list(profiling_request.target_hosts.keys()) + + if target_hosts: + # Create commands for specific hosts + for hostname in target_hosts: + command_id = str(uuid.uuid4()) + command_ids.append(command_id) + db_manager.create_or_update_profiling_command( + command_id=command_id, + hostname=hostname, + service_name=profiling_request.service_name, + command_type="start", + new_request_id=request_id, + ) + else: + # If no specific hostnames, create command for all hosts of this service + command_id = str(uuid.uuid4()) + command_ids.append(command_id) + db_manager.create_or_update_profiling_command( + command_id=command_id, + hostname=None, # Will be handled for all hosts of the service + service_name=profiling_request.service_name, + command_type="start", + new_request_id=request_id, + ) + + elif profiling_request.request_type == "stop": + # Handle stop commands with host-to-PID associations + target_hosts = [] + + # Determine target hosts for stop commands + if profiling_request.target_hosts: + target_hosts = list(profiling_request.target_hosts.keys()) + + if target_hosts: + for hostname in target_hosts: + command_id = str(uuid.uuid4()) + 
command_ids.append(command_id)
+
+                    if profiling_request.stop_level == "host":
+                        # Stop entire host
+                        db_manager.create_stop_command_for_host(
+                            command_id=command_id,
+                            hostname=hostname,
+                            service_name=profiling_request.service_name,
+                            request_id=request_id,
+                        )
+                    else:  # process-level stop
+                        # Get PIDs for this specific host from the target_hosts mapping
+                        host_pids = None
+                        if profiling_request.target_hosts and hostname in profiling_request.target_hosts:
+                            host_pids = profiling_request.target_hosts[hostname]
+
+                        # Stop specific processes for this host
+                        db_manager.handle_process_level_stop(
+                            command_id=command_id,
+                            hostname=hostname,
+                            service_name=profiling_request.service_name,
+                            pids_to_stop=host_pids,
+                            request_id=request_id,
+                        )
+            else:
+                # No specific hosts provided - this should be rare for stop commands
+                logger.warning(f"Stop request {request_id} has no target hosts specified")
+                raise HTTPException(status_code=400, detail="Stop commands require specific target hosts")
+
+            logger.info(
+                f"Profiling request {request_id} ({profiling_request.request_type}) saved and commands processed. Command IDs: {command_ids}"
+            )
+
+        except HTTPException:
+            # Preserve deliberate client errors (e.g., the 400 above) instead of rewrapping them as a 500
+            raise
+        except Exception as e:
+            logger.error(f"Failed to save profiling request: {e}", exc_info=True)
+            raise HTTPException(status_code=500, detail="Failed to save profiling request to database")
+
+        # Calculate estimated completion time (only for start commands)
+        completion_time = None
+        if profiling_request.request_type == "start":
+            completion_time = datetime.now() + timedelta(seconds=profiling_request.duration or 60)
+
+        # Create an appropriate message based on the number of commands
+        if len(command_ids) == 1:
+            message = f"{profiling_request.request_type.capitalize()} profiling request submitted successfully for service '{profiling_request.service_name}'"
+        else:
+            message = f"{profiling_request.request_type.capitalize()} profiling request submitted successfully for service '{profiling_request.service_name}' across {len(command_ids)} hosts"
+
+        # Send Slack notification for profiling request creation
+        try:
+            slack_notifier = SlackNotifier()
+
+            # Create rich message blocks
+            blocks = _create_slack_blocks(profiling_request, request_id)
+
+            slack_notifier.send_rich_message_to_all_channels(blocks=blocks, text=message)
+        except Exception as e:
+            logger.warning(f"Failed to send Slack notification for profiling request {request_id}: {e}")
+
+        return ProfilingResponse(
+            success=True,
+            message=message,
+            request_id=request_id,
+            command_ids=command_ids,
+            estimated_completion_time=completion_time,
+        )
+
+    except HTTPException:
+        # Re-raise HTTP exceptions as-is
+        raise
+    except Exception as e:
+        logger.error(f"Failed to create profiling request: {str(e)}", exc_info=True)
+        raise HTTPException(status_code=500, detail="Internal server error while processing profiling request")
+
+
+def _create_slack_blocks(profiling_request: ProfilingRequest, request_id: str) -> list:
+    """
+    Create Slack message blocks for profiling request notifications.
+ + Args: + profiling_request: The profiling request object + request_id: The unique request identifier + + Returns: + List of Slack message blocks + """ + blocks = [ + { + "type": "header", + "text": { + "type": "plain_text", + "text": f"🔥 Profiling Request for service {profiling_request.service_name}", + }, + }, + { + "type": "section", + "text": { + "type": "mrkdwn", + "text": f"A new request was made to {profiling_request.request_type} a profile and the details are shown below:" + } + }, + { + "type": "section", + "fields": [ + {"type": "mrkdwn", "text": f"*Service Name:*\n{profiling_request.service_name}"}, + {"type": "mrkdwn", "text": f"*Request ID:*\n{request_id}"}, + ], + }, + ] + + # Add hosts and PIDs information in a single block + if profiling_request.target_hosts: + hosts_info = [] + for host, pids in profiling_request.target_hosts.items(): + if pids: + pids_str = ", ".join(map(str, pids)) + hosts_info.append(f"• {host} - PIDs: {pids_str}") + else: + hosts_info.append(f"• {host} - Host level profiling") + + if hosts_info: + hosts_text = "\n".join(hosts_info) + blocks.append({"type": "section", "text": {"type": "mrkdwn", "text": f"*Target Hosts:*\n{hosts_text}"}}) + + return blocks + + +@router.post("/heartbeat", response_model=HeartbeatResponse) +def receive_heartbeat(heartbeat: HeartbeatRequest): + """ + Receive heartbeat from host and check for current profiling requests. + + This endpoint: + 1. Receives heartbeat information from hosts (IP, hostname, service, last command) + 2. Updates host status in PostgreSQL DB + 3. Checks for current profiling requests for this host/service + 4. Returns new profiling request if available + """ + try: + # Set timestamp if not provided + if heartbeat.timestamp is None: + heartbeat.timestamp = datetime.now() + + # Log the heartbeat + logger.info( + f"Received heartbeat from host: {heartbeat.hostname} ({heartbeat.ip_address})", + extra={ + "hostname": heartbeat.hostname, + "ip_address": heartbeat.ip_address, + "service_name": heartbeat.service_name, + "last_command_id": heartbeat.last_command_id, + "status": heartbeat.status, + "timestamp": heartbeat.timestamp, + }, + ) + + db_manager = DBManager() + + try: + # 1. Update host heartbeat information in PostgreSQL DB + db_manager.upsert_host_heartbeat( + hostname=heartbeat.hostname, + ip_address=heartbeat.ip_address, + service_name=heartbeat.service_name, + last_command_id=heartbeat.last_command_id, + status=heartbeat.status, + heartbeat_timestamp=heartbeat.timestamp, + ) + + # 2. Check for current profiling command for this host/service + current_command = db_manager.get_current_profiling_command( + hostname=heartbeat.hostname, + service_name=heartbeat.service_name, + ) + + if current_command: + success = True + if current_command["status"] == "pending": + # 3. Mark command as sent and update related request statuses + success = db_manager.mark_profiling_command_sent( + command_id=current_command["command_id"], hostname=heartbeat.hostname + ) + + # 4. 
Mark related profiling requests as assigned + if success and current_command.get("request_ids"): + request_ids = current_command["request_ids"] + # Parse the request_ids array if it exists + if request_ids: + try: + if isinstance(request_ids, str): + # PostgreSQL array format: {uuid1,uuid2,uuid3} + # Remove braces and split by comma + request_ids_str = request_ids.strip("{}") + if request_ids_str: + request_ids = [uuid.strip() for uuid in request_ids_str.split(",")] + else: + request_ids = [] + except Exception: + logger.warning( + f"Failed to parse request_ids for command {current_command['command_id']}" + ) + request_ids = [] + + for request_id in request_ids: + try: + db_manager.mark_profiling_request_assigned( + request_id=request_id, + command_id=current_command["command_id"], + hostname=heartbeat.hostname, + ) + except Exception as e: + logger.warning(f"Failed to mark request {request_id} as assigned: {e}") + + if success: + logger.info( + f"Sending profiling command {current_command['command_id']} to host {heartbeat.hostname}" + ) + + # Extract combined_config and ensure it's properly formatted + combined_config = current_command.get("combined_config", {}) + + # If combined_config is a string (from DB), parse it + if isinstance(combined_config, str): + try: + combined_config = json.loads(combined_config) + except json.JSONDecodeError: + logger.warning( + f"Failed to parse combined_config for command {current_command['command_id']}" + ) + combined_config = {} + + return HeartbeatResponse( + success=True, + message="Heartbeat received. New profiling command available.", + profiling_command={ + "command_type": current_command["command_type"], + "combined_config": combined_config, + }, + command_id=current_command["command_id"], + ) + else: + logger.warning( + f"Failed to mark command {current_command['command_id']} as sent to host {heartbeat.hostname}" + ) + + # No commands or marking failed + return HeartbeatResponse( + success=True, + message="Heartbeat received. No profiling commands.", + profiling_command=None, + command_id=None, + ) + + except Exception as e: + logger.error(f"Failed to process heartbeat for {heartbeat.hostname}: {e}", exc_info=True) + # Still return success for heartbeat, but no command + return HeartbeatResponse( + success=True, + message="Heartbeat received, but failed to check for commands.", + profiling_command=None, + command_id=None, + ) + + except Exception as e: + logger.error(f"Failed to process heartbeat: {str(e)}", exc_info=True) + raise HTTPException(status_code=500, detail="Internal server error while processing heartbeat") + + +@router.post("/command_completion") +def report_command_completion(completion: CommandCompletionRequest): + """ + Report completion of a profiling command from a host. + + This endpoint: + 1. Receives command completion status from hosts + 2. Validates that the command exists for the specific host + 3. Updates command status in PostgreSQL DB + 4. 
Updates related profiling requests status
+    """
+    try:
+        db_manager = DBManager()
+
+        # Log the completion
+        logger.info(
+            f"Received command completion from host: {completion.hostname}",
+            extra={
+                "command_id": completion.command_id,
+                "hostname": completion.hostname,
+                "status": completion.status,
+                "execution_time": completion.execution_time,
+                "error_message": completion.error_message,
+            },
+        )
+
+        # Validate that the command can be completed (exists and is in assigned status)
+        is_valid, error_message = db_manager.validate_command_completion_eligibility(
+            completion.command_id, completion.hostname
+        )
+        if not is_valid:
+            logger.warning(f"Command completion validation failed: {error_message}")
+            return {"success": False, "message": error_message}
+
+        # Update the command status.
+        # The command_id reported by the CommandCompletionRequest can be outdated: the current command
+        # for the hostname might not be the one reported by the completion (common after profiling
+        # restarts triggered by new profiling requests). In those cases, this update changes no rows
+        # in the commands table.
+        db_manager.update_profiling_command_status(
+            command_id=completion.command_id,
+            hostname=completion.hostname,
+            status=completion.status,
+            execution_time=completion.execution_time,
+            error_message=completion.error_message,
+            results_path=completion.results_path,
+        )
+
+        # Update the specific profiling execution record for the command_id reported by the CommandCompletionRequest
+        completed_at = datetime.now() if completion.status in ["completed", "failed"] else None
+        db_manager.update_profiling_execution_status(
+            command_id=completion.command_id,
+            hostname=completion.hostname,
+            status=completion.status,
+            completed_at=completed_at,
+            error_message=completion.error_message,
+            execution_time=completion.execution_time,
+            results_path=completion.results_path,
+        )
+
+        # Get the current profiling command to verify whether it matches the command_id reported by the CommandCompletionRequest
+        current_command = db_manager.get_profiling_command_by_hostname(completion.hostname)
+        outdated_command = current_command is None or current_command["command_id"] != completion.command_id
+        # If the command is outdated, we don't update the status of the requests related to the command;
+        # their status will be updated when the most recent command_id completes
+        if not outdated_command:
+            # Update related profiling requests status
+            db_manager.auto_update_profiling_request_status_by_request_ids(current_command["request_ids"])
+
+        return {"success": True, "message": f"Command completion recorded for {completion.command_id}"}
+
+    except Exception as e:
+        logger.error(f"Failed to process command completion: {str(e)}", exc_info=True)
+        raise HTTPException(status_code=500, detail="Internal server error while processing command completion")
+
+
+@router.get("/profiling/host_status", response_model=List[ProfilingHostStatus])
+def get_profiling_host_status(
+    profiling_params: ProfilingHostStatusRequest = Depends(profiling_host_status_params),
+):
+    """
+    Get profiling host status with optional filtering by multiple parameters.
+    Uses an optimized single-query approach with JOINs instead of N+1 queries.
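+
+    An illustrative query (parameter names follow profiling_host_status_params;
+    the values and the router's mount prefix are assumptions):
+
+        GET /profiling/host_status?service_name=my-service&profiling_status=pending&exact_match=true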
+ + Performance improvements: + - Single database query instead of N+1 (1 query + 1 per host) + - Database-side filtering for all parameters + - LEFT JOIN between HostHeartbeats and ProfilingCommands + - 10-50x faster response time + + Args: + profiling_params: ProfilingHostStatusRequest object containing all filter parameters + + Returns: + List of host statuses filtered by the specified criteria + """ + db_manager = DBManager() + + # Use the optimized method that performs filtering and joining in the database + hosts = db_manager.get_profiling_host_status_optimized( + service_names=profiling_params.service_name, + hostnames=profiling_params.hostname, + ip_addresses=profiling_params.ip_address, + profiling_statuses=profiling_params.profiling_status, + command_types=profiling_params.command_type, + pids=profiling_params.pids, + exact_match=profiling_params.exact_match + ) + + # Convert database results to response model + results = [] + for host in hosts: + # Extract PIDs from combined_config + combined_config = host.get("combined_config") + command_pids = [] + + if combined_config: + if isinstance(combined_config, str): + try: + combined_config = json.loads(combined_config) + except json.JSONDecodeError: + combined_config = {} + + if isinstance(combined_config, dict): + pids_in_config = combined_config.get("pids", []) + if isinstance(pids_in_config, list): + command_pids = [int(pid) for pid in pids_in_config if str(pid).isdigit()] + + # Handle NULL status (no command) as "stopped" + profiling_status = host.get("status") or "stopped" + command_type = host.get("command_type") or "N/A" + + results.append( + ProfilingHostStatus( + id=host.get("id", 0), + service_name=host.get("service_name"), + hostname=host.get("hostname"), + ip_address=host.get("ip_address"), + pids=command_pids, + command_type=command_type, + profiling_status=profiling_status, + heartbeat_timestamp=host.get("heartbeat_timestamp"), + ) + ) + + return results diff --git a/src/gprofiler/backend/routers/profiles_routes.py b/src/gprofiler/backend/routers/profiles_routes.py index 7a45cf88..8c38b386 100644 --- a/src/gprofiler/backend/routers/profiles_routes.py +++ b/src/gprofiler/backend/routers/profiles_routes.py @@ -30,7 +30,7 @@ from fastapi import APIRouter, Header, HTTPException, Request from gprofiler_dev import get_s3_profile_dal from gprofiler_dev.api_key import get_service_by_api_key -from gprofiler_dev.client_handler import ClientHandler +from gprofiler_dev.client_handler import ClientHandler from gprofiler_dev.postgres.db_manager import DBManager from gprofiler_dev.postgres.schemas import AgentMetadata, GetServiceResponse from gprofiler_dev.profiles_utils import get_gprofiler_metadata_utils, get_gprofiler_utils @@ -53,6 +53,8 @@ def new_profile_v2( gprofiler_api_key: str = Header(...), gprofiler_service_name: str = Header(...), ): + hostname = "unknown" + try: service_name, token_id = get_service_by_api_key(gprofiler_api_key, gprofiler_service_name) if not service_name: @@ -145,7 +147,7 @@ def new_profile_v2( raise HTTPException(400, {"message": error_msg}) tags.append(f"{HOSTNAME_KEY}:{hostname}") - except Exception: + except Exception as e: exception_msg = "Failed to parse v2 metadata" logger.exception(exception_msg) raise HTTPException(400, {"message": exception_msg}) @@ -164,7 +166,12 @@ def new_profile_v2( profile_file_size = len(profile_data) compressed_profile = gzip.compress(profile_data) compressed_profile_file_size = len(compressed_profile) - client_handler.write_file(profile_file_path, compressed_profile) + 
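+        # Surface storage failures explicitly: a failed S3 write becomes a 500 instead of an unhandled exception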
+ try: + client_handler.write_file(profile_file_path, compressed_profile) + except Exception as s3_error: + logger.error(f"Failed to write profile to S3: {s3_error}") + raise HTTPException(500, {"message": "Failed to store profile data"}) service_sample_threshold = db_manager.get_service_sample_threshold_by_id(service_id) random_value = random.uniform(0.0, 1.0) @@ -190,14 +197,21 @@ def new_profile_v2( try: sqs.send_message(QueueUrl=config.SQS_INDEXER_QUEUE_URL, MessageBody=json.dumps(msg)) logger.info("send task to queue", extra=extra_info) - except sqs.exceptions.QueueDoesNotExist: - logger.error(f"Queue `{config.SQS_INDEXER_QUEUE_URL}` does not exist, failed to send message {msg}") + except Exception as sqs_error: + logger.error(f"Failed to send message to SQS: {sqs_error}") + raise HTTPException(500, {"message": "Failed to process profile upload"}) else: logger.info("drop task due sampling", extra=extra_info) except KeyError as key_error: + # Client error - missing parameter raise HTTPException(400, {"message": f"Missing parameter {key_error}"}) - except Exception: + except HTTPException: + # HTTPException already handled above (with ignored_failure metric) + # Let FastAPI handle it without sending additional metrics + raise + except Exception as e: + # Server error - counts against SLO if os.path.exists(".debug"): import sys import traceback @@ -216,5 +230,8 @@ def new_profile_v2( f"An error has occurred while trying to prepare service: {service_name} " f"client: {client_handler} " + repr(e) ) + # Server error - counts against SLO raise HTTPException(400, {"message": "Failed to register the new service"}) + + # Success - profile uploaded and processed successfully return ProfileResponse(message="ok", gpid=int(gpid)) diff --git a/src/gprofiler/backend/utils/download_external.py b/src/gprofiler/backend/utils/download_external.py index 0e4a0151..668c7e9a 100644 --- a/src/gprofiler/backend/utils/download_external.py +++ b/src/gprofiler/backend/utils/download_external.py @@ -29,9 +29,7 @@ DAEMON_SET_URL = "https://raw.githubusercontent.com/intel/gprofiler/master/deploy/k8s/gprofiler.yaml" ECS_URL = "https://raw.githubusercontent.com/intel/gprofiler/blob/master/deploy/ecs/gprofiler_task_definition.json" ANSIBLE_URL = "https://raw.githubusercontent.com/intel/gprofiler/master/deploy/ansible/gprofiler_playbook.yml" -DOCKER_COMPOSE_URL = ( - "https://raw.githubusercontent.com/intel/gprofiler/master/deploy/docker-compose/docker-compose.yml" -) +DOCKER_COMPOSE_URL = "https://raw.githubusercontent.com/intel/gprofiler/master/deploy/docker-compose/docker-compose.yml" GH_PUBLISH_DATE_REQUEST_TIMEOUT = 30 FILE_URLS = { diff --git a/src/gprofiler/backend/utils/notifications.py b/src/gprofiler/backend/utils/notifications.py new file mode 100644 index 00000000..d9a4f3e9 --- /dev/null +++ b/src/gprofiler/backend/utils/notifications.py @@ -0,0 +1,398 @@ +""" +Slack notification utilities for sending messages to Slack channels. +""" + +import logging +from typing import Dict, List, Optional, Any, Union +from slack_sdk import WebClient +from slack_sdk.errors import SlackApiError + +from ..config import SLACK_BOT_TOKEN, SLACK_CHANNELS, DEFAULT_SLACK_CHANNELS + + +logger = logging.getLogger(__name__) + + +class SlackNotifier: + """ + A utility class for sending notifications to Slack channels using the Slack SDK. + + This class provides methods to send various types of messages to Slack channels, + including basic text messages and rich messages with blocks and attachments. 
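+
+    A minimal usage sketch (assumes SLACK_BOT_TOKEN and SLACK_CHANNELS are
+    configured; the channel name is illustrative):
+
+        notifier = SlackNotifier()
+        notifier.send_message("profiling run finished", channel="#perf-studio")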
+ """ + + def __init__(self, token: Optional[str] = None, default_channel: Optional[str] = None): + """ + Initialize the SlackNotifier with a bot token. + + Args: + token: Slack bot token (starts with 'xoxb-'). If not provided, reads from SLACK_BOT_TOKEN environment variable. + default_channel: Default channel to send messages to (e.g., '#general', '@user', 'C1234567890'). + If not provided, uses the first channel from SLACK_CHANNELS config. + """ + # Use provided token or fall back to config + bot_token = token or SLACK_BOT_TOKEN + + if not bot_token: + raise ValueError("Slack bot token must be provided either as parameter or via SLACK_BOT_TOKEN environment variable") + + self.client = WebClient(token=bot_token) + + # Set default channel - use provided, or first from config, or fallback to hardcoded default + if default_channel: + self.default_channel = default_channel + elif SLACK_CHANNELS: + self.default_channel = SLACK_CHANNELS[0].strip() + else: + self.default_channel = DEFAULT_SLACK_CHANNELS[0] + + # Test the connection + try: + response = self.client.auth_test() + logger.info(f"Connected to Slack as {response['user']}") + except SlackApiError as e: + logger.error(f"Failed to authenticate with Slack: {e.response['error']}") + raise + + def send_message( + self, + text: str, + channel: Optional[str] = None, + thread_ts: Optional[str] = None, + **kwargs + ) -> Dict[str, Any]: + """ + Send a basic text message to a Slack channel. + + Args: + text: The message text to send + channel: Target channel (uses default_channel if not provided) + thread_ts: Timestamp of parent message to reply in thread + **kwargs: Additional arguments to pass to the Slack API + + Returns: + Dict containing the Slack API response + + Raises: + SlackApiError: If the Slack API returns an error + ValueError: If no channel is specified and no default channel is set + """ + target_channel = channel or self.default_channel + if not target_channel: + raise ValueError("No channel specified and no default channel set") + + try: + response = self.client.chat_postMessage( + channel=target_channel, + text=text, + thread_ts=thread_ts, + **kwargs + ) + logger.info(f"Message sent successfully to {target_channel}") + return response + except SlackApiError as e: + logger.error(f"Failed to send message to {target_channel}: {e.response['error']}") + raise + + def send_rich_message( + self, + blocks: List[Dict[str, Any]] = None, + attachments: List[Dict[str, Any]] = None, + text: str = "", + channel: Optional[str] = None, + thread_ts: Optional[str] = None, + **kwargs + ) -> Dict[str, Any]: + """ + Send a rich message with blocks and/or attachments to a Slack channel. 
+ + Args: + blocks: List of block elements for rich formatting + attachments: List of legacy attachments + text: Fallback text for notifications + channel: Target channel (uses default_channel if not provided) + thread_ts: Timestamp of parent message to reply in thread + **kwargs: Additional arguments to pass to the Slack API + + Returns: + Dict containing the Slack API response + + Raises: + SlackApiError: If the Slack API returns an error + ValueError: If no channel is specified and no default channel is set + """ + target_channel = channel or self.default_channel + if not target_channel: + raise ValueError("No channel specified and no default channel set") + + try: + response = self.client.chat_postMessage( + channel=target_channel, + text=text, + blocks=blocks, + attachments=attachments, + thread_ts=thread_ts, + **kwargs + ) + logger.info(f"Rich message sent successfully to {target_channel}") + return response + except SlackApiError as e: + logger.error(f"Failed to send rich message to {target_channel}: {e.response['error']}") + raise + + def send_alert( + self, + title: str, + message: str, + severity: str = "info", + channel: Optional[str] = None, + additional_fields: Optional[Dict[str, str]] = None + ) -> Dict[str, Any]: + """ + Send an alert message with consistent formatting. + + Args: + title: Alert title + message: Alert message body + severity: Alert severity ('info', 'warning', 'error', 'success') + channel: Target channel (uses default_channel if not provided) + additional_fields: Additional fields to include in the alert + + Returns: + Dict containing the Slack API response + """ + # Color mapping for different severities + color_map = { + "info": "#36a64f", # Green + "warning": "#ff9900", # Orange + "error": "#ff0000", # Red + "success": "#36a64f" # Green + } + + color = color_map.get(severity, "#36a64f") + + # Build attachment fields + fields = [] + if additional_fields: + fields = [ + {"title": key, "value": value, "short": True} + for key, value in additional_fields.items() + ] + + attachments = [{ + "color": color, + "title": title, + "text": message, + "fields": fields, + "footer": "gProfiler Performance Studio", + "ts": int(__import__('time').time()) + }] + + return self.send_rich_message( + attachments=attachments, + text=f"{title}: {message}", + channel=channel + ) + + def send_performance_alert( + self, + service_name: str, + metric_name: str, + current_value: Union[str, float], + threshold: Union[str, float], + severity: str = "warning", + channel: Optional[str] = None + ) -> Dict[str, Any]: + """ + Send a performance-related alert with structured data. 
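+
+        Illustrative call (service, metric, and threshold values are made up):
+
+            notifier.send_performance_alert("my-service", "p99_latency_ms", 830, 500)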
+ + Args: + service_name: Name of the service experiencing issues + metric_name: Name of the performance metric + current_value: Current value of the metric + threshold: Threshold that was exceeded + severity: Alert severity level + channel: Target channel (uses default_channel if not provided) + + Returns: + Dict containing the Slack API response + """ + title = f"Performance Alert: {service_name}" + message = f"Metric `{metric_name}` has exceeded threshold" + + additional_fields = { + "Service": service_name, + "Metric": metric_name, + "Current Value": str(current_value), + "Threshold": str(threshold), + "Severity": severity.upper() + } + + return self.send_alert( + title=title, + message=message, + severity=severity, + channel=channel, + additional_fields=additional_fields + ) + + def update_message( + self, + ts: str, + channel: str, + text: Optional[str] = None, + blocks: Optional[List[Dict[str, Any]]] = None, + attachments: Optional[List[Dict[str, Any]]] = None + ) -> Dict[str, Any]: + """ + Update an existing message. + + Args: + ts: Timestamp of the message to update + channel: Channel containing the message + text: New text content + blocks: New block elements + attachments: New attachments + + Returns: + Dict containing the Slack API response + """ + try: + response = self.client.chat_update( + ts=ts, + channel=channel, + text=text, + blocks=blocks, + attachments=attachments + ) + logger.info(f"Message updated successfully in {channel}") + return response + except SlackApiError as e: + logger.error(f"Failed to update message in {channel}: {e.response['error']}") + raise + + def delete_message(self, ts: str, channel: str) -> Dict[str, Any]: + """ + Delete a message. + + Args: + ts: Timestamp of the message to delete + channel: Channel containing the message + + Returns: + Dict containing the Slack API response + """ + try: + response = self.client.chat_delete(ts=ts, channel=channel) + logger.info(f"Message deleted successfully from {channel}") + return response + except SlackApiError as e: + logger.error(f"Failed to delete message from {channel}: {e.response['error']}") + raise + + def get_available_channels(self) -> List[str]: + """ + Get the list of available channels from configuration. + + Returns: + List of channel names configured via SLACK_CHANNELS environment variable + or the default channels if not configured. + """ + return [channel.strip() for channel in SLACK_CHANNELS if channel.strip()] + + def is_valid_channel(self, channel: str) -> bool: + """ + Check if a channel is in the list of available channels. + + Args: + channel: Channel name to validate + + Returns: + True if channel is in the available channels list, False otherwise + """ + available_channels = self.get_available_channels() + return channel in available_channels + + def send_to_all_channels( + self, + text: str, + thread_ts: Optional[str] = None, + **kwargs + ) -> List[Dict[str, Any]]: + """ + Send a message to all configured channels. 
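+
+        Illustrative call (the message text is made up):
+
+            notifier.send_to_all_channels("profiling request submitted")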
+ + Args: + text: The message text to send + thread_ts: Timestamp of parent message to reply in thread + **kwargs: Additional arguments to pass to the Slack API + + Returns: + List of responses from each channel + + Raises: + SlackApiError: If the Slack API returns an error for any channel + """ + responses = [] + available_channels = self.get_available_channels() + + for channel in available_channels: + try: + response = self.send_message( + text=text, + channel=channel, + thread_ts=thread_ts, + **kwargs + ) + responses.append(response) + except SlackApiError as e: + logger.error(f"Failed to send message to {channel}: {e.response['error']}") + # Continue with other channels even if one fails + responses.append({"error": str(e), "channel": channel}) + + return responses + + def send_rich_message_to_all_channels( + self, + blocks: List[Dict[str, Any]] = None, + attachments: List[Dict[str, Any]] = None, + text: str = "", + thread_ts: Optional[str] = None, + **kwargs + ) -> List[Dict[str, Any]]: + """ + Send a rich message with blocks and/or attachments to all configured channels. + + Args: + blocks: List of block elements for rich formatting + attachments: List of legacy attachments + text: Fallback text for notifications + thread_ts: Timestamp of parent message to reply in thread + **kwargs: Additional arguments to pass to the Slack API + + Returns: + List of responses from each channel + + Raises: + SlackApiError: If the Slack API returns an error for any channel + """ + responses = [] + available_channels = self.get_available_channels() + + for channel in available_channels: + try: + response = self.send_rich_message( + blocks=blocks, + attachments=attachments, + text=text, + channel=channel, + thread_ts=thread_ts, + **kwargs + ) + responses.append(response) + except SlackApiError as e: + logger.error(f"Failed to send rich message to {channel}: {e.response['error']}") + # Continue with other channels even if one fails + responses.append({"error": str(e), "channel": channel}) + + return responses diff --git a/src/gprofiler/frontend/package-lock.json b/src/gprofiler/frontend/package-lock.json new file mode 100644 index 00000000..2b2f5ce2 --- /dev/null +++ b/src/gprofiler/frontend/package-lock.json @@ -0,0 +1,5976 @@ +{ + "name": "gprofiler", + "version": "1.0.0", + "lockfileVersion": 1, + "requires": true, + "dependencies": { + "@ampproject/remapping": { + "version": "2.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@ampproject/remapping/-/remapping-2.3.0.tgz", + "integrity": "sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw==", + "dev": true, + "requires": { + "@jridgewell/gen-mapping": "^0.3.5", + "@jridgewell/trace-mapping": "^0.3.24" + } + }, + "@babel/code-frame": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/code-frame/-/code-frame-7.27.1.tgz", + "integrity": "sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg==", + "requires": { + "@babel/helper-validator-identifier": "^7.27.1", + "js-tokens": "^4.0.0", + "picocolors": "^1.1.1" + } + }, + "@babel/compat-data": { + "version": "7.28.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/compat-data/-/compat-data-7.28.0.tgz", + "integrity": "sha512-60X7qkglvrap8mn1lh2ebxXdZYtUcpd7gsmy9kLaBJ4i/WdY8PqTSdxyA8qraikqKQK5C1KRBKXqznrVapyNaw==", + 
"dev": true + }, + "@babel/core": { + "version": "7.28.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/core/-/core-7.28.0.tgz", + "integrity": "sha512-UlLAnTPrFdNGoFtbSXwcGFQBtQZJCNjaN6hQNP3UPvuNXT1i82N26KL3dZeIpNalWywr9IuQuncaAfUaS1g6sQ==", + "dev": true, + "requires": { + "@ampproject/remapping": "^2.2.0", + "@babel/code-frame": "^7.27.1", + "@babel/generator": "^7.28.0", + "@babel/helper-compilation-targets": "^7.27.2", + "@babel/helper-module-transforms": "^7.27.3", + "@babel/helpers": "^7.27.6", + "@babel/parser": "^7.28.0", + "@babel/template": "^7.27.2", + "@babel/traverse": "^7.28.0", + "@babel/types": "^7.28.0", + "convert-source-map": "^2.0.0", + "debug": "^4.1.0", + "gensync": "^1.0.0-beta.2", + "json5": "^2.2.3", + "semver": "^6.3.1" + }, + "dependencies": { + "convert-source-map": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/convert-source-map/-/convert-source-map-2.0.0.tgz", + "integrity": "sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==", + "dev": true + } + } + }, + "@babel/generator": { + "version": "7.28.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/generator/-/generator-7.28.0.tgz", + "integrity": "sha512-lJjzvrbEeWrhB4P3QBsH7tey117PjLZnDbLiQEKjQ/fNJTjuq4HSqgFA+UNSwZT8D7dxxbnuSBMsa1lrWzKlQg==", + "requires": { + "@babel/parser": "^7.28.0", + "@babel/types": "^7.28.0", + "@jridgewell/gen-mapping": "^0.3.12", + "@jridgewell/trace-mapping": "^0.3.28", + "jsesc": "^3.0.2" + } + }, + "@babel/helper-annotate-as-pure": { + "version": "7.27.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.27.3.tgz", + "integrity": "sha512-fXSwMQqitTGeHLBC08Eq5yXz2m37E4pJX1qAU1+2cNedz/ifv/bVXft90VeSav5nFO61EcNgwr0aJxbyPaWBPg==", + "dev": true, + "requires": { + "@babel/types": "^7.27.3" + } + }, + "@babel/helper-compilation-targets": { + "version": "7.27.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-compilation-targets/-/helper-compilation-targets-7.27.2.tgz", + "integrity": "sha512-2+1thGUUWWjLTYTHZWK1n8Yga0ijBz1XAhUXcKy81rd5g6yh7hGqMp45v7cadSbEHc9G3OTv45SyneRN3ps4DQ==", + "dev": true, + "requires": { + "@babel/compat-data": "^7.27.2", + "@babel/helper-validator-option": "^7.27.1", + "browserslist": "^4.24.0", + "lru-cache": "^5.1.1", + "semver": "^6.3.1" + } + }, + "@babel/helper-globals": { + "version": "7.28.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-globals/-/helper-globals-7.28.0.tgz", + "integrity": "sha512-+W6cISkXFa1jXsDEdYA8HeevQT/FULhxzR99pxphltZcVaugps53THCeiWA8SguxxpSp3gKPiuYfSWopkLQ4hw==" + }, + "@babel/helper-module-imports": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-module-imports/-/helper-module-imports-7.27.1.tgz", + "integrity": "sha512-0gSFWUPNXNopqtIPQvlD5WgXYI5GY2kP2cCvoT8kczjbfcfuIljTbcWrulD1CIPIX2gt1wghbDy08yE1p+/r3w==", + "requires": { + "@babel/traverse": "^7.27.1", + "@babel/types": "^7.27.1" + } + }, + "@babel/helper-module-transforms": { + "version": "7.27.3", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-module-transforms/-/helper-module-transforms-7.27.3.tgz", + "integrity": "sha512-dSOvYwvyLsWBeIRyOeHXp5vPj5l1I011r52FM1+r1jCERv+aFXYk4whgQccYEGYxK2H3ZAIA8nuPkQ0HaUo3qg==", + "dev": true, + "requires": { + "@babel/helper-module-imports": "^7.27.1", + "@babel/helper-validator-identifier": "^7.27.1", + "@babel/traverse": "^7.27.3" + } + }, + "@babel/helper-plugin-utils": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-plugin-utils/-/helper-plugin-utils-7.27.1.tgz", + "integrity": "sha512-1gn1Up5YXka3YYAHGKpbideQ5Yjf1tDa9qYcgysz+cNCXukyLl6DjPXhD3VRwSb8c0J9tA4b2+rHEZtc6R0tlw==", + "dev": true + }, + "@babel/helper-string-parser": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz", + "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==" + }, + "@babel/helper-validator-identifier": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-validator-identifier/-/helper-validator-identifier-7.27.1.tgz", + "integrity": "sha512-D2hP9eA+Sqx1kBZgzxZh0y1trbuU+JoDkiEwqhQ36nodYqJwyEIhPSdMNd7lOm/4io72luTPWH20Yda0xOuUow==" + }, + "@babel/helper-validator-option": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helper-validator-option/-/helper-validator-option-7.27.1.tgz", + "integrity": "sha512-YvjJow9FxbhFFKDSuFnVCe2WxXk1zWc22fFePVNEaWJEu8IrZVlda6N0uHwzZrUM1il7NC9Mlp4MaJYbYd9JSg==", + "dev": true + }, + "@babel/helpers": { + "version": "7.28.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/helpers/-/helpers-7.28.2.tgz", + "integrity": "sha512-/V9771t+EgXz62aCcyofnQhGM8DQACbRhvzKFsXKC9QM+5MadF8ZmIm0crDMaz3+o0h0zXfJnd4EhbYbxsrcFw==", + "dev": true, + "requires": { + "@babel/template": "^7.27.2", + "@babel/types": "^7.28.2" + } + }, + "@babel/highlight": { + "version": "7.25.9", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/highlight/-/highlight-7.25.9.tgz", + "integrity": "sha512-llL88JShoCsth8fF8R4SJnIn+WLvR6ccFxu1H3FlMhDontdcmZWf2HgIZ7AIqV3Xcck1idlohrN4EUBQz6klbw==", + "dev": true, + "requires": { + "@babel/helper-validator-identifier": "^7.25.9", + "chalk": "^2.4.2", + "js-tokens": "^4.0.0", + "picocolors": "^1.0.0" + }, + "dependencies": { + "chalk": { + "version": "2.4.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dev": true, + "requires": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + } + }, + "escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "dev": true + } + } + }, + "@babel/parser": { + "version": 
"7.28.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/parser/-/parser-7.28.0.tgz", + "integrity": "sha512-jVZGvOxOuNSsuQuLRTh13nU0AogFlw32w/MT+LV6D3sP5WdbW61E77RnkbaO2dUvmPAYrBDJXGn5gGS6tH4j8g==", + "requires": { + "@babel/types": "^7.28.0" + } + }, + "@babel/plugin-syntax-jsx": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.27.1.tgz", + "integrity": "sha512-y8YTNIeKoyhGd9O0Jiyzyyqk8gdjnumGTQPsz0xOZOQ2RmkVJeZ1vmmfIvFEKqucBG6axJGBZDE/7iI5suUI/w==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.27.1" + } + }, + "@babel/plugin-transform-react-jsx": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/plugin-transform-react-jsx/-/plugin-transform-react-jsx-7.27.1.tgz", + "integrity": "sha512-2KH4LWGSrJIkVf5tSiBFYuXDAoWRq2MMwgivCf+93dd0GQi8RXLjKA/0EvRnVV5G0hrHczsquXuD01L8s6dmBw==", + "dev": true, + "requires": { + "@babel/helper-annotate-as-pure": "^7.27.1", + "@babel/helper-module-imports": "^7.27.1", + "@babel/helper-plugin-utils": "^7.27.1", + "@babel/plugin-syntax-jsx": "^7.27.1", + "@babel/types": "^7.27.1" + } + }, + "@babel/plugin-transform-react-jsx-development": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/plugin-transform-react-jsx-development/-/plugin-transform-react-jsx-development-7.27.1.tgz", + "integrity": "sha512-ykDdF5yI4f1WrAolLqeF3hmYU12j9ntLQl/AOG1HAS21jxyg1Q0/J/tpREuYLfatGdGmXp/3yS0ZA76kOlVq9Q==", + "dev": true, + "requires": { + "@babel/plugin-transform-react-jsx": "^7.27.1" + } + }, + "@babel/plugin-transform-react-jsx-self": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/plugin-transform-react-jsx-self/-/plugin-transform-react-jsx-self-7.27.1.tgz", + "integrity": "sha512-6UzkCs+ejGdZ5mFFC/OCUrv028ab2fp1znZmCZjAOBKiBK2jXD1O+BPSfX8X2qjJ75fZBMSnQn3Rq2mrBJK2mw==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.27.1" + } + }, + "@babel/plugin-transform-react-jsx-source": { + "version": "7.27.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/plugin-transform-react-jsx-source/-/plugin-transform-react-jsx-source-7.27.1.tgz", + "integrity": "sha512-zbwoTsBruTeKB9hSq73ha66iFeJHuaFkUbwvqElnygoNbj/jHRsSeokowZFN3CZ64IvEqcmmkVe89OPXc7ldAw==", + "dev": true, + "requires": { + "@babel/helper-plugin-utils": "^7.27.1" + } + }, + "@babel/runtime": { + "version": "7.28.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/runtime/-/runtime-7.28.2.tgz", + "integrity": "sha512-KHp2IflsnGywDjBWDkR9iEqiWSpc8GIi0lgTT3mOElT0PP1tG26P4tmFI2YvAdzgq9RGyoHZQEIEdZy6Ec5xCA==" + }, + "@babel/template": { + "version": "7.27.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/template/-/template-7.27.2.tgz", + "integrity": "sha512-LPDZ85aEJyYSd18/DkjNh4/y1ntkE5KwUHWTiqgRxruuZL2F1yuHligVHLvcHY2vMHXttKFpJn6LwfI7cw7ODw==", + "requires": { + "@babel/code-frame": "^7.27.1", + "@babel/parser": "^7.27.2", + "@babel/types": "^7.27.1" + } + }, + "@babel/traverse": { + "version": "7.28.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/traverse/-/traverse-7.28.0.tgz", + "integrity": "sha512-mGe7UK5wWyh0bKRfupsUchrQGqvDbZDbKJw+kcRGSmdHVYrv+ltd0pnpDTVpiTqnaBru9iEvA8pz8W46v0Amwg==", + "requires": { + "@babel/code-frame": "^7.27.1", + "@babel/generator": "^7.28.0", + "@babel/helper-globals": "^7.28.0", + "@babel/parser": "^7.28.0", + "@babel/template": "^7.27.2", + "@babel/types": "^7.28.0", + "debug": "^4.3.1" + } + }, + "@babel/types": { + "version": "7.28.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/types/-/types-7.28.2.tgz", + "integrity": "sha512-ruv7Ae4J5dUYULmeXw1gmb7rYRz57OWCPM57pHojnLq/3Z1CK2lNSLTCVjxVk1F/TZHwOZZrOWi0ur95BbLxNQ==", + "requires": { + "@babel/helper-string-parser": "^7.27.1", + "@babel/helper-validator-identifier": "^7.27.1" + } + }, + "@date-io/core": { + "version": "2.17.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@date-io/core/-/core-2.17.0.tgz", + "integrity": "sha512-+EQE8xZhRM/hsY0CDTVyayMDDY5ihc4MqXCrPxooKw19yAzUIC6uUqsZeaOFNL9YKTNxYKrJP5DFgE8o5xRCOw==" + }, + "@date-io/date-fns": { + "version": "2.17.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@date-io/date-fns/-/date-fns-2.17.0.tgz", + "integrity": "sha512-L0hWZ/mTpy3Gx/xXJ5tq5CzHo0L7ry6KEO9/w/JWiFWFLZgiNVo3ex92gOl3zmzjHqY/3Ev+5sehAr8UnGLEng==", + "requires": { + "@date-io/core": "^2.17.0" + } + }, + "@emotion/babel-plugin": { + "version": "11.13.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/babel-plugin/-/babel-plugin-11.13.5.tgz", + "integrity": "sha512-pxHCpT2ex+0q+HH91/zsdHkw/lXd468DIN2zvfvLtPKLLMo6gQj7oLObq8PhkrxOZb/gGCq03S3Z7PDhS8pduQ==", + "requires": { + "@babel/helper-module-imports": "^7.16.7", + "@babel/runtime": "^7.18.3", + "@emotion/hash": "^0.9.2", + "@emotion/memoize": "^0.9.0", + "@emotion/serialize": "^1.3.3", + "babel-plugin-macros": "^3.1.0", + "convert-source-map": "^1.5.0", + "escape-string-regexp": "^4.0.0", + "find-root": "^1.1.0", + "source-map": "^0.5.7", + "stylis": "4.2.0" + } + }, + "@emotion/cache": { + "version": "11.14.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/cache/-/cache-11.14.0.tgz", + "integrity": "sha512-L/B1lc/TViYk4DcpGxtAVbx0ZyiKM5ktoIyafGkH6zg/tj+mA+NE//aPYKG0k8kCHSHVJrpLpcAlOBEXQ3SavA==", + "requires": { + "@emotion/memoize": "^0.9.0", + "@emotion/sheet": "^1.4.0", + "@emotion/utils": "^1.4.2", + "@emotion/weak-memoize": "^0.4.0", + "stylis": "4.2.0" + } + }, + "@emotion/hash": { + "version": "0.9.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/hash/-/hash-0.9.2.tgz", + "integrity": "sha512-MyqliTZGuOm3+5ZRSaaBGP3USLw6+EGykkwZns2EPC5g8jJ4z9OrdZY9apkl3+UP9+sdz76YYkwCKP5gh8iY3g==" + }, + "@emotion/is-prop-valid": { + "version": "1.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/is-prop-valid/-/is-prop-valid-1.3.1.tgz", + "integrity": "sha512-/ACwoqx7XQi9knQs/G0qKvv5teDMhD7bXYns9N/wM8ah8iNb8jZ2uNO0YOgiq2o2poIvVtJS2YALasQuMSQ7Kw==", + "requires": { + "@emotion/memoize": "^0.9.0" + } + }, + "@emotion/memoize": { + "version": "0.9.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/memoize/-/memoize-0.9.0.tgz", + "integrity": "sha512-30FAj7/EoJ5mwVPOWhAyCX+FPfMDrVecJAM+Iw9NRoSl4BBAQeqj4cApHHUXOVvIPgLVDsCFoz/hGD+5QQD1GQ==" + }, + "@emotion/react": { + "version": "11.14.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/react/-/react-11.14.0.tgz", + "integrity": "sha512-O000MLDBDdk/EohJPFUqvnp4qnHeYkVP5B0xEG0D/L7cOKP9kefu2DXn8dj74cQfsEzUqh+sr1RzFqiL1o+PpA==", + "requires": { + "@babel/runtime": "^7.18.3", + "@emotion/babel-plugin": "^11.13.5", + "@emotion/cache": "^11.14.0", + "@emotion/serialize": "^1.3.3", + "@emotion/use-insertion-effect-with-fallbacks": "^1.2.0", + "@emotion/utils": "^1.4.2", + "@emotion/weak-memoize": "^0.4.0", + "hoist-non-react-statics": "^3.3.1" + } + }, + "@emotion/serialize": { + "version": "1.3.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/serialize/-/serialize-1.3.3.tgz", + "integrity": "sha512-EISGqt7sSNWHGI76hC7x1CksiXPahbxEOrC5RjmFRJTqLyEK9/9hZvBbiYn70dw4wuwMKiEMCUlR6ZXTSWQqxA==", + "requires": { + "@emotion/hash": "^0.9.2", + "@emotion/memoize": "^0.9.0", + "@emotion/unitless": "^0.10.0", + "@emotion/utils": "^1.4.2", + "csstype": "^3.0.2" + } + }, + "@emotion/sheet": { + "version": "1.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/sheet/-/sheet-1.4.0.tgz", + "integrity": "sha512-fTBW9/8r2w3dXWYM4HCB1Rdp8NLibOw2+XELH5m5+AkWiL/KqYX6dc0kKYlaYyKjrQ6ds33MCdMPEwgs2z1rqg==" + }, + "@emotion/styled": { + "version": "11.14.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/styled/-/styled-11.14.1.tgz", + "integrity": "sha512-qEEJt42DuToa3gurlH4Qqc1kVpNq8wO8cJtDzU46TjlzWjDlsVyevtYCRijVq3SrHsROS+gVQ8Fnea108GnKzw==", + "requires": { + "@babel/runtime": "^7.18.3", + "@emotion/babel-plugin": "^11.13.5", + "@emotion/is-prop-valid": "^1.3.0", + "@emotion/serialize": "^1.3.3", + "@emotion/use-insertion-effect-with-fallbacks": "^1.2.0", + "@emotion/utils": "^1.4.2" + } + }, + "@emotion/unitless": { + "version": "0.10.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/unitless/-/unitless-0.10.0.tgz", + "integrity": "sha512-dFoMUuQA20zvtVTuxZww6OHoJYgrzfKM1t52mVySDJnMSEa08ruEvdYQbhvyu6soU+NeLVd3yKfTfT0NeV6qGg==" + }, + "@emotion/use-insertion-effect-with-fallbacks": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/use-insertion-effect-with-fallbacks/-/use-insertion-effect-with-fallbacks-1.2.0.tgz", + "integrity": "sha512-yJMtVdH59sxi/aVJBpk9FQq+OR8ll5GT8oWd57UpeaKEVGab41JWaCFA7FRLoMLloOZF/c/wsPoe+bfGmRKgDg==" + }, + "@emotion/utils": { + "version": "1.4.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/utils/-/utils-1.4.2.tgz", + "integrity": "sha512-3vLclRofFziIa3J2wDh9jjbkUz9qk5Vi3IZ/FSTKViB0k+ef0fPV7dYrUIugbgupYDx7v9ud/SjrtEP8Y4xLoA==" + }, + "@emotion/weak-memoize": { + "version": "0.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/weak-memoize/-/weak-memoize-0.4.0.tgz", + "integrity": "sha512-snKqtPW01tN0ui7yu9rGv69aJXr/a/Ywvl11sUjNtEcRc+ng/mQriFL0wLXMef74iHa/EkftbDzU9F8iFbH+zg==" + }, + 
"@esbuild/aix-ppc64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/aix-ppc64/-/aix-ppc64-0.25.8.tgz", + "integrity": "sha512-urAvrUedIqEiFR3FYSLTWQgLu5tb+m0qZw0NBEasUeo6wuqatkMDaRT+1uABiGXEu5vqgPd7FGE1BhsAIy9QVA==", + "dev": true, + "optional": true + }, + "@esbuild/android-arm": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/android-arm/-/android-arm-0.25.8.tgz", + "integrity": "sha512-RONsAvGCz5oWyePVnLdZY/HHwA++nxYWIX1atInlaW6SEkwq6XkP3+cb825EUcRs5Vss/lGh/2YxAb5xqc07Uw==", + "dev": true, + "optional": true + }, + "@esbuild/android-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/android-arm64/-/android-arm64-0.25.8.tgz", + "integrity": "sha512-OD3p7LYzWpLhZEyATcTSJ67qB5D+20vbtr6vHlHWSQYhKtzUYrETuWThmzFpZtFsBIxRvhO07+UgVA9m0i/O1w==", + "dev": true, + "optional": true + }, + "@esbuild/android-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/android-x64/-/android-x64-0.25.8.tgz", + "integrity": "sha512-yJAVPklM5+4+9dTeKwHOaA+LQkmrKFX96BM0A/2zQrbS6ENCmxc4OVoBs5dPkCCak2roAD+jKCdnmOqKszPkjA==", + "dev": true, + "optional": true + }, + "@esbuild/darwin-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/darwin-arm64/-/darwin-arm64-0.25.8.tgz", + "integrity": "sha512-Jw0mxgIaYX6R8ODrdkLLPwBqHTtYHJSmzzd+QeytSugzQ0Vg4c5rDky5VgkoowbZQahCbsv1rT1KW72MPIkevw==", + "dev": true, + "optional": true + }, + "@esbuild/darwin-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/darwin-x64/-/darwin-x64-0.25.8.tgz", + "integrity": "sha512-Vh2gLxxHnuoQ+GjPNvDSDRpoBCUzY4Pu0kBqMBDlK4fuWbKgGtmDIeEC081xi26PPjn+1tct+Bh8FjyLlw1Zlg==", + "dev": true, + "optional": true + }, + "@esbuild/freebsd-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/freebsd-arm64/-/freebsd-arm64-0.25.8.tgz", + "integrity": "sha512-YPJ7hDQ9DnNe5vxOm6jaie9QsTwcKedPvizTVlqWG9GBSq+BuyWEDazlGaDTC5NGU4QJd666V0yqCBL2oWKPfA==", + "dev": true, + "optional": true + }, + "@esbuild/freebsd-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/freebsd-x64/-/freebsd-x64-0.25.8.tgz", + "integrity": "sha512-MmaEXxQRdXNFsRN/KcIimLnSJrk2r5H8v+WVafRWz5xdSVmWLoITZQXcgehI2ZE6gioE6HirAEToM/RvFBeuhw==", + "dev": true, + "optional": true + }, + "@esbuild/linux-arm": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-arm/-/linux-arm-0.25.8.tgz", + "integrity": "sha512-FuzEP9BixzZohl1kLf76KEVOsxtIBFwCaLupVuk4eFVnOZfU+Wsn+x5Ryam7nILV2pkq2TqQM9EZPsOBuMC+kg==", + "dev": true, + "optional": true + }, + "@esbuild/linux-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-arm64/-/linux-arm64-0.25.8.tgz", + "integrity": "sha512-WIgg00ARWv/uYLU7lsuDK00d/hHSfES5BzdWAdAig1ioV5kaFNrtK8EqGcUBJhYqotlUByUKz5Qo6u8tt7iD/w==", + "dev": true, + "optional": true + }, + 
"@esbuild/linux-ia32": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-ia32/-/linux-ia32-0.25.8.tgz", + "integrity": "sha512-A1D9YzRX1i+1AJZuFFUMP1E9fMaYY+GnSQil9Tlw05utlE86EKTUA7RjwHDkEitmLYiFsRd9HwKBPEftNdBfjg==", + "dev": true, + "optional": true + }, + "@esbuild/linux-loong64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-loong64/-/linux-loong64-0.25.8.tgz", + "integrity": "sha512-O7k1J/dwHkY1RMVvglFHl1HzutGEFFZ3kNiDMSOyUrB7WcoHGf96Sh+64nTRT26l3GMbCW01Ekh/ThKM5iI7hQ==", + "dev": true, + "optional": true + }, + "@esbuild/linux-mips64el": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-mips64el/-/linux-mips64el-0.25.8.tgz", + "integrity": "sha512-uv+dqfRazte3BzfMp8PAQXmdGHQt2oC/y2ovwpTteqrMx2lwaksiFZ/bdkXJC19ttTvNXBuWH53zy/aTj1FgGw==", + "dev": true, + "optional": true + }, + "@esbuild/linux-ppc64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-ppc64/-/linux-ppc64-0.25.8.tgz", + "integrity": "sha512-GyG0KcMi1GBavP5JgAkkstMGyMholMDybAf8wF5A70CALlDM2p/f7YFE7H92eDeH/VBtFJA5MT4nRPDGg4JuzQ==", + "dev": true, + "optional": true + }, + "@esbuild/linux-riscv64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-riscv64/-/linux-riscv64-0.25.8.tgz", + "integrity": "sha512-rAqDYFv3yzMrq7GIcen3XP7TUEG/4LK86LUPMIz6RT8A6pRIDn0sDcvjudVZBiiTcZCY9y2SgYX2lgK3AF+1eg==", + "dev": true, + "optional": true + }, + "@esbuild/linux-s390x": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-s390x/-/linux-s390x-0.25.8.tgz", + "integrity": "sha512-Xutvh6VjlbcHpsIIbwY8GVRbwoviWT19tFhgdA7DlenLGC/mbc3lBoVb7jxj9Z+eyGqvcnSyIltYUrkKzWqSvg==", + "dev": true, + "optional": true + }, + "@esbuild/linux-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/linux-x64/-/linux-x64-0.25.8.tgz", + "integrity": "sha512-ASFQhgY4ElXh3nDcOMTkQero4b1lgubskNlhIfJrsH5OKZXDpUAKBlNS0Kx81jwOBp+HCeZqmoJuihTv57/jvQ==", + "dev": true, + "optional": true + }, + "@esbuild/netbsd-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/netbsd-arm64/-/netbsd-arm64-0.25.8.tgz", + "integrity": "sha512-d1KfruIeohqAi6SA+gENMuObDbEjn22olAR7egqnkCD9DGBG0wsEARotkLgXDu6c4ncgWTZJtN5vcgxzWRMzcw==", + "dev": true, + "optional": true + }, + "@esbuild/netbsd-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/netbsd-x64/-/netbsd-x64-0.25.8.tgz", + "integrity": "sha512-nVDCkrvx2ua+XQNyfrujIG38+YGyuy2Ru9kKVNyh5jAys6n+l44tTtToqHjino2My8VAY6Lw9H7RI73XFi66Cg==", + "dev": true, + "optional": true + }, + "@esbuild/openbsd-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/openbsd-arm64/-/openbsd-arm64-0.25.8.tgz", + "integrity": "sha512-j8HgrDuSJFAujkivSMSfPQSAa5Fxbvk4rgNAS5i3K+r8s1X0p1uOO2Hl2xNsGFppOeHOLAVgYwDVlmxhq5h+SQ==", + "dev": true, + 
"optional": true + }, + "@esbuild/openbsd-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/openbsd-x64/-/openbsd-x64-0.25.8.tgz", + "integrity": "sha512-1h8MUAwa0VhNCDp6Af0HToI2TJFAn1uqT9Al6DJVzdIBAd21m/G0Yfc77KDM3uF3T/YaOgQq3qTJHPbTOInaIQ==", + "dev": true, + "optional": true + }, + "@esbuild/openharmony-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/openharmony-arm64/-/openharmony-arm64-0.25.8.tgz", + "integrity": "sha512-r2nVa5SIK9tSWd0kJd9HCffnDHKchTGikb//9c7HX+r+wHYCpQrSgxhlY6KWV1nFo1l4KFbsMlHk+L6fekLsUg==", + "dev": true, + "optional": true + }, + "@esbuild/sunos-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/sunos-x64/-/sunos-x64-0.25.8.tgz", + "integrity": "sha512-zUlaP2S12YhQ2UzUfcCuMDHQFJyKABkAjvO5YSndMiIkMimPmxA+BYSBikWgsRpvyxuRnow4nS5NPnf9fpv41w==", + "dev": true, + "optional": true + }, + "@esbuild/win32-arm64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/win32-arm64/-/win32-arm64-0.25.8.tgz", + "integrity": "sha512-YEGFFWESlPva8hGL+zvj2z/SaK+pH0SwOM0Nc/d+rVnW7GSTFlLBGzZkuSU9kFIGIo8q9X3ucpZhu8PDN5A2sQ==", + "dev": true, + "optional": true + }, + "@esbuild/win32-ia32": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/win32-ia32/-/win32-ia32-0.25.8.tgz", + "integrity": "sha512-hiGgGC6KZ5LZz58OL/+qVVoZiuZlUYlYHNAmczOm7bs2oE1XriPFi5ZHHrS8ACpV5EjySrnoCKmcbQMN+ojnHg==", + "dev": true, + "optional": true + }, + "@esbuild/win32-x64": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@esbuild/win32-x64/-/win32-x64-0.25.8.tgz", + "integrity": "sha512-cn3Yr7+OaaZq1c+2pe+8yxC8E144SReCQjN6/2ynubzYjvyqZjTXfQJpAcQpsdJq3My7XADANiYGHoFC69pLQw==", + "dev": true, + "optional": true + }, + "@eslint/eslintrc": { + "version": "0.4.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@eslint/eslintrc/-/eslintrc-0.4.3.tgz", + "integrity": "sha512-J6KFFz5QCYUJq3pf0mjEcCJVERbzv71PUIDczuh9JkwGEzced6CO5ADLHB1rbf/+oPBtoPfMYNOpGDzCANlbXw==", + "dev": true, + "requires": { + "ajv": "^6.12.4", + "debug": "^4.1.1", + "espree": "^7.3.0", + "globals": "^13.9.0", + "ignore": "^4.0.6", + "import-fresh": "^3.2.1", + "js-yaml": "^3.13.1", + "minimatch": "^3.0.4", + "strip-json-comments": "^3.1.1" + } + }, + "@floating-ui/core": { + "version": "1.7.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@floating-ui/core/-/core-1.7.3.tgz", + "integrity": "sha512-sGnvb5dmrJaKEZ+LDIpguvdX3bDlEllmv4/ClQ9awcmCZrlx5jQyyMWFM5kBI+EyNOCDDiKk8il0zeuX3Zlg/w==", + "requires": { + "@floating-ui/utils": "^0.2.10" + } + }, + "@floating-ui/dom": { + "version": "1.7.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@floating-ui/dom/-/dom-1.7.3.tgz", + "integrity": "sha512-uZA413QEpNuhtb3/iIKoYMSK07keHPYeXF02Zhd6e213j+d1NamLix/mCLxBUDW/Gx52sPH2m+chlUsyaBs/Ag==", + "requires": { + "@floating-ui/core": "^1.7.3", + "@floating-ui/utils": "^0.2.10" + } + }, + "@floating-ui/react-dom": { + "version": "2.1.5", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@floating-ui/react-dom/-/react-dom-2.1.5.tgz", + "integrity": "sha512-HDO/1/1oH9fjj4eLgegrlH3dklZpHtUYYFiVwMUwfGvk9jWDRWqkklA2/NFScknrcNSspbV868WjXORvreDX+Q==", + "requires": { + "@floating-ui/dom": "^1.7.3" + } + }, + "@floating-ui/utils": { + "version": "0.2.10", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@floating-ui/utils/-/utils-0.2.10.tgz", + "integrity": "sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==" + }, + "@humanwhocodes/config-array": { + "version": "0.5.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@humanwhocodes/config-array/-/config-array-0.5.0.tgz", + "integrity": "sha512-FagtKFz74XrTl7y6HCzQpwDfXP0yhxe9lHLD1UZxjvZIcbyRz8zTFF/yYNfSfzU414eDwZ1SrO0Qvtyf+wFMQg==", + "dev": true, + "requires": { + "@humanwhocodes/object-schema": "^1.2.0", + "debug": "^4.1.1", + "minimatch": "^3.0.4" + } + }, + "@humanwhocodes/object-schema": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@humanwhocodes/object-schema/-/object-schema-1.2.1.tgz", + "integrity": "sha512-ZnQMnLV4e7hDlUvw8H+U8ASL02SS2Gn6+9Ac3wGGLIe7+je2AeAOxPY+izIPJDfFDb7eDjev0Us8MO1iFRN8hA==", + "dev": true + }, + "@isaacs/cliui": { + "version": "8.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@isaacs/cliui/-/cliui-8.0.2.tgz", + "integrity": "sha512-O8jcjabXaleOG9DQ0+ARXWZBTfnP4WNAqzuiJK7ll44AmxGKv/J2M4TPjxjY3znBCfvBXFzucm1twdyFybFqEA==", + "dev": true, + "requires": { + "string-width": "^5.1.2", + "string-width-cjs": "npm:string-width@^4.2.0", + "strip-ansi": "^7.0.1", + "strip-ansi-cjs": "npm:strip-ansi@^6.0.1", + "wrap-ansi": "^8.1.0", + "wrap-ansi-cjs": "npm:wrap-ansi@^7.0.0" + }, + "dependencies": { + "ansi-regex": { + "version": "6.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-regex/-/ansi-regex-6.1.0.tgz", + "integrity": "sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA==", + "dev": true + }, + "emoji-regex": { + "version": "9.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/emoji-regex/-/emoji-regex-9.2.2.tgz", + "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", + "dev": true + }, + "string-width": { + "version": "5.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string-width/-/string-width-5.1.2.tgz", + "integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==", + "dev": true, + "requires": { + "eastasianwidth": "^0.2.0", + "emoji-regex": "^9.2.2", + "strip-ansi": "^7.0.1" + } + }, + "strip-ansi": { + "version": "7.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strip-ansi/-/strip-ansi-7.1.0.tgz", + "integrity": "sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ==", + "dev": true, + "requires": { + "ansi-regex": "^6.0.1" + } + } + } + }, + "@jridgewell/gen-mapping": { + "version": "0.3.12", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@jridgewell/gen-mapping/-/gen-mapping-0.3.12.tgz", + "integrity": "sha512-OuLGC46TjB5BbN1dH8JULVVZY4WTdkF7tV9Ys6wLL1rubZnCMstOhNHueU5bLCrnRuDhKPDM4g6sw4Bel5Gzqg==", + "requires": { + "@jridgewell/sourcemap-codec": "^1.5.0", + "@jridgewell/trace-mapping": "^0.3.24" + } + }, + "@jridgewell/resolve-uri": { + "version": "3.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz", + "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==" + }, + "@jridgewell/sourcemap-codec": { + "version": "1.5.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.4.tgz", + "integrity": "sha512-VT2+G1VQs/9oz078bLrYbecdZKs912zQlkelYpuf+SXF+QvZDYJlbx/LSx+meSAwdDFnF8FVXW92AVjjkVmgFw==" + }, + "@jridgewell/trace-mapping": { + "version": "0.3.29", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@jridgewell/trace-mapping/-/trace-mapping-0.3.29.tgz", + "integrity": "sha512-uw6guiW/gcAGPDhLmd77/6lW8QLeiV5RUTsAX46Db6oLhGaVj4lhnPwb184s1bkc8kdVg/+h988dro8GRDpmYQ==", + "requires": { + "@jridgewell/resolve-uri": "^3.1.0", + "@jridgewell/sourcemap-codec": "^1.4.14" + } + }, + "@kurkle/color": { + "version": "0.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@kurkle/color/-/color-0.3.4.tgz", + "integrity": "sha512-M5UknZPHRu3DEDWoipU6sE8PdkZ6Z/S+v4dD+Ke8IaNlpdSQah50lz1KtcFBa2vsdOnwbbnxJwVM4wty6udA5w==" + }, + "@mui/base": { + "version": "5.0.0-beta.70", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/base/-/base-5.0.0-beta.70.tgz", + "integrity": "sha512-Tb/BIhJzb0pa5zv/wu7OdokY9ZKEDqcu1BDFnohyvGCoHuSXbEr90rPq1qeNW3XvTBIbNWHEF7gqge+xpUo6tQ==", + "requires": { + "@babel/runtime": "^7.26.0", + "@floating-ui/react-dom": "^2.1.1", + "@mui/types": "~7.2.24", + "@mui/utils": "^6.4.8", + "@popperjs/core": "^2.11.8", + "clsx": "^2.1.1", + "prop-types": "^15.8.1" + }, + "dependencies": { + "@mui/utils": { + "version": "6.4.9", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/utils/-/utils-6.4.9.tgz", + "integrity": "sha512-Y12Q9hbK9g+ZY0T3Rxrx9m2m10gaphDuUMgWxyV5kNJevVxXYCLclYUCC9vXaIk1/NdNDTcW2Yfr2OGvNFNmHg==", + "requires": { + "@babel/runtime": "^7.26.0", + "@mui/types": "~7.2.24", + "@types/prop-types": "^15.7.14", + "clsx": "^2.1.1", + "prop-types": "^15.8.1", + "react-is": "^19.0.0" + } + }, + "react-is": { + "version": "19.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-is/-/react-is-19.1.1.tgz", + "integrity": "sha512-tr41fA15Vn8p4X9ntI+yCyeGSf1TlYaY5vlTZfQmeLBrFo3psOPX6HhTDnFNL9uj3EhP0KAQ80cugCl4b4BERA==" + } + } + }, + "@mui/core-downloads-tracker": { + "version": "5.18.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/core-downloads-tracker/-/core-downloads-tracker-5.18.0.tgz", + "integrity": "sha512-jbhwoQ1AY200PSSOrNXmrFCaSDSJWP7qk6urkTmIirvRXDROkqe+QwcLlUiw/PrREwsIF/vm3/dAXvjlMHF0RA==" + }, + "@mui/lab": { + "version": "5.0.0-alpha.177", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/lab/-/lab-5.0.0-alpha.177.tgz", + "integrity": "sha512-bdCxxtNjlWAgN9rtrwlmFydJ1qxA3IIbb6OlomGFsIXw0zGoHomLyjvh72q/R3yUAC0kvSef18cHY1UalLylyQ==", + "dev": true, + "requires": { + "@babel/runtime": "^7.23.9", + "@mui/base": "5.0.0-beta.40-1", + "@mui/system": "^5.18.0", + "@mui/types": "~7.2.15", + "@mui/utils": "^5.17.1", + "clsx": "^2.1.0", + "prop-types": "^15.8.1" + }, + "dependencies": { + "@mui/base": { + "version": "5.0.0-beta.40-1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/base/-/base-5.0.0-beta.40-1.tgz", + "integrity": "sha512-agKXuNNy0bHUmeU7pNmoZwNFr7Hiyhojkb9+2PVyDG5+6RafYuyMgbrav8CndsB7KUc/U51JAw9vKNDLYBzaUA==", + "dev": true, + "requires": { + "@babel/runtime": "^7.23.9", + "@floating-ui/react-dom": "^2.0.8", + "@mui/types": "~7.2.15", + "@mui/utils": "^5.17.1", + "@popperjs/core": "^2.11.8", + "clsx": "^2.1.0", + "prop-types": "^15.8.1" + } + } + } + }, + "@mui/material": { + "version": "5.18.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/material/-/material-5.18.0.tgz", + "integrity": "sha512-bbH/HaJZpFtXGvWg3TsBWG4eyt3gah3E7nCNU8GLyRjVoWcA91Vm/T+sjHfUcwgJSw9iLtucfHBoq+qW/T30aA==", + "requires": { + "@babel/runtime": "^7.23.9", + "@mui/core-downloads-tracker": "^5.18.0", + "@mui/system": "^5.18.0", + "@mui/types": "~7.2.15", + "@mui/utils": "^5.17.1", + "@popperjs/core": "^2.11.8", + "@types/react-transition-group": "^4.4.10", + "clsx": "^2.1.0", + "csstype": "^3.1.3", + "prop-types": "^15.8.1", + "react-is": "^19.0.0", + "react-transition-group": "^4.4.5" + }, + "dependencies": { + "react-is": { + "version": "19.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-is/-/react-is-19.1.1.tgz", + "integrity": "sha512-tr41fA15Vn8p4X9ntI+yCyeGSf1TlYaY5vlTZfQmeLBrFo3psOPX6HhTDnFNL9uj3EhP0KAQ80cugCl4b4BERA==" + } + } + }, + "@mui/private-theming": { + "version": "5.17.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/private-theming/-/private-theming-5.17.1.tgz", + "integrity": "sha512-XMxU0NTYcKqdsG8LRmSoxERPXwMbp16sIXPcLVgLGII/bVNagX0xaheWAwFv8+zDK7tI3ajllkuD3GZZE++ICQ==", + "requires": { + "@babel/runtime": "^7.23.9", + "@mui/utils": "^5.17.1", + "prop-types": "^15.8.1" + } + }, + "@mui/styled-engine": { + "version": "5.18.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/styled-engine/-/styled-engine-5.18.0.tgz", + "integrity": "sha512-BN/vKV/O6uaQh2z5rXV+MBlVrEkwoS/TK75rFQ2mjxA7+NBo8qtTAOA4UaM0XeJfn7kh2wZ+xQw2HAx0u+TiBg==", + "requires": { + "@babel/runtime": "^7.23.9", + "@emotion/cache": "^11.13.5", + "@emotion/serialize": "^1.3.3", + "csstype": "^3.1.3", + "prop-types": "^15.8.1" + } + }, + "@mui/system": { + "version": "5.18.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/system/-/system-5.18.0.tgz", + "integrity": "sha512-ojZGVcRWqWhu557cdO3pWHloIGJdzVtxs3rk0F9L+x55LsUjcMUVkEhiF7E4TMxZoF9MmIHGGs0ZX3FDLAf0Xw==", + "requires": { + "@babel/runtime": "^7.23.9", + "@mui/private-theming": "^5.17.1", + "@mui/styled-engine": "^5.18.0", + "@mui/types": "~7.2.15", + "@mui/utils": "^5.17.1", + "clsx": "^2.1.0", + "csstype": "^3.1.3", + "prop-types": "^15.8.1" + } + }, + "@mui/types": { + "version": "7.2.24", + 
"resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/types/-/types-7.2.24.tgz", + "integrity": "sha512-3c8tRt/CbWZ+pEg7QpSwbdxOk36EfmhbKf6AGZsD1EcLDLTSZoxxJ86FVtcjxvjuhdyBiWKSTGZFaXCnidO2kw==" + }, + "@mui/utils": { + "version": "5.17.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/utils/-/utils-5.17.1.tgz", + "integrity": "sha512-jEZ8FTqInt2WzxDV8bhImWBqeQRD99c/id/fq83H0ER9tFl+sfZlaAoCdznGvbSQQ9ividMxqSV2c7cC1vBcQg==", + "requires": { + "@babel/runtime": "^7.23.9", + "@mui/types": "~7.2.15", + "@types/prop-types": "^15.7.12", + "clsx": "^2.1.1", + "prop-types": "^15.8.1", + "react-is": "^19.0.0" + }, + "dependencies": { + "react-is": { + "version": "19.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-is/-/react-is-19.1.1.tgz", + "integrity": "sha512-tr41fA15Vn8p4X9ntI+yCyeGSf1TlYaY5vlTZfQmeLBrFo3psOPX6HhTDnFNL9uj3EhP0KAQ80cugCl4b4BERA==" + } + } + }, + "@mui/x-data-grid": { + "version": "5.17.26", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/x-data-grid/-/x-data-grid-5.17.26.tgz", + "integrity": "sha512-eGJq9J0g9cDGLFfMmugOadZx0mJeOd/yQpHwEa5gUXyONS6qF0OhXSWyDOhDdA3l2TOoQzotMN5dY/T4Wl1KYA==", + "requires": { + "@babel/runtime": "^7.18.9", + "@mui/utils": "^5.10.3", + "clsx": "^1.2.1", + "prop-types": "^15.8.1", + "reselect": "^4.1.6" + }, + "dependencies": { + "clsx": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/clsx/-/clsx-1.2.1.tgz", + "integrity": "sha512-EcR6r5a8bj6pu3ycsa/E/cKVGuTgZJZdsyUYHOksG/UHIiKfjxzRxYJpyVBwYaQeOvghal9fcc4PidlgzugAQg==" + } + } + }, + "@mui/x-date-pickers": { + "version": "6.20.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@mui/x-date-pickers/-/x-date-pickers-6.20.2.tgz", + "integrity": "sha512-x1jLg8R+WhvkmUETRfX2wC+xJreMii78EXKLl6r3G+ggcAZlPyt0myID1Amf6hvJb9CtR7CgUo8BwR+1Vx9Ggw==", + "requires": { + "@babel/runtime": "^7.23.2", + "@mui/base": "^5.0.0-beta.22", + "@mui/utils": "^5.14.16", + "@types/react-transition-group": "^4.4.8", + "clsx": "^2.0.0", + "prop-types": "^15.8.1", + "react-transition-group": "^4.4.5" + } + }, + "@npmcli/config": { + "version": "6.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@npmcli/config/-/config-6.4.1.tgz", + "integrity": "sha512-uSz+elSGzjCMANWa5IlbGczLYPkNI/LeR+cHrgaTqTrTSh9RHhOFA4daD2eRUz6lMtOW+Fnsb+qv7V2Zz8ML0g==", + "dev": true, + "requires": { + "@npmcli/map-workspaces": "^3.0.2", + "ci-info": "^4.0.0", + "ini": "^4.1.0", + "nopt": "^7.0.0", + "proc-log": "^3.0.0", + "read-package-json-fast": "^3.0.2", + "semver": "^7.3.5", + "walk-up-path": "^3.0.1" + }, + "dependencies": { + "semver": { + "version": "7.7.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/semver/-/semver-7.7.2.tgz", + "integrity": "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==", + "dev": true + } + } + }, + "@npmcli/map-workspaces": { + "version": "3.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@npmcli/map-workspaces/-/map-workspaces-3.0.6.tgz", + "integrity": 
"sha512-tkYs0OYnzQm6iIRdfy+LcLBjcKuQCeE5YLb8KnrIlutJfheNaPvPpgoFEyEFgbjzl5PLZ3IA/BWAwRU0eHuQDA==", + "dev": true, + "requires": { + "@npmcli/name-from-folder": "^2.0.0", + "glob": "^10.2.2", + "minimatch": "^9.0.0", + "read-package-json-fast": "^3.0.0" + }, + "dependencies": { + "brace-expansion": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/brace-expansion/-/brace-expansion-2.0.2.tgz", + "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==", + "dev": true, + "requires": { + "balanced-match": "^1.0.0" + } + }, + "glob": { + "version": "10.4.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/glob/-/glob-10.4.5.tgz", + "integrity": "sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg==", + "dev": true, + "requires": { + "foreground-child": "^3.1.0", + "jackspeak": "^3.1.2", + "minimatch": "^9.0.4", + "minipass": "^7.1.2", + "package-json-from-dist": "^1.0.0", + "path-scurry": "^1.11.1" + } + }, + "minimatch": { + "version": "9.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/minimatch/-/minimatch-9.0.5.tgz", + "integrity": "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==", + "dev": true, + "requires": { + "brace-expansion": "^2.0.1" + } + } + } + }, + "@npmcli/name-from-folder": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@npmcli/name-from-folder/-/name-from-folder-2.0.0.tgz", + "integrity": "sha512-pwK+BfEBZJbKdNYpHHRTNBwBoqrN/iIMO0AiGvYsp3Hoaq0WbgGSWQR6SCldZovoDpY3yje5lkFUe6gsDgJ2vg==", + "dev": true + }, + "@pkgjs/parseargs": { + "version": "0.11.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@pkgjs/parseargs/-/parseargs-0.11.0.tgz", + "integrity": "sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==", + "dev": true, + "optional": true + }, + "@pkgr/core": { + "version": "0.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@pkgr/core/-/core-0.1.2.tgz", + "integrity": "sha512-fdDH1LSGfZdTH2sxdpVMw31BanV28K/Gry0cVFxaNP77neJSkd82mM8ErPNYs9e+0O7SdHBLTDzDgwUuy18RnQ==", + "dev": true + }, + "@popperjs/core": { + "version": "2.11.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@popperjs/core/-/core-2.11.8.tgz", + "integrity": "sha512-P1st0aksCrn9sGZhp8GMYwBnQsbvAWsZAX44oXNNvLHGqAOcoVxmjZiohstwQ7SqKnbR47akdNi+uleWD8+g6A==" + }, + "@rollup/pluginutils": { + "version": "5.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/pluginutils/-/pluginutils-5.2.0.tgz", + "integrity": "sha512-qWJ2ZTbmumwiLFomfzTyt5Kng4hwPi9rwCYN4SHb6eaRU1KNO4ccxINHr/VhH4GgPlt1XfSTLX2LBTme8ne4Zw==", + "dev": true, + "requires": { + "@types/estree": "^1.0.0", + "estree-walker": "^2.0.2", + "picomatch": "^4.0.2" + } + }, + "@rollup/rollup-android-arm-eabi": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.46.2.tgz", + "integrity": 
"sha512-Zj3Hl6sN34xJtMv7Anwb5Gu01yujyE/cLBDB2gnHTAHaWS1Z38L7kuSG+oAh0giZMqG060f/YBStXtMH6FvPMA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-android-arm64": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.46.2.tgz", + "integrity": "sha512-nTeCWY83kN64oQ5MGz3CgtPx8NSOhC5lWtsjTs+8JAJNLcP3QbLCtDDgUKQc/Ro/frpMq4SHUaHN6AMltcEoLQ==", + "dev": true, + "optional": true + }, + "@rollup/rollup-darwin-arm64": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.46.2.tgz", + "integrity": "sha512-HV7bW2Fb/F5KPdM/9bApunQh68YVDU8sO8BvcW9OngQVN3HHHkw99wFupuUJfGR9pYLLAjcAOA6iO+evsbBaPQ==", + "dev": true, + "optional": true + }, + "@rollup/rollup-darwin-x64": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.46.2.tgz", + "integrity": "sha512-SSj8TlYV5nJixSsm/y3QXfhspSiLYP11zpfwp6G/YDXctf3Xkdnk4woJIF5VQe0of2OjzTt8EsxnJDCdHd2xMA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-freebsd-arm64": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.46.2.tgz", + "integrity": "sha512-ZyrsG4TIT9xnOlLsSSi9w/X29tCbK1yegE49RYm3tu3wF1L/B6LVMqnEWyDB26d9Ecx9zrmXCiPmIabVuLmNSg==", + "dev": true, + "optional": true + }, + "@rollup/rollup-freebsd-x64": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.46.2.tgz", + "integrity": "sha512-pCgHFoOECwVCJ5GFq8+gR8SBKnMO+xe5UEqbemxBpCKYQddRQMgomv1104RnLSg7nNvgKy05sLsY51+OVRyiVw==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-arm-gnueabihf": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.46.2.tgz", + "integrity": "sha512-EtP8aquZ0xQg0ETFcxUbU71MZlHaw9MChwrQzatiE8U/bvi5uv/oChExXC4mWhjiqK7azGJBqU0tt5H123SzVA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-arm-musleabihf": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.46.2.tgz", + "integrity": "sha512-qO7F7U3u1nfxYRPM8HqFtLd+raev2K137dsV08q/LRKRLEc7RsiDWihUnrINdsWQxPR9jqZ8DIIZ1zJJAm5PjQ==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-arm64-gnu": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.46.2.tgz", + "integrity": "sha512-3dRaqLfcOXYsfvw5xMrxAk9Lb1f395gkoBYzSFcc/scgRFptRXL9DOaDpMiehf9CO8ZDRJW2z45b6fpU5nwjng==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-arm64-musl": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.46.2.tgz", + "integrity": "sha512-fhHFTutA7SM+IrR6lIfiHskxmpmPTJUXpWIsBXpeEwNgZzZZSg/q4i6FU4J8qOGyJ0TR+wXBwx/L7Ho9z0+uDg==", 
+ "dev": true, + "optional": true + }, + "@rollup/rollup-linux-loongarch64-gnu": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-loongarch64-gnu/-/rollup-linux-loongarch64-gnu-4.46.2.tgz", + "integrity": "sha512-i7wfGFXu8x4+FRqPymzjD+Hyav8l95UIZ773j7J7zRYc3Xsxy2wIn4x+llpunexXe6laaO72iEjeeGyUFmjKeA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-ppc64-gnu": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.46.2.tgz", + "integrity": "sha512-B/l0dFcHVUnqcGZWKcWBSV2PF01YUt0Rvlurci5P+neqY/yMKchGU8ullZvIv5e8Y1C6wOn+U03mrDylP5q9Yw==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-riscv64-gnu": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.46.2.tgz", + "integrity": "sha512-32k4ENb5ygtkMwPMucAb8MtV8olkPT03oiTxJbgkJa7lJ7dZMr0GCFJlyvy+K8iq7F/iuOr41ZdUHaOiqyR3iQ==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-riscv64-musl": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.46.2.tgz", + "integrity": "sha512-t5B2loThlFEauloaQkZg9gxV05BYeITLvLkWOkRXogP4qHXLkWSbSHKM9S6H1schf/0YGP/qNKtiISlxvfmmZw==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-s390x-gnu": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.46.2.tgz", + "integrity": "sha512-YKjekwTEKgbB7n17gmODSmJVUIvj8CX7q5442/CK80L8nqOUbMtf8b01QkG3jOqyr1rotrAnW6B/qiHwfcuWQA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-x64-gnu": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.46.2.tgz", + "integrity": "sha512-Jj5a9RUoe5ra+MEyERkDKLwTXVu6s3aACP51nkfnK9wJTraCC8IMe3snOfALkrjTYd2G1ViE1hICj0fZ7ALBPA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-linux-x64-musl": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.46.2.tgz", + "integrity": "sha512-7kX69DIrBeD7yNp4A5b81izs8BqoZkCIaxQaOpumcJ1S/kmqNFjPhDu1LHeVXv0SexfHQv5cqHsxLOjETuqDuA==", + "dev": true, + "optional": true + }, + "@rollup/rollup-win32-arm64-msvc": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.46.2.tgz", + "integrity": "sha512-wiJWMIpeaak/jsbaq2HMh/rzZxHVW1rU6coyeNNpMwk5isiPjSTx0a4YLSlYDwBH/WBvLz+EtsNqQScZTLJy3g==", + "dev": true, + "optional": true + }, + "@rollup/rollup-win32-ia32-msvc": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.46.2.tgz", + "integrity": "sha512-gBgaUDESVzMgWZhcyjfs9QFK16D8K6QZpwAaVNJxYDLHWayOta4ZMjGm/vsAEy3hvlS2GosVFlBlP9/Wb85DqQ==", + "dev": true, + "optional": true + }, + 
"@rollup/rollup-win32-x64-msvc": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.46.2.tgz", + "integrity": "sha512-CvUo2ixeIQGtF6WvuB87XWqPQkoFAFqW+HUo/WzHwuHDvIwZCtjdWXoYCcr06iKGydiqTclC4jU/TNObC/xKZg==", + "dev": true, + "optional": true + }, + "@svgr/babel-plugin-add-jsx-attribute": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-add-jsx-attribute/-/babel-plugin-add-jsx-attribute-6.5.1.tgz", + "integrity": "sha512-9PYGcXrAxitycIjRmZB+Q0JaN07GZIWaTBIGQzfaZv+qr1n8X1XUEJ5rZ/vx6OVD9RRYlrNnXWExQXcmZeD/BQ==", + "dev": true + }, + "@svgr/babel-plugin-remove-jsx-attribute": { + "version": "8.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-remove-jsx-attribute/-/babel-plugin-remove-jsx-attribute-8.0.0.tgz", + "integrity": "sha512-BcCkm/STipKvbCl6b7QFrMh/vx00vIP63k2eM66MfHJzPr6O2U0jYEViXkHJWqXqQYjdeA9cuCl5KWmlwjDvbA==", + "dev": true + }, + "@svgr/babel-plugin-remove-jsx-empty-expression": { + "version": "8.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-remove-jsx-empty-expression/-/babel-plugin-remove-jsx-empty-expression-8.0.0.tgz", + "integrity": "sha512-5BcGCBfBxB5+XSDSWnhTThfI9jcO5f0Ai2V24gZpG+wXF14BzwxxdDb4g6trdOux0rhibGs385BeFMSmxtS3uA==", + "dev": true + }, + "@svgr/babel-plugin-replace-jsx-attribute-value": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-replace-jsx-attribute-value/-/babel-plugin-replace-jsx-attribute-value-6.5.1.tgz", + "integrity": "sha512-8DPaVVE3fd5JKuIC29dqyMB54sA6mfgki2H2+swh+zNJoynC8pMPzOkidqHOSc6Wj032fhl8Z0TVn1GiPpAiJg==", + "dev": true + }, + "@svgr/babel-plugin-svg-dynamic-title": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-svg-dynamic-title/-/babel-plugin-svg-dynamic-title-6.5.1.tgz", + "integrity": "sha512-FwOEi0Il72iAzlkaHrlemVurgSQRDFbk0OC8dSvD5fSBPHltNh7JtLsxmZUhjYBZo2PpcU/RJvvi6Q0l7O7ogw==", + "dev": true + }, + "@svgr/babel-plugin-svg-em-dimensions": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-svg-em-dimensions/-/babel-plugin-svg-em-dimensions-6.5.1.tgz", + "integrity": "sha512-gWGsiwjb4tw+ITOJ86ndY/DZZ6cuXMNE/SjcDRg+HLuCmwpcjOktwRF9WgAiycTqJD/QXqL2f8IzE2Rzh7aVXA==", + "dev": true + }, + "@svgr/babel-plugin-transform-react-native-svg": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-transform-react-native-svg/-/babel-plugin-transform-react-native-svg-6.5.1.tgz", + "integrity": "sha512-2jT3nTayyYP7kI6aGutkyfJ7UMGtuguD72OjeGLwVNyfPRBD8zQthlvL+fAbAKk5n9ZNcvFkp/b1lZ7VsYqVJg==", + "dev": true + }, + "@svgr/babel-plugin-transform-svg-component": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-plugin-transform-svg-component/-/babel-plugin-transform-svg-component-6.5.1.tgz", + "integrity": "sha512-a1p6LF5Jt33O3rZoVRBqdxL350oge54iZWHNI6LJB5tQ7EelvD/Mb1mfBiZNAan0dt4i3VArkFRjA4iObuNykQ==", + 
"dev": true + }, + "@svgr/babel-preset": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/babel-preset/-/babel-preset-6.5.1.tgz", + "integrity": "sha512-6127fvO/FF2oi5EzSQOAjo1LE3OtNVh11R+/8FXa+mHx1ptAaS4cknIjnUA7e6j6fwGGJ17NzaTJFUwOV2zwCw==", + "dev": true, + "requires": { + "@svgr/babel-plugin-add-jsx-attribute": "^6.5.1", + "@svgr/babel-plugin-remove-jsx-attribute": "*", + "@svgr/babel-plugin-remove-jsx-empty-expression": "*", + "@svgr/babel-plugin-replace-jsx-attribute-value": "^6.5.1", + "@svgr/babel-plugin-svg-dynamic-title": "^6.5.1", + "@svgr/babel-plugin-svg-em-dimensions": "^6.5.1", + "@svgr/babel-plugin-transform-react-native-svg": "^6.5.1", + "@svgr/babel-plugin-transform-svg-component": "^6.5.1" + } + }, + "@svgr/core": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/core/-/core-6.5.1.tgz", + "integrity": "sha512-/xdLSWxK5QkqG524ONSjvg3V/FkNyCv538OIBdQqPNaAta3AsXj/Bd2FbvR87yMbXO2hFSWiAe/Q6IkVPDw+mw==", + "dev": true, + "requires": { + "@babel/core": "^7.19.6", + "@svgr/babel-preset": "^6.5.1", + "@svgr/plugin-jsx": "^6.5.1", + "camelcase": "^6.2.0", + "cosmiconfig": "^7.0.1" + } + }, + "@svgr/hast-util-to-babel-ast": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/hast-util-to-babel-ast/-/hast-util-to-babel-ast-6.5.1.tgz", + "integrity": "sha512-1hnUxxjd83EAxbL4a0JDJoD3Dao3hmjvyvyEV8PzWmLK3B9m9NPlW7GKjFyoWE8nM7HnXzPcmmSyOW8yOddSXw==", + "dev": true, + "requires": { + "@babel/types": "^7.20.0", + "entities": "^4.4.0" + } + }, + "@svgr/plugin-jsx": { + "version": "6.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@svgr/plugin-jsx/-/plugin-jsx-6.5.1.tgz", + "integrity": "sha512-+UdQxI3jgtSjCykNSlEMuy1jSRQlGC7pqBCPvkG/2dATdWo082zHTTK3uhnAju2/6XpE6B5mZ3z4Z8Ns01S8Gw==", + "dev": true, + "requires": { + "@babel/core": "^7.19.6", + "@svgr/babel-preset": "^6.5.1", + "@svgr/hast-util-to-babel-ast": "^6.5.1", + "svg-parser": "^2.0.4" + } + }, + "@tippyjs/react": { + "version": "4.2.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@tippyjs/react/-/react-4.2.6.tgz", + "integrity": "sha512-91RicDR+H7oDSyPycI13q3b7o4O60wa2oRbjlz2fyRLmHImc4vyDwuUP8NtZaN0VARJY5hybvDYrFzhY9+Lbyw==", + "requires": { + "tippy.js": "^6.3.1" + } + }, + "@types/acorn": { + "version": "4.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/acorn/-/acorn-4.0.6.tgz", + "integrity": "sha512-veQTnWP+1D/xbxVrPC3zHnCZRjSrKfhbMUlEA43iMZLu7EsnTtkJklIuwrCPbOi8YkvDQAiW05VQQFvvz9oieQ==", + "dev": true, + "requires": { + "@types/estree": "*" + } + }, + "@types/concat-stream": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/concat-stream/-/concat-stream-2.0.3.tgz", + "integrity": "sha512-3qe4oQAPNwVNwK4C9c8u+VJqv9kez+2MR4qJpoPFfXtgxxif1QbFusvXzK0/Wra2VX07smostI2VMmJNSpZjuQ==", + "dev": true, + "requires": { + "@types/node": "*" + } + }, + "@types/debug": { + "version": "4.1.12", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/debug/-/debug-4.1.12.tgz", + "integrity": 
"sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ==", + "dev": true, + "requires": { + "@types/ms": "*" + } + }, + "@types/estree": { + "version": "1.0.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "dev": true + }, + "@types/estree-jsx": { + "version": "1.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/estree-jsx/-/estree-jsx-1.0.5.tgz", + "integrity": "sha512-52CcUVNFyfb1A2ALocQw/Dd1BQFNmSdkuC3BkZ6iqhdMfQz7JWOFRuJFloOzjk+6WijU56m9oKXFAXc7o3Towg==", + "dev": true, + "requires": { + "@types/estree": "*" + } + }, + "@types/hammerjs": { + "version": "2.0.46", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/hammerjs/-/hammerjs-2.0.46.tgz", + "integrity": "sha512-ynRvcq6wvqexJ9brDMS4BnBLzmr0e14d6ZJTEShTBWKymQiHwlAyGu0ZPEFI2Fh1U53F7tN9ufClWM5KvqkKOw==" + }, + "@types/hast": { + "version": "2.3.10", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/hast/-/hast-2.3.10.tgz", + "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", + "dev": true, + "requires": { + "@types/unist": "^2" + } + }, + "@types/is-empty": { + "version": "1.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/is-empty/-/is-empty-1.2.3.tgz", + "integrity": "sha512-4J1l5d79hoIvsrKh5VUKVRA1aIdsOb10Hu5j3J2VfP/msDnfTdGPmNp2E1Wg+vs97Bktzo+MZePFFXSGoykYJw==", + "dev": true + }, + "@types/mdast": { + "version": "3.0.15", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/mdast/-/mdast-3.0.15.tgz", + "integrity": "sha512-LnwD+mUEfxWMa1QpDraczIn6k0Ee3SMicuYSSzS6ZYl2gKS09EClnJYGd8Du6rfc5r/GZEk5o1mRb8TaTj03sQ==", + "dev": true, + "requires": { + "@types/unist": "^2" + } + }, + "@types/ms": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/ms/-/ms-2.1.0.tgz", + "integrity": "sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA==", + "dev": true + }, + "@types/node": { + "version": "18.19.121", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/node/-/node-18.19.121.tgz", + "integrity": "sha512-bHOrbyztmyYIi4f1R0s17QsPs1uyyYnGcXeZoGEd227oZjry0q6XQBQxd82X1I57zEfwO8h9Xo+Kl5gX1d9MwQ==", + "dev": true, + "requires": { + "undici-types": "~5.26.4" + } + }, + "@types/parse-json": { + "version": "4.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/parse-json/-/parse-json-4.0.2.tgz", + "integrity": "sha512-dISoDXWWQwUquiKsyZ4Ng+HX2KsPL7LyHKHQwgGFEA3IaKac4Obd+h2a/a6waisAoepJlBcx9paWqjA8/HVjCw==" + }, + "@types/prop-types": { + "version": "15.7.15", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/prop-types/-/prop-types-15.7.15.tgz", + "integrity": "sha512-F6bEyamV9jKGAFBEmlQnesRPGOQqS2+Uwi0Em15xenOxHaf2hv6L8YCVn3rPdPJOiJfPiCnLIRyvwVaqMY3MIw==" + }, + "@types/react-transition-group": { + "version": "4.4.12", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/react-transition-group/-/react-transition-group-4.4.12.tgz", + "integrity": "sha512-8TV6R3h2j7a91c+1DXdJi3Syo69zzIZbz7Lg5tORM5LEJG7X/E6a1V3drRyBRZq7/utz7A+c4OgYLiLcYGHG6w==" + }, + "@types/supports-color": { + "version": "8.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/supports-color/-/supports-color-8.1.3.tgz", + "integrity": "sha512-Hy6UMpxhE3j1tLpl27exp1XqHD7n8chAiNPzWfz16LPZoMMoSc4dzLl6w9qijkEb/r5O1ozdu1CWGA2L83ZeZg==", + "dev": true + }, + "@types/unist": { + "version": "2.0.11", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@types/unist/-/unist-2.0.11.tgz", + "integrity": "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA==", + "dev": true + }, + "@vitejs/plugin-react": { + "version": "2.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@vitejs/plugin-react/-/plugin-react-2.2.0.tgz", + "integrity": "sha512-FFpefhvExd1toVRlokZgxgy2JtnBOdp4ZDsq7ldCWaqGSGn9UhWMAVm/1lxPL14JfNS5yGz+s9yFrQY6shoStA==", + "dev": true, + "requires": { + "@babel/core": "^7.19.6", + "@babel/plugin-transform-react-jsx": "^7.19.0", + "@babel/plugin-transform-react-jsx-development": "^7.18.6", + "@babel/plugin-transform-react-jsx-self": "^7.18.6", + "@babel/plugin-transform-react-jsx-source": "^7.19.6", + "magic-string": "^0.26.7", + "react-refresh": "^0.14.0" + } + }, + "abbrev": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/abbrev/-/abbrev-2.0.0.tgz", + "integrity": "sha512-6/mh1E2u2YgEsCHdY0Yx5oW+61gZU+1vXaoiHHrpKeuRNNgFvS+/jrwHiQhB5apAf5oB7UB7E19ol2R2LKH8hQ==", + "dev": true + }, + "acorn": { + "version": "7.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/acorn/-/acorn-7.4.1.tgz", + "integrity": "sha512-nQyp0o1/mNdbTO1PO6kHkwSrmgZ0MT/jCCpNiwbUjGoRN4dlBhqJtoQuCnEOKzgTVwg0ZWiCoQy6SxMebQVh8A==", + "dev": true + }, + "acorn-jsx": { + "version": "5.3.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/acorn-jsx/-/acorn-jsx-5.3.2.tgz", + "integrity": "sha512-rq9s+JNhf0IChjtDXxllJ7g41oZk5SlXtp0LHwyA5cejwn7vKmKp4pPri6YEePv2PU65sAsegbXtIinmDFDXgQ==", + "dev": true + }, + "ahooks": { + "version": "3.9.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ahooks/-/ahooks-3.9.0.tgz", + "integrity": "sha512-r20/C38aFyU3Zqp3620gkdLnxmQhnmWORB3eGGTDlM4i/fOc0GUvM+f2oleMzEu7b3+pHXyzz+FB6ojxsUdYdw==", + "requires": { + "@babel/runtime": "^7.21.0", + "dayjs": "^1.9.1", + "intersection-observer": "^0.12.0", + "js-cookie": "^3.0.5", + "lodash": "^4.17.21", + "react-fast-compare": "^3.2.2", + "resize-observer-polyfill": "^1.5.1", + "screenfull": "^5.0.0", + "tslib": "^2.4.1" + } + }, + "ajv": { + "version": "6.12.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ajv/-/ajv-6.12.6.tgz", + "integrity": "sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g==", + "dev": true, + "requires": { + "fast-deep-equal": "^3.1.1", + "fast-json-stable-stringify": "^2.0.0", + "json-schema-traverse": "^0.4.1", + "uri-js": "^4.2.2" + } + }, + "ansi-colors": { + "version": "4.1.3", + 
"resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-colors/-/ansi-colors-4.1.3.tgz", + "integrity": "sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw==", + "dev": true + }, + "ansi-regex": { + "version": "5.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "dev": true + }, + "ansi-styles": { + "version": "3.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dev": true, + "requires": { + "color-convert": "^1.9.0" + } + }, + "argparse": { + "version": "1.0.10", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/argparse/-/argparse-1.0.10.tgz", + "integrity": "sha512-o5Roy6tNG4SL/FOkCAN6RzjiakZS25RLYFrcMttJqbdd8BWrnA+fGz57iN5Pb06pvBGvl5gQ0B48dJlslXvoTg==", + "dev": true, + "requires": { + "sprintf-js": "~1.0.2" + } + }, + "array-buffer-byte-length": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/array-buffer-byte-length/-/array-buffer-byte-length-1.0.2.tgz", + "integrity": "sha512-LHE+8BuR7RYGDKvnrmcuSq3tDcKv9OFEXQt/HpbZhY7V6h0zlUXutnAD82GiFx9rdieCMjkvtcsPqBwgUl1Iiw==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "is-array-buffer": "^3.0.5" + } + }, + "array-includes": { + "version": "3.1.9", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/array-includes/-/array-includes-3.1.9.tgz", + "integrity": "sha512-FmeCCAenzH0KH381SPT5FZmiA/TmpndpcaShhfgEN9eCVjnFBqq3l1xrI42y8+PPLI6hypzou4GXw00WHmPBLQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "define-properties": "^1.2.1", + "es-abstract": "^1.24.0", + "es-object-atoms": "^1.1.1", + "get-intrinsic": "^1.3.0", + "is-string": "^1.1.1", + "math-intrinsics": "^1.1.0" + } + }, + "array.prototype.findlast": { + "version": "1.2.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/array.prototype.findlast/-/array.prototype.findlast-1.2.5.tgz", + "integrity": "sha512-CVvd6FHg1Z3POpBLxO6E6zr+rSKEQ9L6rZHAaY7lLfhKsWYUBBOuMs0e9o24oopj6H+geRCX0YJ+TJLBK2eHyQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.2", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0", + "es-shim-unscopables": "^1.0.2" + } + }, + "array.prototype.flat": { + "version": "1.3.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/array.prototype.flat/-/array.prototype.flat-1.3.3.tgz", + "integrity": "sha512-rwG/ja1neyLqCuGZ5YYrznA62D4mZXg0i1cIskIUKSiqF3Cje9/wXAls9B9s1Wa2fomMsIv8czB8jZcPmxCXFg==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-shim-unscopables": "^1.0.2" + } + }, + "array.prototype.flatmap": { + "version": "1.3.3", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/array.prototype.flatmap/-/array.prototype.flatmap-1.3.3.tgz", + "integrity": "sha512-Y7Wt51eKJSyi80hFrJCePGGNo5ktJCslFuboqJsbf57CCPcm5zztluPlc4/aD8sWsKvlwatezpV4U1efk8kpjg==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-shim-unscopables": "^1.0.2" + } + }, + "array.prototype.tosorted": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/array.prototype.tosorted/-/array.prototype.tosorted-1.1.4.tgz", + "integrity": "sha512-p6Fx8B7b7ZhL/gmUsAy0D15WhvDccw3mnGNbZpi3pmeJdxtWsj2jEaI4Y6oo3XiHfzuSgPwKc04MYt6KgvC/wA==", + "dev": true, + "requires": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.3", + "es-errors": "^1.3.0", + "es-shim-unscopables": "^1.0.2" + } + }, + "arraybuffer.prototype.slice": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/arraybuffer.prototype.slice/-/arraybuffer.prototype.slice-1.0.4.tgz", + "integrity": "sha512-BNoCY6SXXPQ7gF2opIP4GBE+Xw7U+pHMYKuzjgCN3GwiaIR09UUeKfheyIry77QtrCBlC0KK0q5/TER/tYh3PQ==", + "dev": true, + "requires": { + "array-buffer-byte-length": "^1.0.1", + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6", + "is-array-buffer": "^3.0.4" + } + }, + "astral-regex": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/astral-regex/-/astral-regex-2.0.0.tgz", + "integrity": "sha512-Z7tMw1ytTXt5jqMcOP+OQteU1VuNK9Y02uuJtKQ1Sv69jXQKKg5cibLwGJow8yzZP+eAc18EmLGPal0bp36rvQ==", + "dev": true + }, + "async-function": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/async-function/-/async-function-1.0.0.tgz", + "integrity": "sha512-hsU18Ae8CDTR6Kgu9DYf0EbCr/a5iGL0rytQDobUcdpYOKokk8LEjVphnXkDkgpi0wYVsqrXuP0bZxJaTqdgoA==", + "dev": true + }, + "autosuggest-highlight": { + "version": "3.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/autosuggest-highlight/-/autosuggest-highlight-3.3.4.tgz", + "integrity": "sha512-j6RETBD2xYnrVcoV1S5R4t3WxOlWZKyDQjkwnggDPSjF5L4jV98ZltBpvPvbkM1HtoSe5o+bNrTHyjPbieGeYA==", + "requires": { + "remove-accents": "^0.4.2" + } + }, + "available-typed-arrays": { + "version": "1.0.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/available-typed-arrays/-/available-typed-arrays-1.0.7.tgz", + "integrity": "sha512-wvUjBtSGN7+7SjNpq/9M2Tg350UZD3q62IFZLbRAR1bSMlCo1ZaeW+BJ+D090e4hIIZLBcTDWe4Mh4jvUDajzQ==", + "dev": true, + "requires": { + "possible-typed-array-names": "^1.0.0" + } + }, + "babel-plugin-macros": { + "version": "3.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/babel-plugin-macros/-/babel-plugin-macros-3.1.0.tgz", + "integrity": "sha512-Cg7TFGpIr01vOQNODXOOaGz2NpCU5gl8x1qJFbb6hbZxR7XrcE2vtbAsTAbJ7/xwJtUuJEw8K8Zr/AE0LHlesg==", + "requires": { + "@babel/runtime": "^7.12.5", + "cosmiconfig": "^7.0.0", + "resolve": "^1.19.0" + } + }, + "bail": { + "version": "2.0.2", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/bail/-/bail-2.0.2.tgz", + "integrity": "sha512-0xO6mYd7JB2YesxDKplafRpsiOzPt9V02ddPCLbY1xYGPOX24NTyN50qnUxgCPcSoYMhKpAuBTjQoRZCAkUDRw==", + "dev": true + }, + "balanced-match": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "dev": true + }, + "brace-expansion": { + "version": "1.1.12", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/brace-expansion/-/brace-expansion-1.1.12.tgz", + "integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==", + "dev": true, + "requires": { + "balanced-match": "^1.0.0", + "concat-map": "0.0.1" + } + }, + "browserslist": { + "version": "4.25.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/browserslist/-/browserslist-4.25.1.tgz", + "integrity": "sha512-KGj0KoOMXLpSNkkEI6Z6mShmQy0bc1I+T7K9N81k4WWMrfz+6fQ6es80B/YLAeRoKvjYE1YSHHOW1qe9xIVzHw==", + "dev": true, + "requires": { + "caniuse-lite": "^1.0.30001726", + "electron-to-chromium": "^1.5.173", + "node-releases": "^2.0.19", + "update-browserslist-db": "^1.1.3" + } + }, + "buffer-from": { + "version": "1.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/buffer-from/-/buffer-from-1.1.2.tgz", + "integrity": "sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ==", + "dev": true + }, + "call-bind": { + "version": "1.0.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/call-bind/-/call-bind-1.0.8.tgz", + "integrity": "sha512-oKlSFMcMwpUg2ednkhQ454wfWiU/ul3CkJe/PEHcTKuiX6RpbehUiFMXu13HalGZxfUwCQzZG747YXBn1im9ww==", + "dev": true, + "requires": { + "call-bind-apply-helpers": "^1.0.0", + "es-define-property": "^1.0.0", + "get-intrinsic": "^1.2.4", + "set-function-length": "^1.2.2" + } + }, + "call-bind-apply-helpers": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz", + "integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==", + "dev": true, + "requires": { + "es-errors": "^1.3.0", + "function-bind": "^1.1.2" + } + }, + "call-bound": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/call-bound/-/call-bound-1.0.4.tgz", + "integrity": "sha512-+ys997U96po4Kx/ABpBCqhA9EuxJaQWDQg7295H4hBphv3IZg0boBKuwYpt4YXp6MZ5AmZQnU/tyMTlRpaSejg==", + "dev": true, + "requires": { + "call-bind-apply-helpers": "^1.0.2", + "get-intrinsic": "^1.3.0" + } + }, + "callsites": { + "version": "3.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/callsites/-/callsites-3.1.0.tgz", + "integrity": "sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ==" + }, + "camelcase": { + "version": "6.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/camelcase/-/camelcase-6.3.0.tgz", + "integrity": 
"sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==", + "dev": true + }, + "caniuse-lite": { + "version": "1.0.30001731", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/caniuse-lite/-/caniuse-lite-1.0.30001731.tgz", + "integrity": "sha512-lDdp2/wrOmTRWuoB5DpfNkC0rJDU8DqRa6nYL6HK6sytw70QMopt/NIc/9SM7ylItlBWfACXk0tEn37UWM/+mg==", + "dev": true + }, + "ccount": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ccount/-/ccount-2.0.1.tgz", + "integrity": "sha512-eyrF0jiFpY+3drT6383f1qhkbGsLSifNAjA61IUjZjmLCWjItY6LB9ft9YhoDgwfmclB2zhu51Lc7+95b8NRAg==", + "dev": true + }, + "chalk": { + "version": "4.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/chalk/-/chalk-4.1.2.tgz", + "integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==", + "dev": true, + "requires": { + "ansi-styles": "^4.1.0", + "supports-color": "^7.1.0" + }, + "dependencies": { + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": "^2.0.1" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + }, + "has-flag": { + "version": "4.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-flag/-/has-flag-4.0.0.tgz", + "integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==", + "dev": true + }, + "supports-color": { + "version": "7.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/supports-color/-/supports-color-7.2.0.tgz", + "integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==", + "dev": true, + "requires": { + "has-flag": "^4.0.0" + } + } + } + }, + "character-entities": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-entities/-/character-entities-2.0.2.tgz", + "integrity": "sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ==", + "dev": true + }, + "character-entities-html4": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-entities-html4/-/character-entities-html4-2.1.0.tgz", + "integrity": "sha512-1v7fgQRj6hnSwFpq1Eu0ynr/CDEw0rXo2B61qXrLNdHZmPKgb7fqS1a2JwF0rISo9q77jDI8VMEHoApn8qDoZA==", + "dev": true + }, 
+ "character-entities-legacy": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-entities-legacy/-/character-entities-legacy-3.0.0.tgz", + "integrity": "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ==", + "dev": true + }, + "character-reference-invalid": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-reference-invalid/-/character-reference-invalid-2.0.1.tgz", + "integrity": "sha512-iBZ4F4wRbyORVsu0jPV7gXkOsGYjGHPmAyv+HiHG8gi5PtC9KI2j1+v8/tlibRvjoWX027ypmG/n0HtO5t7unw==", + "dev": true + }, + "chart.js": { + "version": "4.5.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/chart.js/-/chart.js-4.5.0.tgz", + "integrity": "sha512-aYeC/jDgSEx8SHWZvANYMioYMZ2KX02W6f6uVfyteuCGcadDLcYVHdfdygsTQkQ4TKn5lghoojAsPj5pu0SnvQ==", + "requires": { + "@kurkle/color": "^0.3.0" + } + }, + "chartjs-adapter-date-fns": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/chartjs-adapter-date-fns/-/chartjs-adapter-date-fns-3.0.0.tgz", + "integrity": "sha512-Rs3iEB3Q5pJ973J93OBTpnP7qoGwvq3nUnoMdtxO+9aoJof7UFcRbWcIDteXuYd1fgAvct/32T9qaLyLuZVwCg==" + }, + "chartjs-plugin-zoom": { + "version": "2.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/chartjs-plugin-zoom/-/chartjs-plugin-zoom-2.2.0.tgz", + "integrity": "sha512-in6kcdiTlP6npIVLMd4zXZ08PDUXC52gZ4FAy5oyjk1zX3gKarXMAof7B9eFiisf9WOC3bh2saHg+J5WtLXZeA==", + "requires": { + "@types/hammerjs": "^2.0.45", + "hammerjs": "^2.0.8" + } + }, + "ci-info": { + "version": "4.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ci-info/-/ci-info-4.3.0.tgz", + "integrity": "sha512-l+2bNRMiQgcfILUi33labAZYIWlH1kWDp+ecNo5iisRKrbm0xcRyCww71/YU0Fkw0mAFpz9bJayXPjey6vkmaQ==", + "dev": true + }, + "clsx": { + "version": "2.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/clsx/-/clsx-2.1.1.tgz", + "integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==" + }, + "color": { + "version": "3.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color/-/color-3.2.1.tgz", + "integrity": "sha512-aBl7dZI9ENN6fUGC7mWpMTPNHmWUSNan9tuWN6ahh5ZLNk9baLJOnSMlrQkHcrfFgz2/RigjUVAjdx36VcemKA==", + "requires": { + "color-convert": "^1.9.3", + "color-string": "^1.6.0" + } + }, + "color-convert": { + "version": "1.9.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "requires": { + "color-name": "1.1.3" + } + }, + "color-name": { + "version": "1.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==" + }, + "color-string": { + "version": "1.9.1", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-string/-/color-string-1.9.1.tgz", + "integrity": "sha512-shrVawQFojnZv6xM40anx4CkoDP+fZsw/ZerEMsW/pyzsRbElpsL/DBVW7q3ExxwusdNXI3lXpuhEZkzs8p5Eg==", + "requires": { + "color-name": "^1.0.0", + "simple-swizzle": "^0.2.2" + } + }, + "concat-map": { + "version": "0.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/concat-map/-/concat-map-0.0.1.tgz", + "integrity": "sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==", + "dev": true + }, + "concat-stream": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/concat-stream/-/concat-stream-2.0.0.tgz", + "integrity": "sha512-MWufYdFw53ccGjCA+Ol7XJYpAlW6/prSMzuPOTRnJGcGzuhLn4Scrz7qf6o8bROZ514ltazcIFJZevcfbo0x7A==", + "dev": true, + "requires": { + "buffer-from": "^1.0.0", + "inherits": "^2.0.3", + "readable-stream": "^3.0.2", + "typedarray": "^0.0.6" + } + }, + "convert-source-map": { + "version": "1.9.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/convert-source-map/-/convert-source-map-1.9.0.tgz", + "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==" + }, + "cosmiconfig": { + "version": "7.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/cosmiconfig/-/cosmiconfig-7.1.0.tgz", + "integrity": "sha512-AdmX6xUzdNASswsFtmwSt7Vj8po9IuqXm0UXz7QKPuEUmPB4XyjGfaAr2PSuELMwkRMVH1EpIkX5bTZGRB3eCA==", + "requires": { + "@types/parse-json": "^4.0.0", + "import-fresh": "^3.2.1", + "parse-json": "^5.0.0", + "path-type": "^4.0.0", + "yaml": "^1.10.0" + } + }, + "cross-spawn": { + "version": "7.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/cross-spawn/-/cross-spawn-7.0.6.tgz", + "integrity": "sha512-uV2QOWP2nWzsy2aMp8aRibhi9dlzF5Hgh5SHaB9OiTGEyDTiJJyx0uy51QXdyWbtAHNua4XJzUKca3OzKUd3vA==", + "dev": true, + "requires": { + "path-key": "^3.1.0", + "shebang-command": "^2.0.0", + "which": "^2.0.1" + } + }, + "csstype": { + "version": "3.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/csstype/-/csstype-3.1.3.tgz", + "integrity": "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==" + }, + "data-view-buffer": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/data-view-buffer/-/data-view-buffer-1.0.2.tgz", + "integrity": "sha512-EmKO5V3OLXh1rtK2wgXRansaK1/mtVdTUEiEI0W8RkvgT05kfxaH29PliLnpLP73yYO6142Q72QNa8Wx/A5CqQ==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "is-data-view": "^1.0.2" + } + }, + "data-view-byte-length": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/data-view-byte-length/-/data-view-byte-length-1.0.2.tgz", + "integrity": "sha512-tuhGbE6CfTM9+5ANGf+oQb72Ky/0+s3xKUpHvShfiz2RxMFgFPjsXuRLBVMtvMs15awe45SRb83D6wH4ew6wlQ==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "is-data-view": "^1.0.2" + } + }, + "data-view-byte-offset": { + "version": "1.0.1", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/data-view-byte-offset/-/data-view-byte-offset-1.0.1.tgz", + "integrity": "sha512-BS8PfmtDGnrgYdOonGZQdLZslWIeCGFP9tpan0hi1Co2Zr2NKADsvGYA8XxuG/4UWgJ6Cjtv+YJnB6MM69QGlQ==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "is-data-view": "^1.0.1" + } + }, + "date-fns": { + "version": "2.30.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/date-fns/-/date-fns-2.30.0.tgz", + "integrity": "sha512-fnULvOpxnC5/Vg3NCiWelDsLiUc9bRwAPs/+LfTLNvetFCtCTN+yQz15C/fs4AwX1R9K5GLtLfn8QW+dWisaAw==", + "requires": { + "@babel/runtime": "^7.21.0" + } + }, + "date-fns-tz": { + "version": "1.3.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/date-fns-tz/-/date-fns-tz-1.3.8.tgz", + "integrity": "sha512-qwNXUFtMHTTU6CFSFjoJ80W8Fzzp24LntbjFFBgL/faqds4e5mo9mftoRLgr3Vi1trISsg4awSpYVsOQCRnapQ==" + }, + "dayjs": { + "version": "1.11.13", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/dayjs/-/dayjs-1.11.13.tgz", + "integrity": "sha512-oaMBel6gjolK862uaPQOVTA7q3TZhuSvuMQAAglQDOWYO9A91IrAOUJEyKVlqJlHE0vq5p5UXxzdPfMH/x6xNg==" + }, + "debug": { + "version": "4.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/debug/-/debug-4.4.1.tgz", + "integrity": "sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==", + "requires": { + "ms": "^2.1.3" + } + }, + "decode-named-character-reference": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/decode-named-character-reference/-/decode-named-character-reference-1.2.0.tgz", + "integrity": "sha512-c6fcElNV6ShtZXmsgNgFFV5tVX2PaV4g+MOAkb8eXHvn6sryJBrZa9r0zV6+dtTyoCKxtDy5tyQ5ZwQuidtd+Q==", + "dev": true, + "requires": { + "character-entities": "^2.0.0" + } + }, + "decode-uri-component": { + "version": "0.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/decode-uri-component/-/decode-uri-component-0.2.2.tgz", + "integrity": "sha512-FqUYQ+8o158GyGTrMFJms9qh3CqTKvAqgqsTnkLI8sKu0028orqBhxNMFkFen0zGyg6epACD32pjVk58ngIErQ==" + }, + "deep-is": { + "version": "0.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/deep-is/-/deep-is-0.1.4.tgz", + "integrity": "sha512-oIPzksmTg4/MriiaYGO+okXDT7ztn/w3Eptv/+gSIdMdKsJo0u4CfYNFJPy+4SKMuCqGw2wxnA+URMg3t8a/bQ==", + "dev": true + }, + "define-data-property": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/define-data-property/-/define-data-property-1.1.4.tgz", + "integrity": "sha512-rBMvIzlpA8v6E+SJZoo++HAYqsLrkg7MSfIinMPFhmkorw7X+dOXVJQs+QT69zGkzMyfDnIMN2Wid1+NbL3T+A==", + "dev": true, + "requires": { + "es-define-property": "^1.0.0", + "es-errors": "^1.3.0", + "gopd": "^1.0.1" + } + }, + "define-properties": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/define-properties/-/define-properties-1.2.1.tgz", + "integrity": "sha512-8QmQKqEASLd5nx0U1B1okLElbUuuttJ/AnYmRXbbbGDWh6uS208EjD4Xqq/I9wK7u0v6O08XhTWnt5XtEbR6Dg==", + "dev": true, + "requires": { + "define-data-property": "^1.0.1", + "has-property-descriptors": "^1.0.0", + 
"object-keys": "^1.1.1" + } + }, + "dequal": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/dequal/-/dequal-2.0.3.tgz", + "integrity": "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==", + "dev": true + }, + "diff": { + "version": "5.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/diff/-/diff-5.2.0.tgz", + "integrity": "sha512-uIFDxqpRZGZ6ThOk84hEfqWoHx2devRFvpTZcTHur85vImfaxUbTW9Ryh4CpCuDnToOP1CEtXKIgytHBPVff5A==", + "dev": true + }, + "doctrine": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/doctrine/-/doctrine-3.0.0.tgz", + "integrity": "sha512-yS+Q5i3hBf7GBkd4KG8a7eBNNWNGLTaEwwYWUijIYM7zrlYDM0BFXHjjPWlWZ1Rg7UaddZeIDmi9jF3HmqiQ2w==", + "dev": true, + "requires": { + "esutils": "^2.0.2" + } + }, + "dom-helpers": { + "version": "5.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/dom-helpers/-/dom-helpers-5.2.1.tgz", + "integrity": "sha512-nRCa7CK3VTrM2NmGkIy4cbK7IZlgBE/PYMn55rrXefr5xXDP0LdtfPnblFDoVdcAfslJ7or6iqAUnx0CCGIWQA==", + "requires": { + "@babel/runtime": "^7.8.7", + "csstype": "^3.0.2" + } + }, + "dunder-proto": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/dunder-proto/-/dunder-proto-1.0.1.tgz", + "integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==", + "dev": true, + "requires": { + "call-bind-apply-helpers": "^1.0.1", + "es-errors": "^1.3.0", + "gopd": "^1.2.0" + } + }, + "eastasianwidth": { + "version": "0.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eastasianwidth/-/eastasianwidth-0.2.0.tgz", + "integrity": "sha512-I88TYZWc9XiYHRQ4/3c5rjjfgkjhLyW2luGIheGERbNQ6OY7yTybanSpDXZa8y7VUP9YmDcYa+eyq4ca7iLqWA==", + "dev": true + }, + "electron-to-chromium": { + "version": "1.5.198", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/electron-to-chromium/-/electron-to-chromium-1.5.198.tgz", + "integrity": "sha512-G5COfnp3w+ydVu80yprgWSfmfQaYRh9DOxfhAxstLyetKaLyl55QrNjx8C38Pc/C+RaDmb1M0Lk8wPEMQ+bGgQ==", + "dev": true + }, + "emoji-regex": { + "version": "8.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==", + "dev": true + }, + "enquirer": { + "version": "2.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/enquirer/-/enquirer-2.4.1.tgz", + "integrity": "sha512-rRqJg/6gd538VHvR3PSrdRBb/1Vy2YfzHqzvbhGIQpDRKIa4FgV/54b5Q1xYSxOOwKvjXweS26E0Q+nAMwp2pQ==", + "dev": true, + "requires": { + "ansi-colors": "^4.1.1", + "strip-ansi": "^6.0.1" + } + }, + "entities": { + "version": "4.5.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/entities/-/entities-4.5.0.tgz", + "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==", + "dev": true + }, + "error-ex": { + "version": "1.3.2", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/error-ex/-/error-ex-1.3.2.tgz", + "integrity": "sha512-7dFHNmqeFSEt2ZBsCriorKnn3Z2pj+fd9kmI6QoWw4//DL+icEBfc0U7qJCisqrTsKTjw4fNFy2pW9OqStD84g==", + "requires": { + "is-arrayish": "^0.2.1" + } + }, + "es-abstract": { + "version": "1.24.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-abstract/-/es-abstract-1.24.0.tgz", + "integrity": "sha512-WSzPgsdLtTcQwm4CROfS5ju2Wa1QQcVeT37jFjYzdFz1r9ahadC8B8/a4qxJxM+09F18iumCdRmlr96ZYkQvEg==", + "dev": true, + "requires": { + "array-buffer-byte-length": "^1.0.2", + "arraybuffer.prototype.slice": "^1.0.4", + "available-typed-arrays": "^1.0.7", + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "data-view-buffer": "^1.0.2", + "data-view-byte-length": "^1.0.2", + "data-view-byte-offset": "^1.0.1", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "es-set-tostringtag": "^2.1.0", + "es-to-primitive": "^1.3.0", + "function.prototype.name": "^1.1.8", + "get-intrinsic": "^1.3.0", + "get-proto": "^1.0.1", + "get-symbol-description": "^1.1.0", + "globalthis": "^1.0.4", + "gopd": "^1.2.0", + "has-property-descriptors": "^1.0.2", + "has-proto": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "internal-slot": "^1.1.0", + "is-array-buffer": "^3.0.5", + "is-callable": "^1.2.7", + "is-data-view": "^1.0.2", + "is-negative-zero": "^2.0.3", + "is-regex": "^1.2.1", + "is-set": "^2.0.3", + "is-shared-array-buffer": "^1.0.4", + "is-string": "^1.1.1", + "is-typed-array": "^1.1.15", + "is-weakref": "^1.1.1", + "math-intrinsics": "^1.1.0", + "object-inspect": "^1.13.4", + "object-keys": "^1.1.1", + "object.assign": "^4.1.7", + "own-keys": "^1.0.1", + "regexp.prototype.flags": "^1.5.4", + "safe-array-concat": "^1.1.3", + "safe-push-apply": "^1.0.0", + "safe-regex-test": "^1.1.0", + "set-proto": "^1.0.0", + "stop-iteration-iterator": "^1.1.0", + "string.prototype.trim": "^1.2.10", + "string.prototype.trimend": "^1.0.9", + "string.prototype.trimstart": "^1.0.8", + "typed-array-buffer": "^1.0.3", + "typed-array-byte-length": "^1.0.3", + "typed-array-byte-offset": "^1.0.4", + "typed-array-length": "^1.0.7", + "unbox-primitive": "^1.1.0", + "which-typed-array": "^1.1.19" + } + }, + "es-define-property": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-define-property/-/es-define-property-1.0.1.tgz", + "integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==", + "dev": true + }, + "es-errors": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-errors/-/es-errors-1.3.0.tgz", + "integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==", + "dev": true + }, + "es-iterator-helpers": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-iterator-helpers/-/es-iterator-helpers-1.2.1.tgz", + "integrity": "sha512-uDn+FE1yrDzyC0pCo961B2IHbdM8y/ACZsKD4dG6WqrjV53BADjwa7D+1aom2rsNVfLyDgU/eigvlJGJ08OQ4w==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.6", + "es-errors": "^1.3.0", + "es-set-tostringtag": "^2.0.3", + "function-bind": "^1.1.2", + "get-intrinsic": "^1.2.6", 
+ "globalthis": "^1.0.4", + "gopd": "^1.2.0", + "has-property-descriptors": "^1.0.2", + "has-proto": "^1.2.0", + "has-symbols": "^1.1.0", + "internal-slot": "^1.1.0", + "iterator.prototype": "^1.1.4", + "safe-array-concat": "^1.1.3" + } + }, + "es-object-atoms": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-object-atoms/-/es-object-atoms-1.1.1.tgz", + "integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==", + "dev": true, + "requires": { + "es-errors": "^1.3.0" + } + }, + "es-set-tostringtag": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz", + "integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==", + "dev": true, + "requires": { + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6", + "has-tostringtag": "^1.0.2", + "hasown": "^2.0.2" + } + }, + "es-shim-unscopables": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-shim-unscopables/-/es-shim-unscopables-1.1.0.tgz", + "integrity": "sha512-d9T8ucsEhh8Bi1woXCf+TIKDIROLG5WCkxg8geBCbvk22kzwC5G2OnXVMO6FUsvQlgUUXQ2itephWDLqDzbeCw==", + "dev": true, + "requires": { + "hasown": "^2.0.2" + } + }, + "es-to-primitive": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/es-to-primitive/-/es-to-primitive-1.3.0.tgz", + "integrity": "sha512-w+5mJ3GuFL+NjVtJlvydShqE1eN3h3PbI7/5LAsYJP/2qtuMXjfL2LpHSRqo4b4eSF5K/DH1JXKUAHSB2UW50g==", + "dev": true, + "requires": { + "is-callable": "^1.2.7", + "is-date-object": "^1.0.5", + "is-symbol": "^1.0.4" + } + }, + "esbuild": { + "version": "0.25.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/esbuild/-/esbuild-0.25.8.tgz", + "integrity": "sha512-vVC0USHGtMi8+R4Kz8rt6JhEWLxsv9Rnu/lGYbPR8u47B+DCBksq9JarW0zOO7bs37hyOK1l2/oqtbciutL5+Q==", + "dev": true, + "requires": { + "@esbuild/aix-ppc64": "0.25.8", + "@esbuild/android-arm": "0.25.8", + "@esbuild/android-arm64": "0.25.8", + "@esbuild/android-x64": "0.25.8", + "@esbuild/darwin-arm64": "0.25.8", + "@esbuild/darwin-x64": "0.25.8", + "@esbuild/freebsd-arm64": "0.25.8", + "@esbuild/freebsd-x64": "0.25.8", + "@esbuild/linux-arm": "0.25.8", + "@esbuild/linux-arm64": "0.25.8", + "@esbuild/linux-ia32": "0.25.8", + "@esbuild/linux-loong64": "0.25.8", + "@esbuild/linux-mips64el": "0.25.8", + "@esbuild/linux-ppc64": "0.25.8", + "@esbuild/linux-riscv64": "0.25.8", + "@esbuild/linux-s390x": "0.25.8", + "@esbuild/linux-x64": "0.25.8", + "@esbuild/netbsd-arm64": "0.25.8", + "@esbuild/netbsd-x64": "0.25.8", + "@esbuild/openbsd-arm64": "0.25.8", + "@esbuild/openbsd-x64": "0.25.8", + "@esbuild/openharmony-arm64": "0.25.8", + "@esbuild/sunos-x64": "0.25.8", + "@esbuild/win32-arm64": "0.25.8", + "@esbuild/win32-ia32": "0.25.8", + "@esbuild/win32-x64": "0.25.8" + } + }, + "escalade": { + "version": "3.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/escalade/-/escalade-3.2.0.tgz", + "integrity": "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA==", + "dev": true + }, + "escape-string-regexp": { + "version": "4.0.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/escape-string-regexp/-/escape-string-regexp-4.0.0.tgz", + "integrity": "sha512-TtpcNJ3XAzx3Gq8sWRzJaVajRs0uVxA2YAkdb1jm2YkPz4G6egUFAyA3n5vtEIZefPk5Wa4UXbKuS5fKkJWdgA==" + }, + "eslint": { + "version": "7.32.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint/-/eslint-7.32.0.tgz", + "integrity": "sha512-VHZ8gX+EDfz+97jGcgyGCyRia/dPOd6Xh9yPv8Bl1+SoaIwD+a/vlrOmGRUyOYu7MwUhc7CxqeaDZU13S4+EpA==", + "dev": true, + "requires": { + "@babel/code-frame": "7.12.11", + "@eslint/eslintrc": "^0.4.3", + "@humanwhocodes/config-array": "^0.5.0", + "ajv": "^6.10.0", + "chalk": "^4.0.0", + "cross-spawn": "^7.0.2", + "debug": "^4.0.1", + "doctrine": "^3.0.0", + "enquirer": "^2.3.5", + "escape-string-regexp": "^4.0.0", + "eslint-scope": "^5.1.1", + "eslint-utils": "^2.1.0", + "eslint-visitor-keys": "^2.0.0", + "espree": "^7.3.1", + "esquery": "^1.4.0", + "esutils": "^2.0.2", + "fast-deep-equal": "^3.1.3", + "file-entry-cache": "^6.0.1", + "functional-red-black-tree": "^1.0.1", + "glob-parent": "^5.1.2", + "globals": "^13.6.0", + "ignore": "^4.0.6", + "import-fresh": "^3.0.0", + "imurmurhash": "^0.1.4", + "is-glob": "^4.0.0", + "js-yaml": "^3.13.1", + "json-stable-stringify-without-jsonify": "^1.0.1", + "levn": "^0.4.1", + "lodash.merge": "^4.6.2", + "minimatch": "^3.0.4", + "natural-compare": "^1.4.0", + "optionator": "^0.9.1", + "progress": "^2.0.0", + "regexpp": "^3.1.0", + "semver": "^7.2.1", + "strip-ansi": "^6.0.0", + "strip-json-comments": "^3.1.0", + "table": "^6.0.9", + "text-table": "^0.2.0", + "v8-compile-cache": "^2.0.3" + }, + "dependencies": { + "@babel/code-frame": { + "version": "7.12.11", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@babel/code-frame/-/code-frame-7.12.11.tgz", + "integrity": "sha512-Zt1yodBx1UcyiePMSkWnU4hPqhwq7hGi2nFL1LeA3EUl+q2LQx16MISgJ0+z7dnmgvP9QtIleuETGOiOH1RcIw==", + "dev": true, + "requires": { + "@babel/highlight": "^7.10.4" + } + }, + "semver": { + "version": "7.7.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/semver/-/semver-7.7.2.tgz", + "integrity": "sha512-RF0Fw+rO5AMf9MAyaRXI4AV0Ulj5lMHqVxxdSgiVbixSCXoEmmX/jk0CuJw4+3SqroYO9VoUh+HcuJivvtJemA==", + "dev": true + } + } + }, + "eslint-config-prettier": { + "version": "8.10.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-config-prettier/-/eslint-config-prettier-8.10.2.tgz", + "integrity": "sha512-/IGJ6+Dka158JnP5n5YFMOszjDWrXggGz1LaK/guZq9vZTmniaKlHcsscvkAhn9y4U+BU3JuUdYvtAMcv30y4A==", + "dev": true + }, + "eslint-config-react": { + "version": "1.1.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-config-react/-/eslint-config-react-1.1.7.tgz", + "integrity": "sha512-P4Z6u68wf0BvIvZNu+U8uQsk3DcZ1CcCI1XpUkJlG6vOa+iVcSQLgE01f2DB2kXlKRcT8/3dsH+wveLgvEgbkQ==", + "dev": true + }, + "eslint-mdx": { + "version": "2.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-mdx/-/eslint-mdx-2.3.4.tgz", + "integrity": "sha512-u4NszEUyoGtR7Q0A4qs0OymsEQdCO6yqWlTzDa9vGWsK7aMotdnW0hqifHTkf6lEtA2vHk2xlkWHTCrhYLyRbw==", + "dev": true, + "requires": { + "acorn": "^8.10.0", + "acorn-jsx": "^5.3.2", + "espree": "^9.6.1", + "estree-util-visit": "^1.2.1", + "remark-mdx": "^2.3.0", + 
"remark-parse": "^10.0.2", + "remark-stringify": "^10.0.3", + "synckit": "^0.9.0", + "tslib": "^2.6.1", + "unified": "^10.1.2", + "unified-engine": "^10.1.0", + "unist-util-visit": "^4.1.2", + "uvu": "^0.5.6", + "vfile": "^5.3.7" + }, + "dependencies": { + "acorn": { + "version": "8.15.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/acorn/-/acorn-8.15.0.tgz", + "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==", + "dev": true + }, + "eslint-visitor-keys": { + "version": "3.4.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-visitor-keys/-/eslint-visitor-keys-3.4.3.tgz", + "integrity": "sha512-wpc+LXeiyiisxPlEkUzU6svyS1frIO3Mgxj1fdy7Pm8Ygzguax2N3Fa/D/ag1WqbOprdI+uY6wMUl8/a2G+iag==", + "dev": true + }, + "espree": { + "version": "9.6.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/espree/-/espree-9.6.1.tgz", + "integrity": "sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ==", + "dev": true, + "requires": { + "acorn": "^8.9.0", + "acorn-jsx": "^5.3.2", + "eslint-visitor-keys": "^3.4.1" + } + } + } + }, + "eslint-plugin-markdown": { + "version": "3.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-plugin-markdown/-/eslint-plugin-markdown-3.0.1.tgz", + "integrity": "sha512-8rqoc148DWdGdmYF6WSQFT3uQ6PO7zXYgeBpHAOAakX/zpq+NvFYbDA/H7PYzHajwtmaOzAwfxyl++x0g1/N9A==", + "dev": true, + "requires": { + "mdast-util-from-markdown": "^0.8.5" + }, + "dependencies": { + "character-entities": { + "version": "1.2.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-entities/-/character-entities-1.2.4.tgz", + "integrity": "sha512-iBMyeEHxfVnIakwOuDXpVkc54HijNgCyQB2w0VfGQThle6NXn50zU6V/u+LDhxHcDUPojn6Kpga3PTAD8W1bQw==", + "dev": true + }, + "character-entities-legacy": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-entities-legacy/-/character-entities-legacy-1.1.4.tgz", + "integrity": "sha512-3Xnr+7ZFS1uxeiUDvV02wQ+QDbc55o97tIV5zHScSPJpcLm/r0DFPcoY3tYRp+VZukxuMeKgXYmsXQHO05zQeA==", + "dev": true + }, + "character-reference-invalid": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/character-reference-invalid/-/character-reference-invalid-1.1.4.tgz", + "integrity": "sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg==", + "dev": true + }, + "is-alphabetical": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-alphabetical/-/is-alphabetical-1.0.4.tgz", + "integrity": "sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg==", + "dev": true + }, + "is-alphanumerical": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-alphanumerical/-/is-alphanumerical-1.0.4.tgz", + "integrity": "sha512-UzoZUr+XfVz3t3v4KyGEniVL9BDRoQtY7tOyrRybkVNjDFWyo1yhXNGrrBTQxp3ib9BLAWs7k2YKBQsFRkZG9A==", + "dev": true, + "requires": { + "is-alphabetical": "^1.0.0", + "is-decimal": "^1.0.0" + } + }, + "is-decimal": { + "version": 
"1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-decimal/-/is-decimal-1.0.4.tgz", + "integrity": "sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw==", + "dev": true + }, + "is-hexadecimal": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-hexadecimal/-/is-hexadecimal-1.0.4.tgz", + "integrity": "sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw==", + "dev": true + }, + "mdast-util-from-markdown": { + "version": "0.8.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-from-markdown/-/mdast-util-from-markdown-0.8.5.tgz", + "integrity": "sha512-2hkTXtYYnr+NubD/g6KGBS/0mFmBcifAsI0yIWRiRo0PjVs6SSOSOdtzbp6kSGnShDN6G5aWZpKQ2lWRy27mWQ==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0", + "mdast-util-to-string": "^2.0.0", + "micromark": "~2.11.0", + "parse-entities": "^2.0.0", + "unist-util-stringify-position": "^2.0.0" + } + }, + "mdast-util-to-string": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-to-string/-/mdast-util-to-string-2.0.0.tgz", + "integrity": "sha512-AW4DRS3QbBayY/jJmD8437V1Gombjf8RSOUCMFBuo5iHi58AGEgVCKQ+ezHkZZDpAQS75hcBMpLqjpJTjtUL7w==", + "dev": true + }, + "micromark": { + "version": "2.11.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark/-/micromark-2.11.4.tgz", + "integrity": "sha512-+WoovN/ppKolQOFIAajxi7Lu9kInbPxFuTBVEavFcL8eAfVstoc5MocPmqBeAdBOJV00uaVjegzH4+MA0DN/uA==", + "dev": true, + "requires": { + "debug": "^4.0.0", + "parse-entities": "^2.0.0" + } + }, + "parse-entities": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/parse-entities/-/parse-entities-2.0.0.tgz", + "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==", + "dev": true, + "requires": { + "character-entities": "^1.0.0", + "character-entities-legacy": "^1.0.0", + "character-reference-invalid": "^1.0.0", + "is-alphanumerical": "^1.0.0", + "is-decimal": "^1.0.0", + "is-hexadecimal": "^1.0.0" + } + }, + "unist-util-stringify-position": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz", + "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==", + "dev": true, + "requires": { + "@types/unist": "^2.0.2" + } + } + } + }, + "eslint-plugin-mdx": { + "version": "2.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-plugin-mdx/-/eslint-plugin-mdx-2.3.4.tgz", + "integrity": "sha512-kr6tgaifKL+AVGYMtdYc2VCsIjfYQXuUCKz4rK58d2DpnPFHrmgXIOC7NcMvaEld+VOEpxBSCCnjnsf4IVCQGg==", + "dev": true, + "requires": { + "eslint-mdx": "^2.3.4", + "eslint-plugin-markdown": "^3.0.1", + "remark-mdx": "^2.3.0", + "remark-parse": "^10.0.2", + "remark-stringify": "^10.0.3", + "tslib": "^2.6.1", + "unified": "^10.1.2", + "vfile": "^5.3.7" + } + }, + "eslint-plugin-prettier": { + "version": "3.4.1", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-plugin-prettier/-/eslint-plugin-prettier-3.4.1.tgz", + "integrity": "sha512-htg25EUYUeIhKHXjOinK4BgCcDwtLHjqaxCDsMy5nbnUMkKFvIhMVCp+5GFUXQ4Nr8lBsPqtGAqBenbpFqAA2g==", + "dev": true, + "requires": { + "prettier-linter-helpers": "^1.0.0" + } + }, + "eslint-plugin-react": { + "version": "7.37.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-plugin-react/-/eslint-plugin-react-7.37.5.tgz", + "integrity": "sha512-Qteup0SqU15kdocexFNAJMvCJEfa2xUKNV4CC1xsVMrIIqEy3SQ/rqyxCWNzfrd3/ldy6HMlD2e0JDVpDg2qIA==", + "dev": true, + "requires": { + "array-includes": "^3.1.8", + "array.prototype.findlast": "^1.2.5", + "array.prototype.flatmap": "^1.3.3", + "array.prototype.tosorted": "^1.1.4", + "doctrine": "^2.1.0", + "es-iterator-helpers": "^1.2.1", + "estraverse": "^5.3.0", + "hasown": "^2.0.2", + "jsx-ast-utils": "^2.4.1 || ^3.0.0", + "minimatch": "^3.1.2", + "object.entries": "^1.1.9", + "object.fromentries": "^2.0.8", + "object.values": "^1.2.1", + "prop-types": "^15.8.1", + "resolve": "^2.0.0-next.5", + "semver": "^6.3.1", + "string.prototype.matchall": "^4.0.12", + "string.prototype.repeat": "^1.0.0" + }, + "dependencies": { + "doctrine": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/doctrine/-/doctrine-2.1.0.tgz", + "integrity": "sha512-35mSku4ZXK0vfCuHEDAwt55dg2jNajHZ1odvF+8SSr82EsZY4QmXfuWso8oEd8zRhVObSN18aM0CjSdoBX7zIw==", + "dev": true, + "requires": { + "esutils": "^2.0.2" + } + }, + "estraverse": { + "version": "5.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true + }, + "resolve": { + "version": "2.0.0-next.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/resolve/-/resolve-2.0.0-next.5.tgz", + "integrity": "sha512-U7WjGVG9sH8tvjW5SmGbQuui75FiyjAX72HX15DwBBwF9dNiQZRQAg9nnPhYy+TUnE0+VcrttuvNI8oSxZcocA==", + "dev": true, + "requires": { + "is-core-module": "^2.13.0", + "path-parse": "^1.0.7", + "supports-preserve-symlinks-flag": "^1.0.0" + } + } + } + }, + "eslint-plugin-react-hooks": { + "version": "4.6.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-plugin-react-hooks/-/eslint-plugin-react-hooks-4.6.2.tgz", + "integrity": "sha512-QzliNJq4GinDBcD8gPB5v0wh6g8q3SUi6EFF0x8N/BL9PoVs0atuGc47ozMRyOWAKdwaZ5OnbOEa3WR+dSGKuQ==", + "dev": true + }, + "eslint-plugin-simple-import-sort": { + "version": "7.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-plugin-simple-import-sort/-/eslint-plugin-simple-import-sort-7.0.0.tgz", + "integrity": "sha512-U3vEDB5zhYPNfxT5TYR7u01dboFZp+HNpnGhkDB2g/2E4wZ/g1Q9Ton8UwCLfRV9yAKyYqDh62oHOamvkFxsvw==", + "dev": true + }, + "eslint-scope": { + "version": "5.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-scope/-/eslint-scope-5.1.1.tgz", + "integrity": "sha512-2NxwbF/hZ0KpepYN0cNbo+FN6XoK7GaHlQhgx/hIZl6Va0bF45RQOOwhLIy8lQDbuCiadSLCBnH2CFYquit5bw==", + "dev": true, + "requires": { + "esrecurse": "^4.3.0", + "estraverse": "^4.1.1" + } + }, + "eslint-utils": { + 
"version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-utils/-/eslint-utils-2.1.0.tgz", + "integrity": "sha512-w94dQYoauyvlDc43XnGB8lU3Zt713vNChgt4EWwhXAP2XkBvndfxF0AgIqKOOasjPIPzj9JqgwkwbCYD0/V3Zg==", + "dev": true, + "requires": { + "eslint-visitor-keys": "^1.1.0" + }, + "dependencies": { + "eslint-visitor-keys": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-visitor-keys/-/eslint-visitor-keys-1.3.0.tgz", + "integrity": "sha512-6J72N8UNa462wa/KFODt/PJ3IU60SDpC3QXC1Hjc1BXXpfL2C9R5+AU7jhe0F6GREqVMh4Juu+NY7xn+6dipUQ==", + "dev": true + } + } + }, + "eslint-visitor-keys": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-visitor-keys/-/eslint-visitor-keys-2.1.0.tgz", + "integrity": "sha512-0rSmRBzXgDzIsD6mGdJgevzgezI534Cer5L/vyMX0kHzT/jiB43jRhd9YUlMGYLQy2zprNmoT8qasCGtY+QaKw==", + "dev": true + }, + "espree": { + "version": "7.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/espree/-/espree-7.3.1.tgz", + "integrity": "sha512-v3JCNCE64umkFpmkFGqzVKsOT0tN1Zr+ueqLZfpV1Ob8e+CEgPWa+OxCoGH3tnhimMKIaBm4m/vaRpJ/krRz2g==", + "dev": true, + "requires": { + "acorn": "^7.4.0", + "acorn-jsx": "^5.3.1", + "eslint-visitor-keys": "^1.3.0" + }, + "dependencies": { + "eslint-visitor-keys": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/eslint-visitor-keys/-/eslint-visitor-keys-1.3.0.tgz", + "integrity": "sha512-6J72N8UNa462wa/KFODt/PJ3IU60SDpC3QXC1Hjc1BXXpfL2C9R5+AU7jhe0F6GREqVMh4Juu+NY7xn+6dipUQ==", + "dev": true + } + } + }, + "esprima": { + "version": "4.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/esprima/-/esprima-4.0.1.tgz", + "integrity": "sha512-eGuFFw7Upda+g4p+QHvnW0RyTX/SVeJBDM/gCtMARO0cLuT2HcEKnTPvhjV6aGeqrCB/sbNop0Kszm0jsaWU4A==", + "dev": true + }, + "esquery": { + "version": "1.6.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/esquery/-/esquery-1.6.0.tgz", + "integrity": "sha512-ca9pw9fomFcKPvFLXhBKUK90ZvGibiGOvRJNbjljY7s7uq/5YO4BOzcYtJqExdx99rF6aAcnRxHmcUHcz6sQsg==", + "dev": true, + "requires": { + "estraverse": "^5.1.0" + }, + "dependencies": { + "estraverse": { + "version": "5.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true + } + } + }, + "esrecurse": { + "version": "4.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/esrecurse/-/esrecurse-4.3.0.tgz", + "integrity": "sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag==", + "dev": true, + "requires": { + "estraverse": "^5.2.0" + }, + "dependencies": { + "estraverse": { + "version": "5.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estraverse/-/estraverse-5.3.0.tgz", + "integrity": "sha512-MMdARuVEQziNTeJD8DgMqmhwR11BRQ/cBP+pLtYdSTnf3MIO8fFeiINEbX36ZdNlfU/7A9f3gUw49B3oQsvwBA==", + "dev": true + } + } + }, + "estraverse": { + "version": "4.3.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estraverse/-/estraverse-4.3.0.tgz", + "integrity": "sha512-39nnKffWz8xN1BU/2c79n9nB9HDzo0niYUqx6xyqUnyoAnQyyWpOTdZEeiCch8BBu515t4wp9ZmgVfVhn9EBpw==", + "dev": true + }, + "estree-util-is-identifier-name": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estree-util-is-identifier-name/-/estree-util-is-identifier-name-2.1.0.tgz", + "integrity": "sha512-bEN9VHRyXAUOjkKVQVvArFym08BTWB0aJPppZZr0UNyAqWsLaVfAqP7hbaTJjzHifmB5ebnR8Wm7r7yGN/HonQ==", + "dev": true + }, + "estree-util-visit": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estree-util-visit/-/estree-util-visit-1.2.1.tgz", + "integrity": "sha512-xbgqcrkIVbIG+lI/gzbvd9SGTJL4zqJKBFttUl5pP27KhAjtMKbX/mQXJ7qgyXpMgVy/zvpm0xoQQaGL8OloOw==", + "dev": true, + "requires": { + "@types/estree-jsx": "^1.0.0", + "@types/unist": "^2.0.0" + } + }, + "estree-walker": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/estree-walker/-/estree-walker-2.0.2.tgz", + "integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==", + "dev": true + }, + "esutils": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/esutils/-/esutils-2.0.3.tgz", + "integrity": "sha512-kVscqXk4OCp68SZ0dkgEKVi6/8ij300KBWTJq32P/dYeWTSwK41WyTxalN1eRmA5Z9UU/LX9D7FWSmV9SAYx6g==", + "dev": true + }, + "events": { + "version": "3.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/events/-/events-3.3.0.tgz", + "integrity": "sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q==" + }, + "extend": { + "version": "3.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/extend/-/extend-3.0.2.tgz", + "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==", + "dev": true + }, + "fast-deep-equal": { + "version": "3.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz", + "integrity": "sha512-f3qQ9oQy9j2AhBe/H9VC91wLmKBCCU/gDOnKNAYG5hswO7BLKj09Hc5HYNz9cGI++xlpDCIgDaitVs03ATR84Q==", + "dev": true + }, + "fast-diff": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fast-diff/-/fast-diff-1.3.0.tgz", + "integrity": "sha512-VxPP4NqbUjj6MaAOafWeUn2cXWLcCtljklUtZf0Ind4XQ+QPtmA0b18zZy0jIQx+ExRVCR/ZQpBmik5lXshNsw==", + "dev": true + }, + "fast-json-stable-stringify": { + "version": "2.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fast-json-stable-stringify/-/fast-json-stable-stringify-2.1.0.tgz", + "integrity": "sha512-lhd/wF+Lk98HZoTCtlVraHtfh5XYijIjalXck7saUtuanSDyLMxnHhSXEDJqHxD7msR8D0uCmqlkwjCV8xvwHw==", + "dev": true + }, + "fast-levenshtein": { + "version": "2.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fast-levenshtein/-/fast-levenshtein-2.0.6.tgz", + "integrity": 
"sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", + "dev": true + }, + "fast-uri": { + "version": "3.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fast-uri/-/fast-uri-3.0.6.tgz", + "integrity": "sha512-Atfo14OibSv5wAp4VWNsFYE1AchQRTv9cBGWET4pZWHzYshFSS9NQI6I57rdKn9croWVMbYFbLhJ+yJvmZIIHw==", + "dev": true + }, + "fault": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fault/-/fault-2.0.1.tgz", + "integrity": "sha512-WtySTkS4OKev5JtpHXnib4Gxiurzh5NCGvWrFaZ34m6JehfTUhKZvn9njTfw48t6JumVQOmrKqpmGcdwxnhqBQ==", + "dev": true, + "requires": { + "format": "^0.2.0" + } + }, + "fdir": { + "version": "6.4.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fdir/-/fdir-6.4.6.tgz", + "integrity": "sha512-hiFoqpyZcfNm1yc4u8oWCf9A2c4D3QjCrks3zmoVKVxpQRzmPNar1hUJcBG2RQHvEVGDN+Jm81ZheVLAQMK6+w==", + "dev": true + }, + "file-entry-cache": { + "version": "6.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/file-entry-cache/-/file-entry-cache-6.0.1.tgz", + "integrity": "sha512-7Gps/XWymbLk2QLYK4NzpMOrYjMhdIxXuIvy2QBsLE6ljuodKvdkWs/cpyJJ3CVIVpH0Oi1Hvg1ovbMzLdFBBg==", + "dev": true, + "requires": { + "flat-cache": "^3.0.4" + } + }, + "filter-obj": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/filter-obj/-/filter-obj-1.1.0.tgz", + "integrity": "sha512-8rXg1ZnX7xzy2NGDVkBVaAy+lSlPNwad13BtgSlLuxfIslyt5Vg64U7tFcCt4WS1R0hvtnQybT/IyCkGZ3DpXQ==" + }, + "find-root": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/find-root/-/find-root-1.1.0.tgz", + "integrity": "sha512-NKfW6bec6GfKc0SGx1e07QZY9PE99u0Bft/0rzSD5k3sO/vwkVUpDUKVm5Gpp5Ue3YfShPFTX2070tDs5kB9Ng==" + }, + "flame-chart-js": { + "version": "github:Granulate/flame-chart-js#cbead371f4f88843a37e91830edcbb5a3abcf20c", + "from": "github:Granulate/flame-chart-js", + "requires": { + "color": "^3.1.3", + "events": "^3.2.0" + } + }, + "flat-cache": { + "version": "3.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/flat-cache/-/flat-cache-3.2.0.tgz", + "integrity": "sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw==", + "dev": true, + "requires": { + "flatted": "^3.2.9", + "keyv": "^4.5.3", + "rimraf": "^3.0.2" + } + }, + "flatted": { + "version": "3.3.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/flatted/-/flatted-3.3.3.tgz", + "integrity": "sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==", + "dev": true + }, + "for-each": { + "version": "0.3.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/for-each/-/for-each-0.3.5.tgz", + "integrity": "sha512-dKx12eRCVIzqCxFGplyFKJMPvLEWgmNtUrpTiJIR5u97zEhRG8ySrtboPHZXx7daLxQVrl643cTzbab2tkQjxg==", + "dev": true, + "requires": { + "is-callable": "^1.2.7" + } + }, + "foreground-child": { + "version": "3.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/foreground-child/-/foreground-child-3.3.1.tgz", + "integrity": 
"sha512-gIXjKqtFuWEgzFRJA9WCQeSJLZDjgJUOMCMzxtvFq/37KojM1BFGufqsCy0r4qSQmYLsZYMeyRqzIWOMup03sw==", + "dev": true, + "requires": { + "cross-spawn": "^7.0.6", + "signal-exit": "^4.0.1" + } + }, + "format": { + "version": "0.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/format/-/format-0.2.2.tgz", + "integrity": "sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww==", + "dev": true + }, + "framer-motion": { + "version": "2.9.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/framer-motion/-/framer-motion-2.9.5.tgz", + "integrity": "sha512-epSX4Co1YbDv0mjfHouuY0q361TpHE7WQzCp/xMTilxy4kXd+Z23uJzPVorfzbm1a/9q1Yu8T5bndaw65NI4Tg==", + "requires": { + "@emotion/is-prop-valid": "^0.8.2", + "framesync": "^4.1.0", + "hey-listen": "^1.0.8", + "popmotion": "9.0.0-rc.20", + "style-value-types": "^3.1.9", + "tslib": "^1.10.0" + }, + "dependencies": { + "@emotion/is-prop-valid": { + "version": "0.8.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/is-prop-valid/-/is-prop-valid-0.8.8.tgz", + "integrity": "sha512-u5WtneEAr5IDG2Wv65yhunPSMLIpuKsbuOktRojfrEiEvRyC85LgPMZI63cr7NUqT8ZIGdSVg8ZKGxIug4lXcA==", + "optional": true, + "requires": { + "@emotion/memoize": "0.7.4" + } + }, + "@emotion/memoize": { + "version": "0.7.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/@emotion/memoize/-/memoize-0.7.4.tgz", + "integrity": "sha512-Ja/Vfqe3HpuzRsG1oBtWTHk2PGZ7GR+2Vz5iYGelAw8dx32K0y7PjVuxK6z1nMpZOqAFsRUPCkK1YjJ56qJlgw==", + "optional": true + }, + "tslib": { + "version": "1.14.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tslib/-/tslib-1.14.1.tgz", + "integrity": "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==" + } + } + }, + "framesync": { + "version": "4.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/framesync/-/framesync-4.1.0.tgz", + "integrity": "sha512-MmgZ4wCoeVxNbx2xp5hN/zPDCbLSKiDt4BbbslK7j/pM2lg5S0vhTNv1v8BCVb99JPIo6hXBFdwzU7Q4qcAaoQ==", + "requires": { + "hey-listen": "^1.0.5" + } + }, + "fs.realpath": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fs.realpath/-/fs.realpath-1.0.0.tgz", + "integrity": "sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw==", + "dev": true + }, + "fsevents": { + "version": "2.3.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/fsevents/-/fsevents-2.3.3.tgz", + "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==", + "dev": true, + "optional": true + }, + "function-bind": { + "version": "1.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==" + }, + "function.prototype.name": { + "version": "1.1.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/function.prototype.name/-/function.prototype.name-1.1.8.tgz", + "integrity": 
"sha512-e5iwyodOHhbMr/yNrc7fDYG4qlbIvI5gajyzPnb5TCwyhjApznQh1BMFou9b30SevY43gCJKXycoCBjMbsuW0Q==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "functions-have-names": "^1.2.3", + "hasown": "^2.0.2", + "is-callable": "^1.2.7" + } + }, + "functional-red-black-tree": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/functional-red-black-tree/-/functional-red-black-tree-1.0.1.tgz", + "integrity": "sha512-dsKNQNdj6xA3T+QlADDA7mOSlX0qiMINjn0cgr+eGHGsbSHzTabcIogz2+p/iqP1Xs6EP/sS2SbqH+brGTbq0g==", + "dev": true + }, + "functions-have-names": { + "version": "1.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/functions-have-names/-/functions-have-names-1.2.3.tgz", + "integrity": "sha512-xckBUXyTIqT97tq2x2AMb+g163b5JFysYk0x4qxNFwbfQkmNZoiRHb6sPzI9/QV33WeuvVYBUIiD4NzNIyqaRQ==", + "dev": true + }, + "gensync": { + "version": "1.0.0-beta.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/gensync/-/gensync-1.0.0-beta.2.tgz", + "integrity": "sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==", + "dev": true + }, + "get-intrinsic": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/get-intrinsic/-/get-intrinsic-1.3.0.tgz", + "integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==", + "dev": true, + "requires": { + "call-bind-apply-helpers": "^1.0.2", + "es-define-property": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.1.1", + "function-bind": "^1.1.2", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "hasown": "^2.0.2", + "math-intrinsics": "^1.1.0" + } + }, + "get-proto": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/get-proto/-/get-proto-1.0.1.tgz", + "integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==", + "dev": true, + "requires": { + "dunder-proto": "^1.0.1", + "es-object-atoms": "^1.0.0" + } + }, + "get-symbol-description": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/get-symbol-description/-/get-symbol-description-1.1.0.tgz", + "integrity": "sha512-w9UMqWwJxHNOvoNzSJ2oPF5wvYcvP7jUvYzhp67yEhTi17ZDBBC1z9pTdGuzjD+EFIqLSYRweZjqfiPzQ06Ebg==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.6" + } + }, + "glob": { + "version": "7.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/glob/-/glob-7.2.3.tgz", + "integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==", + "dev": true, + "requires": { + "fs.realpath": "^1.0.0", + "inflight": "^1.0.4", + "inherits": "2", + "minimatch": "^3.1.1", + "once": "^1.3.0", + "path-is-absolute": "^1.0.0" + } + }, + "glob-parent": { + "version": "5.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "dev": true, 
+ "requires": { + "is-glob": "^4.0.1" + } + }, + "globals": { + "version": "13.24.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/globals/-/globals-13.24.0.tgz", + "integrity": "sha512-AhO5QUcj8llrbG09iWhPU2B204J1xnPeL8kQmVorSsy+Sjj1sk8gIyh6cUocGmH4L0UuhAJy+hJMRA4mgA4mFQ==", + "dev": true, + "requires": { + "type-fest": "^0.20.2" + } + }, + "globalthis": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/globalthis/-/globalthis-1.0.4.tgz", + "integrity": "sha512-DpLKbNU4WylpxJykQujfCcwYWiV/Jhm50Goo0wrVILAv5jOr9d+H+UR3PhSCD2rCCEIg0uc+G+muBTwD54JhDQ==", + "dev": true, + "requires": { + "define-properties": "^1.2.1", + "gopd": "^1.0.1" + } + }, + "gopd": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/gopd/-/gopd-1.2.0.tgz", + "integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==", + "dev": true + }, + "hammerjs": { + "version": "2.0.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/hammerjs/-/hammerjs-2.0.8.tgz", + "integrity": "sha512-tSQXBXS/MWQOn/RKckawJ61vvsDpCom87JgxiYdGwHdOa0ht0vzUWDlfioofFCRU0L+6NGDt6XzbgoJvZkMeRQ==" + }, + "has-bigints": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-bigints/-/has-bigints-1.1.0.tgz", + "integrity": "sha512-R3pbpkcIqv2Pm3dUwgjclDRVmWpTJW2DcMzcIhEXEx1oh/CEMObMm3KLmRJOdvhM7o4uQBnwr8pzRK2sJWIqfg==", + "dev": true + }, + "has-flag": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "dev": true + }, + "has-property-descriptors": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-property-descriptors/-/has-property-descriptors-1.0.2.tgz", + "integrity": "sha512-55JNKuIW+vq4Ke1BjOTjM2YctQIvCT7GFzHwmfZPGo5wnrgkid0YQtnAleFSqumZm4az3n2BS+erby5ipJdgrg==", + "dev": true, + "requires": { + "es-define-property": "^1.0.0" + } + }, + "has-proto": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-proto/-/has-proto-1.2.0.tgz", + "integrity": "sha512-KIL7eQPfHQRC8+XluaIw7BHUwwqL19bQn4hzNgdr+1wXoU0KKj6rufu47lhY7KbJR2C6T6+PfyN0Ea7wkSS+qQ==", + "dev": true, + "requires": { + "dunder-proto": "^1.0.0" + } + }, + "has-symbols": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-symbols/-/has-symbols-1.1.0.tgz", + "integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==", + "dev": true + }, + "has-tostringtag": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/has-tostringtag/-/has-tostringtag-1.0.2.tgz", + "integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==", + "dev": true, + "requires": { + "has-symbols": "^1.0.3" + } + }, + "hasown": { + "version": "2.0.2", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/hasown/-/hasown-2.0.2.tgz", + "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "requires": { + "function-bind": "^1.1.2" + } + }, + "hey-listen": { + "version": "1.0.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/hey-listen/-/hey-listen-1.0.8.tgz", + "integrity": "sha512-COpmrF2NOg4TBWUJ5UVyaCU2A88wEMkUPK4hNqyCkqHbxT92BbvfjoSozkAIIm6XhicGlJHhFdullInrdhwU8Q==" + }, + "history": { + "version": "4.10.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/history/-/history-4.10.1.tgz", + "integrity": "sha512-36nwAD620w12kuzPAsyINPWJqlNbij+hpK1k9XRloDtym8mxzGYl2c17LnV6IAGB2Dmg4tEa7G7DlawS0+qjew==", + "requires": { + "@babel/runtime": "^7.1.2", + "loose-envify": "^1.2.0", + "resolve-pathname": "^3.0.0", + "tiny-invariant": "^1.0.2", + "tiny-warning": "^1.0.0", + "value-equal": "^1.0.1" + } + }, + "hoist-non-react-statics": { + "version": "3.3.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/hoist-non-react-statics/-/hoist-non-react-statics-3.3.2.tgz", + "integrity": "sha512-/gGivxi8JPKWNm/W0jSmzcMPpfpPLc3dY/6GxhX2hQ9iGj3aDfklV4ET7NjKpSinLpJ5vafa9iiGIEZg10SfBw==", + "requires": { + "react-is": "^16.7.0" + } + }, + "ignore": { + "version": "4.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ignore/-/ignore-4.0.6.tgz", + "integrity": "sha512-cyFDKrqc/YdcWFniJhzI42+AzS+gNwmUzOSFcRCQYwySuBBBy/KjuxWLZ/FHEH6Moq1NizMOBWyTcv8O4OZIMg==", + "dev": true + }, + "import-fresh": { + "version": "3.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/import-fresh/-/import-fresh-3.3.1.tgz", + "integrity": "sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==", + "requires": { + "parent-module": "^1.0.0", + "resolve-from": "^4.0.0" + } + }, + "import-meta-resolve": { + "version": "2.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/import-meta-resolve/-/import-meta-resolve-2.2.2.tgz", + "integrity": "sha512-f8KcQ1D80V7RnqVm+/lirO9zkOxjGxhaTC1IPrBGd3MEfNgmNG67tSUO9gTi2F3Blr2Az6g1vocaxzkVnWl9MA==", + "dev": true + }, + "imurmurhash": { + "version": "0.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/imurmurhash/-/imurmurhash-0.1.4.tgz", + "integrity": "sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==", + "dev": true + }, + "inflight": { + "version": "1.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/inflight/-/inflight-1.0.6.tgz", + "integrity": "sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==", + "dev": true, + "requires": { + "once": "^1.3.0", + "wrappy": "1" + } + }, + "inherits": { + "version": "2.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==", + "dev": true + }, + "ini": { + "version": "4.1.3", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ini/-/ini-4.1.3.tgz", + "integrity": "sha512-X7rqawQBvfdjS10YU1y1YVreA3SsLrW9dX2CewP2EbBJM4ypVNLDkO5y04gejPwKIY9lR+7r9gn3rFPt/kmWFg==", + "dev": true + }, + "internal-slot": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/internal-slot/-/internal-slot-1.1.0.tgz", + "integrity": "sha512-4gd7VpWNQNB4UKKCFFVcp1AVv+FMOgs9NKzjHKusc8jTMhd5eL1NqQqOpE0KzMds804/yHlglp3uxgluOqAPLw==", + "dev": true, + "requires": { + "es-errors": "^1.3.0", + "hasown": "^2.0.2", + "side-channel": "^1.1.0" + } + }, + "intersection-observer": { + "version": "0.12.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/intersection-observer/-/intersection-observer-0.12.2.tgz", + "integrity": "sha512-7m1vEcPCxXYI8HqnL8CKI6siDyD+eIWSwgB3DZA+ZTogxk9I4CDnj4wilt9x/+/QbHI4YG5YZNmC6458/e9Ktg==" + }, + "is-alphabetical": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-alphabetical/-/is-alphabetical-2.0.1.tgz", + "integrity": "sha512-FWyyY60MeTNyeSRpkM2Iry0G9hpr7/9kD40mD/cGQEuilcZYS4okz8SN2Q6rLCJ8gbCt6fN+rC+6tMGS99LaxQ==", + "dev": true + }, + "is-alphanumerical": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-alphanumerical/-/is-alphanumerical-2.0.1.tgz", + "integrity": "sha512-hmbYhX/9MUMF5uh7tOXyK/n0ZvWpad5caBA17GsC6vyuCqaWliRG5K1qS9inmUhEMaOBIW7/whAnSwveW/LtZw==", + "dev": true, + "requires": { + "is-alphabetical": "^2.0.0", + "is-decimal": "^2.0.0" + } + }, + "is-array-buffer": { + "version": "3.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-array-buffer/-/is-array-buffer-3.0.5.tgz", + "integrity": "sha512-DDfANUiiG2wC1qawP66qlTugJeL5HyzMpfr8lLK+jMQirGzNod0B12cFB/9q838Ru27sBwfw78/rdoU7RERz6A==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "get-intrinsic": "^1.2.6" + } + }, + "is-arrayish": { + "version": "0.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-arrayish/-/is-arrayish-0.2.1.tgz", + "integrity": "sha512-zz06S8t0ozoDXMG+ube26zeCTNXcKIPJZJi8hBrF4idCLms4CG9QtK7qBl1boi5ODzFpjswb5JPmHCbMpjaYzg==" + }, + "is-async-function": { + "version": "2.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-async-function/-/is-async-function-2.1.1.tgz", + "integrity": "sha512-9dgM/cZBnNvjzaMYHVoxxfPj2QXt22Ev7SuuPrs+xav0ukGB0S6d4ydZdEiM48kLx5kDV+QBPrpVnFyefL8kkQ==", + "dev": true, + "requires": { + "async-function": "^1.0.0", + "call-bound": "^1.0.3", + "get-proto": "^1.0.1", + "has-tostringtag": "^1.0.2", + "safe-regex-test": "^1.1.0" + } + }, + "is-bigint": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-bigint/-/is-bigint-1.1.0.tgz", + "integrity": "sha512-n4ZT37wG78iz03xPRKJrHTdZbe3IicyucEtdRsV5yglwc3GyUfbAfpSeD0FJ41NbUNSt5wbhqfp1fS+BgnvDFQ==", + "dev": true, + "requires": { + "has-bigints": "^1.0.2" + } + }, + "is-boolean-object": { + "version": "1.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-boolean-object/-/is-boolean-object-1.2.2.tgz", + "integrity": 
"sha512-wa56o2/ElJMYqjCjGkXri7it5FbebW5usLw/nPmCMs5DeZ7eziSYZhSmPRn0txqeW4LnAmQQU7FgqLpsEFKM4A==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "has-tostringtag": "^1.0.2" + } + }, + "is-buffer": { + "version": "2.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-buffer/-/is-buffer-2.0.5.tgz", + "integrity": "sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ==", + "dev": true + }, + "is-callable": { + "version": "1.2.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-callable/-/is-callable-1.2.7.tgz", + "integrity": "sha512-1BC0BVFhS/p0qtw6enp8e+8OD0UrK0oFLztSjNzhcKA3WDuJxxAPXzPuPtKkjEY9UUoEWlX/8fgKeu2S8i9JTA==", + "dev": true + }, + "is-core-module": { + "version": "2.16.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-core-module/-/is-core-module-2.16.1.tgz", + "integrity": "sha512-UfoeMA6fIJ8wTYFEUjelnaGI67v6+N7qXJEvQuIGa99l4xsCruSYOVSQ0uPANn4dAzm8lkYPaKLrrijLq7x23w==", + "requires": { + "hasown": "^2.0.2" + } + }, + "is-data-view": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-data-view/-/is-data-view-1.0.2.tgz", + "integrity": "sha512-RKtWF8pGmS87i2D6gqQu/l7EYRlVdfzemCJN/P3UOs//x1QE7mfhvzHIApBTRf7axvT6DMGwSwBXYCT0nfB9xw==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "get-intrinsic": "^1.2.6", + "is-typed-array": "^1.1.13" + } + }, + "is-date-object": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-date-object/-/is-date-object-1.1.0.tgz", + "integrity": "sha512-PwwhEakHVKTdRNVOw+/Gyh0+MzlCl4R6qKvkhuvLtPMggI1WAHt9sOwZxQLSGpUaDnrdyDsomoRgNnCfKNSXXg==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "has-tostringtag": "^1.0.2" + } + }, + "is-decimal": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-decimal/-/is-decimal-2.0.1.tgz", + "integrity": "sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A==", + "dev": true + }, + "is-empty": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-empty/-/is-empty-1.2.0.tgz", + "integrity": "sha512-F2FnH/otLNJv0J6wc73A5Xo7oHLNnqplYqZhUu01tD54DIPvxIRSTSLkrUB/M0nHO4vo1O9PDfN4KoTxCzLh/w==", + "dev": true + }, + "is-extglob": { + "version": "2.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "dev": true + }, + "is-finalizationregistry": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-finalizationregistry/-/is-finalizationregistry-1.1.1.tgz", + "integrity": "sha512-1pC6N8qWJbWoPtEjgcL2xyhQOP491EQjeUo3qTKcmV8YSDDJrOepfG8pcC7h/QgnQHYSv0mJ3Z/ZWxmatVrysg==", + "dev": true, + "requires": { + "call-bound": "^1.0.3" + } + }, + "is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + 
"integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "dev": true + }, + "is-generator-function": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-generator-function/-/is-generator-function-1.1.0.tgz", + "integrity": "sha512-nPUB5km40q9e8UfN/Zc24eLlzdSf9OfKByBw9CIdw4H1giPMeA0OIJvbchsCu4npfI2QcMVBsGEBHKZ7wLTWmQ==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "get-proto": "^1.0.0", + "has-tostringtag": "^1.0.2", + "safe-regex-test": "^1.1.0" + } + }, + "is-glob": { + "version": "4.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "dev": true, + "requires": { + "is-extglob": "^2.1.1" + } + }, + "is-hexadecimal": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-hexadecimal/-/is-hexadecimal-2.0.1.tgz", + "integrity": "sha512-DgZQp241c8oO6cA1SbTEWiXeoxV42vlcJxgH+B3hi1AiqqKruZR3ZGF8In3fj4+/y/7rHvlOZLZtgJ/4ttYGZg==", + "dev": true + }, + "is-map": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-map/-/is-map-2.0.3.tgz", + "integrity": "sha512-1Qed0/Hr2m+YqxnM09CjA2d/i6YZNfF6R2oRAOj36eUdS6qIV/huPJNSEpKbupewFs+ZsJlxsjjPbc0/afW6Lw==", + "dev": true + }, + "is-negative-zero": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-negative-zero/-/is-negative-zero-2.0.3.tgz", + "integrity": "sha512-5KoIu2Ngpyek75jXodFvnafB6DJgr3u8uuK0LEZJjrU19DrMD3EVERaR8sjz8CCGgpZvxPl9SuE1GMVPFHx1mw==", + "dev": true + }, + "is-number-object": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-number-object/-/is-number-object-1.1.1.tgz", + "integrity": "sha512-lZhclumE1G6VYD8VHe35wFaIif+CTy5SJIi5+3y4psDgWu4wPDoBhF8NxUOinEc7pHgiTsT6MaBb92rKhhD+Xw==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "has-tostringtag": "^1.0.2" + } + }, + "is-plain-obj": { + "version": "4.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-plain-obj/-/is-plain-obj-4.1.0.tgz", + "integrity": "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg==", + "dev": true + }, + "is-regex": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-regex/-/is-regex-1.2.1.tgz", + "integrity": "sha512-MjYsKHO5O7mCsmRGxWcLWheFqN9DJ/2TmngvjKXihe6efViPqc274+Fx/4fYj/r03+ESvBdTXK0V6tA3rgez1g==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "gopd": "^1.2.0", + "has-tostringtag": "^1.0.2", + "hasown": "^2.0.2" + } + }, + "is-set": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-set/-/is-set-2.0.3.tgz", + "integrity": "sha512-iPAjerrse27/ygGLxw+EBR9agv9Y6uLeYVJMu+QNCoouJ1/1ri0mGrcWpfCqFZuzzx3WjtwxG098X+n4OuRkPg==", + "dev": true + }, + "is-shared-array-buffer": { + "version": "1.0.4", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-shared-array-buffer/-/is-shared-array-buffer-1.0.4.tgz", + "integrity": "sha512-ISWac8drv4ZGfwKl5slpHG9OwPNty4jOWPRIhBpxOoD+hqITiwuipOQ2bNthAzwA3B4fIjO4Nln74N0S9byq8A==", + "dev": true, + "requires": { + "call-bound": "^1.0.3" + } + }, + "is-string": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-string/-/is-string-1.1.1.tgz", + "integrity": "sha512-BtEeSsoaQjlSPBemMQIrY1MY0uM6vnS1g5fmufYOtnxLGUZM2178PKbhsk7Ffv58IX+ZtcvoGwccYsh0PglkAA==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "has-tostringtag": "^1.0.2" + } + }, + "is-symbol": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-symbol/-/is-symbol-1.1.1.tgz", + "integrity": "sha512-9gGx6GTtCQM73BgmHQXfDmLtfjjTUDSyoxTCbp5WtoixAhfgsDirWIcVQ/IHpvI5Vgd5i/J5F7B9cN/WlVbC/w==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "has-symbols": "^1.1.0", + "safe-regex-test": "^1.1.0" + } + }, + "is-typed-array": { + "version": "1.1.15", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-typed-array/-/is-typed-array-1.1.15.tgz", + "integrity": "sha512-p3EcsicXjit7SaskXHs1hA91QxgTw46Fv6EFKKGS5DRFLD8yKnohjF3hxoju94b/OcMZoQukzpPpBE9uLVKzgQ==", + "dev": true, + "requires": { + "which-typed-array": "^1.1.16" + } + }, + "is-weakmap": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-weakmap/-/is-weakmap-2.0.2.tgz", + "integrity": "sha512-K5pXYOm9wqY1RgjpL3YTkF39tni1XajUIkawTLUo9EZEVUFga5gSQJF8nNS7ZwJQ02y+1YCNYcMh+HIf1ZqE+w==", + "dev": true + }, + "is-weakref": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-weakref/-/is-weakref-1.1.1.tgz", + "integrity": "sha512-6i9mGWSlqzNMEqpCp93KwRS1uUOodk2OJ6b+sq7ZPDSy2WuI5NFIxp/254TytR8ftefexkWn5xNiHUNpPOfSew==", + "dev": true, + "requires": { + "call-bound": "^1.0.3" + } + }, + "is-weakset": { + "version": "2.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-weakset/-/is-weakset-2.0.4.tgz", + "integrity": "sha512-mfcwb6IzQyOKTs84CQMrOwW4gQcaTOAWJ0zzJCl2WSPDrWk/OzDaImWFH3djXhb24g4eudZfLRozAvPGw4d9hQ==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "get-intrinsic": "^1.2.6" + } + }, + "isarray": { + "version": "0.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/isarray/-/isarray-0.0.1.tgz", + "integrity": "sha512-D2S+3GLxWH+uhrNEcoh/fnmYeP8E8/zHl644d/jdA0g2uyXvy3sb0qxotE+ne0LtccHknQzWwZEzhak7oJ0COQ==" + }, + "isexe": { + "version": "2.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/isexe/-/isexe-2.0.0.tgz", + "integrity": "sha512-RHxMLp9lnKHGHRng9QFhRCMbYAcVpn69smSGcq3f36xjgVVWThj4qqLbTLlq7Ssj8B+fIQ1EuCEGI2lKsyQeIw==", + "dev": true + }, + "iterator.prototype": { + "version": "1.1.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/iterator.prototype/-/iterator.prototype-1.1.5.tgz", + "integrity": "sha512-H0dkQoCa3b2VEeKQBOxFph+JAbcrQdE7KC0UkqwpLmv2EC4P41QXP+rqo9wYodACiG5/WM5s9oDApTU8utwj9g==", + "dev": true, + "requires": { + "define-data-property": "^1.1.4", + "es-object-atoms": 
"^1.0.0", + "get-intrinsic": "^1.2.6", + "get-proto": "^1.0.0", + "has-symbols": "^1.1.0", + "set-function-name": "^2.0.2" + } + }, + "jackspeak": { + "version": "3.4.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/jackspeak/-/jackspeak-3.4.3.tgz", + "integrity": "sha512-OGlZQpz2yfahA/Rd1Y8Cd9SIEsqvXkLVoSw/cgwhnhFMDbsQFeZYoJJ7bIZBS9BcamUW96asq/npPWugM+RQBw==", + "dev": true, + "requires": { + "@isaacs/cliui": "^8.0.2", + "@pkgjs/parseargs": "^0.11.0" + } + }, + "js-cookie": { + "version": "3.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/js-cookie/-/js-cookie-3.0.5.tgz", + "integrity": "sha512-cEiJEAEoIbWfCZYKWhVwFuvPX1gETRYPw6LlaTKoxD3s2AkXzkCjnp6h0V77ozyqj0jakteJ4YqDJT830+lVGw==" + }, + "js-tokens": { + "version": "4.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/js-tokens/-/js-tokens-4.0.0.tgz", + "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==" + }, + "js-yaml": { + "version": "3.14.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/js-yaml/-/js-yaml-3.14.1.tgz", + "integrity": "sha512-okMH7OXXJ7YrN9Ok3/SXrnu4iX9yOk+25nqX4imS2npuvTYDmo/QEZoqwZkYaIDk3jVvBOTOIEgEhaLOynBS9g==", + "dev": true, + "requires": { + "argparse": "^1.0.7", + "esprima": "^4.0.0" + } + }, + "jsesc": { + "version": "3.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/jsesc/-/jsesc-3.1.0.tgz", + "integrity": "sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA==" + }, + "json-buffer": { + "version": "3.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json-buffer/-/json-buffer-3.0.1.tgz", + "integrity": "sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ==", + "dev": true + }, + "json-parse-even-better-errors": { + "version": "2.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json-parse-even-better-errors/-/json-parse-even-better-errors-2.3.1.tgz", + "integrity": "sha512-xyFwyhro/JEof6Ghe2iz2NcXoj2sloNsWr/XsERDK/oiPCfaNhl5ONfp+jQdAZRQQ0IJWNzH9zIZF7li91kh2w==" + }, + "json-schema-traverse": { + "version": "0.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json-schema-traverse/-/json-schema-traverse-0.4.1.tgz", + "integrity": "sha512-xbbCH5dCYU5T8LcEhhuh7HJ88HXuW3qsI3Y0zOZFKfZEHcpWiHU/Jxzk629Brsab/mMiHQti9wMP+845RPe3Vg==", + "dev": true + }, + "json-stable-stringify-without-jsonify": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json-stable-stringify-without-jsonify/-/json-stable-stringify-without-jsonify-1.0.1.tgz", + "integrity": "sha512-Bdboy+l7tA3OGW6FjyFHWkP5LuByj1Tk33Ljyq0axyzdk9//JSi2u3fP1QSmd1KNwq6VOKYGlAu87CisVir6Pw==", + "dev": true + }, + "json5": { + "version": "2.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json5/-/json5-2.2.3.tgz", + "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==", + "dev": true + }, + "jsx-ast-utils": { + "version": "3.3.5", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/jsx-ast-utils/-/jsx-ast-utils-3.3.5.tgz", + "integrity": "sha512-ZZow9HBI5O6EPgSJLUb8n2NKgmVWTwCvHGwFuJlMjvLFqlGG6pjirPhtdsseaLZjSibD8eegzmYpUZwoIlj2cQ==", + "dev": true, + "requires": { + "array-includes": "^3.1.6", + "array.prototype.flat": "^1.3.1", + "object.assign": "^4.1.4", + "object.values": "^1.1.6" + } + }, + "keyv": { + "version": "4.5.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/keyv/-/keyv-4.5.4.tgz", + "integrity": "sha512-oxVHkHR/EJf2CNXnWxRLW6mg7JyCCUcG0DtEGmL2ctUo1PNTin1PUil+r/+4r5MpVgC/fn1kjsx7mjSujKqIpw==", + "dev": true, + "requires": { + "json-buffer": "3.0.1" + } + }, + "kleur": { + "version": "4.1.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/kleur/-/kleur-4.1.5.tgz", + "integrity": "sha512-o+NO+8WrRiQEE4/7nwRJhN1HWpVmJm511pBHUxPLtp0BUISzlBplORYSmTclCnJvQq2tKu/sgl3xVpkc7ZWuQQ==", + "dev": true + }, + "levn": { + "version": "0.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/levn/-/levn-0.4.1.tgz", + "integrity": "sha512-+bT2uH4E5LGE7h/n3evcS/sQlJXCpIp6ym8OWJ5eV6+67Dsql/LaaT7qJBAt2rzfoa/5QBGBhxDix1dMt2kQKQ==", + "dev": true, + "requires": { + "prelude-ls": "^1.2.1", + "type-check": "~0.4.0" + } + }, + "lines-and-columns": { + "version": "1.2.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lines-and-columns/-/lines-and-columns-1.2.4.tgz", + "integrity": "sha512-7ylylesZQ/PV29jhEDl3Ufjo6ZX7gCqJr5F7PKrqc93v7fzSymt1BpwEU8nAUXs8qzzvqhbjhK5QZg6Mt/HkBg==" + }, + "load-plugin": { + "version": "5.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/load-plugin/-/load-plugin-5.1.0.tgz", + "integrity": "sha512-Lg1CZa1CFj2CbNaxijTL6PCbzd4qGTlZov+iH2p5Xwy/ApcZJh+i6jMN2cYePouTfjJfrNu3nXFdEw8LvbjPFQ==", + "dev": true, + "requires": { + "@npmcli/config": "^6.0.0", + "import-meta-resolve": "^2.0.0" + } + }, + "lodash": { + "version": "4.17.21", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lodash/-/lodash-4.17.21.tgz", + "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==" + }, + "lodash.merge": { + "version": "4.6.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lodash.merge/-/lodash.merge-4.6.2.tgz", + "integrity": "sha512-0KpjqXRVvrYyCsX1swR/XTK0va6VQkQM6MNo7PqW77ByjAhoARA8EfrP1N4+KlKj8YS0ZUCtRT/YUuhyYDujIQ==", + "dev": true + }, + "lodash.truncate": { + "version": "4.4.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lodash.truncate/-/lodash.truncate-4.4.2.tgz", + "integrity": "sha512-jttmRe7bRse52OsWIMDLaXxWqRAmtIUccAQ3garviCqJjafXOfNMO0yMfNpdD6zbGaTU0P5Nz7e7gAT6cKmJRw==", + "dev": true + }, + "longest-streak": { + "version": "3.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/longest-streak/-/longest-streak-3.1.0.tgz", + "integrity": "sha512-9Ri+o0JYgehTaVBBDoMqIl8GXtbWg711O3srftcHhZ0dqnETqLaoIK0x17fUw9rFSlK/0NlsKe0Ahhyl5pXE2g==", + "dev": true + }, + "loose-envify": { + "version": "1.4.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/loose-envify/-/loose-envify-1.4.0.tgz", + "integrity": "sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==", + "requires": { + "js-tokens": "^3.0.0 || ^4.0.0" + } + }, + "lru-cache": { + "version": "5.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lru-cache/-/lru-cache-5.1.1.tgz", + "integrity": "sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==", + "dev": true, + "requires": { + "yallist": "^3.0.2" + } + }, + "magic-string": { + "version": "0.26.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/magic-string/-/magic-string-0.26.7.tgz", + "integrity": "sha512-hX9XH3ziStPoPhJxLq1syWuZMxbDvGNbVchfrdCtanC7D13888bMFow61x8axrx+GfHLtVeAx2kxL7tTGRl+Ow==", + "dev": true, + "requires": { + "sourcemap-codec": "^1.4.8" + } + }, + "match-sorter": { + "version": "6.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/match-sorter/-/match-sorter-6.3.4.tgz", + "integrity": "sha512-jfZW7cWS5y/1xswZo8VBOdudUiSd9nifYRWphc9M5D/ee4w4AoXLgBEdRbgVaxbMuagBPeUC5y2Hi8DO6o9aDg==", + "requires": { + "@babel/runtime": "^7.23.8", + "remove-accents": "0.5.0" + }, + "dependencies": { + "remove-accents": { + "version": "0.5.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/remove-accents/-/remove-accents-0.5.0.tgz", + "integrity": "sha512-8g3/Otx1eJaVD12e31UbJj1YzdtVvzH85HV7t+9MJYk/u3XmkOUJ5Ys9wQrf9PCPK8+xn4ymzqYCiZl6QWKn+A==" + } + } + }, + "math-intrinsics": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/math-intrinsics/-/math-intrinsics-1.1.0.tgz", + "integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==", + "dev": true + }, + "mdast-util-from-markdown": { + "version": "1.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-from-markdown/-/mdast-util-from-markdown-1.3.1.tgz", + "integrity": "sha512-4xTO/M8c82qBcnQc1tgpNtubGUW/Y1tBQ1B0i5CtSoelOLKFYlElIr3bvgREYYO5iRqbMY1YuqZng0GVOI8Qww==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0", + "@types/unist": "^2.0.0", + "decode-named-character-reference": "^1.0.0", + "mdast-util-to-string": "^3.1.0", + "micromark": "^3.0.0", + "micromark-util-decode-numeric-character-reference": "^1.0.0", + "micromark-util-decode-string": "^1.0.0", + "micromark-util-normalize-identifier": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "unist-util-stringify-position": "^3.0.0", + "uvu": "^0.5.0" + } + }, + "mdast-util-mdx": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-mdx/-/mdast-util-mdx-2.0.1.tgz", + "integrity": "sha512-38w5y+r8nyKlGvNjSEqWrhG0w5PmnRA+wnBvm+ulYCct7nsGYhFVb0lljS9bQav4psDAS1eGkP2LMVcZBi/aqw==", + "dev": true, + "requires": { + "mdast-util-from-markdown": "^1.0.0", + "mdast-util-mdx-expression": "^1.0.0", + "mdast-util-mdx-jsx": "^2.0.0", + "mdast-util-mdxjs-esm": "^1.0.0", + "mdast-util-to-markdown": "^1.0.0" + } + }, + "mdast-util-mdx-expression": { + "version": "1.3.2", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-mdx-expression/-/mdast-util-mdx-expression-1.3.2.tgz", + "integrity": "sha512-xIPmR5ReJDu/DHH1OoIT1HkuybIfRGYRywC+gJtI7qHjCJp/M9jrmBEJW22O8lskDWm562BX2W8TiAwRTb0rKA==", + "dev": true, + "requires": { + "@types/estree-jsx": "^1.0.0", + "@types/hast": "^2.0.0", + "@types/mdast": "^3.0.0", + "mdast-util-from-markdown": "^1.0.0", + "mdast-util-to-markdown": "^1.0.0" + } + }, + "mdast-util-mdx-jsx": { + "version": "2.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-mdx-jsx/-/mdast-util-mdx-jsx-2.1.4.tgz", + "integrity": "sha512-DtMn9CmVhVzZx3f+optVDF8yFgQVt7FghCRNdlIaS3X5Bnym3hZwPbg/XW86vdpKjlc1PVj26SpnLGeJBXD3JA==", + "dev": true, + "requires": { + "@types/estree-jsx": "^1.0.0", + "@types/hast": "^2.0.0", + "@types/mdast": "^3.0.0", + "@types/unist": "^2.0.0", + "ccount": "^2.0.0", + "mdast-util-from-markdown": "^1.1.0", + "mdast-util-to-markdown": "^1.3.0", + "parse-entities": "^4.0.0", + "stringify-entities": "^4.0.0", + "unist-util-remove-position": "^4.0.0", + "unist-util-stringify-position": "^3.0.0", + "vfile-message": "^3.0.0" + } + }, + "mdast-util-mdxjs-esm": { + "version": "1.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-mdxjs-esm/-/mdast-util-mdxjs-esm-1.3.1.tgz", + "integrity": "sha512-SXqglS0HrEvSdUEfoXFtcg7DRl7S2cwOXc7jkuusG472Mmjag34DUDeOJUZtl+BVnyeO1frIgVpHlNRWc2gk/w==", + "dev": true, + "requires": { + "@types/estree-jsx": "^1.0.0", + "@types/hast": "^2.0.0", + "@types/mdast": "^3.0.0", + "mdast-util-from-markdown": "^1.0.0", + "mdast-util-to-markdown": "^1.0.0" + } + }, + "mdast-util-phrasing": { + "version": "3.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-phrasing/-/mdast-util-phrasing-3.0.1.tgz", + "integrity": "sha512-WmI1gTXUBJo4/ZmSk79Wcb2HcjPJBzM1nlI/OUWA8yk2X9ik3ffNbBGsU+09BFmXaL1IBb9fiuvq6/KMiNycSg==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0", + "unist-util-is": "^5.0.0" + } + }, + "mdast-util-to-markdown": { + "version": "1.5.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-to-markdown/-/mdast-util-to-markdown-1.5.0.tgz", + "integrity": "sha512-bbv7TPv/WC49thZPg3jXuqzuvI45IL2EVAr/KxF0BSdHsU0ceFHOmwQn6evxAh1GaoK/6GQ1wp4R4oW2+LFL/A==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0", + "@types/unist": "^2.0.0", + "longest-streak": "^3.0.0", + "mdast-util-phrasing": "^3.0.0", + "mdast-util-to-string": "^3.0.0", + "micromark-util-decode-string": "^1.0.0", + "unist-util-visit": "^4.0.0", + "zwitch": "^2.0.0" + } + }, + "mdast-util-to-string": { + "version": "3.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mdast-util-to-string/-/mdast-util-to-string-3.2.0.tgz", + "integrity": "sha512-V4Zn/ncyN1QNSqSBxTrMOLpjr+IKdHl2v3KVLoWmDPscP4r9GcCi71gjgvUV1SFSKh92AjAG4peFuBl2/YgCJg==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0" + } + }, + "memoize-one": { + "version": "5.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/memoize-one/-/memoize-one-5.2.1.tgz", + "integrity": "sha512-zYiwtZUcYyXKo/np96AGZAckk+FWWsUdJ3cHGGmld7+AhvcWmQyGCYUh1hc4Q/pkOhb65dQR/pqCyK0cOaHz4Q==" + }, + "micromark": { + "version": "3.2.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark/-/micromark-3.2.0.tgz", + "integrity": "sha512-uD66tJj54JLYq0De10AhWycZWGQNUvDI55xPgk2sQM5kn1JYlhbCMTtEeT27+vAhW2FBQxLlOmS3pmA7/2z4aA==", + "dev": true, + "requires": { + "@types/debug": "^4.0.0", + "debug": "^4.0.0", + "decode-named-character-reference": "^1.0.0", + "micromark-core-commonmark": "^1.0.1", + "micromark-factory-space": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-chunked": "^1.0.0", + "micromark-util-combine-extensions": "^1.0.0", + "micromark-util-decode-numeric-character-reference": "^1.0.0", + "micromark-util-encode": "^1.0.0", + "micromark-util-normalize-identifier": "^1.0.0", + "micromark-util-resolve-all": "^1.0.0", + "micromark-util-sanitize-uri": "^1.0.0", + "micromark-util-subtokenize": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.1", + "uvu": "^0.5.0" + } + }, + "micromark-core-commonmark": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-core-commonmark/-/micromark-core-commonmark-1.1.0.tgz", + "integrity": "sha512-BgHO1aRbolh2hcrzL2d1La37V0Aoz73ymF8rAcKnohLy93titmv62E0gP8Hrx9PKcKrqCZ1BbLGbP3bEhoXYlw==", + "dev": true, + "requires": { + "decode-named-character-reference": "^1.0.0", + "micromark-factory-destination": "^1.0.0", + "micromark-factory-label": "^1.0.0", + "micromark-factory-space": "^1.0.0", + "micromark-factory-title": "^1.0.0", + "micromark-factory-whitespace": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-chunked": "^1.0.0", + "micromark-util-classify-character": "^1.0.0", + "micromark-util-html-tag-name": "^1.0.0", + "micromark-util-normalize-identifier": "^1.0.0", + "micromark-util-resolve-all": "^1.0.0", + "micromark-util-subtokenize": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.1", + "uvu": "^0.5.0" + } + }, + "micromark-extension-mdx-expression": { + "version": "1.0.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-extension-mdx-expression/-/micromark-extension-mdx-expression-1.0.8.tgz", + "integrity": "sha512-zZpeQtc5wfWKdzDsHRBY003H2Smg+PUi2REhqgIhdzAa5xonhP03FcXxqFSerFiNUr5AWmHpaNPQTBVOS4lrXw==", + "dev": true, + "requires": { + "@types/estree": "^1.0.0", + "micromark-factory-mdx-expression": "^1.0.0", + "micromark-factory-space": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-events-to-acorn": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "uvu": "^0.5.0" + } + }, + "micromark-extension-mdx-jsx": { + "version": "1.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-extension-mdx-jsx/-/micromark-extension-mdx-jsx-1.0.5.tgz", + "integrity": "sha512-gPH+9ZdmDflbu19Xkb8+gheqEDqkSpdCEubQyxuz/Hn8DOXiXvrXeikOoBA71+e8Pfi0/UYmU3wW3H58kr7akA==", + "dev": true, + "requires": { + "@types/acorn": "^4.0.0", + "@types/estree": "^1.0.0", + "estree-util-is-identifier-name": "^2.0.0", + "micromark-factory-mdx-expression": "^1.0.0", + "micromark-factory-space": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "uvu": "^0.5.0", + "vfile-message": "^3.0.0" + } + }, + "micromark-extension-mdx-md": { + "version": "1.0.1", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-extension-mdx-md/-/micromark-extension-mdx-md-1.0.1.tgz", + "integrity": "sha512-7MSuj2S7xjOQXAjjkbjBsHkMtb+mDGVW6uI2dBL9snOBCbZmoNgDAeZ0nSn9j3T42UE/g2xVNMn18PJxZvkBEA==", + "dev": true, + "requires": { + "micromark-util-types": "^1.0.0" + } + }, + "micromark-extension-mdxjs": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-extension-mdxjs/-/micromark-extension-mdxjs-1.0.1.tgz", + "integrity": "sha512-7YA7hF6i5eKOfFUzZ+0z6avRG52GpWR8DL+kN47y3f2KhxbBZMhmxe7auOeaTBrW2DenbbZTf1ea9tA2hDpC2Q==", + "dev": true, + "requires": { + "acorn": "^8.0.0", + "acorn-jsx": "^5.0.0", + "micromark-extension-mdx-expression": "^1.0.0", + "micromark-extension-mdx-jsx": "^1.0.0", + "micromark-extension-mdx-md": "^1.0.0", + "micromark-extension-mdxjs-esm": "^1.0.0", + "micromark-util-combine-extensions": "^1.0.0", + "micromark-util-types": "^1.0.0" + }, + "dependencies": { + "acorn": { + "version": "8.15.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/acorn/-/acorn-8.15.0.tgz", + "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==", + "dev": true + } + } + }, + "micromark-extension-mdxjs-esm": { + "version": "1.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-extension-mdxjs-esm/-/micromark-extension-mdxjs-esm-1.0.5.tgz", + "integrity": "sha512-xNRBw4aoURcyz/S69B19WnZAkWJMxHMT5hE36GtDAyhoyn/8TuAeqjFJQlwk+MKQsUD7b3l7kFX+vlfVWgcX1w==", + "dev": true, + "requires": { + "@types/estree": "^1.0.0", + "micromark-core-commonmark": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-events-to-acorn": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "unist-util-position-from-estree": "^1.1.0", + "uvu": "^0.5.0", + "vfile-message": "^3.0.0" + } + }, + "micromark-factory-destination": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-factory-destination/-/micromark-factory-destination-1.1.0.tgz", + "integrity": "sha512-XaNDROBgx9SgSChd69pjiGKbV+nfHGDPVYFs5dOoDd7ZnMAE+Cuu91BCpsY8RT2NP9vo/B8pds2VQNCLiu0zhg==", + "dev": true, + "requires": { + "micromark-util-character": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0" + } + }, + "micromark-factory-label": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-factory-label/-/micromark-factory-label-1.1.0.tgz", + "integrity": "sha512-OLtyez4vZo/1NjxGhcpDSbHQ+m0IIGnT8BoPamh+7jVlzLJBH98zzuCoUeMxvM6WsNeh8wx8cKvqLiPHEACn0w==", + "dev": true, + "requires": { + "micromark-util-character": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "uvu": "^0.5.0" + } + }, + "micromark-factory-mdx-expression": { + "version": "1.0.9", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-factory-mdx-expression/-/micromark-factory-mdx-expression-1.0.9.tgz", + "integrity": "sha512-jGIWzSmNfdnkJq05c7b0+Wv0Kfz3NJ3N4cBjnbO4zjXIlxJr+f8lk+5ZmwFvqdAbUy2q6B5rCY//g0QAAaXDWA==", + "dev": true, + "requires": { + "@types/estree": "^1.0.0", + "micromark-util-character": "^1.0.0", + 
"micromark-util-events-to-acorn": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "unist-util-position-from-estree": "^1.0.0", + "uvu": "^0.5.0", + "vfile-message": "^3.0.0" + } + }, + "micromark-factory-space": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-factory-space/-/micromark-factory-space-1.1.0.tgz", + "integrity": "sha512-cRzEj7c0OL4Mw2v6nwzttyOZe8XY/Z8G0rzmWQZTBi/jjwyw/U4uqKtUORXQrR5bAZZnbTI/feRV/R7hc4jQYQ==", + "dev": true, + "requires": { + "micromark-util-character": "^1.0.0", + "micromark-util-types": "^1.0.0" + } + }, + "micromark-factory-title": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-factory-title/-/micromark-factory-title-1.1.0.tgz", + "integrity": "sha512-J7n9R3vMmgjDOCY8NPw55jiyaQnH5kBdV2/UXCtZIpnHH3P6nHUKaH7XXEYuWwx/xUJcawa8plLBEjMPU24HzQ==", + "dev": true, + "requires": { + "micromark-factory-space": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0" + } + }, + "micromark-factory-whitespace": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-factory-whitespace/-/micromark-factory-whitespace-1.1.0.tgz", + "integrity": "sha512-v2WlmiymVSp5oMg+1Q0N1Lxmt6pMhIHD457whWM7/GUlEks1hI9xj5w3zbc4uuMKXGisksZk8DzP2UyGbGqNsQ==", + "dev": true, + "requires": { + "micromark-factory-space": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0" + } + }, + "micromark-util-character": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-character/-/micromark-util-character-1.2.0.tgz", + "integrity": "sha512-lXraTwcX3yH/vMDaFWCQJP1uIszLVebzUa3ZHdrgxr7KEU/9mL4mVgCpGbyhvNLNlauROiNUq7WN5u7ndbY6xg==", + "dev": true, + "requires": { + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0" + } + }, + "micromark-util-chunked": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-chunked/-/micromark-util-chunked-1.1.0.tgz", + "integrity": "sha512-Ye01HXpkZPNcV6FiyoW2fGZDUw4Yc7vT0E9Sad83+bEDiCJ1uXu0S3mr8WLpsz3HaG3x2q0HM6CTuPdcZcluFQ==", + "dev": true, + "requires": { + "micromark-util-symbol": "^1.0.0" + } + }, + "micromark-util-classify-character": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-classify-character/-/micromark-util-classify-character-1.1.0.tgz", + "integrity": "sha512-SL0wLxtKSnklKSUplok1WQFoGhUdWYKggKUiqhX+Swala+BtptGCu5iPRc+xvzJ4PXE/hwM3FNXsfEVgoZsWbw==", + "dev": true, + "requires": { + "micromark-util-character": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0" + } + }, + "micromark-util-combine-extensions": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-combine-extensions/-/micromark-util-combine-extensions-1.1.0.tgz", + "integrity": "sha512-Q20sp4mfNf9yEqDL50WwuWZHUrCO4fEyeDCnMGmG5Pr0Cz15Uo7KBs6jq+dq0EgX4DPwwrh9m0X+zPV1ypFvUA==", + "dev": true, + "requires": { + "micromark-util-chunked": "^1.0.0", + "micromark-util-types": 
"^1.0.0" + } + }, + "micromark-util-decode-numeric-character-reference": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-decode-numeric-character-reference/-/micromark-util-decode-numeric-character-reference-1.1.0.tgz", + "integrity": "sha512-m9V0ExGv0jB1OT21mrWcuf4QhP46pH1KkfWy9ZEezqHKAxkj4mPCy3nIH1rkbdMlChLHX531eOrymlwyZIf2iw==", + "dev": true, + "requires": { + "micromark-util-symbol": "^1.0.0" + } + }, + "micromark-util-decode-string": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-decode-string/-/micromark-util-decode-string-1.1.0.tgz", + "integrity": "sha512-YphLGCK8gM1tG1bd54azwyrQRjCFcmgj2S2GoJDNnh4vYtnL38JS8M4gpxzOPNyHdNEpheyWXCTnnTDY3N+NVQ==", + "dev": true, + "requires": { + "decode-named-character-reference": "^1.0.0", + "micromark-util-character": "^1.0.0", + "micromark-util-decode-numeric-character-reference": "^1.0.0", + "micromark-util-symbol": "^1.0.0" + } + }, + "micromark-util-encode": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-encode/-/micromark-util-encode-1.1.0.tgz", + "integrity": "sha512-EuEzTWSTAj9PA5GOAs992GzNh2dGQO52UvAbtSOMvXTxv3Criqb6IOzJUBCmEqrrXSblJIJBbFFv6zPxpreiJw==", + "dev": true + }, + "micromark-util-events-to-acorn": { + "version": "1.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-events-to-acorn/-/micromark-util-events-to-acorn-1.2.3.tgz", + "integrity": "sha512-ij4X7Wuc4fED6UoLWkmo0xJQhsktfNh1J0m8g4PbIMPlx+ek/4YdW5mvbye8z/aZvAPUoxgXHrwVlXAPKMRp1w==", + "dev": true, + "requires": { + "@types/acorn": "^4.0.0", + "@types/estree": "^1.0.0", + "@types/unist": "^2.0.0", + "estree-util-visit": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "uvu": "^0.5.0", + "vfile-message": "^3.0.0" + } + }, + "micromark-util-html-tag-name": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-html-tag-name/-/micromark-util-html-tag-name-1.2.0.tgz", + "integrity": "sha512-VTQzcuQgFUD7yYztuQFKXT49KghjtETQ+Wv/zUjGSGBioZnkA4P1XXZPT1FHeJA6RwRXSF47yvJ1tsJdoxwO+Q==", + "dev": true + }, + "micromark-util-normalize-identifier": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-normalize-identifier/-/micromark-util-normalize-identifier-1.1.0.tgz", + "integrity": "sha512-N+w5vhqrBihhjdpM8+5Xsxy71QWqGn7HYNUvch71iV2PM7+E3uWGox1Qp90loa1ephtCxG2ftRV/Conitc6P2Q==", + "dev": true, + "requires": { + "micromark-util-symbol": "^1.0.0" + } + }, + "micromark-util-resolve-all": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-resolve-all/-/micromark-util-resolve-all-1.1.0.tgz", + "integrity": "sha512-b/G6BTMSg+bX+xVCshPTPyAu2tmA0E4X98NSR7eIbeC6ycCqCeE7wjfDIgzEbkzdEVJXRtOG4FbEm/uGbCRouA==", + "dev": true, + "requires": { + "micromark-util-types": "^1.0.0" + } + }, + "micromark-util-sanitize-uri": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-sanitize-uri/-/micromark-util-sanitize-uri-1.2.0.tgz", + "integrity": 
"sha512-QO4GXv0XZfWey4pYFndLUKEAktKkG5kZTdUNaTAkzbuJxn2tNBOr+QtxR2XpWaMhbImT2dPzyLrPXLlPhph34A==", + "dev": true, + "requires": { + "micromark-util-character": "^1.0.0", + "micromark-util-encode": "^1.0.0", + "micromark-util-symbol": "^1.0.0" + } + }, + "micromark-util-subtokenize": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-subtokenize/-/micromark-util-subtokenize-1.1.0.tgz", + "integrity": "sha512-kUQHyzRoxvZO2PuLzMt2P/dwVsTiivCK8icYTeR+3WgbuPqfHgPPy7nFKbeqRivBvn/3N3GBiNC+JRTMSxEC7A==", + "dev": true, + "requires": { + "micromark-util-chunked": "^1.0.0", + "micromark-util-symbol": "^1.0.0", + "micromark-util-types": "^1.0.0", + "uvu": "^0.5.0" + } + }, + "micromark-util-symbol": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-symbol/-/micromark-util-symbol-1.1.0.tgz", + "integrity": "sha512-uEjpEYY6KMs1g7QfJ2eX1SQEV+ZT4rUD3UcF6l57acZvLNK7PBZL+ty82Z1qhK1/yXIY4bdx04FKMgR0g4IAag==", + "dev": true + }, + "micromark-util-types": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/micromark-util-types/-/micromark-util-types-1.1.0.tgz", + "integrity": "sha512-ukRBgie8TIAcacscVHSiddHjO4k/q3pnedmzMQ4iwDcK0FtFCohKOlFbaOL/mPgfnPsL3C1ZyxJa4sbWrBl3jg==", + "dev": true + }, + "minimatch": { + "version": "3.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/minimatch/-/minimatch-3.1.2.tgz", + "integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==", + "dev": true, + "requires": { + "brace-expansion": "^1.1.7" + } + }, + "minipass": { + "version": "7.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/minipass/-/minipass-7.1.2.tgz", + "integrity": "sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==", + "dev": true + }, + "mri": { + "version": "1.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/mri/-/mri-1.2.0.tgz", + "integrity": "sha512-tzzskb3bG8LvYGFF/mDTpq3jpI6Q9wc3LEmBaghu+DdCssd1FakN7Bc0hVNmEyGq1bq3RgfkCb3cmQLpNPOroA==", + "dev": true + }, + "ms": { + "version": "2.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ms/-/ms-2.1.3.tgz", + "integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==" + }, + "nanoid": { + "version": "3.3.11", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/nanoid/-/nanoid-3.3.11.tgz", + "integrity": "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==", + "dev": true + }, + "natural-compare": { + "version": "1.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/natural-compare/-/natural-compare-1.4.0.tgz", + "integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==", + "dev": true + }, + "node-releases": { + "version": "2.0.19", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/node-releases/-/node-releases-2.0.19.tgz", + "integrity": 
"sha512-xxOWJsBKtzAq7DY0J+DTzuz58K8e7sJbdgwkbMWQe8UYB6ekmsQ45q0M/tJDsGaZmbC+l7n57UV8Hl5tHxO9uw==", + "dev": true + }, + "nopt": { + "version": "7.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/nopt/-/nopt-7.2.1.tgz", + "integrity": "sha512-taM24ViiimT/XntxbPyJQzCG+p4EKOpgD3mxFwW38mGjVUrfERQOeY4EDHjdnptttfHuHQXFx+lTP08Q+mLa/w==", + "dev": true, + "requires": { + "abbrev": "^2.0.0" + } + }, + "npm-normalize-package-bin": { + "version": "3.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/npm-normalize-package-bin/-/npm-normalize-package-bin-3.0.1.tgz", + "integrity": "sha512-dMxCf+zZ+3zeQZXKxmyuCKlIDPGuv8EF940xbkC4kQVDTtqoh6rJFO+JTKSA6/Rwi0getWmtuy4Itup0AMcaDQ==", + "dev": true + }, + "object-assign": { + "version": "4.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object-assign/-/object-assign-4.1.1.tgz", + "integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==" + }, + "object-inspect": { + "version": "1.13.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object-inspect/-/object-inspect-1.13.4.tgz", + "integrity": "sha512-W67iLl4J2EXEGTbfeHCffrjDfitvLANg0UlX3wFUUSTx92KXRFegMHUVgSqE+wvhAbi4WqjGg9czysTV2Epbew==", + "dev": true + }, + "object-keys": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object-keys/-/object-keys-1.1.1.tgz", + "integrity": "sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA==", + "dev": true + }, + "object.assign": { + "version": "4.1.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object.assign/-/object.assign-4.1.7.tgz", + "integrity": "sha512-nK28WOo+QIjBkDduTINE4JkF/UJJKyf2EJxvJKfblDpyg0Q+pkOHNTL0Qwy6NP6FhE/EnzV73BxxqcJaXY9anw==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0", + "has-symbols": "^1.1.0", + "object-keys": "^1.1.1" + } + }, + "object.entries": { + "version": "1.1.9", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object.entries/-/object.entries-1.1.9.tgz", + "integrity": "sha512-8u/hfXFRBD1O0hPUjioLhoWFHRmt6tKA4/vZPyckBr18l1KE9uHrFaFaUi8MDRTpi4uak2goyPTSNJLXX2k2Hw==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.1.1" + } + }, + "object.fromentries": { + "version": "2.0.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object.fromentries/-/object.fromentries-2.0.8.tgz", + "integrity": "sha512-k6E21FzySsSK5a21KRADBd/NGneRegFO5pLHfdQLpRDETUNJueLXs3WCzyQ3tFRDYgbq3KHGXfTbi2bs8WQ6rQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.2", + "es-object-atoms": "^1.0.0" + } + }, + "object.values": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/object.values/-/object.values-1.2.1.tgz", + "integrity": "sha512-gXah6aZrcUxjWg2zR2MwouP2eHlCBzdV4pygudehaKXSGW4v2AsRQUK+lwwXhii6KFZcunEnmSUoYp5CXibxtA==", + "dev": true, + "requires": { + "call-bind": 
"^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0" + } + }, + "once": { + "version": "1.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dev": true, + "requires": { + "wrappy": "1" + } + }, + "optionator": { + "version": "0.9.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/optionator/-/optionator-0.9.4.tgz", + "integrity": "sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==", + "dev": true, + "requires": { + "deep-is": "^0.1.3", + "fast-levenshtein": "^2.0.6", + "levn": "^0.4.1", + "prelude-ls": "^1.2.1", + "type-check": "^0.4.0", + "word-wrap": "^1.2.5" + } + }, + "own-keys": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/own-keys/-/own-keys-1.0.1.tgz", + "integrity": "sha512-qFOyK5PjiWZd+QQIh+1jhdb9LpxTF0qs7Pm8o5QHYZ0M3vKqSqzsZaEB6oWlxZ+q2sJBMI/Ktgd2N5ZwQoRHfg==", + "dev": true, + "requires": { + "get-intrinsic": "^1.2.6", + "object-keys": "^1.1.1", + "safe-push-apply": "^1.0.0" + } + }, + "package-json-from-dist": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/package-json-from-dist/-/package-json-from-dist-1.0.1.tgz", + "integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==", + "dev": true + }, + "parent-module": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/parent-module/-/parent-module-1.0.1.tgz", + "integrity": "sha512-GQ2EWRpQV8/o+Aw8YqtfZZPfNRWZYkbidE9k5rpl/hC3vtHHBfGm2Ifi6qWV+coDGkrUKZAxE3Lot5kcsRlh+g==", + "requires": { + "callsites": "^3.0.0" + } + }, + "parse-entities": { + "version": "4.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/parse-entities/-/parse-entities-4.0.2.tgz", + "integrity": "sha512-GG2AQYWoLgL877gQIKeRPGO1xF9+eG1ujIb5soS5gPvLQ1y2o8FL90w2QWNdf9I361Mpp7726c+lj3U0qK1uGw==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "character-entities-legacy": "^3.0.0", + "character-reference-invalid": "^2.0.0", + "decode-named-character-reference": "^1.0.0", + "is-alphanumerical": "^2.0.0", + "is-decimal": "^2.0.0", + "is-hexadecimal": "^2.0.0" + } + }, + "parse-json": { + "version": "5.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/parse-json/-/parse-json-5.2.0.tgz", + "integrity": "sha512-ayCKvm/phCGxOkYRSCM82iDwct8/EonSEgCSxWxD7ve6jHggsFl4fZVQBPRNgQoKiuV/odhFrGzQXZwbifC8Rg==", + "requires": { + "@babel/code-frame": "^7.0.0", + "error-ex": "^1.3.1", + "json-parse-even-better-errors": "^2.3.0", + "lines-and-columns": "^1.1.6" + } + }, + "path-is-absolute": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/path-is-absolute/-/path-is-absolute-1.0.1.tgz", + "integrity": "sha512-AVbw3UJ2e9bq64vSaS9Am0fje1Pa8pbGqTTsmXfaIiMpnr5DlDhfJOuLj9Sf95ZPVDAUerDfEk88MPmPe7UCQg==", + "dev": true + }, + "path-key": { + "version": "3.1.1", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/path-key/-/path-key-3.1.1.tgz", + "integrity": "sha512-ojmeN0qd+y0jszEtoY48r0Peq5dwMEkIlCOu6Q5f41lfkswXuKtYrhgoTpLnyIcHm24Uhqx+5Tqm2InSwLhE6Q==", + "dev": true + }, + "path-parse": { + "version": "1.0.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/path-parse/-/path-parse-1.0.7.tgz", + "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==" + }, + "path-scurry": { + "version": "1.11.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/path-scurry/-/path-scurry-1.11.1.tgz", + "integrity": "sha512-Xa4Nw17FS9ApQFJ9umLiJS4orGjm7ZzwUrwamcGQuHSzDyth9boKDaycYdDcZDuqYATXw4HFXgaqWTctW/v1HA==", + "dev": true, + "requires": { + "lru-cache": "^10.2.0", + "minipass": "^5.0.0 || ^6.0.2 || ^7.0.0" + }, + "dependencies": { + "lru-cache": { + "version": "10.4.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lru-cache/-/lru-cache-10.4.3.tgz", + "integrity": "sha512-JNAzZcXrCt42VGLuYz0zfAzDfAvJWW6AfYlDBQyDV5DClI2m5sAmK+OIO7s59XfsRsWHp02jAJrRadPRGTt6SQ==", + "dev": true + } + } + }, + "path-to-regexp": { + "version": "1.9.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/path-to-regexp/-/path-to-regexp-1.9.0.tgz", + "integrity": "sha512-xIp7/apCFJuUHdDLWe8O1HIkb0kQrOMb/0u6FXQjemHn/ii5LrIzU6bdECnsiTF/GjZkMEKg1xdiZwNqDYlZ6g==", + "requires": { + "isarray": "0.0.1" + } + }, + "path-type": { + "version": "4.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/path-type/-/path-type-4.0.0.tgz", + "integrity": "sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw==" + }, + "picocolors": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/picocolors/-/picocolors-1.1.1.tgz", + "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==" + }, + "picomatch": { + "version": "4.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/picomatch/-/picomatch-4.0.3.tgz", + "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", + "dev": true + }, + "popmotion": { + "version": "9.0.0-rc.20", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/popmotion/-/popmotion-9.0.0-rc.20.tgz", + "integrity": "sha512-f98sny03WuA+c8ckBjNNXotJD4G2utG/I3Q23NU69OEafrXtxxSukAaJBxzbtxwDvz3vtZK69pu9ojdkMoBNTg==", + "requires": { + "framesync": "^4.1.0", + "hey-listen": "^1.0.8", + "style-value-types": "^3.1.9", + "tslib": "^1.10.0" + }, + "dependencies": { + "tslib": { + "version": "1.14.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tslib/-/tslib-1.14.1.tgz", + "integrity": "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==" + } + } + }, + "possible-typed-array-names": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/possible-typed-array-names/-/possible-typed-array-names-1.1.0.tgz", + "integrity": 
"sha512-/+5VFTchJDoVj3bhoqi6UeymcD00DAwb1nJwamzPvHEszJ4FpF6SNNbUbOS8yI56qHzdV8eK0qEfOSiodkTdxg==", + "dev": true + }, + "postcss": { + "version": "8.5.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/postcss/-/postcss-8.5.6.tgz", + "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==", + "dev": true, + "requires": { + "nanoid": "^3.3.11", + "picocolors": "^1.1.1", + "source-map-js": "^1.2.1" + } + }, + "prelude-ls": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/prelude-ls/-/prelude-ls-1.2.1.tgz", + "integrity": "sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==", + "dev": true + }, + "prettier": { + "version": "2.8.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/prettier/-/prettier-2.8.8.tgz", + "integrity": "sha512-tdN8qQGvNjw4CHbY+XXk0JgCXn9QiF21a55rBe5LJAU+kDyC4WQn4+awm2Xfk2lQMk5fKup9XgzTZtGkjBdP9Q==", + "dev": true + }, + "prettier-linter-helpers": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/prettier-linter-helpers/-/prettier-linter-helpers-1.0.0.tgz", + "integrity": "sha512-GbK2cP9nraSSUF9N2XwUwqfzlAFlMNYYl+ShE/V+H8a9uNl/oUqB1w2EL54Jh0OlyRSd8RfWYJ3coVS4TROP2w==", + "dev": true, + "requires": { + "fast-diff": "^1.1.2" + } + }, + "proc-log": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/proc-log/-/proc-log-3.0.0.tgz", + "integrity": "sha512-++Vn7NS4Xf9NacaU9Xq3URUuqZETPsf8L4j5/ckhaRYsfPeRyzGw+iDjFhV/Jr3uNmTvvddEJFWh5R1gRgUH8A==", + "dev": true + }, + "progress": { + "version": "2.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/progress/-/progress-2.0.3.tgz", + "integrity": "sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA==", + "dev": true + }, + "prop-types": { + "version": "15.8.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/prop-types/-/prop-types-15.8.1.tgz", + "integrity": "sha512-oj87CgZICdulUohogVAR7AjlC0327U4el4L6eAvOqCeudMDVU0NThNaV+b9Df4dXgSP1gXMTnPdhfe/2qDH5cg==", + "requires": { + "loose-envify": "^1.4.0", + "object-assign": "^4.1.1", + "react-is": "^16.13.1" + } + }, + "punycode": { + "version": "2.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/punycode/-/punycode-2.3.1.tgz", + "integrity": "sha512-vYt7UD1U9Wg6138shLtLOvdAu+8DsC/ilFtEVHcH+wydcSpNE20AfSOduf6MkRFahL5FY7X1oU7nKVZFtfq8Fg==", + "dev": true + }, + "query-string": { + "version": "7.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/query-string/-/query-string-7.1.3.tgz", + "integrity": "sha512-hh2WYhq4fi8+b+/2Kg9CEge4fDPvHS534aOOvOZeQ3+Vf2mCFsaFBYj0i+iXcAq6I9Vzp5fjMFBlONvayDC1qg==", + "requires": { + "decode-uri-component": "^0.2.2", + "filter-obj": "^1.1.0", + "split-on-first": "^1.0.0", + "strict-uri-encode": "^2.0.0" + } + }, + "react": { + "version": "18.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react/-/react-18.3.1.tgz", + "integrity": 
"sha512-wS+hAgJShR0KhEvPJArfuPVN1+Hz1t0Y6n5jLrGQbkb4urgPE/0Rve+1kMB1v/oWgHgm4WIcV+i7F2pTVj+2iQ==", + "requires": { + "loose-envify": "^1.1.0" + } + }, + "react-chartjs-2": { + "version": "5.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-chartjs-2/-/react-chartjs-2-5.3.0.tgz", + "integrity": "sha512-UfZZFnDsERI3c3CZGxzvNJd02SHjaSJ8kgW1djn65H1KK8rehwTjyrRKOG3VTMG8wtHZ5rgAO5oTHtHi9GCCmw==" + }, + "react-csv": { + "version": "2.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-csv/-/react-csv-2.2.2.tgz", + "integrity": "sha512-RG5hOcZKZFigIGE8LxIEV/OgS1vigFQT4EkaHeKgyuCbUAu9Nbd/1RYq++bJcJJ9VOqO/n9TZRADsXNDR4VEpw==" + }, + "react-dom": { + "version": "18.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-dom/-/react-dom-18.3.1.tgz", + "integrity": "sha512-5m4nQKp+rZRb09LNH59GM4BxTh9251/ylbKIbpe7TpGxfJ+9kv6BLkLBXIjjspbgbnIBNqlI23tRnTWT0snUIw==", + "requires": { + "loose-envify": "^1.1.0", + "scheduler": "^0.23.2" + } + }, + "react-error-boundary": { + "version": "3.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-error-boundary/-/react-error-boundary-3.1.4.tgz", + "integrity": "sha512-uM9uPzZJTF6wRQORmSrvOIgt4lJ9MC1sNgEOj2XGsDTRE4kmpWxg7ENK9EWNKJRMAOY9z0MuF4yIfl6gp4sotA==", + "requires": { + "@babel/runtime": "^7.12.5" + } + }, + "react-fast-compare": { + "version": "3.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-fast-compare/-/react-fast-compare-3.2.2.tgz", + "integrity": "sha512-nsO+KSNgo1SbJqJEYRE9ERzo7YtYbou/OqjSQKxV7jcKox7+usiUVZOAC+XnDOABXggQTno0Y1CpVnuWEc1boQ==" + }, + "react-is": { + "version": "16.13.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-is/-/react-is-16.13.1.tgz", + "integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==" + }, + "react-refresh": { + "version": "0.14.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-refresh/-/react-refresh-0.14.2.tgz", + "integrity": "sha512-jCvmsr+1IUSMUyzOkRcvnVbX3ZYC6g9TDrDbFuFmRDq7PD4yaGbLKNQL6k2jnArV8hjYxh7hVhAZB6s9HDGpZA==", + "dev": true + }, + "react-router": { + "version": "5.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-router/-/react-router-5.3.4.tgz", + "integrity": "sha512-Ys9K+ppnJah3QuaRiLxk+jDWOR1MekYQrlytiXxC1RyfbdsZkS5pvKAzCCr031xHixZwpnsYNT5xysdFHQaYsA==", + "requires": { + "@babel/runtime": "^7.12.13", + "history": "^4.9.0", + "hoist-non-react-statics": "^3.1.0", + "loose-envify": "^1.3.1", + "path-to-regexp": "^1.7.0", + "prop-types": "^15.6.2", + "react-is": "^16.6.0", + "tiny-invariant": "^1.0.2", + "tiny-warning": "^1.0.0" + } + }, + "react-router-dom": { + "version": "5.3.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-router-dom/-/react-router-dom-5.3.4.tgz", + "integrity": "sha512-m4EqFMHv/Ih4kpcBCONHbkT68KoAeHN4p3lAGoNryfHi0dMy0kCzEZakiKRsvg5wHZ/JLrLW8o8KomWiz/qbYQ==", + "requires": { + "@babel/runtime": "^7.12.13", + "history": "^4.9.0", + "loose-envify": "^1.3.1", + "prop-types": "^15.6.2", + "react-router": "5.3.4", + "tiny-invariant": "^1.0.2", + "tiny-warning": "^1.0.0" + } 
+ }, + "react-transition-group": { + "version": "4.4.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-transition-group/-/react-transition-group-4.4.5.tgz", + "integrity": "sha512-pZcd1MCJoiKiBR2NRxeCRg13uCXbydPnmB4EOeRrY7480qNWO8IIgQG6zlDkm6uRMsURXPuKq0GWtiM59a5Q6g==", + "requires": { + "@babel/runtime": "^7.5.5", + "dom-helpers": "^5.0.1", + "loose-envify": "^1.4.0", + "prop-types": "^15.6.2" + } + }, + "react-window": { + "version": "1.8.11", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/react-window/-/react-window-1.8.11.tgz", + "integrity": "sha512-+SRbUVT2scadgFSWx+R1P754xHPEqvcfSfVX10QYg6POOz+WNgkN48pS+BtZNIMGiL1HYrSEiCkwsMS15QogEQ==", + "requires": { + "@babel/runtime": "^7.0.0", + "memoize-one": ">=3.1.1 <6" + } + }, + "read-package-json-fast": { + "version": "3.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/read-package-json-fast/-/read-package-json-fast-3.0.2.tgz", + "integrity": "sha512-0J+Msgym3vrLOUB3hzQCuZHII0xkNGCtz/HJH9xZshwv9DbDwkw1KaE3gx/e2J5rpEY5rtOy6cyhKOPrkP7FZw==", + "dev": true, + "requires": { + "json-parse-even-better-errors": "^3.0.0", + "npm-normalize-package-bin": "^3.0.0" + }, + "dependencies": { + "json-parse-even-better-errors": { + "version": "3.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json-parse-even-better-errors/-/json-parse-even-better-errors-3.0.2.tgz", + "integrity": "sha512-fi0NG4bPjCHunUJffmLd0gxssIgkNmArMvis4iNah6Owg1MCJjWhEcDLmsK6iGkJq3tHwbDkTlce70/tmXN4cQ==", + "dev": true + } + } + }, + "readable-stream": { + "version": "3.6.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/readable-stream/-/readable-stream-3.6.2.tgz", + "integrity": "sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==", + "dev": true, + "requires": { + "inherits": "^2.0.3", + "string_decoder": "^1.1.1", + "util-deprecate": "^1.0.1" + } + }, + "reflect.getprototypeof": { + "version": "1.0.10", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/reflect.getprototypeof/-/reflect.getprototypeof-1.0.10.tgz", + "integrity": "sha512-00o4I+DVrefhv+nX0ulyi3biSHCPDe+yLv5o/p6d/UVlirijB8E16FtfwSAi4g3tcqrQ4lRAqQSoFEZJehYEcw==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.9", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0", + "get-intrinsic": "^1.2.7", + "get-proto": "^1.0.1", + "which-builtin-type": "^1.2.1" + } + }, + "regexp.prototype.flags": { + "version": "1.5.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/regexp.prototype.flags/-/regexp.prototype.flags-1.5.4.tgz", + "integrity": "sha512-dYqgNSZbDwkaJ2ceRd9ojCGjBq+mOm9LmtXnAnEGyHhN/5R7iDW2TRw3h+o/jCFxus3P2LfWIIiwowAjANm7IA==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "define-properties": "^1.2.1", + "es-errors": "^1.3.0", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "set-function-name": "^2.0.2" + } + }, + "regexpp": { + "version": "3.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/regexpp/-/regexpp-3.2.0.tgz", + "integrity": "sha512-pq2bWo9mVD43nbts2wGv17XLiNLya+GklZ8kaDLV2Z08gDCsGpnKn9BFMepvWuHCbyVvY7J5o5+BVvoQbmlJLg==", + 
"dev": true + }, + "remark-mdx": { + "version": "2.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/remark-mdx/-/remark-mdx-2.3.0.tgz", + "integrity": "sha512-g53hMkpM0I98MU266IzDFMrTD980gNF3BJnkyFcmN+dD873mQeD5rdMO3Y2X+x8umQfbSE0PcoEDl7ledSA+2g==", + "dev": true, + "requires": { + "mdast-util-mdx": "^2.0.0", + "micromark-extension-mdxjs": "^1.0.0" + } + }, + "remark-parse": { + "version": "10.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/remark-parse/-/remark-parse-10.0.2.tgz", + "integrity": "sha512-3ydxgHa/ZQzG8LvC7jTXccARYDcRld3VfcgIIFs7bI6vbRSxJJmzgLEIIoYKyrfhaY+ujuWaf/PJiMZXoiCXgw==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0", + "mdast-util-from-markdown": "^1.0.0", + "unified": "^10.0.0" + } + }, + "remark-stringify": { + "version": "10.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/remark-stringify/-/remark-stringify-10.0.3.tgz", + "integrity": "sha512-koyOzCMYoUHudypbj4XpnAKFbkddRMYZHwghnxd7ue5210WzGw6kOBwauJTRUMq16jsovXx8dYNvSSWP89kZ3A==", + "dev": true, + "requires": { + "@types/mdast": "^3.0.0", + "mdast-util-to-markdown": "^1.0.0", + "unified": "^10.0.0" + } + }, + "remove-accents": { + "version": "0.4.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/remove-accents/-/remove-accents-0.4.4.tgz", + "integrity": "sha512-EpFcOa/ISetVHEXqu+VwI96KZBmq+a8LJnGkaeFw45epGlxIZz5dhEEnNZMsQXgORu3qaMoLX4qJCzOik6ytAg==" + }, + "require-from-string": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/require-from-string/-/require-from-string-2.0.2.tgz", + "integrity": "sha512-Xf0nWe6RseziFMu+Ap9biiUbmplq6S9/p+7w7YXP/JBHhrUDDUhwa+vANyubuqfZWTveU//DYVGsDG7RKL/vEw==", + "dev": true + }, + "reselect": { + "version": "4.1.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/reselect/-/reselect-4.1.8.tgz", + "integrity": "sha512-ab9EmR80F/zQTMNeneUr4cv+jSwPJgIlvEmVwLerwrWVbpLlBuls9XHzIeTFy4cegU2NHBp3va0LKOzU5qFEYQ==" + }, + "resize-observer-polyfill": { + "version": "1.5.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/resize-observer-polyfill/-/resize-observer-polyfill-1.5.1.tgz", + "integrity": "sha512-LwZrotdHOo12nQuZlHEmtuXdqGoOD0OhaxopaNFxWzInpEgaLWoVuAMbTzixuosCx2nEG58ngzW3vxdWoxIgdg==" + }, + "resolve": { + "version": "1.22.10", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/resolve/-/resolve-1.22.10.tgz", + "integrity": "sha512-NPRy+/ncIMeDlTAsuqwKIiferiawhefFJtkNSW0qZJEqMEb+qBt/77B/jGeeek+F0uOeN05CDa6HXbbIgtVX4w==", + "requires": { + "is-core-module": "^2.16.0", + "path-parse": "^1.0.7", + "supports-preserve-symlinks-flag": "^1.0.0" + } + }, + "resolve-from": { + "version": "4.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/resolve-from/-/resolve-from-4.0.0.tgz", + "integrity": "sha512-pb/MYmXstAkysRFx8piNI1tGFNQIFA3vkE3Gq4EuA1dF6gHp/+vgZqsCGJapvy8N3Q+4o7FwvquPJcnZ7RYy4g==" + }, + "resolve-pathname": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/resolve-pathname/-/resolve-pathname-3.0.0.tgz", + "integrity": 
"sha512-C7rARubxI8bXFNB/hqcp/4iUeIXJhJZvFPFPiSPRnhU5UPxzMFIl+2E6yY6c4k9giDJAhtV+enfA+G89N6Csng==" + }, + "rimraf": { + "version": "3.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/rimraf/-/rimraf-3.0.2.tgz", + "integrity": "sha512-JZkJMZkAGFFPP2YqXZXPbMlMBgsxzE8ILs4lMIX/2o0L9UBw9O/Y3o6wFw/i9YLapcUJWwqbi3kdxIPdC62TIA==", + "dev": true, + "requires": { + "glob": "^7.1.3" + } + }, + "rollup": { + "version": "4.46.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/rollup/-/rollup-4.46.2.tgz", + "integrity": "sha512-WMmLFI+Boh6xbop+OAGo9cQ3OgX9MIg7xOQjn+pTCwOkk+FNDAeAemXkJ3HzDJrVXleLOFVa1ipuc1AmEx1Dwg==", + "dev": true, + "requires": { + "@rollup/rollup-android-arm-eabi": "4.46.2", + "@rollup/rollup-android-arm64": "4.46.2", + "@rollup/rollup-darwin-arm64": "4.46.2", + "@rollup/rollup-darwin-x64": "4.46.2", + "@rollup/rollup-freebsd-arm64": "4.46.2", + "@rollup/rollup-freebsd-x64": "4.46.2", + "@rollup/rollup-linux-arm-gnueabihf": "4.46.2", + "@rollup/rollup-linux-arm-musleabihf": "4.46.2", + "@rollup/rollup-linux-arm64-gnu": "4.46.2", + "@rollup/rollup-linux-arm64-musl": "4.46.2", + "@rollup/rollup-linux-loongarch64-gnu": "4.46.2", + "@rollup/rollup-linux-ppc64-gnu": "4.46.2", + "@rollup/rollup-linux-riscv64-gnu": "4.46.2", + "@rollup/rollup-linux-riscv64-musl": "4.46.2", + "@rollup/rollup-linux-s390x-gnu": "4.46.2", + "@rollup/rollup-linux-x64-gnu": "4.46.2", + "@rollup/rollup-linux-x64-musl": "4.46.2", + "@rollup/rollup-win32-arm64-msvc": "4.46.2", + "@rollup/rollup-win32-ia32-msvc": "4.46.2", + "@rollup/rollup-win32-x64-msvc": "4.46.2", + "@types/estree": "1.0.8", + "fsevents": "~2.3.2" + } + }, + "sade": { + "version": "1.8.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/sade/-/sade-1.8.1.tgz", + "integrity": "sha512-xal3CZX1Xlo/k4ApwCFrHVACi9fBqJ7V+mwhBsuf/1IOKbBy098Fex+Wa/5QMubw09pSZ/u8EY8PWgevJsXp1A==", + "dev": true, + "requires": { + "mri": "^1.1.0" + } + }, + "safe-array-concat": { + "version": "1.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/safe-array-concat/-/safe-array-concat-1.1.3.tgz", + "integrity": "sha512-AURm5f0jYEOydBj7VQlVvDrjeFgthDdEF5H1dP+6mNpoXOMo1quQqJ4wvJDyRZ9+pO3kGWoOdmV08cSv2aJV6Q==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.2", + "get-intrinsic": "^1.2.6", + "has-symbols": "^1.1.0", + "isarray": "^2.0.5" + }, + "dependencies": { + "isarray": { + "version": "2.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/isarray/-/isarray-2.0.5.tgz", + "integrity": "sha512-xHjhDr3cNBK0BzdUJSPXZntQUx/mwMS5Rw4A7lPJ90XGAO6ISP/ePDNuo0vhqOZU+UD5JoodwCAAoZQd3FeAKw==", + "dev": true + } + } + }, + "safe-buffer": { + "version": "5.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/safe-buffer/-/safe-buffer-5.2.1.tgz", + "integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==", + "dev": true + }, + "safe-push-apply": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/safe-push-apply/-/safe-push-apply-1.0.0.tgz", + "integrity": "sha512-iKE9w/Z7xCzUMIZqdBsp6pEQvwuEebH4vdpjcDWnyzaI6yl6O9FHvVpmGelvEHNsoY6wGblkxR6Zty/h00WiSA==", + "dev": true, + "requires": { + "es-errors": 
"^1.3.0", + "isarray": "^2.0.5" + }, + "dependencies": { + "isarray": { + "version": "2.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/isarray/-/isarray-2.0.5.tgz", + "integrity": "sha512-xHjhDr3cNBK0BzdUJSPXZntQUx/mwMS5Rw4A7lPJ90XGAO6ISP/ePDNuo0vhqOZU+UD5JoodwCAAoZQd3FeAKw==", + "dev": true + } + } + }, + "safe-regex-test": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/safe-regex-test/-/safe-regex-test-1.1.0.tgz", + "integrity": "sha512-x/+Cz4YrimQxQccJf5mKEbIa1NzeCRNI5Ecl/ekmlYaampdNLPalVyIcCZNNH3MvmqBugV5TMYZXv0ljslUlaw==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "is-regex": "^1.2.1" + } + }, + "scheduler": { + "version": "0.23.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/scheduler/-/scheduler-0.23.2.tgz", + "integrity": "sha512-UOShsPwz7NrMUqhR6t0hWjFduvOzbtv7toDH1/hIrfRNIDBnnBWd0CwJTGvTpngVlmwGCdP9/Zl/tVrDqcuYzQ==", + "requires": { + "loose-envify": "^1.1.0" + } + }, + "screenfull": { + "version": "5.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/screenfull/-/screenfull-5.2.0.tgz", + "integrity": "sha512-9BakfsO2aUQN2K9Fdbj87RJIEZ82Q9IGim7FqM5OsebfoFC6ZHXgDq/KvniuLTPdeM8wY2o6Dj3WQ7KeQCj3cA==" + }, + "semver": { + "version": "6.3.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/semver/-/semver-6.3.1.tgz", + "integrity": "sha512-BR7VvDCVHO+q2xBEWskxS6DJE1qRnb7DxzUrogb71CWoSficBxYsiAGd+Kl0mmq/MprG9yArRkyrQxTO6XjMzA==", + "dev": true + }, + "serialize-query-params": { + "version": "1.3.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/serialize-query-params/-/serialize-query-params-1.3.6.tgz", + "integrity": "sha512-VlH7sfWNyPVZClPkRacopn6sn5uQMXBsjPVz1+pBHX895VpcYVznfJtZ49e6jymcrz+l/vowkepCZn/7xEAEdw==" + }, + "set-function-length": { + "version": "1.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/set-function-length/-/set-function-length-1.2.2.tgz", + "integrity": "sha512-pgRc4hJ4/sNjWCSS9AmnS40x3bNMDTknHgL5UaMBTMyJnU90EgWh1Rz+MC9eFu4BuN/UwZjKQuY/1v3rM7HMfg==", + "dev": true, + "requires": { + "define-data-property": "^1.1.4", + "es-errors": "^1.3.0", + "function-bind": "^1.1.2", + "get-intrinsic": "^1.2.4", + "gopd": "^1.0.1", + "has-property-descriptors": "^1.0.2" + } + }, + "set-function-name": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/set-function-name/-/set-function-name-2.0.2.tgz", + "integrity": "sha512-7PGFlmtwsEADb0WYyvCMa1t+yke6daIG4Wirafur5kcf+MhUnPms1UeR0CKQdTZD81yESwMHbtn+TR+dMviakQ==", + "dev": true, + "requires": { + "define-data-property": "^1.1.4", + "es-errors": "^1.3.0", + "functions-have-names": "^1.2.3", + "has-property-descriptors": "^1.0.2" + } + }, + "set-proto": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/set-proto/-/set-proto-1.0.0.tgz", + "integrity": "sha512-RJRdvCo6IAnPdsvP/7m6bsQqNnn1FCBX5ZNtFL98MmFF/4xAIJTIg1YbHW5DC2W5SKZanrC6i4HsJqlajw/dZw==", + "dev": true, + "requires": { + "dunder-proto": "^1.0.1", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0" + } + }, + "shebang-command": { + "version": "2.0.0", + 
"resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/shebang-command/-/shebang-command-2.0.0.tgz", + "integrity": "sha512-kHxr2zZpYtdmrN1qDjrrX/Z1rR1kG8Dx+gkpK1G4eXmvXswmcE1hTWBWYUzlraYw1/yZp6YuDY77YtvbN0dmDA==", + "dev": true, + "requires": { + "shebang-regex": "^3.0.0" + } + }, + "shebang-regex": { + "version": "3.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/shebang-regex/-/shebang-regex-3.0.0.tgz", + "integrity": "sha512-7++dFhtcx3353uBaq8DDR4NuxBetBzC7ZQOhmTQInHEd6bSrXdiEyzCvG07Z44UYdLShWUyXt5M/yhz8ekcb1A==", + "dev": true + }, + "side-channel": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/side-channel/-/side-channel-1.1.0.tgz", + "integrity": "sha512-ZX99e6tRweoUXqR+VBrslhda51Nh5MTQwou5tnUDgbtyM0dBgmhEDtWGP/xbKn6hqfPRHujUNwz5fy/wbbhnpw==", + "dev": true, + "requires": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3", + "side-channel-list": "^1.0.0", + "side-channel-map": "^1.0.1", + "side-channel-weakmap": "^1.0.2" + } + }, + "side-channel-list": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/side-channel-list/-/side-channel-list-1.0.0.tgz", + "integrity": "sha512-FCLHtRD/gnpCiCHEiJLOwdmFP+wzCmDEkc9y7NsYxeF4u7Btsn1ZuwgwJGxImImHicJArLP4R0yX4c2KCrMrTA==", + "dev": true, + "requires": { + "es-errors": "^1.3.0", + "object-inspect": "^1.13.3" + } + }, + "side-channel-map": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/side-channel-map/-/side-channel-map-1.0.1.tgz", + "integrity": "sha512-VCjCNfgMsby3tTdo02nbjtM/ewra6jPHmpThenkTYh8pG9ucZ/1P8So4u4FGBek/BjpOVsDCMoLA/iuBKIFXRA==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3" + } + }, + "side-channel-weakmap": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/side-channel-weakmap/-/side-channel-weakmap-1.0.2.tgz", + "integrity": "sha512-WPS/HvHQTYnHisLo9McqBHOJk2FkHO/tlpvldyrnem4aeQp4hai3gythswg6p01oSoTl58rcpiFAjF2br2Ak2A==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "es-errors": "^1.3.0", + "get-intrinsic": "^1.2.5", + "object-inspect": "^1.13.3", + "side-channel-map": "^1.0.1" + } + }, + "signal-exit": { + "version": "4.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/signal-exit/-/signal-exit-4.1.0.tgz", + "integrity": "sha512-bzyZ1e88w9O1iNJbKnOlvYTrWPDl46O1bG0D3XInv+9tkPrxrN8jUUTiFlDkkmKWgn1M6CfIA13SuGqOa9Korw==", + "dev": true + }, + "simple-swizzle": { + "version": "0.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/simple-swizzle/-/simple-swizzle-0.2.2.tgz", + "integrity": "sha512-JA//kQgZtbuY83m+xT+tXJkmJncGMTFT+C+g2h2R9uxkYIrE2yy9sgmcLhCnw57/WSD+Eh3J97FPEDFnbXnDUg==", + "requires": { + "is-arrayish": "^0.3.1" + }, + "dependencies": { + "is-arrayish": { + "version": "0.3.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/is-arrayish/-/is-arrayish-0.3.2.tgz", + "integrity": "sha512-eVRqCvVlZbuw3GrM63ovNSNAeA1K16kaR/LRY/92w0zxQ5/1YzwblUX652i4Xs9RwAGjW9d9y6X88t8OaAJfWQ==" + } + } + }, + "slice-ansi": { + "version": "4.0.0", 
+ "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/slice-ansi/-/slice-ansi-4.0.0.tgz", + "integrity": "sha512-qMCMfhY040cVHT43K9BFygqYbUPFZKHOg7K73mtTWJRb8pyP3fzf4Ixd5SzdEJQ6MRUg/WBnOLxghZtKKurENQ==", + "dev": true, + "requires": { + "ansi-styles": "^4.0.0", + "astral-regex": "^2.0.0", + "is-fullwidth-code-point": "^3.0.0" + }, + "dependencies": { + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": "^2.0.1" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + } + } + }, + "source-map": { + "version": "0.5.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/source-map/-/source-map-0.5.7.tgz", + "integrity": "sha512-LbrmJOMUSdEVxIKvdcJzQC+nQhe8FUZQTXQy6+I75skNgn3OoQ0DZA8YnFa7gp8tqtL3KPf1kmo0R5DoApeSGQ==" + }, + "source-map-js": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/source-map-js/-/source-map-js-1.2.1.tgz", + "integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==", + "dev": true + }, + "sourcemap-codec": { + "version": "1.4.8", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/sourcemap-codec/-/sourcemap-codec-1.4.8.tgz", + "integrity": "sha512-9NykojV5Uih4lgo5So5dtw+f0JgJX30KCNI8gwhz2J9A15wD0Ml6tjHKwf6fTSa6fAdVBdZeNOs9eJ71qCk8vA==", + "dev": true + }, + "split-on-first": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/split-on-first/-/split-on-first-1.1.0.tgz", + "integrity": "sha512-43ZssAJaMusuKWL8sKUBQXHWOpq8d6CfN/u1p4gUzfJkM05C8rxTmYrkIPTXapZpORA6LkkzcUulJ8FqA7Uudw==" + }, + "sprintf-js": { + "version": "1.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/sprintf-js/-/sprintf-js-1.0.3.tgz", + "integrity": "sha512-D9cPgkvLlV3t3IzL0D0YLvGA9Ahk4PcvVwUbN0dSGr1aP0Nrt4AEnTUbuGvquEC0mA64Gqt1fzirlRs5ibXx8g==", + "dev": true + }, + "stop-iteration-iterator": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/stop-iteration-iterator/-/stop-iteration-iterator-1.1.0.tgz", + "integrity": "sha512-eLoXW/DHyl62zxY4SCaIgnRhuMr6ri4juEYARS8E6sCEqzKpOiE521Ucofdx+KnDZl5xmvGYaaKCk5FEOxJCoQ==", + "dev": true, + "requires": { + "es-errors": "^1.3.0", + "internal-slot": "^1.1.0" + } + }, + "strict-uri-encode": { + "version": "2.0.0", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strict-uri-encode/-/strict-uri-encode-2.0.0.tgz", + "integrity": "sha512-QwiXZgpRcKkhTj2Scnn++4PKtWsH0kpzZ62L2R6c/LUVYv7hVnZqcg2+sMuT6R7Jusu1vviK/MFsu6kNJfWlEQ==" + }, + "string-width": { + "version": "4.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "requires": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + } + }, + "string-width-cjs": { + "version": "npm:string-width@4.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dev": true, + "requires": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + } + }, + "string.prototype.matchall": { + "version": "4.0.12", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string.prototype.matchall/-/string.prototype.matchall-4.0.12.tgz", + "integrity": "sha512-6CC9uyBL+/48dYizRf7H7VAYCMCNTBeM78x/VTUe9bFEaxBepPJDa1Ow99LqI/1yF7kuy7Q3cQsYMrcjGUcskA==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.3", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.6", + "es-errors": "^1.3.0", + "es-object-atoms": "^1.0.0", + "get-intrinsic": "^1.2.6", + "gopd": "^1.2.0", + "has-symbols": "^1.1.0", + "internal-slot": "^1.1.0", + "regexp.prototype.flags": "^1.5.3", + "set-function-name": "^2.0.2", + "side-channel": "^1.1.0" + } + }, + "string.prototype.repeat": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string.prototype.repeat/-/string.prototype.repeat-1.0.0.tgz", + "integrity": "sha512-0u/TldDbKD8bFCQ/4f5+mNRrXwZ8hg2w7ZR8wa16e8z9XpePWl3eGEcUD0OXpEH/VJH/2G3gjUtR3ZOiBe2S/w==", + "dev": true, + "requires": { + "define-properties": "^1.1.3", + "es-abstract": "^1.17.5" + } + }, + "string.prototype.trim": { + "version": "1.2.10", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string.prototype.trim/-/string.prototype.trim-1.2.10.tgz", + "integrity": "sha512-Rs66F0P/1kedk5lyYyH9uBzuiI/kNRmwJAR9quK6VOtIpZ2G+hMZd+HQbbv25MgCA6gEffoMZYxlTod4WcdrKA==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.2", + "define-data-property": "^1.1.4", + "define-properties": "^1.2.1", + "es-abstract": "^1.23.5", + "es-object-atoms": "^1.0.0", + "has-property-descriptors": "^1.0.2" + } + }, + "string.prototype.trimend": { + "version": "1.0.9", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string.prototype.trimend/-/string.prototype.trimend-1.0.9.tgz", + "integrity": "sha512-G7Ok5C6E/j4SGfyLCloXTrngQIQU3PWtXGst3yM7Bea9FRURf1S42ZHlZZtsNque2FN2PoUhfZXYLNWwEr4dLQ==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "call-bound": "^1.0.2", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0" + } + }, + "string.prototype.trimstart": { + "version": "1.0.8", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string.prototype.trimstart/-/string.prototype.trimstart-1.0.8.tgz", + "integrity": "sha512-UXSH262CSZY1tfu3G3Secr6uGLCFVPMhIqHjlgCUtCCcgihYc/xKs9djMTMUOb2j1mVSeU8EU6NWc/iQKU6Gfg==", + "dev": true, + "requires": { + "call-bind": "^1.0.7", + "define-properties": "^1.2.1", + "es-object-atoms": "^1.0.0" + } + }, + "string_decoder": { + "version": "1.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string_decoder/-/string_decoder-1.3.0.tgz", + "integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==", + "dev": true, + "requires": { + "safe-buffer": "~5.2.0" + } + }, + "stringify-entities": { + "version": "4.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/stringify-entities/-/stringify-entities-4.0.4.tgz", + "integrity": "sha512-IwfBptatlO+QCJUo19AqvrPNqlVMpW9YEL2LIVY+Rpv2qsjCGxaDLNRgeGsQWJhfItebuJhsGSLjaBbNSQ+ieg==", + "dev": true, + "requires": { + "character-entities-html4": "^2.0.0", + "character-entities-legacy": "^3.0.0" + } + }, + "strip-ansi": { + "version": "6.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "requires": { + "ansi-regex": "^5.0.1" + } + }, + "strip-ansi-cjs": { + "version": "npm:strip-ansi@6.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dev": true, + "requires": { + "ansi-regex": "^5.0.1" + } + }, + "strip-json-comments": { + "version": "3.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strip-json-comments/-/strip-json-comments-3.1.1.tgz", + "integrity": "sha512-6fPc+R4ihwqP6N/aIv2f1gMH8lOVtWQHoqC4yK6oSDVVocumAsfCqjkXnqiYMhmMwS/mEHLp7Vehlt3ql6lEig==", + "dev": true + }, + "style-value-types": { + "version": "3.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/style-value-types/-/style-value-types-3.2.0.tgz", + "integrity": "sha512-ih0mGsrYYmVvdDi++/66O6BaQPRPRMQHoZevNNdMMcPlP/cH28Rnfsqf1UEba/Bwfuw9T8BmIMwbGdzsPwQKrQ==", + "requires": { + "hey-listen": "^1.0.8", + "tslib": "^1.10.0" + }, + "dependencies": { + "tslib": { + "version": "1.14.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tslib/-/tslib-1.14.1.tgz", + "integrity": "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==" + } + } + }, + "stylis": { + "version": "4.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/stylis/-/stylis-4.2.0.tgz", + "integrity": "sha512-Orov6g6BB1sDfYgzWfTHDOxamtX1bE/zo104Dh9e6fqJ3PooipYyfJ0pUmrZO2wAvO8YbEyeFrkV91XTsGMSrw==" + }, + "supports-color": { + "version": "5.5.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/supports-color/-/supports-color-5.5.0.tgz", + "integrity": 
"sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dev": true, + "requires": { + "has-flag": "^3.0.0" + } + }, + "supports-preserve-symlinks-flag": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz", + "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==" + }, + "svg-parser": { + "version": "2.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/svg-parser/-/svg-parser-2.0.4.tgz", + "integrity": "sha512-e4hG1hRwoOdRb37cIMSgzNsxyzKfayW6VOflrwvR+/bzrkyxY/31WkbgnQpgtrNp1SdpJvpUAGTa/ZoiPNDuRQ==", + "dev": true + }, + "synckit": { + "version": "0.9.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/synckit/-/synckit-0.9.3.tgz", + "integrity": "sha512-JJoOEKTfL1urb1mDoEblhD9NhEbWmq9jHEMEnxoC4ujUaZ4itA8vKgwkFAyNClgxplLi9tsUKX+EduK0p/l7sg==", + "dev": true, + "requires": { + "@pkgr/core": "^0.1.0", + "tslib": "^2.6.2" + } + }, + "table": { + "version": "6.9.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/table/-/table-6.9.0.tgz", + "integrity": "sha512-9kY+CygyYM6j02t5YFHbNz2FN5QmYGv9zAjVp4lCDjlCw7amdckXlEt/bjMhUIfj4ThGRE4gCUH5+yGnNuPo5A==", + "dev": true, + "requires": { + "ajv": "^8.0.1", + "lodash.truncate": "^4.4.2", + "slice-ansi": "^4.0.0", + "string-width": "^4.2.3", + "strip-ansi": "^6.0.1" + }, + "dependencies": { + "ajv": { + "version": "8.17.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ajv/-/ajv-8.17.1.tgz", + "integrity": "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g==", + "dev": true, + "requires": { + "fast-deep-equal": "^3.1.3", + "fast-uri": "^3.0.1", + "json-schema-traverse": "^1.0.0", + "require-from-string": "^2.0.2" + } + }, + "json-schema-traverse": { + "version": "1.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/json-schema-traverse/-/json-schema-traverse-1.0.0.tgz", + "integrity": "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==", + "dev": true + } + } + }, + "text-table": { + "version": "0.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/text-table/-/text-table-0.2.0.tgz", + "integrity": "sha512-N+8UisAXDGk8PFXP4HAzVR9nbfmVJ3zYLAWiTIoqC5v5isinhr+r5uaO8+7r3BMfuNIufIsA7RdpVgacC2cSpw==", + "dev": true + }, + "tiny-invariant": { + "version": "1.3.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tiny-invariant/-/tiny-invariant-1.3.3.tgz", + "integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==" + }, + "tiny-warning": { + "version": "1.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tiny-warning/-/tiny-warning-1.0.3.tgz", + "integrity": "sha512-lBN9zLN/oAf68o3zNXYrdCt1kP8WsiGW8Oo2ka41b2IM5JL/S1CTyX1rW0mb/zSuJun0ZUrDxx4sqvYS2FWzPA==" + }, + "tinyglobby": { + "version": "0.2.14", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tinyglobby/-/tinyglobby-0.2.14.tgz", + "integrity": "sha512-tX5e7OM1HnYr2+a2C/4V0htOcSQcoSTH9KgJnVvNm5zm/cyEWKJ7j7YutsH9CxMdtOkkLFy2AHrMci9IM8IPZQ==", + "dev": true, + "requires": { + "fdir": "^6.4.4", + "picomatch": "^4.0.2" + } + }, + "tippy.js": { + "version": "6.3.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tippy.js/-/tippy.js-6.3.7.tgz", + "integrity": "sha512-E1d3oP2emgJ9dRQZdf3Kkn0qJgI6ZLpyS5z6ZkY1DF3kaQaBsGZsndEpHwx+eC+tYM41HaSNvNtLx8tU57FzTQ==", + "requires": { + "@popperjs/core": "^2.9.0" + } + }, + "to-vfile": { + "version": "7.2.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/to-vfile/-/to-vfile-7.2.4.tgz", + "integrity": "sha512-2eQ+rJ2qGbyw3senPI0qjuM7aut8IYXK6AEoOWb+fJx/mQYzviTckm1wDjq91QYHAPBTYzmdJXxMFA6Mk14mdw==", + "dev": true, + "requires": { + "is-buffer": "^2.0.0", + "vfile": "^5.1.0" + } + }, + "trough": { + "version": "2.2.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/trough/-/trough-2.2.0.tgz", + "integrity": "sha512-tmMpK00BjZiUyVyvrBK7knerNgmgvcV/KLVyuma/SC+TQN167GrMRciANTz09+k3zW8L8t60jWO1GpfkZdjTaw==", + "dev": true + }, + "tslib": { + "version": "2.8.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/tslib/-/tslib-2.8.1.tgz", + "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==" + }, + "type-check": { + "version": "0.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/type-check/-/type-check-0.4.0.tgz", + "integrity": "sha512-XleUoc9uwGXqjWwXaUTZAmzMcFZ5858QA2vvx1Ur5xIcixXIP+8LnFDgRplU30us6teqdlskFfu+ae4K79Ooew==", + "dev": true, + "requires": { + "prelude-ls": "^1.2.1" + } + }, + "type-fest": { + "version": "0.20.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/type-fest/-/type-fest-0.20.2.tgz", + "integrity": "sha512-Ne+eE4r0/iWnpAxD852z3A+N0Bt5RN//NjJwRd2VFHEmrywxf5vsZlh4R6lixl6B+wz/8d+maTSAkN1FIkI3LQ==", + "dev": true + }, + "typed-array-buffer": { + "version": "1.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/typed-array-buffer/-/typed-array-buffer-1.0.3.tgz", + "integrity": "sha512-nAYYwfY3qnzX30IkA6AQZjVbtK6duGontcQm1WSG1MD94YLqK0515GNApXkoxKOWMusVssAHWLh9SeaoefYFGw==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "es-errors": "^1.3.0", + "is-typed-array": "^1.1.14" + } + }, + "typed-array-byte-length": { + "version": "1.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/typed-array-byte-length/-/typed-array-byte-length-1.0.3.tgz", + "integrity": "sha512-BaXgOuIxz8n8pIq3e7Atg/7s+DpiYrxn4vdot3w9KbnBhcRQq6o3xemQdIfynqSeXeDrF32x+WvfzmOjPiY9lg==", + "dev": true, + "requires": { + "call-bind": "^1.0.8", + "for-each": "^0.3.3", + "gopd": "^1.2.0", + "has-proto": "^1.2.0", + "is-typed-array": "^1.1.14" + } + }, + "typed-array-byte-offset": { + "version": "1.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/typed-array-byte-offset/-/typed-array-byte-offset-1.0.4.tgz", + "integrity": "sha512-bTlAFB/FBYMcuX81gbL4OcpH5PmlFHqlCCpAl8AlEzMz5k53oNDvN8p1PNOWLEmI2x4orp3raOFB51tv9X+MFQ==", 
+ "dev": true, + "requires": { + "available-typed-arrays": "^1.0.7", + "call-bind": "^1.0.8", + "for-each": "^0.3.3", + "gopd": "^1.2.0", + "has-proto": "^1.2.0", + "is-typed-array": "^1.1.15", + "reflect.getprototypeof": "^1.0.9" + } + }, + "typed-array-length": { + "version": "1.0.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/typed-array-length/-/typed-array-length-1.0.7.tgz", + "integrity": "sha512-3KS2b+kL7fsuk/eJZ7EQdnEmQoaho/r6KUef7hxvltNA5DR8NAUM+8wJMbJyZ4G9/7i3v5zPBIMN5aybAh2/Jg==", + "dev": true, + "requires": { + "call-bind": "^1.0.7", + "for-each": "^0.3.3", + "gopd": "^1.0.1", + "is-typed-array": "^1.1.13", + "possible-typed-array-names": "^1.0.0", + "reflect.getprototypeof": "^1.0.6" + } + }, + "typedarray": { + "version": "0.0.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/typedarray/-/typedarray-0.0.6.tgz", + "integrity": "sha512-/aCDEGatGvZ2BIk+HmLf4ifCJFwvKFNb9/JeZPMulfgFracn9QFcAf5GO8B/mweUjSoblS5In0cWhqpfs/5PQA==", + "dev": true + }, + "unbox-primitive": { + "version": "1.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unbox-primitive/-/unbox-primitive-1.1.0.tgz", + "integrity": "sha512-nWJ91DjeOkej/TA8pXQ3myruKpKEYgqvpw9lz4OPHj/NWFNluYrjbz9j01CJ8yKQd2g4jFoOkINCTW2I5LEEyw==", + "dev": true, + "requires": { + "call-bound": "^1.0.3", + "has-bigints": "^1.0.2", + "has-symbols": "^1.1.0", + "which-boxed-primitive": "^1.1.1" + } + }, + "undici-types": { + "version": "5.26.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/undici-types/-/undici-types-5.26.5.tgz", + "integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==", + "dev": true + }, + "unified": { + "version": "10.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unified/-/unified-10.1.2.tgz", + "integrity": "sha512-pUSWAi/RAnVy1Pif2kAoeWNBa3JVrx0MId2LASj8G+7AiHWoKZNTomq6LG326T68U7/e263X6fTdcXIy7XnF7Q==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "bail": "^2.0.0", + "extend": "^3.0.0", + "is-buffer": "^2.0.0", + "is-plain-obj": "^4.0.0", + "trough": "^2.0.0", + "vfile": "^5.0.0" + } + }, + "unified-engine": { + "version": "10.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unified-engine/-/unified-engine-10.1.0.tgz", + "integrity": "sha512-5+JDIs4hqKfHnJcVCxTid1yBoI/++FfF/1PFdSMpaftZZZY+qg2JFruRbf7PaIwa9KgLotXQV3gSjtY0IdcFGQ==", + "dev": true, + "requires": { + "@types/concat-stream": "^2.0.0", + "@types/debug": "^4.0.0", + "@types/is-empty": "^1.0.0", + "@types/node": "^18.0.0", + "@types/unist": "^2.0.0", + "concat-stream": "^2.0.0", + "debug": "^4.0.0", + "fault": "^2.0.0", + "glob": "^8.0.0", + "ignore": "^5.0.0", + "is-buffer": "^2.0.0", + "is-empty": "^1.0.0", + "is-plain-obj": "^4.0.0", + "load-plugin": "^5.0.0", + "parse-json": "^6.0.0", + "to-vfile": "^7.0.0", + "trough": "^2.0.0", + "unist-util-inspect": "^7.0.0", + "vfile-message": "^3.0.0", + "vfile-reporter": "^7.0.0", + "vfile-statistics": "^2.0.0", + "yaml": "^2.0.0" + }, + "dependencies": { + "brace-expansion": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/brace-expansion/-/brace-expansion-2.0.2.tgz", + "integrity": 
"sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==", + "dev": true, + "requires": { + "balanced-match": "^1.0.0" + } + }, + "glob": { + "version": "8.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/glob/-/glob-8.1.0.tgz", + "integrity": "sha512-r8hpEjiQEYlF2QU0df3dS+nxxSIreXQS1qRhMJM0Q5NDdR386C7jb7Hwwod8Fgiuex+k0GFjgft18yvxm5XoCQ==", + "dev": true, + "requires": { + "fs.realpath": "^1.0.0", + "inflight": "^1.0.4", + "inherits": "2", + "minimatch": "^5.0.1", + "once": "^1.3.0" + } + }, + "ignore": { + "version": "5.3.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ignore/-/ignore-5.3.2.tgz", + "integrity": "sha512-hsBTNUqQTDwkWtcdYI2i06Y/nUBEsNEDJKjWdigLvegy8kDuJAS8uRlpkkcQpyEXL0Z/pjDy5HBmMjRCJ2gq+g==", + "dev": true + }, + "lines-and-columns": { + "version": "2.0.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/lines-and-columns/-/lines-and-columns-2.0.4.tgz", + "integrity": "sha512-wM1+Z03eypVAVUCE7QdSqpVIvelbOakn1M0bPDoA4SGWPx3sNDVUiMo3L6To6WWGClB7VyXnhQ4Sn7gxiJbE6A==", + "dev": true + }, + "minimatch": { + "version": "5.1.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/minimatch/-/minimatch-5.1.6.tgz", + "integrity": "sha512-lKwV/1brpG6mBUFHtb7NUmtABCb2WZZmm2wNiOA5hAb8VdCS4B3dtMWyvcoViccwAW/COERjXLt0zP1zXUN26g==", + "dev": true, + "requires": { + "brace-expansion": "^2.0.1" + } + }, + "parse-json": { + "version": "6.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/parse-json/-/parse-json-6.0.2.tgz", + "integrity": "sha512-SA5aMiaIjXkAiBrW/yPgLgQAQg42f7K3ACO+2l/zOvtQBwX58DMUsFJXelW2fx3yMBmWOVkR6j1MGsdSbCA4UA==", + "dev": true, + "requires": { + "@babel/code-frame": "^7.16.0", + "error-ex": "^1.3.2", + "json-parse-even-better-errors": "^2.3.1", + "lines-and-columns": "^2.0.2" + } + }, + "yaml": { + "version": "2.8.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/yaml/-/yaml-2.8.1.tgz", + "integrity": "sha512-lcYcMxX2PO9XMGvAJkJ3OsNMw+/7FKes7/hgerGUYWIoWu5j/+YQqcZr5JnPZWzOsEBgMbSbiSTn/dv/69Mkpw==", + "dev": true + } + } + }, + "unist-util-inspect": { + "version": "7.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-inspect/-/unist-util-inspect-7.0.2.tgz", + "integrity": "sha512-Op0XnmHUl6C2zo/yJCwhXQSm/SmW22eDZdWP2qdf4WpGrgO1ZxFodq+5zFyeRGasFjJotAnLgfuD1jkcKqiH1Q==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0" + } + }, + "unist-util-is": { + "version": "5.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-is/-/unist-util-is-5.2.1.tgz", + "integrity": "sha512-u9njyyfEh43npf1M+yGKDGVPbY/JWEemg5nH05ncKPfi+kBbKBJoTdsogMu33uhytuLlv9y0O7GH7fEdwLdLQw==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0" + } + }, + "unist-util-position-from-estree": { + "version": "1.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-position-from-estree/-/unist-util-position-from-estree-1.1.2.tgz", + "integrity": "sha512-poZa0eXpS+/XpoQwGwl79UUdea4ol2ZuCYguVaJS4qzIOMDzbqz8a3erUCOmubSZkaOuGamb3tX790iwOIROww==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0" + } + }, + 
"unist-util-remove-position": { + "version": "4.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-remove-position/-/unist-util-remove-position-4.0.2.tgz", + "integrity": "sha512-TkBb0HABNmxzAcfLf4qsIbFbaPDvMO6wa3b3j4VcEzFVaw1LBKwnW4/sRJ/atSLSzoIg41JWEdnE7N6DIhGDGQ==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "unist-util-visit": "^4.0.0" + } + }, + "unist-util-stringify-position": { + "version": "3.0.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-stringify-position/-/unist-util-stringify-position-3.0.3.tgz", + "integrity": "sha512-k5GzIBZ/QatR8N5X2y+drfpWG8IDBzdnVj6OInRNWm1oXrzydiaAT2OQiA8DPRRZyAKb9b6I2a6PxYklZD0gKg==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0" + } + }, + "unist-util-visit": { + "version": "4.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-visit/-/unist-util-visit-4.1.2.tgz", + "integrity": "sha512-MSd8OUGISqHdVvfY9TPhyK2VdUrPgxkUtWSuMHF6XAAFuL4LokseigBnZtPnJMu+FbynTkFNnFlyjxpVKujMRg==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "unist-util-is": "^5.0.0", + "unist-util-visit-parents": "^5.1.1" + } + }, + "unist-util-visit-parents": { + "version": "5.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/unist-util-visit-parents/-/unist-util-visit-parents-5.1.3.tgz", + "integrity": "sha512-x6+y8g7wWMyQhL1iZfhIPhDAs7Xwbn9nRosDXl7qoPTSCy0yNxnKc+hWokFifWQIDGi154rdUqKvbCa4+1kLhg==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "unist-util-is": "^5.0.0" + } + }, + "update-browserslist-db": { + "version": "1.1.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/update-browserslist-db/-/update-browserslist-db-1.1.3.tgz", + "integrity": "sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw==", + "dev": true, + "requires": { + "escalade": "^3.2.0", + "picocolors": "^1.1.1" + } + }, + "uri-js": { + "version": "4.4.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/uri-js/-/uri-js-4.4.1.tgz", + "integrity": "sha512-7rKUyy33Q1yc98pQ1DAmLtwX109F7TIfWlW1Ydo8Wl1ii1SeHieeh0HHfPeL2fMXK6z0s8ecKs9frCuLJvndBg==", + "dev": true, + "requires": { + "punycode": "^2.1.0" + } + }, + "use-query-params": { + "version": "1.2.3", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/use-query-params/-/use-query-params-1.2.3.tgz", + "integrity": "sha512-cdG0tgbzK+FzsV6DAt2CN8Saa3WpRnze7uC4Rdh7l15epSFq7egmcB/zuREvPNwO5Yk80nUpDZpiyHsoq50d8w==", + "requires": { + "serialize-query-params": "^1.3.5" + } + }, + "use-resize-observer": { + "version": "7.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/use-resize-observer/-/use-resize-observer-7.0.1.tgz", + "integrity": "sha512-tJESENDoVXfzkv1Cl9dJ13ySgENcKjvEKSU7QwjckjxjXg/MV2zW1CjEUtLpmXY084womIxJROUR3L1SuqlvOw==", + "dev": true, + "requires": { + "resize-observer-polyfill": "^1.5.1" + } + }, + "util-deprecate": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/util-deprecate/-/util-deprecate-1.0.2.tgz", + "integrity": 
"sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==", + "dev": true + }, + "uvu": { + "version": "0.5.6", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/uvu/-/uvu-0.5.6.tgz", + "integrity": "sha512-+g8ENReyr8YsOc6fv/NVJs2vFdHBnBNdfE49rshrTzDWOlUx4Gq7KOS2GD8eqhy2j+Ejq29+SbKH8yjkAqXqoA==", + "dev": true, + "requires": { + "dequal": "^2.0.0", + "diff": "^5.0.0", + "kleur": "^4.0.3", + "sade": "^1.7.3" + } + }, + "v8-compile-cache": { + "version": "2.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/v8-compile-cache/-/v8-compile-cache-2.4.0.tgz", + "integrity": "sha512-ocyWc3bAHBB/guyqJQVI5o4BZkPhznPYUG2ea80Gond/BgNWpap8TOmLSeeQG7bnh2KMISxskdADG59j7zruhw==", + "dev": true + }, + "value-equal": { + "version": "1.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/value-equal/-/value-equal-1.0.1.tgz", + "integrity": "sha512-NOJ6JZCAWr0zlxZt+xqCHNTEKOsrks2HQd4MqhP1qy4z1SkbEP467eNx6TgDKXMvUOb+OENfJCZwM+16n7fRfw==" + }, + "vfile": { + "version": "5.3.7", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vfile/-/vfile-5.3.7.tgz", + "integrity": "sha512-r7qlzkgErKjobAmyNIkkSpizsFPYiUPuJb5pNW1RB4JcYVZhs4lIbVqk8XPk033CV/1z8ss5pkax8SuhGpcG8g==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "is-buffer": "^2.0.0", + "unist-util-stringify-position": "^3.0.0", + "vfile-message": "^3.0.0" + } + }, + "vfile-message": { + "version": "3.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vfile-message/-/vfile-message-3.1.4.tgz", + "integrity": "sha512-fa0Z6P8HUrQN4BZaX05SIVXic+7kE3b05PWAtPuYP9QLHsLKYR7/AlLW3NtOrpXRLeawpDLMsVkmk5DG0NXgWw==", + "dev": true, + "requires": { + "@types/unist": "^2.0.0", + "unist-util-stringify-position": "^3.0.0" + } + }, + "vfile-reporter": { + "version": "7.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vfile-reporter/-/vfile-reporter-7.0.5.tgz", + "integrity": "sha512-NdWWXkv6gcd7AZMvDomlQbK3MqFWL1RlGzMn++/O2TI+68+nqxCPTvLugdOtfSzXmjh+xUyhp07HhlrbJjT+mw==", + "dev": true, + "requires": { + "@types/supports-color": "^8.0.0", + "string-width": "^5.0.0", + "supports-color": "^9.0.0", + "unist-util-stringify-position": "^3.0.0", + "vfile": "^5.0.0", + "vfile-message": "^3.0.0", + "vfile-sort": "^3.0.0", + "vfile-statistics": "^2.0.0" + }, + "dependencies": { + "ansi-regex": { + "version": "6.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-regex/-/ansi-regex-6.1.0.tgz", + "integrity": "sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA==", + "dev": true + }, + "emoji-regex": { + "version": "9.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/emoji-regex/-/emoji-regex-9.2.2.tgz", + "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", + "dev": true + }, + "string-width": { + "version": "5.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string-width/-/string-width-5.1.2.tgz", + "integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==", + "dev": true, + 
"requires": { + "eastasianwidth": "^0.2.0", + "emoji-regex": "^9.2.2", + "strip-ansi": "^7.0.1" + } + }, + "strip-ansi": { + "version": "7.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strip-ansi/-/strip-ansi-7.1.0.tgz", + "integrity": "sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ==", + "dev": true, + "requires": { + "ansi-regex": "^6.0.1" + } + }, + "supports-color": { + "version": "9.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/supports-color/-/supports-color-9.4.0.tgz", + "integrity": "sha512-VL+lNrEoIXww1coLPOmiEmK/0sGigko5COxI09KzHc2VJXJsQ37UaQ+8quuxjDeA7+KnLGTWRyOXSLLR2Wb4jw==", + "dev": true + } + } + }, + "vfile-sort": { + "version": "3.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vfile-sort/-/vfile-sort-3.0.1.tgz", + "integrity": "sha512-1os1733XY6y0D5x0ugqSeaVJm9lYgj0j5qdcZQFyxlZOSy1jYarL77lLyb5gK4Wqr1d5OxmuyflSO3zKyFnTFw==", + "dev": true, + "requires": { + "vfile": "^5.0.0", + "vfile-message": "^3.0.0" + } + }, + "vfile-statistics": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vfile-statistics/-/vfile-statistics-2.0.1.tgz", + "integrity": "sha512-W6dkECZmP32EG/l+dp2jCLdYzmnDBIw6jwiLZSER81oR5AHRcVqL+k3Z+pfH1R73le6ayDkJRMk0sutj1bMVeg==", + "dev": true, + "requires": { + "vfile": "^5.0.0", + "vfile-message": "^3.0.0" + } + }, + "vite": { + "version": "6.3.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vite/-/vite-6.3.5.tgz", + "integrity": "sha512-cZn6NDFE7wdTpINgs++ZJ4N49W2vRp8LCKrn3Ob1kYNtOo21vfDoaV5GzBfLU4MovSAB8uNRm4jgzVQZ+mBzPQ==", + "dev": true, + "requires": { + "esbuild": "^0.25.0", + "fdir": "^6.4.4", + "fsevents": "~2.3.3", + "picomatch": "^4.0.2", + "postcss": "^8.5.3", + "rollup": "^4.34.9", + "tinyglobby": "^0.2.13" + } + }, + "vite-plugin-svgr": { + "version": "2.4.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/vite-plugin-svgr/-/vite-plugin-svgr-2.4.0.tgz", + "integrity": "sha512-q+mJJol6ThvqkkJvvVFEndI4EaKIjSI0I3jNFgSoC9fXAz1M7kYTVUin8fhUsFojFDKZ9VHKtX6NXNaOLpbsHA==", + "dev": true, + "requires": { + "@rollup/pluginutils": "^5.0.2", + "@svgr/core": "^6.5.1" + } + }, + "walk-up-path": { + "version": "3.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/walk-up-path/-/walk-up-path-3.0.1.tgz", + "integrity": "sha512-9YlCL/ynK3CTlrSRrDxZvUauLzAswPCrsaCgilqFevUYpeEW0/3ScEjaa3kbW/T0ghhkEr7mv+fpjqn1Y1YuTA==", + "dev": true + }, + "which": { + "version": "2.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/which/-/which-2.0.2.tgz", + "integrity": "sha512-BLI3Tl1TW3Pvl70l3yq3Y64i+awpwXqsGBYWkkqMtnbXgrMD+yj7rhW0kuEDxzJaYXGjEW5ogapKNMEKNMjibA==", + "dev": true, + "requires": { + "isexe": "^2.0.0" + } + }, + "which-boxed-primitive": { + "version": "1.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/which-boxed-primitive/-/which-boxed-primitive-1.1.1.tgz", + "integrity": "sha512-TbX3mj8n0odCBFVlY8AxkqcHASw3L60jIuF8jFP78az3C2YhmGvqbHBpAjTRH2/xqYunrJ9g1jSyjCjpoWzIAA==", + "dev": true, + "requires": { + "is-bigint": "^1.1.0", + "is-boolean-object": "^1.2.1", + 
"is-number-object": "^1.1.1", + "is-string": "^1.1.1", + "is-symbol": "^1.1.1" + } + }, + "which-builtin-type": { + "version": "1.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/which-builtin-type/-/which-builtin-type-1.2.1.tgz", + "integrity": "sha512-6iBczoX+kDQ7a3+YJBnh3T+KZRxM/iYNPXicqk66/Qfm1b93iu+yOImkg0zHbj5LNOcNv1TEADiZ0xa34B4q6Q==", + "dev": true, + "requires": { + "call-bound": "^1.0.2", + "function.prototype.name": "^1.1.6", + "has-tostringtag": "^1.0.2", + "is-async-function": "^2.0.0", + "is-date-object": "^1.1.0", + "is-finalizationregistry": "^1.1.0", + "is-generator-function": "^1.0.10", + "is-regex": "^1.2.1", + "is-weakref": "^1.0.2", + "isarray": "^2.0.5", + "which-boxed-primitive": "^1.1.0", + "which-collection": "^1.0.2", + "which-typed-array": "^1.1.16" + }, + "dependencies": { + "isarray": { + "version": "2.0.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/isarray/-/isarray-2.0.5.tgz", + "integrity": "sha512-xHjhDr3cNBK0BzdUJSPXZntQUx/mwMS5Rw4A7lPJ90XGAO6ISP/ePDNuo0vhqOZU+UD5JoodwCAAoZQd3FeAKw==", + "dev": true + } + } + }, + "which-collection": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/which-collection/-/which-collection-1.0.2.tgz", + "integrity": "sha512-K4jVyjnBdgvc86Y6BkaLZEN933SwYOuBFkdmBu9ZfkcAbdVbpITnDmjvZ/aQjRXQrv5EPkTnD1s39GiiqbngCw==", + "dev": true, + "requires": { + "is-map": "^2.0.3", + "is-set": "^2.0.3", + "is-weakmap": "^2.0.2", + "is-weakset": "^2.0.3" + } + }, + "which-typed-array": { + "version": "1.1.19", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/which-typed-array/-/which-typed-array-1.1.19.tgz", + "integrity": "sha512-rEvr90Bck4WZt9HHFC4DJMsjvu7x+r6bImz0/BrbWb7A2djJ8hnZMrWnHo9F8ssv0OMErasDhftrfROTyqSDrw==", + "dev": true, + "requires": { + "available-typed-arrays": "^1.0.7", + "call-bind": "^1.0.8", + "call-bound": "^1.0.4", + "for-each": "^0.3.5", + "get-proto": "^1.0.1", + "gopd": "^1.2.0", + "has-tostringtag": "^1.0.2" + } + }, + "word-wrap": { + "version": "1.2.5", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/word-wrap/-/word-wrap-1.2.5.tgz", + "integrity": "sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==", + "dev": true + }, + "wrap-ansi": { + "version": "8.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/wrap-ansi/-/wrap-ansi-8.1.0.tgz", + "integrity": "sha512-si7QWI6zUMq56bESFvagtmzMdGOtoxfR+Sez11Mobfc7tm+VkUckk9bW2UeffTGVUbOksxmSw0AA2gs8g71NCQ==", + "dev": true, + "requires": { + "ansi-styles": "^6.1.0", + "string-width": "^5.0.1", + "strip-ansi": "^7.0.1" + }, + "dependencies": { + "ansi-regex": { + "version": "6.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-regex/-/ansi-regex-6.1.0.tgz", + "integrity": "sha512-7HSX4QQb4CspciLpVFwyRe79O3xsIZDDLER21kERQ71oaPodF8jL725AgJMFAYbooIqolJoRLuM81SpeUkpkvA==", + "dev": true + }, + "ansi-styles": { + "version": "6.2.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-styles/-/ansi-styles-6.2.1.tgz", + "integrity": "sha512-bN798gFfQX+viw3R7yrGWRqnrN2oRkEkUjjl4JNn4E8GxxbjtG3FbrEIIY3l8/hrwUwIeCZvi4QuOTP4MErVug==", + "dev": true + }, + "emoji-regex": { 
+ "version": "9.2.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/emoji-regex/-/emoji-regex-9.2.2.tgz", + "integrity": "sha512-L18DaJsXSUk2+42pv8mLs5jJT2hqFkFE4j21wOmgbUqsZ2hL72NsUU785g9RXgo3s0ZNgVl42TiHp3ZtOv/Vyg==", + "dev": true + }, + "string-width": { + "version": "5.1.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/string-width/-/string-width-5.1.2.tgz", + "integrity": "sha512-HnLOCR3vjcY8beoNLtcjZ5/nxn2afmME6lhrDrebokqMap+XbeW8n9TXpPDOqdGK5qcI3oT0GKTW6wC7EMiVqA==", + "dev": true, + "requires": { + "eastasianwidth": "^0.2.0", + "emoji-regex": "^9.2.2", + "strip-ansi": "^7.0.1" + } + }, + "strip-ansi": { + "version": "7.1.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/strip-ansi/-/strip-ansi-7.1.0.tgz", + "integrity": "sha512-iq6eVVI64nQQTRYq2KtEg2d2uU7LElhTJwsH4YzIHZshxlgZms/wIc4VoDQTlG/IvVIrBKG06CrZnp0qv7hkcQ==", + "dev": true, + "requires": { + "ansi-regex": "^6.0.1" + } + } + } + }, + "wrap-ansi-cjs": { + "version": "npm:wrap-ansi@7.0.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dev": true, + "requires": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "dependencies": { + "ansi-styles": { + "version": "4.3.0", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dev": true, + "requires": { + "color-convert": "^2.0.1" + } + }, + "color-convert": { + "version": "2.0.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dev": true, + "requires": { + "color-name": "~1.1.4" + } + }, + "color-name": { + "version": "1.1.4", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==", + "dev": true + } + } + }, + "wrappy": { + "version": "1.0.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==", + "dev": true + }, + "yallist": { + "version": "3.1.1", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/yallist/-/yallist-3.1.1.tgz", + "integrity": "sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==", + "dev": true + }, + "yaml": { + "version": "1.10.2", + "resolved": "https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/yaml/-/yaml-1.10.2.tgz", + "integrity": "sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg==" + }, + "zwitch": { + "version": "2.0.4", + "resolved": 
"https://artifacts-prod-use1.pinadmin.com/artifactory/api/npm/node-npm-yarn-prod-virtual/zwitch/-/zwitch-2.0.4.tgz", + "integrity": "sha512-bXE4cR/kVZhKZX/RjPEflHaKVhUVl85noU3v6b8apfQEc1x4A+zBxjZ4lN8LqGd6WZ3dl98pY4o717VFmoPp+A==", + "dev": true + } + } +} diff --git a/src/gprofiler/frontend/src/api/urls.js b/src/gprofiler/frontend/src/api/urls.js index 1f8c0a8f..1dff7234 100644 --- a/src/gprofiler/frontend/src/api/urls.js +++ b/src/gprofiler/frontend/src/api/urls.js @@ -39,6 +39,12 @@ export const DATA_URLS = { GET_NODES_AND_CORES_GRAPH_METRICS: `${API_PREFIX}/metrics/graph/nodes_and_cores`, GET_SAMPLES: `${API_PREFIX}/metrics/samples`, GET_API_KEY: `${API_PREFIX}/api_key`, + // Profiling endpoints + GET_PROFILING_HOST_STATUS: `${API_PREFIX}/metrics/profiling/host_status`, + POST_PROFILING_REQUEST: `${API_PREFIX}/metrics/profile_request`, + POST_HEARTBEAT: `${API_PREFIX}/metrics/heartbeat`, + POST_COMMAND_COMPLETION: `${API_PREFIX}/metrics/command_completion`, + // Filter endpoints FILTERS: `${API_PREFIX}${FILETERS_PREFIX}`, GET_FILTER_OPTIONS_VALUE: (filterType, params) => `${API_PREFIX}${FILETERS_PREFIX}/tags/${filterType}?${stringify(params)}`, diff --git a/src/gprofiler/frontend/src/components/common/dataDisplay/table/MuiTable.jsx b/src/gprofiler/frontend/src/components/common/dataDisplay/table/MuiTable.jsx index f4aaf3d4..e2ad9e92 100644 --- a/src/gprofiler/frontend/src/components/common/dataDisplay/table/MuiTable.jsx +++ b/src/gprofiler/frontend/src/components/common/dataDisplay/table/MuiTable.jsx @@ -47,6 +47,9 @@ const MuiTable = ({ autoPageSize = false, size = 'normal', sx = undefined, + checkboxSelection = false, + onSelectionModelChange = undefined, + selectionModel = [], }) => { const isDarkMode = variant !== 'light'; const isSmallTableMode = size === 'small'; @@ -89,11 +92,13 @@ const MuiTable = ({ disableColumnFilter disableColumnMenu disableColumnSelector - disableSelectionOnClick initialState={initialState} disableVirtualization={multipleLinesCells} getRowId={getRowId} autoPageSize={autoPageSize} + checkboxSelection={checkboxSelection} + onSelectionModelChange={onSelectionModelChange} + selectionModel={selectionModel} /> ); diff --git a/src/gprofiler/frontend/src/components/common/icon/iconsData.js b/src/gprofiler/frontend/src/components/common/icon/iconsData.js index 48d4e03a..eb8d4f0f 100644 --- a/src/gprofiler/frontend/src/components/common/icon/iconsData.js +++ b/src/gprofiler/frontend/src/components/common/icon/iconsData.js @@ -101,6 +101,7 @@ export const ICONS = { 'M19,3H5C3.89,3 3,3.89 3,5V19A2,2 0 0,0 5,21H19A2,2 0 0,0 21,19V5C21,3.89 20.1,3 19,3M19,5V19H5V5H19Z', Kibana: 'M28.1,32H5.6l13.2-15.8C24.4,19.9,28.1,25.6,28.1,32z M28.1,0.1H4.1v28.7L28.1,0.1z', Eraser: 'M16.24,3.56L21.19,8.5C21.97,9.29 21.97,10.55 21.19,11.34L12,20.53C10.44,22.09 7.91,22.09 6.34,20.53L2.81,17C2.03,16.21 2.03,14.95 2.81,14.16L13.41,3.56C14.2,2.78 15.46,2.78 16.24,3.56M4.22,15.58L7.76,19.11C8.54,19.9 9.8,19.9 10.59,19.11L14.12,15.58L9.17,10.63L4.22,15.58Z', + Crosshairs: 'M12,2A10,10 0 0,0 2,12A10,10 0 0,0 12,22A10,10 0 0,0 22,12A10,10 0 0,0 12,2M12,4A8,8 0 0,1 20,12A8,8 0 0,1 12,20A8,8 0 0,1 4,12A8,8 0 0,1 12,4M11,6V9.07C9.61,9.41 8.5,10.57 8.15,12H5V13H8.15C8.5,14.43 9.61,15.59 11,15.93V19H13V15.93C14.39,15.59 15.5,14.43 15.85,13H19V12H15.85C15.5,10.57 14.39,9.41 13,9.07V6M12,10A2,2 0 0,1 14,12A2,2 0 0,1 12,14A2,2 0 0,1 10,12A2,2 0 0,1 12,10Z', }; export const ICONS_NAMES = { @@ -162,6 +163,7 @@ export const ICONS_NAMES = { CheckBoxBlank: 'CheckBoxBlank', Kibana: 'Kibana', Eraser: 
'Eraser',
+    Crosshairs: 'Crosshairs',
 };
 
 export const VIEW_BOXES = {
diff --git a/src/gprofiler/frontend/src/components/console/ProfilingStatusPage.jsx b/src/gprofiler/frontend/src/components/console/ProfilingStatusPage.jsx
new file mode 100644
index 00000000..3cec3382
--- /dev/null
+++ b/src/gprofiler/frontend/src/components/console/ProfilingStatusPage.jsx
@@ -0,0 +1,542 @@
+import {
+    Box,
+    Typography,
+    Dialog,
+    DialogTitle,
+    DialogContent,
+    DialogActions,
+    Button,
+    Divider,
+    Chip
+} from '@mui/material';
+import queryString from 'query-string';
+import React, { useCallback, useEffect, useState } from 'react';
+import { useHistory, useLocation } from 'react-router-dom';
+
+import { DATA_URLS } from '../../api/urls';
+import { PAGES } from '../../utils/consts';
+import MuiTable from '../common/dataDisplay/table/MuiTable';
+import PageHeader from '../common/layout/PageHeader';
+import ProfilingHeader from './header/ProfilingHeader';
+import ProfilingTopPanel from './header/ProfilingTopPanel';
+
+const columns = [
+    { field: 'service', headerName: 'service name', flex: 1, sortable: true },
+    { field: 'host', headerName: 'host name', flex: 1, sortable: true },
+    { field: 'pids', headerName: 'pids (if profiled)', flex: 1, sortable: true },
+    { field: 'ip', headerName: 'IP', flex: 1, sortable: true },
+    { field: 'commandType', headerName: 'command type', flex: 1, sortable: true },
+    { field: 'status', headerName: 'profiling status', flex: 1, sortable: true },
+    {
+        field: 'heartbeat_timestamp',
+        headerName: 'last heartbeat',
+        flex: 1,
+        sortable: true,
+        renderCell: (params) => {
+            if (!params.value) return 'N/A';
+            try {
+                // The backend sends UTC timestamp without 'Z' suffix, so we need to explicitly treat it as UTC
+                let utcTimestamp = params.value;
+                if (!utcTimestamp.endsWith('Z') && !utcTimestamp.includes('+') && !utcTimestamp.includes('-', 10)) {
+                    utcTimestamp += 'Z';
+                }
+
+                const utcDate = new Date(utcTimestamp);
+                // Convert to user's local timezone
+                const localDateTimeString = utcDate.toLocaleString(navigator.language, {
+                    day: '2-digit',
+                    month: '2-digit',
+                    year: 'numeric',
+                    hour: '2-digit',
+                    minute: '2-digit',
+                    second: '2-digit',
+                    hour12: true,
+                });
+                return localDateTimeString;
+            } catch (error) {
+                return 'Invalid date';
+            }
+        },
+    },
+    {
+        field: 'profile',
+        headerName: 'profile',
+        flex: 1,
+        renderCell: (params) => {
+            const { host, service, commandType, status } = params.row;
+
+            // Only show profile link for rows with commandType="start" and status="completed"
+            if (commandType !== 'start' || status !== 'completed') {
+                return '';
+            }
+
+            if (!host || !service) return '';
+
+            const baseUrl = `${window.location.protocol}//${window.location.host}`;
+            const profileUrl = `${baseUrl}${PAGES.profiles.to}?filter=hn,is,${encodeURIComponent(host)}&gtab=1&pm=1&rtms=1&service=${encodeURIComponent(service)}&time=1h&view=flamegraph&wp=100`;
+
+            return (
+                <a
+                    href={profileUrl}
+                    onMouseOver={(e) => e.target.style.textDecoration = 'underline'}
+                    onMouseOut={(e) => e.target.style.textDecoration = 'none'}
+                >
+                    View Profile
+                </a>
+            );
+        },
+    },
+];
+
+const ProfilingStatusPage = () => {
+    const [rows, setRows] = useState([]);
+    const [loading, setLoading] = useState(false);
+    const [selectionModel, setSelectionModel] = useState([]);
+    const [filters, setFilters] = useState({
+        service: '',
+        hostname: '',
+        pids: '',
+        ip: '',
+        commandType: '',
+        status: '',
+    });
+    const [appliedFilters, setAppliedFilters] = useState({
+        service: '',
+        hostname: '',
+        pids: '',
+        ip: '',
+        commandType: '',
+        status: '',
+    });
+
+    // 
PerfSpect state + const [enablePerfSpect, setEnablePerfSpect] = useState(false); + + // Profiling frequency state + const [profilingFrequency, setProfilingFrequency] = useState(11); + + // Max processes state + const [maxProcesses, setMaxProcesses] = useState(10); + + // Profiler configurations state + const [profilerConfigs, setProfilerConfigs] = useState({ + perf: 'enabled_restricted', // 'enabled_restricted', 'enabled_aggressive', 'disabled' + async_profiler: 'enabled', // 'enabled', 'disabled' + pyperf: 'enabled', // 'enabled', 'disabled' + pyspy: 'enabled_fallback', // 'enabled_fallback', 'enabled', 'disabled' + rbspy: 'enabled', // 'enabled', 'disabled' + phpspy: 'enabled', // 'enabled', 'disabled' + dotnet_trace: 'enabled', // 'enabled', 'disabled' + nodejs_perf: 'enabled', // 'enabled', 'disabled' + }); + + // Confirmation dialog state + const [confirmationDialog, setConfirmationDialog] = useState({ + open: false, + action: null, + selectedRows: [], + serviceGroups: {}, + }); + + const history = useHistory(); + const location = useLocation(); + + const fetchProfilingStatus = useCallback((filterParams) => { + setLoading(true); + + // Build query parameters + const params = new URLSearchParams(); + + if (filterParams.service) { + params.append('service_name', filterParams.service); + } + if (filterParams.hostname) { + params.append('hostname', filterParams.hostname); + } + if (filterParams.pids) { + params.append('pids', filterParams.pids); + } + if (filterParams.ip) { + params.append('ip_address', filterParams.ip); + } + if (filterParams.commandType) { + params.append('command_type', filterParams.commandType); + } + if (filterParams.status) { + params.append('profiling_status', filterParams.status); + } + + const url = params.toString() + ? `${DATA_URLS.GET_PROFILING_HOST_STATUS}?${params.toString()}` + : DATA_URLS.GET_PROFILING_HOST_STATUS; + + fetch(url) + .then((res) => res.json()) + .then((data) => { + setRows( + data.map((row) => ({ + id: row.id, + service: row.service_name, + host: row.hostname, + pids: row.pids, + ip: row.ip_address, + commandType: row.command_type || 'N/A', + status: row.profiling_status, + heartbeat_timestamp: row.heartbeat_timestamp, + })) + ); + setLoading(false); + }) + .catch(() => setLoading(false)); + }, []); // No dependencies needed since it takes filterParams as argument + + // Initialize filters from URL parameters for direct URL visits (shareable links) + useEffect(() => { + const searchParams = queryString.parse(location.search); + const hasFilterParams = ['service', 'hostname', 'pids', 'ip', 'commandType', 'status'].some( + param => searchParams[param] + ); + + // Only initialize from URL if there are actual filter parameters + if (hasFilterParams) { + const urlFilters = { + service: searchParams.service || '', + hostname: searchParams.hostname || '', + pids: searchParams.pids || '', + ip: searchParams.ip || '', + commandType: searchParams.commandType || '', + status: searchParams.status || '', + }; + setFilters(urlFilters); + setAppliedFilters(urlFilters); + // Automatically fetch data with URL filters on page load + fetchProfilingStatus(urlFilters); + } else { + // No URL params, fetch all data + const emptyFilters = { + service: '', + hostname: '', + pids: '', + ip: '', + commandType: '', + status: '', + }; + fetchProfilingStatus(emptyFilters); + } + + // Clean up profile-specific parameters if they exist (mixed URLs) + const profileParams = ['gtab', 'view', 'time', 'startTime', 'endTime', 'filter', 'rt', 'rtms', 'p', 'pm', 'wt', 'wp', 
'search', 'fullscreen']; + const hasProfileParams = profileParams.some(param => searchParams[param]); + + if (hasProfileParams) { + // Remove only profile params, keep filter params + const cleanedParams = { ...searchParams }; + profileParams.forEach(param => { + delete cleanedParams[param]; + }); + history.replace({ search: queryString.stringify(cleanedParams) }); + } + }, [fetchProfilingStatus, history, location.search]); // Add dependencies + + // Auto-refresh every 30 seconds for dynamic profiling + useEffect(() => { + const refreshInterval = setInterval(() => { + // Refresh with current applied filters + fetchProfilingStatus(appliedFilters); + }, 30000); // 30 seconds + + // Cleanup interval on component unmount + return () => clearInterval(refreshInterval); + }, [appliedFilters, fetchProfilingStatus]); // Re-create interval when filters change + + // Update URL when filters change (with focus preservation) + const updateURL = useCallback( + (newFilters) => { + // Use replace instead of push to avoid navigation history buildup + // and reduce re-render impact on focus + const searchParams = {}; + + // Add new filter parameters + Object.keys(newFilters).forEach((key) => { + if (newFilters[key]) { + searchParams[key] = newFilters[key]; + } + }); + + const newSearch = queryString.stringify(searchParams); + + // Use replace instead of push to minimize focus disruption + if (newSearch === '') { + history.replace('/profiling'); + } else { + history.replace({ pathname: '/profiling', search: newSearch }); + } + }, + [history] + ); + + // Function to update individual filter (optimized for focus preservation) + const updateFilter = useCallback((field, value) => { + setFilters(prev => ({ ...prev, [field]: value })); + }, []); // Stable function reference + + // Apply filters function + const applyFilters = useCallback(() => { + setAppliedFilters(filters); + fetchProfilingStatus(filters); + updateURL(filters); + }, [filters, fetchProfilingStatus, updateURL]); + + // Clear all filters function + const clearAllFilters = useCallback(() => { + const emptyFilters = { + service: '', + hostname: '', + pids: '', + ip: '', + commandType: '', + status: '', + }; + setFilters(emptyFilters); + setAppliedFilters(emptyFilters); + fetchProfilingStatus(emptyFilters); + updateURL(emptyFilters); + }, [fetchProfilingStatus, updateURL]); + + // Bulk Start/Stop handlers + function handleBulkAction(action) { + const selectedRows = rows.filter((row) => selectionModel.includes(row.id)); + + // Group selected rows by service name + const serviceGroups = selectedRows.reduce((groups, row) => { + if (!groups[row.service]) { + groups[row.service] = []; + } + groups[row.service].push(row.host); + return groups; + }, {}); + + // Show confirmation dialog + setConfirmationDialog({ + open: true, + action, + selectedRows, + serviceGroups, + }); + } + + // Execute the actual profiling action after confirmation + function executeProfilingAction() { + const { action, serviceGroups } = confirmationDialog; + + // Create one request per service with all hosts for that service + const requests = Object.entries(serviceGroups).map(([serviceName, hosts]) => { + const target_host = hosts.reduce((hostObj, host) => { + hostObj[host] = null; + return hostObj; + }, {}); + + const submitData = { + service_name: serviceName, + request_type: action, + continuous: true, + duration: 60, // Default duration, can't be adjusted yet + frequency: profilingFrequency, // Use frequency from UI + profiling_mode: 'cpu', // Default profiling mode, can't be 
adjusted yet + target_hosts: target_host, + additional_args: { + enable_perfspect: enablePerfSpect, // Include PerfSpect setting + profiler_configs: profilerConfigs, // Include all profiler configurations + max_processes: maxProcesses, // Include max processes setting + }, + }; + + // append 'stop_level: host' when action is 'stop' + if (action === 'stop') { + submitData.stop_level = 'host'; + } + + return fetch(DATA_URLS.POST_PROFILING_REQUEST, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify(submitData), + }); + }); + + // Close dialog and wait for all requests to finish before refreshing + setConfirmationDialog({ open: false, action: null, selectedRows: [], serviceGroups: {} }); + Promise.all(requests).then(() => { + // Maintain current filter state when refreshing + fetchProfilingStatus(appliedFilters); + setSelectionModel([]); // Clear all checkboxes after API requests complete + setEnablePerfSpect(false); // Reset PerfSpect checkbox after action completes + // Note: Keep profiling frequency as is - user may want to reuse the same frequency + }); + } + + // Close confirmation dialog without action + function handleDialogClose() { + setConfirmationDialog({ open: false, action: null, selectedRows: [], serviceGroups: {} }); + } + + return ( + + + + + + + {/* Data Table */} + + + + + + {/* Confirmation Dialog */} + + + + Confirm {confirmationDialog.action === 'start' ? 'Start' : 'Stop'} Profiling + + + + + Are you sure you want to {confirmationDialog.action} profiling for the following hosts? + + + {/* Selected Hosts Summary */} + + + Selected Hosts ({confirmationDialog.selectedRows.length}): + + {Object.entries(confirmationDialog.serviceGroups).map(([serviceName, hosts]) => ( + + + {serviceName}: + + + {hosts.map((host) => ( + + ))} + + + ))} + + + {/* Configuration Summary (only for start action) */} + {confirmationDialog.action === 'start' && ( + <> + + + Profiling Configuration: + + + + {/* Basic Settings */} + + + Basic Settings: + + • Frequency: {profilingFrequency} Hz + • Max Processes: {maxProcesses} + • PerfSpect HW Metrics: {enablePerfSpect ? 'Enabled' : 'Disabled'} + • Duration: 60 seconds + • Mode: CPU profiling + + + {/* Profiler Settings */} + + + Profiler Settings: + + • Perf (C/C++/Go): { + profilerConfigs.perf === 'enabled_restricted' ? 'Enabled (Restricted)' : + profilerConfigs.perf === 'enabled_aggressive' ? 'Enabled (Aggressive)' : 'Disabled' + } + • Java Async Profiler: {profilerConfigs.async_profiler === 'enabled' ? 'Enabled' : 'Disabled'} + • Pyperf (Python): {profilerConfigs.pyperf === 'enabled' ? 'Enabled' : 'Disabled'} + • Pyspy (Python): { + profilerConfigs.pyspy === 'enabled_fallback' ? 'Enabled (Fallback)' : + profilerConfigs.pyspy === 'enabled' ? 'Enabled' : 'Disabled' + } + • Rbspy (Ruby): {profilerConfigs.rbspy === 'enabled' ? 'Enabled' : 'Disabled'} + • PHPspy (PHP): {profilerConfigs.phpspy === 'enabled' ? 'Enabled' : 'Disabled'} + • .NET Trace: {profilerConfigs.dotnet_trace === 'enabled' ? 'Enabled' : 'Disabled'} + • NodeJS Perf: {profilerConfigs.nodejs_perf === 'enabled' ? 
'Enabled' : 'Disabled'} + + + + )} + + + + + + + + ); +}; + +export default ProfilingStatusPage; diff --git a/src/gprofiler/frontend/src/components/console/header/ProfilingHeader.jsx b/src/gprofiler/frontend/src/components/console/header/ProfilingHeader.jsx new file mode 100644 index 00000000..28094ff9 --- /dev/null +++ b/src/gprofiler/frontend/src/components/console/header/ProfilingHeader.jsx @@ -0,0 +1,293 @@ +import { Box, Button, Collapse, FormControl, InputLabel, MenuItem, Select, TextField } from '@mui/material'; +import React, { useState } from 'react'; + +import { COLORS } from '../../../theme/colors'; +import Icon from '../../common/icon/Icon'; +import { ICONS_NAMES } from '../../common/icon/iconsData'; +import Flexbox from '../../common/layout/Flexbox'; + +const ProfilingHeader = ({ filters, updateFilter, isLoading = false, onApplyFilters, onClearFilters }) => { + const [filtersExpanded, setFiltersExpanded] = useState(false); + + return ( + + {/* Filter Toggle Button */} + + + + + {/* Collapsible Filters Section */} + + + + {/* Service Name Filter */} + updateFilter('service', e.target.value)} + placeholder='Filter by service...' + disabled={isLoading} + fullWidth + sx={{ + backgroundColor: 'white !important', + borderRadius: '4px', + boxShadow: '0px 2px 4px rgba(0, 0, 0, 0.1)', + '& .MuiOutlinedInput-root': { + backgroundColor: 'white !important', + '& fieldset': { + borderColor: 'rgba(0, 0, 0, 0.23)', + }, + '&:hover fieldset': { + borderColor: 'rgba(0, 0, 0, 0.87)', + }, + }, + '& .MuiInputBase-input': { + backgroundColor: 'white !important', + }, + }} + /> + + {/* Hostname Filter */} + updateFilter('hostname', e.target.value)} + placeholder='Filter by hostname...' + disabled={isLoading} + fullWidth + sx={{ + backgroundColor: 'white !important', + borderRadius: '4px', + boxShadow: '0px 2px 4px rgba(0, 0, 0, 0.1)', + '& .MuiOutlinedInput-root': { + backgroundColor: 'white !important', + '& fieldset': { + borderColor: 'rgba(0, 0, 0, 0.23)', + }, + '&:hover fieldset': { + borderColor: 'rgba(0, 0, 0, 0.87)', + }, + }, + '& .MuiInputBase-input': { + backgroundColor: 'white !important', + }, + }} + /> + + {/* PIDs Filter */} + updateFilter('pids', e.target.value)} + placeholder='Filter by PIDs...' + disabled={isLoading} + fullWidth + sx={{ + backgroundColor: 'white !important', + borderRadius: '4px', + boxShadow: '0px 2px 4px rgba(0, 0, 0, 0.1)', + '& .MuiOutlinedInput-root': { + backgroundColor: 'white !important', + '& fieldset': { + borderColor: 'rgba(0, 0, 0, 0.23)', + }, + '&:hover fieldset': { + borderColor: 'rgba(0, 0, 0, 0.87)', + }, + }, + '& .MuiInputBase-input': { + backgroundColor: 'white !important', + }, + }} + /> + + {/* IP Address Filter */} + updateFilter('ip', e.target.value)} + placeholder='Filter by IP...' 
+ disabled={isLoading} + fullWidth + sx={{ + backgroundColor: 'white !important', + borderRadius: '4px', + boxShadow: '0px 2px 4px rgba(0, 0, 0, 0.1)', + '& .MuiOutlinedInput-root': { + backgroundColor: 'white !important', + '& fieldset': { + borderColor: 'rgba(0, 0, 0, 0.23)', + }, + '&:hover fieldset': { + borderColor: 'rgba(0, 0, 0, 0.87)', + }, + }, + '& .MuiInputBase-input': { + backgroundColor: 'white !important', + }, + }} + /> + + {/* Command Type Filter */} + + Command Type + + + + {/* Profiling Status Filter */} + + Profiling Status + + + + + {/* Action Buttons */} + + + + + + + + ); +}; + +export default ProfilingHeader; diff --git a/src/gprofiler/frontend/src/components/console/header/ProfilingTopPanel.jsx b/src/gprofiler/frontend/src/components/console/header/ProfilingTopPanel.jsx new file mode 100644 index 00000000..942c8e73 --- /dev/null +++ b/src/gprofiler/frontend/src/components/console/header/ProfilingTopPanel.jsx @@ -0,0 +1,330 @@ +import { Box, Button, Divider, Typography, FormControlLabel, Checkbox, Tooltip, TextField, Radio, RadioGroup, Accordion, AccordionSummary, AccordionDetails } from '@mui/material'; +import React from 'react'; + +import { COLORS } from '../../../theme/colors'; +import Flexbox from '../../common/layout/Flexbox'; + +const PanelDivider = () => ; + +const ProfilingTopPanel = ({ + selectionModel, + handleBulkAction, + fetchProfilingStatus, + filters, + loading, + rowsCount, + clearAllFilters, + enablePerfSpect, + onPerfSpectChange, + profilingFrequency, + onProfilingFrequencyChange, + maxProcesses, + onMaxProcessesChange, + profilerConfigs, + onProfilerConfigsChange, +}) => { + const hasActiveFilters = Object.values(filters).some((value) => value); + + // Helper function to handle profiler config changes + const handleProfilerConfigChange = (profilerKey, value) => { + onProfilerConfigsChange(prev => ({ + ...prev, + [profilerKey]: value + })); + }; + + // Profiler configuration definitions + const profilerDefinitions = [ + { + key: 'perf', + name: 'Perf Profiler', + description: 'C, C++, Go, Kernel', + options: [ + { value: 'enabled_restricted', label: 'Enabled Restricted', tooltip: 'Profiles only top N containers/process', default: true }, + { value: 'enabled_aggressive', label: 'Enabled Aggressive', tooltip: 'Profiles all processes' }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'async_profiler', + name: 'Async Profiler', + description: 'Java', + options: [ + { value: 'enabled', label: 'Enabled', default: true }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'pyperf', + name: 'Pyperf', + description: "Python's highly optimized eBPF. 
Exception - arm64 hosts", + options: [ + { value: 'enabled', label: 'Enabled', default: true }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'pyspy', + name: 'Pyspy', + description: 'Python', + options: [ + { value: 'enabled_fallback', label: 'Enabled as Fallback for Pyperf', default: true }, + { value: 'enabled', label: 'Enabled' }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'rbspy', + name: 'Rbspy', + description: 'Ruby', + options: [ + { value: 'enabled', label: 'Enabled', default: true }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'phpspy', + name: 'PHPspy', + description: 'PHP', + options: [ + { value: 'enabled', label: 'Enabled', default: true }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'dotnet_trace', + name: '.NET Trace', + description: '.NET', + options: [ + { value: 'enabled', label: 'Enabled', default: true }, + { value: 'disabled', label: 'Disabled' } + ] + }, + { + key: 'nodejs_perf', + name: 'Perf', + description: 'NodeJS', + options: [ + { value: 'enabled', label: 'Enabled', default: true }, + { value: 'disabled', label: 'Disabled' } + ] + } + ]; + + return ( + + + + {/* Left side - Action Buttons */} + }> + + + + + {/* PerfSpect Hardware Metrics Checkbox */} + + onPerfSpectChange(e.target.checked)} + size="small" + color="primary" + /> + } + label={ + + PerfSpect HW Metrics + + } + sx={{ ml: 1 }} + /> + + + {/* Profiling Frequency Field */} + + + + Profiling Frequency: + + { + const value = parseInt(e.target.value, 10); + if (!isNaN(value) && value > 0 && value <= 1000) { + onProfilingFrequencyChange(value); + } + }} + type="number" + size="small" + inputProps={{ + min: 1, + max: 1000, + style: { textAlign: 'center' } + }} + sx={{ + width: '70px', + '& .MuiOutlinedInput-root': { + height: '32px', + fontSize: '0.875rem' + } + }} + /> + + Hz + + + + + {/* Max Processes Field */} + + + + Max Processes: + + { + const value = parseInt(e.target.value, 10); + if (!isNaN(value) && value >= 0 && value <= 1000) { + onMaxProcessesChange(value); + } + }} + type="number" + size="small" + inputProps={{ + min: 0, + max: 1000, + style: { textAlign: 'center' } + }} + sx={{ + width: '70px', + '& .MuiOutlinedInput-root': { + height: '32px', + fontSize: '0.875rem' + } + }} + /> + + procs + + + + + + {/* Right side - Info and Clear Filters */} + }> + {hasActiveFilters && ( + + )} + + {rowsCount} hosts found + + + + + {/* Profiler Configuration Section */} + + + ▼} + sx={{ + backgroundColor: '#f5f5f5', + '& .MuiAccordionSummary-content': { + alignItems: 'center' + } + }} + > + + Click for Advanced Profiler Configuration + + + + + {profilerDefinitions.map((profiler) => ( + + + {profiler.name} + + + {profiler.description} + + handleProfilerConfigChange(profiler.key, e.target.value)} + > + {profiler.options.map((option) => ( + + } + label={ + + {option.label} + + } + sx={{ mb: 0.5 }} + /> + + ))} + + + ))} + + + + + + ); +}; + +export default ProfilingTopPanel; diff --git a/src/gprofiler/frontend/src/components/sideNavBar/SideNavBar.jsx b/src/gprofiler/frontend/src/components/sideNavBar/SideNavBar.jsx index f98fffa6..7a95494c 100644 --- a/src/gprofiler/frontend/src/components/sideNavBar/SideNavBar.jsx +++ b/src/gprofiler/frontend/src/components/sideNavBar/SideNavBar.jsx @@ -49,6 +49,13 @@ let navigationItems = [ icon: , selectedIcon: , }, + { + key: PAGES.profiling.key, + label: PAGES.profiling.label, + to: PAGES.profiling.to, + icon: , + selectedIcon: , + }, { key: PAGES.comparison.key, label: PAGES.comparison.label, 
@@ -92,8 +99,9 @@ const SideNavBar = ({ onSidebarToggle, isCollapsed }) => { + component={item.key === 'profiling' ? 'a' : Link} + href={item.key === 'profiling' ? '/profiling' : undefined} + to={item.key === 'profiling' ? undefined : item.to}> {location.pathname === item.to ? item.selectedIcon : item.icon} @@ -112,7 +120,7 @@ const SideNavBar = ({ onSidebarToggle, isCollapsed }) => { }> {PAGES.installation.label} diff --git a/src/gprofiler/frontend/src/router/routes.jsx b/src/gprofiler/frontend/src/router/routes.jsx index 1dc99e94..6604207e 100644 --- a/src/gprofiler/frontend/src/router/routes.jsx +++ b/src/gprofiler/frontend/src/router/routes.jsx @@ -16,6 +16,7 @@ */ } +import ProfilingStatusPage from '../components/console/ProfilingStatusPage'; import InstallationPage from '../components/installation/InstallationPage'; import OverviewPage from '../components/overview/OverviewPage'; import WelcomePage from '../components/welcome/WelcomePage'; @@ -25,4 +26,5 @@ export const ROUTES = [ { path: PAGES.overview.to, exact: false, component: }, { path: PAGES.welcome.to, exact: true, component: }, { path: PAGES.installation.to, exact: false, component: }, + { path: PAGES.profiling.to, exact: false, component: }, ]; diff --git a/src/gprofiler/frontend/src/utils/consts.js b/src/gprofiler/frontend/src/utils/consts.js index 768f1cdc..f8a64085 100644 --- a/src/gprofiler/frontend/src/utils/consts.js +++ b/src/gprofiler/frontend/src/utils/consts.js @@ -47,6 +47,11 @@ export const PAGES = { label: 'Login', to: '/login', }, + profiling: { + key: 'profiling', + label: 'Dynamic Profiling', + to: '/profiling', + }, }; export const EXTERNAL_URLS = { diff --git a/src/gprofiler/frontend/src/utils/datetimesUtils.js b/src/gprofiler/frontend/src/utils/datetimesUtils.js index a90075f1..f4953638 100644 --- a/src/gprofiler/frontend/src/utils/datetimesUtils.js +++ b/src/gprofiler/frontend/src/utils/datetimesUtils.js @@ -43,6 +43,7 @@ export const TIME_UNITS = { export const TIME_FORMATS = { DATETIME_BASIC: `yyyy-MM-dd'T'HH:mm:00`, DATETIME_PRINTED: 'dd/MM/yyyy HH:mm', + DATETIME_WITH_SECONDS: 'dd/MM/yyyy HH:mm:ss', DATE_BASIC: 'dd/MM/yyyy', TIME_24H: 'HH:mm', }; diff --git a/src/gprofiler/nginx/nginx.conf b/src/gprofiler/nginx/nginx.conf index 5c885a72..e75f3533 100644 --- a/src/gprofiler/nginx/nginx.conf +++ b/src/gprofiler/nginx/nginx.conf @@ -56,6 +56,12 @@ http { ssl_ecdh_curve secp384r1; + # Map for handling X-Forwarded-Proto from load balancer + map $http_x_forwarded_proto $forwarded_proto { + default $http_x_forwarded_proto; + "" $scheme; + } + upstream api { server unix:/tmp/mysite.sock; } @@ -76,7 +82,7 @@ http { gzip_types text/plain application/json image/x-icon application/javascript image/svg+xml text/html text/javascript; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; - proxy_set_header X-Forwarded-Proto $scheme; + proxy_set_header X-Forwarded-Proto $forwarded_proto; proxy_set_header Host $http_host; # we don't want nginx trying to do something clever with # redirects, we set the Host: header above already. 
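With the `map` block above, nginx passes through whatever `X-Forwarded-Proto` an upstream load balancer reports, and falls back to its own `$scheme` when the header is absent, so the backend always learns the original scheme. A quick smoke test of both paths (a sketch, assuming the stack is served at `https://localhost` with a self-signed certificate, hence `-k`):
```sh
# Direct request: no X-Forwarded-Proto header, so the map falls back to $scheme.
curl -sk -o /dev/null -w '%{http_code}\n' https://localhost/

# Simulated load-balancer hop: the header value is forwarded to the backend as-is.
curl -sk -o /dev/null -w '%{http_code}\n' -H 'X-Forwarded-Proto: https' https://localhost/
```
This pairs with the `--forwarded-allow-ips=*` flag added to the gunicorn command below, which tells gunicorn to trust `X-Forwarded-*` headers from any proxy address when reconstructing request URLs.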
diff --git a/src/gprofiler/requirements.txt b/src/gprofiler/requirements.txt index 91cf0091..49748ad6 100644 --- a/src/gprofiler/requirements.txt +++ b/src/gprofiler/requirements.txt @@ -24,3 +24,4 @@ setproctitle==1.3.3 fastapi==0.109.0 uvicorn==0.27.0 gunicorn==23.0.0 +slack_sdk==3.36.0 diff --git a/src/gprofiler/run.sh b/src/gprofiler/run.sh index a10918fe..d9241b47 100755 --- a/src/gprofiler/run.sh +++ b/src/gprofiler/run.sh @@ -48,6 +48,7 @@ gunicorn_cmd_line=" --workers=${GUNICORN_PROCESS_COUNT} \ --max-requests-jitter=1000 \ --timeout=300 \ --preload \ + --forwarded-allow-ips=* \ --log-level=${GUNICORN_LOG_LEVEL} \ --pid=${gunicorn_pid_file}" diff --git a/src/gprofiler_flamedb_rest/handlers/handlers.go b/src/gprofiler_flamedb_rest/handlers/handlers.go index 85e53b37..7657b50e 100644 --- a/src/gprofiler_flamedb_rest/handlers/handlers.go +++ b/src/gprofiler_flamedb_rest/handlers/handlers.go @@ -291,7 +291,7 @@ func (h Handlers) GetMetricsCpuTrends(c *gin.Context) { } func (h Handlers) GetLastHTML(c *gin.Context) { - params, query, err := parseParams(common.MetricsLastHTMLParams{}, nil, c) + params, query, err := parseParams(common.MetricsLastHTMLParams{}, MetricsQueryParser, c) if err != nil { return } diff --git a/src/gprofiler_indexer/args.go b/src/gprofiler_indexer/args.go index 0ed18bae..ca1bc0fe 100644 --- a/src/gprofiler_indexer/args.go +++ b/src/gprofiler_indexer/args.go @@ -57,7 +57,7 @@ func (ca *CLIArgs) ParseArgs() { flag.StringVar(&ca.SQSQueue, "sqs-queue", LookupEnvOrString("SQS_QUEUE_URL", ca.SQSQueue), "SQS Queue name to listen") flag.StringVar(&ca.S3Bucket, "s3-bucket", LookupEnvOrString("S3_BUCKET", ca.S3Bucket), "S3 bucket name") - flag.StringVar(&ca.AWSEndpoint, "aws-endpoint", LookupEnvOrString("S3_ENDPOINT", ca.AWSEndpoint), "AWS Endpoint URL") + flag.StringVar(&ca.AWSEndpoint, "aws-endpoint", LookupEnvOrString("AWS_ENDPOINT_URL", ca.AWSEndpoint), "AWS Endpoint URL") flag.StringVar(&ca.AWSRegion, "aws-region", LookupEnvOrString("AWS_REGION", ca.AWSRegion), "AWS Region") flag.StringVar(&ca.ClickHouseAddr, "clickhouse-addr", LookupEnvOrString("CLICKHOUSE_ADDR", ca.ClickHouseAddr), "ClickHouse address like 127.0.0.1:9000") diff --git a/src/gprofiler_indexer/callstacks.go b/src/gprofiler_indexer/callstacks.go index 852364ef..936a5b4a 100644 --- a/src/gprofiler_indexer/callstacks.go +++ b/src/gprofiler_indexer/callstacks.go @@ -222,7 +222,10 @@ func (pw *ProfilesWriter) writeMetrics(serviceId uint32, instanceType string, MemoryAverageUsedPercent: memoryAverageUsedPercent, HTMLPath: path, } + log.Debugf("DEBUG: Sending metric record to channel - ServiceId=%d, HostName=%s, HTMLPath=%s", + serviceId, hostname, path) pw.metricsRecords <- metricRecord + log.Debugf("DEBUG: Metric record sent to channel successfully") } func (pw *ProfilesWriter) ParseStackFrameFile(sess *session.Session, task SQSMessage, s3bucket string, @@ -290,10 +293,17 @@ func (pw *ProfilesWriter) ParseStackFrameFile(sess *session.Session, task SQSMes } } + // DEBUG: Log the condition values + log.Debugf("DEBUG: hostname=%s, htmlBlobPath='%s', CPUAvg=%f, MemoryAvg=%f", + fileInfo.Metadata.Hostname, htmlBlobPath, fileInfo.Metrics.CPUAvg, fileInfo.Metrics.MemoryAvg) + if htmlBlobPath != "" || (fileInfo.Metrics.CPUAvg != 0 && fileInfo.Metrics.MemoryAvg != 0) { + log.Debugf("DEBUG: Writing metrics for hostname=%s", fileInfo.Metadata.Hostname) pw.writeMetrics(uint32(serviceId), fileInfo.Metadata.CloudInfo.InstanceType, fileInfo.Metadata.Hostname, timestamp, fileInfo.Metrics.CPUAvg, fileInfo.Metrics.MemoryAvg, 
htmlBlobPath) + } else { + log.Debugf("DEBUG: SKIPPING metrics write for hostname=%s - condition failed", fileInfo.Metadata.Hostname) } return nil diff --git a/src/gprofiler_indexer/main.go b/src/gprofiler_indexer/main.go index 0c5dae2e..a0b0054a 100644 --- a/src/gprofiler_indexer/main.go +++ b/src/gprofiler_indexer/main.go @@ -46,6 +46,7 @@ func main() { args.ParseArgs() logger.Infof("Starting %s", AppName) + tasks := make(chan SQSMessage, args.Concurrency) channels := RecordChannels{ StacksRecords: make(chan StackRecord, args.ClickHouseStacksBatchSize), @@ -107,5 +108,6 @@ func main() { }() buffWriterWaitGroup.Wait() + logger.Info("Graceful shutdown") } diff --git a/src/gprofiler_indexer/queue.go b/src/gprofiler_indexer/queue.go index f0b054c1..aef38543 100644 --- a/src/gprofiler_indexer/queue.go +++ b/src/gprofiler_indexer/queue.go @@ -65,6 +65,8 @@ func ListenSqs(ctx context.Context, args *CLIArgs, ch chan<- SQSMessage, wg *syn if err != nil { logger.Errorf("Got an error getting the queue URL: %v", err) + + // SLI Metric: SQS queue URL resolution failure (infrastructure error - counts against SLO) return } @@ -81,6 +83,8 @@ func ListenSqs(ctx context.Context, args *CLIArgs, ch chan<- SQSMessage, wg *syn }) if recvErr != nil { logger.Error(recvErr) + + // SLI Metric: SQS message receive failure (infrastructure error - counts against SLO) continue } @@ -88,7 +92,24 @@ func ListenSqs(ctx context.Context, args *CLIArgs, ch chan<- SQSMessage, wg *syn var sqsMessage SQSMessage parseErr := json.Unmarshal([]byte(*message.Body), &sqsMessage) if parseErr != nil { - logger.Errorf("Error while parsing %v", parseErr) + logger.Errorf("Error while parsing SQS message body: %v", parseErr) + + // SLI Metric: SQS message parse failure (client error - malformed JSON) + + // Delete malformed messages to prevent infinite retry loop + // This is a permanent client error that won't be fixed by retrying + svc := sqs.New(sess) + _, deleteErr := svc.DeleteMessage(&sqs.DeleteMessageInput{ + QueueUrl: urlResult.QueueUrl, + ReceiptHandle: message.ReceiptHandle, + }) + if deleteErr != nil { + logger.Errorf("Failed to delete malformed message: %v", deleteErr) + + // SLI Metric: SQS message deletion failure (infrastructure error - counts against SLO) + } else { + logger.Warnf("Deleted malformed SQS message to prevent reprocessing") + } continue } sqsMessage.QueueURL = *urlResult.QueueUrl diff --git a/src/gprofiler_indexer/rule/comprehensive_optimization_global_all_services.sql b/src/gprofiler_indexer/rule/comprehensive_optimization_global_all_services.sql new file mode 100644 index 00000000..e69de29b diff --git a/src/gprofiler_indexer/sql/FlameDB_ClickHouse_Schema_README.md b/src/gprofiler_indexer/sql/FlameDB_ClickHouse_Schema_README.md new file mode 100644 index 00000000..2c8d22d5 --- /dev/null +++ b/src/gprofiler_indexer/sql/FlameDB_ClickHouse_Schema_README.md @@ -0,0 +1,415 @@ +# FlameDB ClickHouse Cluster Schema Documentation + +## Overview + +FlameDB uses a distributed ClickHouse cluster architecture to store and analyze profiling data at scale. The schema implements a tiered storage approach with multiple aggregation levels to optimize both storage efficiency and query performance. + +## Architecture Components + +### 1. 
ClickHouse Cluster Setup + +The system runs on a **ClickHouse cluster** consisting of multiple nodes: +- **Sharding**: Data is distributed across nodes using hash-based partitioning +- **Replication**: Each shard is replicated for high availability +- **Distributed Tables**: Virtual tables that route queries across all cluster nodes + +### 2. Table Architecture Pattern + +Each data type follows a consistent 3-layer pattern: + +``` +┌─────────────────────┐ +│ Distributed Table │ ← Virtual table (routes queries) +│ (flamedb.samples) │ +└─────────────────────┘ + │ + ▼ +┌─────────────────────┐ +│ Local Tables │ ← Physical tables (store actual data) +│ (samples_local) │ +└─────────────────────┘ + │ + ▼ +┌─────────────────────┐ +│ Materialized Views │ ← Auto-aggregation (where applicable) +│ (samples_1hour_mv) │ +└─────────────────────┘ +``` + +## Schema Tables + +### Raw Data Storage + +#### `flamedb.samples` (Distributed) → `flamedb.samples_local` (Physical) +- **Purpose**: Stores raw profiling stack traces +- **Partitioning**: By date (`toYYYYMMDD(Timestamp)`) +- **Sharding Key**: `CallStackHash` +- **Retention**: 7 days (configurable) +- **Columns**: + - `Timestamp`: When the sample was collected + - `ServiceId`: Application/service identifier + - `HostName`, `ContainerName`: Runtime environment + - `CallStackHash`, `CallStackName`, `CallStackParent`: Stack trace data + - `NumSamples`: Sample count + +#### `flamedb.metrics` (Distributed) → `flamedb.metrics_local` (Physical) +- **Purpose**: Stores CPU and memory metrics per host +- **Sharding Key**: `ServiceId` +- **Retention**: 90 days (recommended) +- **Columns**: + - `CPUAverageUsedPercent`, `MemoryAverageUsedPercent`: Resource usage + - `HostName`, `InstanceType`: Host information + +### Aggregated Data (Materialized Views) + +The schema creates multiple aggregation levels to optimize query performance: + +#### 1-Minute Aggregation +- **Table**: `flamedb.samples_1min` → `flamedb.samples_1min_local` +- **Purpose**: Time-series charts and recent analysis +- **Aggregation**: Root stack frames only (`CallStackParent = 0`) +- **Retention**: 30 days (recommended) + +#### 1-Hour Aggregation +- **Tables**: + - `flamedb.samples_1hour` → `flamedb.samples_1hour_local_store` + - `flamedb.samples_1hour_all` → `flamedb.samples_1hour_all_local_store` +- **Purpose**: Medium-term trend analysis +- **Variants**: + - `samples_1hour`: Maintains host/container breakdown + - `samples_1hour_all`: Aggregates across all hosts/containers +- **Retention**: 90 days (recommended) + +#### 1-Day Aggregation +- **Tables**: + - `flamedb.samples_1day` → `flamedb.samples_1day_local_store` + - `flamedb.samples_1day_all` → `flamedb.samples_1day_all_local_store` +- **Purpose**: Long-term historical analysis +- **Variants**: Same as 1-hour (with/without host breakdown) +- **Retention**: 365 days (recommended) + +## Data Flow and Materialized Views + +### How Materialized Views Work + +**Materialized Views** in ClickHouse act as **real-time data transformation pipelines** that automatically aggregate data from `samples_local` whenever new data is inserted. 
They provide: + +- **Immediate execution**: Views trigger on every INSERT to `samples_local` +- **Atomic operations**: Each INSERT processes all views in the same transaction +- **Automatic aggregation**: No batch processing delays - aggregation happens instantly + +### Data Propagation Flow + +``` +┌─────────────────┐ +│ samples_local │ ← Raw profiling data inserted here +│ (Raw Storage) │ (CallStacks, NumSamples, Timestamps, etc.) +└─────────┬───────┘ + │ + │ Every INSERT triggers ALL materialized views simultaneously + │ + ├─────────────────────────────────────────────────────────────┐ + │ │ + ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ +│samples_1min_mv │ │samples_1hour_all│ +│ │ │ _local │ +│ SELECT │ │ │ +│ toStartOfMinute │ │ SELECT │ +│ sum(NumSamples) │ │ toStartOfHour │ +│ WHERE Parent=0 │ │ sum(NumSamples) │ +│ GROUP BY... │ │ GROUP BY... │ +└─────────┬───────┘ └─────────┬───────┘ + │ │ + ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ +│samples_1min │ │samples_1hour_all│ +│ _local │ │ _local_store │ +│ (30-day TTL) │ │ (90-day TTL) │ +└─────────────────┘ └─────────────────┘ + + ├─────────────────────────────────────────────────────────────┐ + │ │ + ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ +│samples_1hour │ │samples_1day_all │ +│ _local │ │ _local │ +│ │ │ │ +│ SELECT │ │ SELECT │ +│ toStartOfHour │ │ toStartOfDay │ +│ sum(NumSamples) │ │ sum(NumSamples) │ +│ GROUP BY Host.. │ │ GROUP BY... │ +└─────────┬───────┘ └─────────┬───────┘ + │ │ + ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ +│samples_1hour │ │samples_1day_all │ +│ _local_store │ │ _local_store │ +│ (90-day TTL) │ │ (365-day TTL) │ +└─────────────────┘ └─────────────────┘ + + │ + ▼ +┌─────────────────┐ +│samples_1day │ +│ _local │ +│ │ +│ SELECT │ +│ toStartOfDay │ +│ sum(NumSamples) │ +│ GROUP BY Host.. │ +└─────────┬───────┘ + │ + ▼ +┌─────────────────┐ +│samples_1day │ +│ _local_store │ +│ (365-day TTL) │ +└─────────────────┘ +``` + +### Materialized View Details + +#### 1. **1-Minute Aggregation** (`samples_1min_mv`) +- **Purpose**: Time-series charts and recent analysis +- **Filter**: Root stack frames only (`WHERE CallStackParent = 0`) +- **Aggregation**: Groups by service, host, container per minute +- **Storage**: `SummingMergeTree` engine automatically merges duplicate rows + +#### 2. **1-Hour Aggregations** +- **Per-Host** (`samples_1hour_local`): Maintains host/container breakdown +- **All-Hosts** (`samples_1hour_all_local`): Aggregates across all infrastructure +- **Aggregation**: Groups by service, call stack, time bucket +- **Use Case**: Medium-term trend analysis + +#### 3. 
**1-Day Aggregations** +- **Per-Host** (`samples_1day_local`): Maintains host/container breakdown +- **All-Hosts** (`samples_1day_all_local`): Aggregates across all infrastructure +- **Aggregation**: Groups by service, call stack, daily buckets +- **Use Case**: Long-term historical analysis + +### Key Aggregation Functions + +```sql +-- Time bucketing +toStartOfMinute(Timestamp) -- Rounds to minute boundary +toStartOfHour(Timestamp) -- Rounds to hour boundary +toStartOfDay(Timestamp) -- Rounds to day boundary + +-- Sample aggregation +sum(NumSamples) -- Combines sample counts +sum(ErrNumSamples) -- Combines error counts + +-- Metadata preservation +any(CallStackName) -- Keeps representative value +anyLast(InsertionTimestamp) -- Keeps latest insertion time +``` + +### Example Materialized View SQL + +#### 1-Minute Aggregation (Root Frames Only) +```sql +CREATE MATERIALIZED VIEW flamedb.samples_1min_mv TO flamedb.samples_1min_local +AS SELECT + toStartOfMinute(Timestamp) AS Timestamp, + ServiceId, + InstanceType, + ContainerEnvName, + HostName, + ContainerName, + sum(NumSamples) AS NumSamples, + HostNameHash, + ContainerNameHash, + anyLast(InsertionTimestamp) as InsertionTimestamp +FROM flamedb.samples_local +WHERE CallStackParent = 0 -- Only root stack frames for time-series +GROUP BY ServiceId, InstanceType, ContainerEnvName, HostName, ContainerName, + HostNameHash, ContainerNameHash, Timestamp; +``` + +#### 1-Hour Aggregation (All Hosts Combined) +```sql +CREATE MATERIALIZED VIEW flamedb.samples_1hour_all_local TO flamedb.samples_1hour_all_local_store +AS SELECT + toStartOfHour(Timestamp) AS Timestamp, + ServiceId, + CallStackHash, + any(CallStackName) as CallStackName, + any(CallStackParent) as CallStackParent, + sum(NumSamples) as NumSamples, + sum(ErrNumSamples) as ErrNumSamples, + anyLast(InsertionTimestamp) as InsertionTimestamp +FROM flamedb.samples_local +GROUP BY ServiceId, CallStackHash, Timestamp; +``` + +### Storage Efficiency Benefits + +The materialized view architecture provides significant storage optimization: + +- **Raw data** (`samples_local`): ~95% of storage, 7-day retention +- **1-minute aggregated**: ~3% of storage, 30-day retention +- **1-hour aggregated**: ~1.5% of storage, 90-day retention +- **1-day aggregated**: ~0.5% of storage, 365-day retention + +**Total storage reduction**: ~95% while maintaining full query capabilities across different time ranges and granularities. + +### Real-time Processing Guarantees + +1. **Atomicity**: All materialized views process within the same transaction as the original INSERT +2. **Consistency**: No data loss or partial aggregation states +3. **Immediate availability**: Aggregated data is queryable instantly after INSERT completes +4. 
**Automatic deduplication**: `SummingMergeTree` engine handles duplicate entries by summing `NumSamples` + +## Query Routing Logic + +The FlameDB REST API intelligently selects tables based on time range and resolution: + +### Table Selection Rules +- **Raw data** (`samples`): Recent detailed queries (< 1 hour or when `resolution=raw`) +- **1-hour aggregation** (`samples_1hour`): Medium-term queries (1-24 hours) +- **1-day aggregation** (`samples_1day`): Long-term queries (> 24 hours) +- **Historical** (`samples_1day_all`): Very old data (> 14 days retention period) + +### Multi-Range Queries +For queries spanning multiple time periods, the API splits requests: +``` +24-hour query example: +├─→ First hour: Use raw data (samples) +├─→ Middle 22 hours: Use 1-hour aggregation (samples_1hour) +└─→ Last hour: Use raw data (samples) +``` + +## TTL (Time To Live) Strategy + +### Recommended Retention Policies + +```sql +-- Raw data: High volume, short retention +ALTER TABLE flamedb.samples_local ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 7 DAY; + +-- 1-minute aggregation: Medium volume, medium retention +ALTER TABLE flamedb.samples_1min_local ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 30 DAY; + +-- 1-hour aggregations: Lower volume, longer retention +ALTER TABLE flamedb.samples_1hour_local_store ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 90 DAY; +ALTER TABLE flamedb.samples_1hour_all_local_store ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 90 DAY; + +-- 1-day aggregations: Lowest volume, longest retention +ALTER TABLE flamedb.samples_1day_local_store ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 365 DAY; +ALTER TABLE flamedb.samples_1day_all_local_store ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 365 DAY; + +-- Metrics: Resource usage trends +ALTER TABLE flamedb.metrics_local ON CLUSTER '{cluster}' +MODIFY TTL Timestamp + INTERVAL 90 DAY; + +-- Optional: Add TTL for deletion of old partitions (more aggressive cleanup) +-- This will delete entire partitions when they expire, which is more efficient +-- ALTER TABLE flamedb.samples_local ON CLUSTER '{cluster}' MODIFY TTL Timestamp + INTERVAL 7 DAY DELETE; +``` + +### Storage Efficiency Benefits +- **Raw data**: ~95% of storage, 7-day retention +- **1-hour aggregated**: ~4% of storage, 90-day retention +- **1-day aggregated**: ~1% of storage, 365-day retention + +This provides: +- Detailed recent analysis (7 days of raw data) +- Medium-term trends (90 days of hourly data) +- Long-term historical analysis (1 year of daily data) + +## Cluster Configuration + +### Replication +Tables use `ReplicatedMergeTree` engine: +```sql +ENGINE = ReplicatedMergeTree( + '/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', + '{replica}' +) +``` + +### Sharding +Data distribution across cluster nodes: +- **samples**: Sharded by `CallStackHash` (evenly distributes stack traces) +- **metrics**: Sharded by `ServiceId` (groups service metrics together) + +### Partitioning +All tables are partitioned by date (`toYYYYMMDD(Timestamp)`) for: +- Efficient TTL deletion (drops entire partitions) +- Improved query performance (partition pruning) +- Parallel processing across time ranges + +## Usage Examples + +### Querying Recent Data (< 1 hour) +```sql +-- Uses raw data automatically +SELECT CallStackName, sum(NumSamples) +FROM flamedb.samples +WHERE ServiceId = 123 + AND Timestamp >= now() - INTERVAL 30 MINUTE +GROUP BY CallStackName; +``` + +### Querying Historical Trends (> 1 day) 
+```sql
+-- Uses 1-day aggregation automatically
+SELECT toDate(Timestamp), sum(NumSamples)
+FROM flamedb.samples_1day
+WHERE ServiceId = 123
+  AND Timestamp >= now() - INTERVAL 30 DAY
+GROUP BY toDate(Timestamp);
+```
+
+### Checking Data Retention
+```sql
+-- Check oldest available data
+SELECT min(Timestamp) as oldest_data FROM flamedb.samples;
+
+-- Check TTL configuration (TTL is stored as part of each table definition)
+SELECT database, name, create_table_query
+FROM system.tables
+WHERE database = 'flamedb';
+```
+
+## Troubleshooting
+
+### Missing Historical Data
+If data older than 7 days is missing from `flamedb.samples`:
+1. **Check TTL settings**: Raw data has 7-day retention by default
+2. **Use aggregated tables**: Query `samples_1hour` or `samples_1day` for older data
+3. **Verify cluster health**: Ensure all nodes are operational
+
+### Query Performance
+- **Recent data**: Query `samples` (distributed table)
+- **Medium-term analysis**: Query `samples_1hour`
+- **Long-term trends**: Query `samples_1day`
+- **Cross-host analysis**: Use `_all` variants for pre-aggregated data
+
+### Storage Monitoring
+```sql
+-- Check table sizes
+SELECT
+    database, table,
+    sum(bytes_on_disk) as size_bytes,
+    count(*) as partitions
+FROM system.parts
+WHERE database = 'flamedb' AND active = 1
+GROUP BY database, table
+ORDER BY size_bytes DESC;
+```
+
+## Best Practices
+
+1. **Use appropriate aggregation level** for your query time range
+2. **Apply TTL to local tables only** (not distributed tables)
+3. **Monitor storage usage** and adjust retention as needed
+4. **Partition by date** for efficient TTL and query performance
+5. **Shard by high-cardinality fields** for even distribution
diff --git a/src/gprofiler_indexer/sql/create_ch_schema_cluster_mode.sql b/src/gprofiler_indexer/sql/create_ch_schema_cluster_mode.sql
index 02c5054c..9e3a127e 100644
--- a/src/gprofiler_indexer/sql/create_ch_schema_cluster_mode.sql
+++ b/src/gprofiler_indexer/sql/create_ch_schema_cluster_mode.sql
@@ -274,3 +274,28 @@
 CREATE TABLE IF NOT EXISTS flamedb.samples_1min ON CLUSTER '{cluster}'
 AS flamedb.samples_1min_local
 ENGINE = Distributed('{cluster}', flamedb, samples_1min_local);
+
+-- Raw data: High volume, short retention
+ALTER TABLE flamedb.samples_local ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 7 DAY;
+
+-- 1-minute aggregation: Medium volume, medium retention
+ALTER TABLE flamedb.samples_1min_local ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 30 DAY;
+
+-- 1-hour aggregations: Lower volume, longer retention
+ALTER TABLE flamedb.samples_1hour_local_store ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 90 DAY;
+ALTER TABLE flamedb.samples_1hour_all_local_store ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 90 DAY;
+
+-- 1-day aggregations: Lowest volume, longest retention
+ALTER TABLE flamedb.samples_1day_local_store ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 365 DAY;
+ALTER TABLE flamedb.samples_1day_all_local_store ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 365 DAY;
+
+-- Metrics: Resource usage trends
+ALTER TABLE flamedb.metrics_local ON CLUSTER '{cluster}'
+MODIFY TTL Timestamp + INTERVAL 90 DAY;
+
diff --git a/src/gprofiler_indexer/worker.go b/src/gprofiler_indexer/worker.go
index 90c10573..2c0f5cf1 100644
--- a/src/gprofiler_indexer/worker.go
+++ b/src/gprofiler_indexer/worker.go
@@ -29,6 +29,17 @@ import (
 	log "github.com/sirupsen/logrus"
 )
 
+// deleteMessageWithMetrics handles SQS message deletion and SLI metric tracking for failures
+func deleteMessageWithMetrics(sess 
*session.Session, task SQSMessage) { + errDelete := deleteMessage(sess, task.QueueURL, task.MessageHandle) + if errDelete != nil { + log.Errorf("Unable to delete message from %s, err %v", task.QueueURL, errDelete) + + // SLI Metric: SQS delete failure (server error - counts against SLO) + // The event was processed but we couldn't clean up + } +} + func Worker(workerIdx int, args *CLIArgs, tasks <-chan SQSMessage, pw *ProfilesWriter, wg *sync.WaitGroup) { var buf []byte var err error @@ -41,24 +52,27 @@ func Worker(workerIdx int, args *CLIArgs, tasks <-chan SQSMessage, pw *ProfilesW } if args.AWSEndpoint != "" { sessionOptions.Config = aws.Config{ - Region: aws.String(args.AWSRegion), - Endpoint: aws.String(args.AWSEndpoint), + Region: aws.String(args.AWSRegion), + Endpoint: aws.String(args.AWSEndpoint), + S3ForcePathStyle: aws.Bool(true), } } sess := session.Must(session.NewSessionWithOptions(sessionOptions)) for task := range tasks { useSQS := task.Service != "" - log.Debugf("got new file %s from %d", task.Filename, task.ServiceId) + serviceName := task.Service + log.Debugf("got new file %s from service %s (ID: %d)", task.Filename, serviceName, task.ServiceId) + if useSQS { fullPath := fmt.Sprintf("products/%s/stacks/%s", task.Service, task.Filename) buf, err = GetFileFromS3(sess, args.S3Bucket, fullPath) if err != nil { log.Errorf("Error while fetching file from S3: %v", err) - errDelete := deleteMessage(sess, task.QueueURL, task.MessageHandle) - if errDelete != nil { - log.Errorf("Unable to delete message from %s, err %v", task.QueueURL, errDelete) - } + // SLI Metric: S3 fetch failure (server error - counts against SLO) + + // Delete message from SQS after unsuccessful S3 fetch + deleteMessageWithMetrics(sess, task) continue } temp = strings.Split(task.Filename, "_")[0] @@ -69,6 +83,7 @@ func Worker(workerIdx int, args *CLIArgs, tasks <-chan SQSMessage, pw *ProfilesW temp = strings.Join(tokens[:3], ":") } } + layout := ISODateTimeFormat timestamp, tsErr := time.Parse(layout, temp) log.Debugf("parsed timestamp is: %v", timestamp) @@ -76,16 +91,26 @@ func Worker(workerIdx int, args *CLIArgs, tasks <-chan SQSMessage, pw *ProfilesW log.Debugf("Unable to fetch timestamp from filename %s, fallback to the current time", temp) timestamp = time.Now().UTC() } + + // Parse stack frame file and write to ClickHouse err := pw.ParseStackFrameFile(sess, task, args.S3Bucket, timestamp, buf) if err != nil { log.Errorf("Error while parsing stack frame file: %v", err) + + // SLI Metric: Parse event failure or write profile to column DB failure (server error - counts against SLO) + if useSQS { + + // Delete message from SQS after unsuccessful parse/write into column DB + deleteMessageWithMetrics(sess, task) + } + continue } + // Delete message from SQS after successful processing if useSQS { - errDelete := deleteMessage(sess, task.QueueURL, task.MessageHandle) - if errDelete != nil { - log.Errorf("Unable to delete message from %s, err %v", task.QueueURL, errDelete) - } + deleteMessageWithMetrics(sess, task) + + // SLI Metric: Success! 
Event processed completely
 }
 }
 log.Debugf("Worker %d finished", workerIdx)
 }
diff --git a/src/tests/.dockerignore b/src/tests/.dockerignore
new file mode 100644
index 00000000..58a6faf6
--- /dev/null
+++ b/src/tests/.dockerignore
@@ -0,0 +1,13 @@
+# Docker related
+Dockerfile
+.dockerignore
+
+# Pytest related
+__pycache__/
+.pytest_cache/
+
+# Environment variables related
+.env
+
+# Documentation related
+README.md
\ No newline at end of file
diff --git a/src/tests/.env b/src/tests/.env
new file mode 100644
index 00000000..05c09a13
--- /dev/null
+++ b/src/tests/.env
@@ -0,0 +1,13 @@
+TEST_BACKEND=True
+TEST_DB=True
+TEST_MANAGED_BACKEND=True
+TEST_MANAGED_DB=True
+BACKEND_URL=https://localhost
+BACKEND_PORT=4433
+BACKEND_USER=test-user
+BACKEND_PASSWORD=tester123
+POSTGRES_USER=performance_studio
+POSTGRES_PASSWORD=performance_studio_password
+POSTGRES_DB=performance_studio
+POSTGRES_PORT=5432
+POSTGRES_HOST=localhost
\ No newline at end of file
diff --git a/src/tests/Dockerfile b/src/tests/Dockerfile
new file mode 100644
index 00000000..1a1cf635
--- /dev/null
+++ b/src/tests/Dockerfile
@@ -0,0 +1,21 @@
+# Minimal Dockerfile for gprofiler-performance-studio test runner
+FROM python:3.11-slim
+
+# Set working directory
+WORKDIR /app
+
+# Install system dependencies required for psycopg2
+RUN apt-get update && apt-get install -y \
+    libpq-dev \
+    gcc \
+    && rm -rf /var/lib/apt/lists/*
+
+# Copy requirements and install Python dependencies
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy test files and scripts
+COPY . .
+
+# Default command to run tests
+CMD ["python", "run_tests.py"]
\ No newline at end of file
diff --git a/src/tests/README.md b/src/tests/README.md
new file mode 100644
index 00000000..8e881f33
--- /dev/null
+++ b/src/tests/README.md
@@ -0,0 +1,35 @@
+# Info
+This is the implementation of the tests used by the [test environment](../../test/README.md).
+
+* The test framework used is **pytest**;
+* Implemented tests include:
+  * Unit;
+  * Integration;
+  * E2E (end-to-end);
+
+## Running tests locally
+1 - Create venv
+```sh
+python3 -m venv .venv
+```
+```sh
+source .venv/bin/activate
+```
+2 - Install requirements
+```sh
+pip install --no-cache-dir -r requirements.txt
+```
+3 - Run tests
+```sh
+python run_tests.py --test-path integration/
+```
+
+## Running tests in a container
+1 - Build container image
+```sh
+docker build -t gprofiler-test . 
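+# Note: the build context is src/tests; the Dockerfile above installs
+# libpq-dev and gcc so that psycopg2 from requirements.txt can build.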
+
+## Running tests in a container
+1 - Build the container image
+```sh
+docker build -t gprofiler-test .
+```
+2 - Start the test container in attached mode
+```sh
+docker run --rm -t --network host --env-file .env gprofiler-test python run_tests.py --test-path integration/
+```
\ No newline at end of file
diff --git a/src/tests/conftest.py b/src/tests/conftest.py
new file mode 100644
index 00000000..3eabe77f
--- /dev/null
+++ b/src/tests/conftest.py
@@ -0,0 +1,115 @@
+import base64
+
+import pytest
+
+
+def pytest_addoption(parser):
+    """Add custom command line options for pytest."""
+
+    # Test flags
+    parser.addoption(
+        "--test-backend",
+        action="store",
+        default="True",
+        help="Enable or disable backend tests",
+    )
+
+    parser.addoption(
+        "--test-db",
+        action="store",
+        default="True",
+        help="Enable or disable database tests",
+    )
+
+    parser.addoption(
+        "--test-managed-backend",
+        action="store",
+        default="True",
+        help="Enable or disable managed backend tests",
+    )
+
+    parser.addoption(
+        "--test-managed-db",
+        action="store",
+        default="True",
+        help="Enable or disable managed database tests",
+    )
+
+    # Backend configuration
+    parser.addoption(
+        "--backend-url",
+        action="store",
+        default="http://localhost",
+        help="Backend URL for testing",
+    )
+
+    parser.addoption(
+        "--backend-port",
+        action="store",
+        type=int,
+        default=8080,
+        help="Backend port for testing",
+    )
+
+    parser.addoption(
+        "--backend-user",
+        action="store",
+        default="test-user",
+        help="Backend username for authentication",
+    )
+
+    parser.addoption(
+        "--backend-password",
+        action="store",
+        default="tester123",
+        help="Backend password for authentication",
+    )
+
+    # PostgreSQL configuration
+    parser.addoption(
+        "--postgres-user",
+        action="store",
+        default="performance_studio",
+        help="PostgreSQL username",
+    )
+
+    parser.addoption(
+        "--postgres-password",
+        action="store",
+        default="performance_studio_password",
+        help="PostgreSQL password",
+    )
+
+    parser.addoption(
+        "--postgres-db",
+        action="store",
+        default="performance_studio_db",
+        help="PostgreSQL database name",
+    )
+
+    parser.addoption(
+        "--postgres-port",
+        action="store",
+        type=int,
+        default=5432,
+        help="PostgreSQL port",
+    )
+
+    parser.addoption("--postgres-host", action="store", default="localhost", help="PostgreSQL host")
+
+
+@pytest.fixture(scope="session")
+def backend_base_url(pytestconfig) -> str:
+    """Get backend base URL from pytest config."""
+    backend_url = pytestconfig.getoption("--backend-url", default="http://localhost")
+    backend_port = pytestconfig.getoption("--backend-port", default=8080)
+    return f"{backend_url}:{backend_port}"
+
+
+@pytest.fixture(scope="session")
+def credentials(pytestconfig) -> dict[str, str]:
+    """Get credentials from pytest config."""
+    username = pytestconfig.getoption("--backend-user", default="test-user")
+    password = pytestconfig.getoption("--backend-password", default="tester123")
+    basic_auth = base64.b64encode(f"{username}:{password}".encode()).decode()
+    return {"Authorization": f"Basic {basic_auth}"}
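These session-scoped fixtures are consumed like any other pytest fixtures. As a quick, hypothetical illustration (the `/api/health` path is an assumed endpoint name, not something this diff defines), a smoke test combining them might look like:

```python
# Hypothetical smoke test built on the conftest.py fixtures above.
# The /api/health endpoint name is an assumption for illustration only.
import requests


def test_backend_is_reachable(backend_base_url: str, credentials: dict) -> None:
    # backend_base_url resolves to e.g. "https://localhost:4433" and
    # credentials carries the Basic auth header assembled by the fixture.
    response = requests.get(
        f"{backend_base_url}/api/health",
        headers=credentials,
        timeout=10,
        verify=False,  # the dev stack typically runs with a self-signed certificate
    )
    assert response.status_code == 200
```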
diff --git a/src/tests/integration/backend_db/test_create_profile_request.py b/src/tests/integration/backend_db/test_create_profile_request.py
new file mode 100644
index 00000000..9524693a
--- /dev/null
+++ b/src/tests/integration/backend_db/test_create_profile_request.py
@@ -0,0 +1,1744 @@
+#!/usr/bin/env python3
+"""
+Integration test for profile request creation and database verification.
+
+This module contains integration tests that validate the complete flow from API request
+to database storage, including:
+1. Making API calls to create profile requests
+2. Verifying database entries in PostgreSQL tables
+3. Testing the full end-to-end flow from API to database
+4. Validating data consistency between API responses and database state
+"""
+
+import uuid
+from datetime import datetime
+from typing import Any, Dict, List, Optional
+
+import psycopg2
+import psycopg2.extras
+import pytest
+import requests
+
+
+@pytest.fixture(scope="session")
+def postgres_connection(pytestconfig):
+    """Create a PostgreSQL connection for database verification."""
+    # Get PostgreSQL configuration from pytest config (fallbacks mirror the conftest.py defaults)
+    postgres_user = pytestconfig.getoption("--postgres-user", default="performance_studio")
+    postgres_password = pytestconfig.getoption("--postgres-password", default="performance_studio_password")
+    postgres_host = pytestconfig.getoption("--postgres-host", default="localhost")
+    postgres_port = pytestconfig.getoption("--postgres-port", default=5432)
+    postgres_db = pytestconfig.getoption("--postgres-db", default="performance_studio_db")
+
+    # Create connection
+    conn = psycopg2.connect(
+        host=postgres_host,
+        port=postgres_port,
+        user=postgres_user,
+        password=postgres_password,
+        database=postgres_db,
+    )
+    conn.autocommit = True
+
+    yield conn
+
+    # Cleanup
+    conn.close()
+
+
+@pytest.fixture(scope="session")
+def test_service_name() -> str:
+    """Generate a unique service name for testing."""
+    return f"test-service-{uuid.uuid4().hex[:8]}"
+
+
+@pytest.fixture(scope="session")
+def test_hostname() -> str:
+    """Generate a unique hostname for testing."""
+    return f"test-host-{uuid.uuid4().hex[:8]}"
+
+
+@pytest.fixture(scope="session")
+def profile_request_url(backend_base_url) -> str:
+    """Get the full URL for the profile request endpoint."""
+    return f"{backend_base_url}/api/metrics/profile_request"
+
+
+@pytest.fixture(scope="session")
+def heartbeat_url(backend_base_url) -> str:
+    """Get the full URL for the heartbeat endpoint."""
+    return f"{backend_base_url}/api/metrics/heartbeat"
+
+
+@pytest.fixture(scope="session")
+def db_setup_and_teardown(postgres_connection, test_service_name, test_hostname):
+    """Clean up test data before and after the test session."""
+    print("\nCleaning up test data before tests...")
+    cleanup_test_data(postgres_connection, test_service_name, test_hostname)
+
+    yield  # This is where the tests will run
+
+    print("\nCleaning up test data after tests...")
+    cleanup_test_data(postgres_connection, test_service_name, test_hostname)
+
+
+@pytest.fixture(scope="session")
+def valid_heartbeat_data(test_hostname: str, test_service_name: str) -> Dict[str, Any]:
+    """Provide valid heartbeat data for testing."""
+    return {
+        "hostname": test_hostname,
+        "ip_address": "127.0.0.1",
+        "service_name": test_service_name,
+        "status": "active",
+        "timestamp": datetime.now().isoformat(),
+        "last_command_id": None,
+    }
+
+
+@pytest.fixture(scope="session")
+def valid_start_request_data_single_host_stop_level_host(test_service_name: str, test_hostname: str) -> Dict[str, Any]:
+    """Provide valid start request data for testing."""
+    return {
+        "service_name": test_service_name,
+        "request_type": "start",
+        "duration": 60,
+        "frequency": 11,
+        "profiling_mode": "cpu",
+        "target_hosts": {
+            test_hostname: None,
+        },
+        "stop_level": "host",
+        "additional_args": {"test": True, "environment": "integration_test"},
+    }
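The fixtures above pin down the heartbeat/command contract that the tests below exercise: an agent posts to `/api/metrics/heartbeat`, may receive one pending `profiling_command` plus its `command_id`, and acknowledges it by echoing `last_command_id` in its next heartbeat. A minimal sketch of the agent-side loop that contract implies (endpoint and field names come from this diff; the loop itself is an assumption, not actual agent code):

```python
# Sketch of the polling loop implied by the heartbeat contract above.
# Endpoint and field names come from this diff; the loop logic is assumed.
import time
from datetime import datetime

import requests


def heartbeat_loop(base_url: str, headers: dict, hostname: str, service_name: str) -> None:
    last_command_id = None
    while True:
        payload = {
            "hostname": hostname,
            "ip_address": "127.0.0.1",
            "service_name": service_name,
            "status": "active",
            "timestamp": datetime.now().isoformat(),
            # Echoing the last command id acknowledges it, so the backend
            # will not deliver the same command again.
            "last_command_id": last_command_id,
        }
        result = requests.post(
            f"{base_url}/api/metrics/heartbeat",
            headers=headers,
            json=payload,
            timeout=10,
            verify=False,
        ).json()
        if result.get("command_id") and result.get("profiling_command"):
            # A real agent would start/stop profiling here.
            last_command_id = result["command_id"]
        time.sleep(30)  # polling interval is an assumption
```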
"stop_level": "host", + } + + +@pytest.fixture(scope="session") +def valid_start_request_data_single_host_stop_level_process_single_process( + test_service_name: str, test_hostname: str +) -> Dict[str, Any]: + """Provide valid start request data for testing.""" + return { + "service_name": test_service_name, + "request_type": "start", + "duration": 60, + "frequency": 11, + "profiling_mode": "cpu", + "target_hosts": { + test_hostname: [1234], + }, # Single PID for process-level profiling + "stop_level": "process", + "additional_args": {"test": True, "environment": "integration_test"}, + } + + +@pytest.fixture(scope="session") +def valid_stop_request_data_single_host_stop_level_process_single_process( + test_service_name: str, test_hostname: str +) -> Dict[str, Any]: + """Provide valid stop request data for testing.""" + return { + "service_name": test_service_name, + "request_type": "stop", + "target_hosts": { + test_hostname: [1234], # Single PID for process-level stop + }, + "stop_level": "process", + } + + +@pytest.fixture(scope="session") +def valid_start_request_data_single_host_stop_level_process_multi_process( + test_service_name: str, test_hostname: str +) -> Dict[str, Any]: + """Provide valid start request data for testing.""" + return { + "service_name": test_service_name, + "request_type": "start", + "duration": 60, + "frequency": 11, + "profiling_mode": "cpu", + "target_hosts": { + test_hostname: [1234, 5678], # Multiple PIDs for process-level profiling + }, + "stop_level": "process", + "additional_args": {"test": True, "environment": "integration_test"}, + } + + +@pytest.fixture(scope="session") +def valid_stop_request_data_single_host_stop_level_process_multi_process( + test_service_name: str, test_hostname: str +) -> Dict[str, Any]: + """Provide valid stop request data for testing.""" + return { + "service_name": test_service_name, + "request_type": "stop", + "target_hosts": { + test_hostname: [1234, 5678], # Multiple PIDs for process-level stop + }, + "stop_level": "process", + } + + +def get_profiling_request_from_db(conn, request_id: str) -> Optional[Dict[str, Any]]: + """Get a profiling request from the database by request_id.""" + with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cursor: + cursor.execute( + """ + SELECT * FROM ProfilingRequests + WHERE request_id = %s + """, + (request_id,), + ) + result = cursor.fetchone() + return dict(result) if result else None + + +def get_profiling_commands_from_db( + conn, service_name: str, hostname: str = None, command_ids: list[str] = None +) -> List[Dict[str, Any]]: + """Get profiling commands from the database by service and optionally hostname.""" + if command_ids: + formatted_command_ids_ = [f"'{command_id}'" for command_id in command_ids] + formatted_command_ids = f"({','.join(formatted_command_ids_)})" + + with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cursor: + cursor.execute( + f""" + SELECT + * + FROM + ProfilingCommands + WHERE + TRUE + AND service_name = '{service_name}' + AND {f"hostname = '{hostname}'" if hostname else "TRUE"} + AND {f"command_id IN {formatted_command_ids}" if command_ids else "TRUE"} + ORDER BY + created_at DESC + """, + ) + + results = cursor.fetchall() + return [dict(result) for result in results] + + +def get_host_heartbeats_from_db(conn, hostname: str, service_name: str = None) -> List[Dict[str, Any]]: + """Get host heartbeats from the database by hostname and optionally service.""" + with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as 
cursor: + if service_name: + cursor.execute( + """ + SELECT * FROM HostHeartbeats + WHERE hostname = %s AND service_name = %s + ORDER BY heartbeat_timestamp DESC + """, + (hostname, service_name), + ) + else: + cursor.execute( + """ + SELECT * FROM HostHeartbeats + WHERE hostname = %s + ORDER BY heartbeat_timestamp DESC + """, + (hostname,), + ) + results = cursor.fetchall() + return [dict(result) for result in results] + + +def cleanup_test_data(conn, service_name: str, hostname: str = None): + """Clean up test data from the database.""" + with conn.cursor() as cursor: + # Clean up in reverse dependency order + if hostname: + cursor.execute("DELETE FROM HostHeartbeats WHERE hostname = %s", (hostname,)) + cursor.execute("DELETE FROM ProfilingCommands WHERE hostname = %s", (hostname,)) + cursor.execute("DELETE FROM ProfilingExecutions WHERE hostname = %s", (hostname,)) + + cursor.execute("DELETE FROM ProfilingRequests WHERE service_name = %s", (service_name,)) + cursor.execute("DELETE FROM ProfilingCommands WHERE service_name = %s", (service_name,)) + + +def send_heartbeat_and_verify( + heartbeat_url: str, + heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + expected_command_present: bool = False, +) -> Optional[Dict[str, Any]]: + """ + Send a heartbeat request and verify the response and database entry. + + Returns the received command if any, None otherwise. + """ + # Send heartbeat request + response = requests.post( + heartbeat_url, + headers=credentials, + json=heartbeat_data, + timeout=10, + verify=False, + ) + + # Verify heartbeat response + assert response.status_code == 200, f"Heartbeat failed: {response.status_code}: {response.text}" + result = response.json() + assert "message" in result, "Heartbeat response should contain 'message' field" + + # Verify heartbeat was stored in database + db_heartbeats = get_host_heartbeats_from_db( + postgres_connection, heartbeat_data["hostname"], heartbeat_data["service_name"] + ) + assert len(db_heartbeats) >= 1, "No heartbeat entries found in database" + + # Verify most recent heartbeat matches our data + latest_heartbeat = db_heartbeats[0] + assert latest_heartbeat["hostname"] == heartbeat_data["hostname"] + assert latest_heartbeat["service_name"] == heartbeat_data["service_name"] + assert latest_heartbeat["ip_address"] == heartbeat_data["ip_address"] + assert latest_heartbeat["status"] == heartbeat_data["status"] + + # Check for command in response + received_command = None + profiling_command = result.get("profiling_command", None) + command_id = result.get("command_id", None) + if profiling_command and command_id: + assert expected_command_present, "Received unexpected command in heartbeat response" + received_command = { + "command_id": result["command_id"], + "profiling_command": result["profiling_command"], + } + print( + f"📋 Received command via heartbeat: {result['profiling_command'].get('command_type', 'unknown')} (ID: {result['command_id']})" + ) + else: + assert not expected_command_present, "Expected command in heartbeat response but none received" + print("📭 No commands received in heartbeat response") + + print(f"✅ Heartbeat successfully sent and verified in database for {heartbeat_data['hostname']}") + return received_command + + +@pytest.mark.usefixtures("db_setup_and_teardown") +class TestProfileRequestIntegration: + """Integration tests for profile request creation and database verification.""" + + @pytest.mark.order(1) + def 
test_create_start_profile_request_for_single_host_stop_level_host( + self, + profile_request_url: str, + heartbeat_url: str, + valid_start_request_data_single_host_stop_level_host: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test creating a start profile request, sending heartbeat, and verify database entries.""" + + # Step 1: Make API call to create profile request + response = requests.post( + profile_request_url, + headers=credentials, + json=valid_start_request_data_single_host_stop_level_host, + timeout=10, + verify=False, + ) + + # Verify API response + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + result = response.json() + + # Validate response structure + assert "success" in result + assert "message" in result + assert "request_id" in result + assert "command_ids" in result + assert result["success"] is True + + request_id = result["request_id"] + command_ids = result["command_ids"] + + # Verify request_id is a valid UUID + assert request_id + assert len(command_ids) >= 1 + + # Step 2: Verify ProfilingRequests table entry + db_request = get_profiling_request_from_db(postgres_connection, request_id) + assert db_request is not None, f"Request {request_id} not found in database" + + # Verify request data matches what was sent + assert db_request["service_name"] == valid_start_request_data_single_host_stop_level_host["service_name"] + assert db_request["request_type"] == valid_start_request_data_single_host_stop_level_host["request_type"] + assert db_request["duration"] == valid_start_request_data_single_host_stop_level_host["duration"] + assert db_request["frequency"] == valid_start_request_data_single_host_stop_level_host["frequency"] + assert db_request["profiling_mode"] == valid_start_request_data_single_host_stop_level_host["profiling_mode"] + assert db_request["target_hostnames"] == list( + valid_start_request_data_single_host_stop_level_host["target_hosts"].keys() + ) + assert db_request["status"] == "pending" + + # Verify additional_args JSON + stored_additional_args = db_request["additional_args"] + if stored_additional_args: + assert stored_additional_args == valid_start_request_data_single_host_stop_level_host["additional_args"] + + # Verify timestamps + assert db_request["created_at"] is not None + assert db_request["updated_at"] is not None + + # Step 3: Verify ProfilingCommands table entries + db_commands = get_profiling_commands_from_db( + postgres_connection, + test_service_name, + test_hostname, + command_ids=command_ids, + ) + assert len(db_commands) >= 1, "No commands found in database" + + # Verify at least one command matches our request + command_found = False + for db_command in db_commands: + if db_command["command_id"] in command_ids: + command_found = True + assert db_command["hostname"] == test_hostname + assert db_command["service_name"] == test_service_name + assert db_command["command_type"] == "start" + assert db_command["status"] == "pending" + + # Verify request_ids contains our request + request_ids_in_command = db_command["request_ids"] + assert request_id in request_ids_in_command + + # Verify combined_config contains expected fields + combined_config = db_command["combined_config"] + assert combined_config is not None + assert ( + combined_config.get("duration") == valid_start_request_data_single_host_stop_level_host["duration"] + ) + assert ( + 
combined_config.get("frequency") + == valid_start_request_data_single_host_stop_level_host["frequency"] + ) + assert ( + combined_config.get("profiling_mode") + == valid_start_request_data_single_host_stop_level_host["profiling_mode"] + ) + + break + + assert command_found, f"Command with ID in {command_ids} not found in database" + + # Step 4: Send heartbeat and verify command delivery + print("🔄 Sending heartbeat to retrieve commands...") + received_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 5: Verify the received command matches our created command + assert received_command is not None, "Expected to receive a command via heartbeat" + assert ( + received_command["command_id"] in command_ids + ), f"Received command ID {received_command['command_id']} not in expected IDs {command_ids}" + + # Verify command content + profiling_command = received_command["profiling_command"] + assert profiling_command["command_type"] == "start" + assert ( + profiling_command["combined_config"]["duration"] + == valid_start_request_data_single_host_stop_level_host["duration"] + ) + assert ( + profiling_command["combined_config"]["frequency"] + == valid_start_request_data_single_host_stop_level_host["frequency"] + ) + assert ( + profiling_command["combined_config"]["profiling_mode"] + == valid_start_request_data_single_host_stop_level_host["profiling_mode"] + ) + + # Step 6: Send another heartbeat with last_command_id to simulate acknowledgment + print("🔄 Sending acknowledgment heartbeat...") + heartbeat_with_ack = valid_heartbeat_data.copy() + heartbeat_with_ack["last_command_id"] = received_command["command_id"] + + ack_command = send_heartbeat_and_verify( + heartbeat_url, + heartbeat_with_ack, + credentials, + postgres_connection, + expected_command_present=False, # Should not receive the same command again + ) + + # Should not receive the same command again + assert ack_command is None, "Should not receive the same command after acknowledgment" + + print( + f"✅ End-to-end integration test passed: Request {request_id} created, command delivered via heartbeat, and acknowledged" + ) + + @pytest.mark.order(2) + def test_create_stop_profile_request_for_single_host_stop_level_host( + self, + profile_request_url: str, + heartbeat_url: str, + valid_stop_request_data_single_host_stop_level_host: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test creating a stop profile request, sending heartbeat, and verify database entries.""" + + # Step 1: Make API call to create stop profile request + response = requests.post( + profile_request_url, + headers=credentials, + json=valid_stop_request_data_single_host_stop_level_host, + timeout=10, + verify=False, + ) + + # Verify API response + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + result = response.json() + + # Validate response structure + assert "success" in result + assert "message" in result + assert "request_id" in result + assert "command_ids" in result + assert result["success"] is True + + request_id = result["request_id"] + command_ids = result["command_ids"] + + # Step 2: Verify ProfilingRequests table entry + db_request = get_profiling_request_from_db(postgres_connection, request_id) + assert db_request is not None, f"Request {request_id} not found in database" + + # 
Verify request data matches what was sent + assert db_request["service_name"] == valid_stop_request_data_single_host_stop_level_host["service_name"] + assert db_request["request_type"] == valid_stop_request_data_single_host_stop_level_host["request_type"] + assert db_request["target_hostnames"] == list( + valid_stop_request_data_single_host_stop_level_host["target_hosts"].keys() + ) + assert db_request["status"] == "pending" + + # Step 3: Verify ProfilingCommands table entries for stop command + db_commands = get_profiling_commands_from_db(postgres_connection, test_service_name, test_hostname) + + # For stop commands, there should be at least one command + if len(command_ids) > 0: + assert len(db_commands) >= 1, "No stop commands found in database" + + # Verify stop command properties + stop_command_found = False + for db_command in db_commands: + if db_command["command_id"] in command_ids: + stop_command_found = True + assert db_command["hostname"] == test_hostname + assert db_command["service_name"] == test_service_name + assert db_command["command_type"] == "stop" + assert db_command["status"] == "pending" + break + + assert stop_command_found, f"Stop command with ID in {command_ids} not found in database" + + # Step 4: Send heartbeat and verify stop command delivery + print("🔄 Sending heartbeat to retrieve stop commands...") + received_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 5: Verify the received command matches our created stop command + assert received_command is not None, "Expected to receive a stop command via heartbeat" + assert ( + received_command["command_id"] in command_ids + ), f"Received command ID {received_command['command_id']} not in expected IDs {command_ids}" + + # Verify stop command content + profiling_command = received_command["profiling_command"] + assert profiling_command["command_type"] == "stop" + assert ( + profiling_command["combined_config"]["stop_level"] + == valid_stop_request_data_single_host_stop_level_host["stop_level"] + ) + + # Step 6: Send acknowledgment heartbeat + print("🔄 Sending acknowledgment heartbeat for stop command...") + heartbeat_with_ack = valid_heartbeat_data.copy() + heartbeat_with_ack["last_command_id"] = received_command["command_id"] + + ack_command = send_heartbeat_and_verify( + heartbeat_url, + heartbeat_with_ack, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert ack_command is None, "Should not receive the same stop command after acknowledgment" + + print( + f"✅ End-to-end stop integration test passed: Stop request {request_id} created, command delivered via heartbeat, and acknowledged" + ) + + @pytest.mark.order(3) + def test_create_start_profile_request_for_single_host_stop_level_process_single_process( + self, + profile_request_url: str, + heartbeat_url: str, + valid_start_request_data_single_host_stop_level_process_single_process: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test creating a start profile request with single process PID, sending heartbeat, and verify database entries.""" + + # Step 1: Make API call to create profile request + response = requests.post( + profile_request_url, + headers=credentials, + json=valid_start_request_data_single_host_stop_level_process_single_process, + timeout=10, + verify=False, + ) + + # Verify API 
response + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + result = response.json() + + # Validate response structure + assert "success" in result + assert "message" in result + assert "request_id" in result + assert "command_ids" in result + assert result["success"] is True + + request_id = result["request_id"] + command_ids = result["command_ids"] + + # Verify request_id is a valid UUID + assert request_id + assert len(command_ids) >= 1 + + # Step 2: Verify ProfilingRequests table entry + db_request = get_profiling_request_from_db(postgres_connection, request_id) + assert db_request is not None, f"Request {request_id} not found in database" + + # Verify request data matches what was sent + assert ( + db_request["service_name"] + == valid_start_request_data_single_host_stop_level_process_single_process["service_name"] + ) + assert ( + db_request["request_type"] + == valid_start_request_data_single_host_stop_level_process_single_process["request_type"] + ) + assert ( + db_request["duration"] == valid_start_request_data_single_host_stop_level_process_single_process["duration"] + ) + assert ( + db_request["frequency"] + == valid_start_request_data_single_host_stop_level_process_single_process["frequency"] + ) + assert ( + db_request["profiling_mode"] + == valid_start_request_data_single_host_stop_level_process_single_process["profiling_mode"] + ) + assert db_request["target_hostnames"] == list( + valid_start_request_data_single_host_stop_level_process_single_process["target_hosts"].keys() + ) + assert ( + db_request["stop_level"] + == valid_start_request_data_single_host_stop_level_process_single_process["stop_level"] + ) + assert db_request["status"] == "pending" + + # Verify additional_args JSON + stored_additional_args = db_request["additional_args"] + if stored_additional_args: + assert ( + stored_additional_args + == valid_start_request_data_single_host_stop_level_process_single_process["additional_args"] + ) + + # Verify timestamps + assert db_request["created_at"] is not None + assert db_request["updated_at"] is not None + + # Step 3: Verify ProfilingCommands table entries + db_commands = get_profiling_commands_from_db( + postgres_connection, + test_service_name, + test_hostname, + command_ids=command_ids, + ) + assert len(db_commands) >= 1, "No commands found in database" + + # Verify at least one command matches our request + command_found = False + for db_command in db_commands: + if db_command["command_id"] in command_ids: + command_found = True + assert db_command["hostname"] == test_hostname + assert db_command["service_name"] == test_service_name + assert db_command["command_type"] == "start" + assert db_command["status"] == "pending" + + # Verify request_ids contains our request + request_ids_in_command = db_command["request_ids"] + assert request_id in request_ids_in_command + + # Verify combined_config contains expected fields including PIDs + combined_config = db_command["combined_config"] + assert combined_config is not None + assert ( + combined_config.get("duration") + == valid_start_request_data_single_host_stop_level_process_single_process["duration"] + ) + assert ( + combined_config.get("frequency") + == valid_start_request_data_single_host_stop_level_process_single_process["frequency"] + ) + assert ( + combined_config.get("profiling_mode") + == valid_start_request_data_single_host_stop_level_process_single_process["profiling_mode"] + ) + assert ( + combined_config.get("pids") + == 
valid_start_request_data_single_host_stop_level_process_single_process["target_hosts"][ + test_hostname + ] + ) + + break + + assert command_found, f"Command with ID in {command_ids} not found in database" + + # Step 4: Send heartbeat and verify command delivery + print("🔄 Sending heartbeat to retrieve process-level start commands...") + received_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 5: Verify the received command matches our created command + assert received_command is not None, "Expected to receive a command via heartbeat" + assert ( + received_command["command_id"] in command_ids + ), f"Received command ID {received_command['command_id']} not in expected IDs {command_ids}" + + # Verify command content including process-level details + profiling_command = received_command["profiling_command"] + assert profiling_command["command_type"] == "start" + assert ( + profiling_command["combined_config"]["duration"] + == valid_start_request_data_single_host_stop_level_process_single_process["duration"] + ) + assert ( + profiling_command["combined_config"]["frequency"] + == valid_start_request_data_single_host_stop_level_process_single_process["frequency"] + ) + assert ( + profiling_command["combined_config"]["profiling_mode"] + == valid_start_request_data_single_host_stop_level_process_single_process["profiling_mode"] + ) + assert ( + profiling_command["combined_config"]["pids"] + == valid_start_request_data_single_host_stop_level_process_single_process["target_hosts"][test_hostname] + ) + + # Step 6: Send another heartbeat with last_command_id to simulate acknowledgment + print("🔄 Sending acknowledgment heartbeat for process-level start command...") + heartbeat_with_ack = valid_heartbeat_data.copy() + heartbeat_with_ack["last_command_id"] = received_command["command_id"] + + ack_command = send_heartbeat_and_verify( + heartbeat_url, + heartbeat_with_ack, + credentials, + postgres_connection, + expected_command_present=False, # Should not receive the same command again + ) + + # Should not receive the same command again + assert ack_command is None, "Should not receive the same command after acknowledgment" + + print( + f"✅ End-to-end process-level start integration test passed: Request {request_id} with PID {valid_start_request_data_single_host_stop_level_process_single_process['target_hosts'][test_hostname]} created, command delivered via heartbeat, and acknowledged" + ) + + @pytest.mark.order(4) + def test_create_stop_profile_request_for_single_host_stop_level_process_single_process( + self, + profile_request_url: str, + heartbeat_url: str, + valid_stop_request_data_single_host_stop_level_process_single_process: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test creating a stop profile request with single process PID, sending heartbeat, and verify database entries.""" + + # Step 1: Make API call to create stop profile request + response = requests.post( + profile_request_url, + headers=credentials, + json=valid_stop_request_data_single_host_stop_level_process_single_process, + timeout=10, + verify=False, + ) + + # Verify API response + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + result = response.json() + + # Validate response structure + assert "success" in result + assert "message" in result + 
assert "request_id" in result + assert "command_ids" in result + assert result["success"] is True + + request_id = result["request_id"] + command_ids = result["command_ids"] + + # Step 2: Verify ProfilingRequests table entry + db_request = get_profiling_request_from_db(postgres_connection, request_id) + assert db_request is not None, f"Request {request_id} not found in database" + + # Verify request data matches what was sent + assert ( + db_request["service_name"] + == valid_stop_request_data_single_host_stop_level_process_single_process["service_name"] + ) + assert ( + db_request["request_type"] + == valid_stop_request_data_single_host_stop_level_process_single_process["request_type"] + ) + assert db_request["target_hostnames"] == list( + valid_stop_request_data_single_host_stop_level_process_single_process["target_hosts"].keys() + ) + assert db_request["status"] == "pending" + + # Step 3: Verify ProfilingCommands table entries for stop command + db_commands = get_profiling_commands_from_db(postgres_connection, test_service_name, test_hostname) + + # For stop commands, there should be at least one command + if len(command_ids) > 0: + assert len(db_commands) >= 1, "No stop commands found in database" + + # Verify stop command properties + stop_command_found = False + for db_command in db_commands: + if db_command["command_id"] in command_ids: + stop_command_found = True + assert db_command["hostname"] == test_hostname + assert db_command["service_name"] == test_service_name + assert db_command["command_type"] == "stop" + assert db_command["status"] == "pending" + break + + assert stop_command_found, f"Stop command with ID in {command_ids} not found in database" + + # Step 4: Send heartbeat and verify stop command delivery + print("🔄 Sending heartbeat to retrieve stop commands...") + received_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 5: Verify the received command matches our created stop command + assert received_command is not None, "Expected to receive a stop command via heartbeat" + assert ( + received_command["command_id"] in command_ids + ), f"Received command ID {received_command['command_id']} not in expected IDs {command_ids}" + + # Verify stop command content + profiling_command = received_command["profiling_command"] + assert profiling_command["command_type"] == "stop" + # For this test case, we stop all PIDs for the host + # Then the stop command should stop the entire host + assert profiling_command["combined_config"]["stop_level"] == "host" + + # Step 6: Send acknowledgment heartbeat + print("🔄 Sending acknowledgment heartbeat for stop command...") + heartbeat_with_ack = valid_heartbeat_data.copy() + heartbeat_with_ack["last_command_id"] = received_command["command_id"] + + ack_command = send_heartbeat_and_verify( + heartbeat_url, + heartbeat_with_ack, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert ack_command is None, "Should not receive the same stop command after acknowledgment" + + print( + f"✅ End-to-end process-level stop integration test passed: Stop request {request_id} with PID {valid_stop_request_data_single_host_stop_level_process_single_process['target_hosts'][test_hostname]} created, command delivered via heartbeat, and acknowledged" + ) + + @pytest.mark.order(5) + def test_create_start_profile_request_for_single_host_stop_level_process_multi_process( + self, + profile_request_url: str, + heartbeat_url: 
str, + valid_start_request_data_single_host_stop_level_process_multi_process: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test creating a start profile request with multiple process PIDs, sending heartbeat, and verify database entries.""" + + # Step 1: Make API call to create profile request + response = requests.post( + profile_request_url, + headers=credentials, + json=valid_start_request_data_single_host_stop_level_process_multi_process, + timeout=10, + verify=False, + ) + + # Verify API response + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + result = response.json() + + # Validate response structure + assert "success" in result + assert "message" in result + assert "request_id" in result + assert "command_ids" in result + assert result["success"] is True + + request_id = result["request_id"] + command_ids = result["command_ids"] + + # Verify request_id is a valid UUID + assert request_id + assert len(command_ids) >= 1 + + # Step 2: Verify ProfilingRequests table entry + db_request = get_profiling_request_from_db(postgres_connection, request_id) + assert db_request is not None, f"Request {request_id} not found in database" + + # Verify request data matches what was sent + assert ( + db_request["service_name"] + == valid_start_request_data_single_host_stop_level_process_multi_process["service_name"] + ) + assert ( + db_request["request_type"] + == valid_start_request_data_single_host_stop_level_process_multi_process["request_type"] + ) + assert ( + db_request["duration"] == valid_start_request_data_single_host_stop_level_process_multi_process["duration"] + ) + assert ( + db_request["frequency"] + == valid_start_request_data_single_host_stop_level_process_multi_process["frequency"] + ) + assert ( + db_request["profiling_mode"] + == valid_start_request_data_single_host_stop_level_process_multi_process["profiling_mode"] + ) + assert db_request["target_hostnames"] == list( + valid_start_request_data_single_host_stop_level_process_multi_process["target_hosts"].keys() + ) + assert ( + db_request["stop_level"] + == valid_start_request_data_single_host_stop_level_process_multi_process["stop_level"] + ) + assert db_request["status"] == "pending" + + # Verify additional_args JSON + stored_additional_args = db_request["additional_args"] + if stored_additional_args: + assert ( + stored_additional_args + == valid_start_request_data_single_host_stop_level_process_multi_process["additional_args"] + ) + + # Verify timestamps + assert db_request["created_at"] is not None + assert db_request["updated_at"] is not None + + # Step 3: Verify ProfilingCommands table entries + db_commands = get_profiling_commands_from_db( + postgres_connection, + test_service_name, + test_hostname, + command_ids=command_ids, + ) + assert len(db_commands) >= 1, "No commands found in database" + + # Verify at least one command matches our request + command_found = False + for db_command in db_commands: + if db_command["command_id"] in command_ids: + command_found = True + assert db_command["hostname"] == test_hostname + assert db_command["service_name"] == test_service_name + assert db_command["command_type"] == "start" + assert db_command["status"] == "pending" + + # Verify request_ids contains our request + request_ids_in_command = db_command["request_ids"] + assert request_id in request_ids_in_command + + # Verify combined_config contains expected 
fields including multiple PIDs + combined_config = db_command["combined_config"] + assert combined_config is not None + assert ( + combined_config.get("duration") + == valid_start_request_data_single_host_stop_level_process_multi_process["duration"] + ) + assert ( + combined_config.get("frequency") + == valid_start_request_data_single_host_stop_level_process_multi_process["frequency"] + ) + assert ( + combined_config.get("profiling_mode") + == valid_start_request_data_single_host_stop_level_process_multi_process["profiling_mode"] + ) + assert ( + combined_config.get("pids") + == valid_start_request_data_single_host_stop_level_process_multi_process["target_hosts"][ + test_hostname + ] + ) + + # Verify multiple PIDs are correctly stored + expected_pids = valid_start_request_data_single_host_stop_level_process_multi_process["target_hosts"][ + test_hostname + ] + assert len(combined_config.get("pids", [])) == len( + expected_pids + ), f"Expected {len(expected_pids)} PIDs, got {len(combined_config.get('pids', []))}" + for pid in expected_pids: + assert pid in combined_config.get("pids", []), f"PID {pid} not found in command config" + + break + + assert command_found, f"Command with ID in {command_ids} not found in database" + + # Step 4: Send heartbeat and verify command delivery + print("🔄 Sending heartbeat to retrieve multi-process start commands...") + received_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 5: Verify the received command matches our created command + assert received_command is not None, "Expected to receive a command via heartbeat" + assert ( + received_command["command_id"] in command_ids + ), f"Received command ID {received_command['command_id']} not in expected IDs {command_ids}" + + # Verify command content including multi-process details + profiling_command = received_command["profiling_command"] + assert profiling_command["command_type"] == "start" + assert ( + profiling_command["combined_config"]["duration"] + == valid_start_request_data_single_host_stop_level_process_multi_process["duration"] + ) + assert ( + profiling_command["combined_config"]["frequency"] + == valid_start_request_data_single_host_stop_level_process_multi_process["frequency"] + ) + assert ( + profiling_command["combined_config"]["profiling_mode"] + == valid_start_request_data_single_host_stop_level_process_multi_process["profiling_mode"] + ) + assert ( + profiling_command["combined_config"]["pids"] + == valid_start_request_data_single_host_stop_level_process_multi_process["target_hosts"][test_hostname] + ) + + # Step 6: Send another heartbeat with last_command_id to simulate acknowledgment + print("🔄 Sending acknowledgment heartbeat for multi-process start command...") + heartbeat_with_ack = valid_heartbeat_data.copy() + heartbeat_with_ack["last_command_id"] = received_command["command_id"] + + ack_command = send_heartbeat_and_verify( + heartbeat_url, + heartbeat_with_ack, + credentials, + postgres_connection, + expected_command_present=False, # Should not receive the same command again + ) + + # Should not receive the same command again + assert ack_command is None, "Should not receive the same command after acknowledgment" + + print( + f"✅ End-to-end multi-process start integration test passed: Request {request_id} with PIDs {valid_start_request_data_single_host_stop_level_process_multi_process['target_hosts'][test_hostname]} created, command delivered via heartbeat, and 
acknowledged" + ) + + @pytest.mark.order(6) + def test_create_stop_profile_request_for_single_host_stop_level_process_multi_process( + self, + profile_request_url: str, + heartbeat_url: str, + valid_start_request_data_single_host_stop_level_process_multi_process: Dict[str, Any], + valid_stop_request_data_single_host_stop_level_process_multi_process: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test creating a stop profile request with multiple process PIDs, sending heartbeat, and verify database entries.""" + + # Step 1: Make API call to create stop profile request + response = requests.post( + profile_request_url, + headers=credentials, + json=valid_stop_request_data_single_host_stop_level_process_multi_process, + timeout=10, + verify=False, + ) + + # Verify API response + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + result = response.json() + + # Validate response structure + assert "success" in result + assert "message" in result + assert "request_id" in result + assert "command_ids" in result + assert result["success"] is True + + request_id = result["request_id"] + command_ids = result["command_ids"] + + # Step 2: Verify ProfilingRequests table entry + db_request = get_profiling_request_from_db(postgres_connection, request_id) + assert db_request is not None, f"Request {request_id} not found in database" + + # Verify request data matches what was sent + assert ( + db_request["service_name"] + == valid_stop_request_data_single_host_stop_level_process_multi_process["service_name"] + ) + assert ( + db_request["request_type"] + == valid_stop_request_data_single_host_stop_level_process_multi_process["request_type"] + ) + assert db_request["target_hostnames"] == list( + valid_stop_request_data_single_host_stop_level_process_multi_process["target_hosts"].keys() + ) + assert ( + db_request["stop_level"] + == valid_stop_request_data_single_host_stop_level_process_multi_process["stop_level"] + ) + assert db_request["status"] == "pending" + + # Step 3: Verify ProfilingCommands table entries for stop command + db_commands = get_profiling_commands_from_db(postgres_connection, test_service_name, test_hostname) + + # For stop commands, there should be at least one command + if len(command_ids) > 0: + assert len(db_commands) >= 1, "No stop commands found in database" + + # Verify stop command properties + stop_command_found = False + for db_command in db_commands: + if db_command["command_id"] in command_ids: + stop_command_found = True + assert db_command["hostname"] == test_hostname + assert db_command["service_name"] == test_service_name + assert db_command["command_type"] == "stop" + assert db_command["status"] == "pending" + + # For this test case, we stop all PIDs for the host + # Then the stop command should stop the entire host + combined_config = db_command["combined_config"] + assert combined_config is not None + assert combined_config.get("stop_level") == "host" + break + + assert stop_command_found, f"Stop command with ID in {command_ids} not found in database" + + # Step 4: Send heartbeat and verify stop command delivery + print("🔄 Sending heartbeat to retrieve multi-process stop commands...") + received_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 5: Verify the received command matches our 
created stop command + assert received_command is not None, "Expected to receive a stop command via heartbeat" + assert ( + received_command["command_id"] in command_ids + ), f"Received command ID {received_command['command_id']} not in expected IDs {command_ids}" + + # Verify stop command content including multi-process details + profiling_command = received_command["profiling_command"] + assert profiling_command["command_type"] == "stop" + # For this test case, we stop all PIDs for the host + # Then the stop command should stop the entire host + assert profiling_command["combined_config"]["stop_level"] == "host" + + # Step 6: Send acknowledgment heartbeat + print("🔄 Sending acknowledgment heartbeat for multi-process stop command...") + heartbeat_with_ack = valid_heartbeat_data.copy() + heartbeat_with_ack["last_command_id"] = received_command["command_id"] + + ack_command = send_heartbeat_and_verify( + heartbeat_url, + heartbeat_with_ack, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert ack_command is None, "Should not receive the same stop command after acknowledgment" + + print( + f"✅ End-to-end multi-process stop integration test passed: Stop request {request_id} with PID {valid_stop_request_data_single_host_stop_level_process_multi_process['target_hosts'][test_hostname]} created, command delivered via heartbeat, and acknowledged" + ) + + @pytest.mark.order(7) + def test_start_multi_process_then_stop_single_process_verify_remaining_pids( + self, + profile_request_url: str, + heartbeat_url: str, + valid_start_request_data_single_host_stop_level_process_multi_process: Dict[str, Any], + valid_stop_request_data_single_host_stop_level_process_single_process: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test starting multi-process profiling, then stopping single process, and verify remaining PIDs continue profiling.""" + + # Step 1: Start profiling multiple PIDs + print("🚀 Step 1: Starting profiling for multiple PIDs...") + start_response = requests.post( + profile_request_url, + headers=credentials, + json=valid_start_request_data_single_host_stop_level_process_multi_process, + timeout=10, + verify=False, + ) + + assert ( + start_response.status_code == 200 + ), f"Start request failed: {start_response.status_code}: {start_response.text}" + start_result = start_response.json() + + start_request_id = start_result["request_id"] + + print(f"✅ Multi-process start request created: {start_request_id}") + + # Step 2: Send heartbeat to get the initial start command + print("🔄 Step 2: Retrieving initial multi-process start command...") + initial_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Verify initial command has multiple PIDs + assert initial_command is not None, "Expected to receive initial start command" + initial_pids = initial_command["profiling_command"]["combined_config"]["pids"] + expected_initial_pids = valid_start_request_data_single_host_stop_level_process_multi_process["target_hosts"][ + test_hostname + ] + + assert len(initial_pids) == len( + expected_initial_pids + ), f"Expected {len(expected_initial_pids)} initial PIDs, got {len(initial_pids)}" + for pid in expected_initial_pids: + assert pid in initial_pids, f"PID {pid} not found in initial command" + + print(f"✅ Initial start command received with PIDs: 
{initial_pids}") + + # Step 3: Acknowledge the initial command + print("🔄 Step 3: Acknowledging initial multi-process start command...") + ack_heartbeat = valid_heartbeat_data.copy() + ack_heartbeat["last_command_id"] = initial_command["command_id"] + + ack_response = send_heartbeat_and_verify( + heartbeat_url, + ack_heartbeat, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert ack_response is None, "Should not receive command after acknowledgment" + print("✅ Initial command acknowledged successfully") + + # Step 4: Stop profiling for single PID + print("🛑 Step 4: Stopping profiling for single PID...") + stop_response = requests.post( + profile_request_url, + headers=credentials, + json=valid_stop_request_data_single_host_stop_level_process_single_process, + timeout=10, + verify=False, + ) + + assert ( + stop_response.status_code == 200 + ), f"Stop request failed: {stop_response.status_code}: {stop_response.text}" + stop_result = stop_response.json() + + stop_request_id = stop_result["request_id"] + + print(f"✅ Single-process stop request created: {stop_request_id}") + + # Step 5: Send heartbeat to get the resulting command (should be start with remaining PIDs) + print("🔄 Step 5: Retrieving command after stopping single PID...") + resulting_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 6: Verify the resulting command is a start command with remaining PIDs + assert resulting_command is not None, "Expected to receive resulting command" + + profiling_command = resulting_command["profiling_command"] + assert ( + profiling_command["command_type"] == "start" + ), f"Expected 'start' command, got '{profiling_command['command_type']}'" + + # Calculate expected remaining PIDs + stopped_pids = valid_stop_request_data_single_host_stop_level_process_single_process["target_hosts"][ + test_hostname + ] + remaining_pids = [pid for pid in expected_initial_pids if pid not in stopped_pids] + + # Verify remaining PIDs in the command + command_pids = profiling_command["combined_config"]["pids"] + assert len(command_pids) == len( + remaining_pids + ), f"Expected {len(remaining_pids)} remaining PIDs, got {len(command_pids)}" + + for pid in remaining_pids: + assert pid in command_pids, f"Remaining PID {pid} not found in command" + + for pid in stopped_pids: + assert pid not in command_pids, f"Stopped PID {pid} should not be in command" + + print(f"✅ Resulting start command has correct remaining PIDs: {command_pids}") + print( + f"🎯 Successfully verified differential PID management: Started {expected_initial_pids}, stopped {stopped_pids}, remaining {remaining_pids}" + ) + + # Step 7: Acknowledge the resulting command + print("🔄 Step 7: Acknowledging resulting command...") + final_ack_heartbeat = valid_heartbeat_data.copy() + assert resulting_command is not None, "resulting_command is None, cannot access 'command_id'" + final_ack_heartbeat["last_command_id"] = resulting_command["command_id"] + + final_ack_response = send_heartbeat_and_verify( + heartbeat_url, + final_ack_heartbeat, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert final_ack_response is None, "Should not receive command after final acknowledgment" + + print( + f"✅ End-to-end differential PID management test passed: Started PIDs {expected_initial_pids}, stopped PIDs {stopped_pids}, remaining PIDs {remaining_pids} continue profiling" + ) + + @pytest.mark.order(8) 
+ def test_start_multi_process_then_stop_single_process_database_consistency( + self, + profile_request_url: str, + heartbeat_url: str, + valid_start_request_data_single_host_stop_level_process_multi_process: Dict[str, Any], + valid_stop_request_data_single_host_stop_level_process_single_process: Dict[str, Any], + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + postgres_connection, + test_service_name: str, + test_hostname: str, + ): + """Test database consistency when starting multi-process profiling then stopping single process.""" + + # Step 1: Create start request for multiple PIDs + print("🚀 Step 1: Creating start request for multiple PIDs...") + start_response = requests.post( + profile_request_url, + headers=credentials, + json=valid_start_request_data_single_host_stop_level_process_multi_process, + timeout=10, + verify=False, + ) + + assert start_response.status_code == 200 + start_result = start_response.json() + start_request_id = start_result["request_id"] + + # Verify start request in database + db_start_request = get_profiling_request_from_db(postgres_connection, start_request_id) + assert db_start_request is not None + assert db_start_request["request_type"] == "start" + assert db_start_request["target_hostnames"] == [test_hostname] + + print(f"✅ Start request verified in database: {start_request_id}") + + # Step 2: Get initial start command via heartbeat + print("🔄 Step 2: Getting initial start command...") + initial_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Acknowledge initial command + ack_heartbeat = valid_heartbeat_data.copy() + assert initial_command is not None, "Expected to receive initial command via heartbeat" + ack_heartbeat["last_command_id"] = initial_command["command_id"] + send_heartbeat_and_verify( + heartbeat_url, + ack_heartbeat, + credentials, + postgres_connection, + expected_command_present=False, + ) + + # Step 3: Create stop request for single PID + print("🛑 Step 3: Creating stop request for single PID...") + stop_response = requests.post( + profile_request_url, + headers=credentials, + json=valid_stop_request_data_single_host_stop_level_process_single_process, + timeout=10, + verify=False, + ) + + assert stop_response.status_code == 200 + stop_result = stop_response.json() + stop_request_id = stop_result["request_id"] + + # Step 4: Verify database entries for both requests + print("🔍 Step 4: Verifying database consistency...") + + # Verify stop request in database + db_stop_request = get_profiling_request_from_db(postgres_connection, stop_request_id) + assert db_stop_request is not None + assert db_stop_request["request_type"] == "stop" + assert db_stop_request["target_hostnames"] == [test_hostname] + + # Verify ProfilingCommands table has correct entries + db_commands = get_profiling_commands_from_db(postgres_connection, test_service_name, test_hostname) + + # Should have at least one command entry for the resulting state + assert len(db_commands) >= 1, "No commands found in database" + + # Find the most recent command (should be start with remaining PIDs) + most_recent_command = db_commands[0] # Commands are ordered by created_at DESC + + # Step 5: Get the resulting command via heartbeat and verify database consistency + print("🔄 Step 5: Getting resulting command and verifying database consistency...") + resulting_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + 
postgres_connection,
+            expected_command_present=True,
+        )
+
+        # Verify command consistency between database and heartbeat response
+        assert resulting_command is not None, "Expected to receive resulting command via heartbeat"
+        assert resulting_command["command_id"] == most_recent_command["command_id"]
+        assert resulting_command["profiling_command"]["command_type"] == most_recent_command["command_type"]
+
+        # Verify PID consistency
+        heartbeat_pids = resulting_command["profiling_command"]["combined_config"]["pids"]
+        db_pids = most_recent_command["combined_config"]["pids"]
+
+        assert heartbeat_pids == db_pids, f"PID mismatch: heartbeat={heartbeat_pids}, database={db_pids}"
+
+        # Calculate and verify expected remaining PIDs
+        initial_pids = valid_start_request_data_single_host_stop_level_process_multi_process["target_hosts"][
+            test_hostname
+        ]
+        stopped_pids = valid_stop_request_data_single_host_stop_level_process_single_process["target_hosts"][
+            test_hostname
+        ]
+        expected_remaining_pids = [pid for pid in initial_pids if pid not in stopped_pids]
+
+        assert set(heartbeat_pids) == set(
+            expected_remaining_pids
+        ), f"Expected remaining PIDs {expected_remaining_pids}, got {heartbeat_pids}"
+
+        # Step 6: Verify HostHeartbeats table entries
+        print("🔍 Step 6: Verifying heartbeat history in database...")
+        heartbeat_entries = get_host_heartbeats_from_db(postgres_connection, test_hostname, test_service_name)
+
+        # Expect exactly one heartbeat entry for this host/service pair
+        assert len(heartbeat_entries) == 1, f"Expected one heartbeat entry, got {len(heartbeat_entries)}"
+
+        # Verify latest heartbeat entry
+        latest_heartbeat = heartbeat_entries[0]
+        assert latest_heartbeat["hostname"] == test_hostname
+        assert latest_heartbeat["service_name"] == test_service_name
+        assert latest_heartbeat["status"] == "active"
+
+        # Final acknowledgment
+        final_ack_heartbeat = valid_heartbeat_data.copy()
+        final_ack_heartbeat["last_command_id"] = resulting_command["command_id"]
+        send_heartbeat_and_verify(
+            heartbeat_url,
+            final_ack_heartbeat,
+            credentials,
+            postgres_connection,
+            expected_command_present=False,
+        )
+
+        print(
+            "✅ Database consistency test passed: All database tables (ProfilingRequests, ProfilingCommands, HostHeartbeats) maintain consistency throughout differential PID management workflow"
+        )
+        print(f"🎯 Verified workflow: Start {initial_pids} → Stop {stopped_pids} → Continue {expected_remaining_pids}")
+
+    @pytest.mark.order(9)
+    def test_start_profiling_then_update_frequency_verify_command_update(
+        self,
+        profile_request_url: str,
+        heartbeat_url: str,
+        valid_start_request_data_single_host_stop_level_host: Dict[str, Any],
+        valid_heartbeat_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+        postgres_connection,
+        test_service_name: str,
+        test_hostname: str,
+    ):
+        """Test creating a profiling request, then updating it with a different frequency, and verifying that the resulting command carries the updated frequency."""
+
+        # Step 1: Create initial profiling request
+        print("🚀 Step 1: Creating initial profiling request...")
+        initial_request_data = valid_start_request_data_single_host_stop_level_host.copy()
+        initial_frequency = initial_request_data["frequency"]
+
+        initial_response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=initial_request_data,
+            timeout=10,
+            verify=False,
+        )
+
+ assert ( + initial_response.status_code == 200 + ), f"Initial request failed: {initial_response.status_code}: {initial_response.text}" + initial_result = initial_response.json() + + initial_request_id = initial_result["request_id"] + + print(f"✅ Initial profiling request created: {initial_request_id} with frequency {initial_frequency}") + + # Step 2: Send heartbeat to get the initial command + print("🔄 Step 2: Retrieving initial profiling command...") + initial_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Verify initial command has the original frequency + assert initial_command is not None, "Expected to receive initial command" + initial_cmd_frequency = initial_command["profiling_command"]["combined_config"]["frequency"] + assert ( + initial_cmd_frequency == initial_frequency + ), f"Expected initial frequency {initial_frequency}, got {initial_cmd_frequency}" + + print(f"✅ Initial command received with frequency: {initial_cmd_frequency}") + + # Step 3: Acknowledge the initial command + print("🔄 Step 3: Acknowledging initial command...") + ack_heartbeat = valid_heartbeat_data.copy() + ack_heartbeat["last_command_id"] = initial_command["command_id"] + + ack_response = send_heartbeat_and_verify( + heartbeat_url, + ack_heartbeat, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert ack_response is None, "Should not receive command after acknowledgment" + print("✅ Initial command acknowledged successfully") + + # Step 4: Create updated profiling request with different frequency + print("🔄 Step 4: Creating updated profiling request with different frequency...") + updated_request_data = initial_request_data.copy() + updated_frequency = initial_frequency + 5 # Change frequency by adding 5 + updated_request_data["frequency"] = updated_frequency + + updated_response = requests.post( + profile_request_url, + headers=credentials, + json=updated_request_data, + timeout=10, + verify=False, + ) + + assert ( + updated_response.status_code == 200 + ), f"Updated request failed: {updated_response.status_code}: {updated_response.text}" + updated_result = updated_response.json() + + updated_request_id = updated_result["request_id"] + updated_command_ids = updated_result["command_ids"] + + print(f"✅ Updated profiling request created: {updated_request_id} with frequency {updated_frequency}") + + # Step 5: Send heartbeat to get the updated command + print("🔄 Step 5: Retrieving updated profiling command...") + updated_command = send_heartbeat_and_verify( + heartbeat_url, + valid_heartbeat_data, + credentials, + postgres_connection, + expected_command_present=True, + ) + + # Step 6: Verify the updated command is a start command with the new frequency + assert updated_command is not None, "Expected to receive updated command" + + profiling_command = updated_command["profiling_command"] + assert ( + profiling_command["command_type"] == "start" + ), f"Expected 'start' command, got '{profiling_command['command_type']}'" + + # Verify the command has the updated frequency + command_frequency = profiling_command["combined_config"]["frequency"] + assert ( + command_frequency == updated_frequency + ), f"Expected updated frequency {updated_frequency}, got {command_frequency}" + + # Verify other configuration remains the same + assert ( + profiling_command["combined_config"]["duration"] == initial_request_data["duration"] + ), "Duration should remain unchanged" + assert ( + 
profiling_command["combined_config"]["profiling_mode"] == initial_request_data["profiling_mode"] + ), "Profiling mode should remain unchanged" + + print(f"✅ Updated command received with frequency: {command_frequency}") + print(f"🎯 Successfully verified frequency update: {initial_frequency} → {updated_frequency}") + + # Step 7: Verify database consistency + print("🔍 Step 7: Verifying database consistency...") + + # Verify both requests exist in database + db_initial_request = get_profiling_request_from_db(postgres_connection, initial_request_id) + db_updated_request = get_profiling_request_from_db(postgres_connection, updated_request_id) + + assert db_initial_request is not None, "Initial request should exist in database" + assert db_updated_request is not None, "Updated request should exist in database" + + # Verify frequencies in database + assert db_initial_request["frequency"] == initial_frequency, "Initial request frequency mismatch in database" + assert db_updated_request["frequency"] == updated_frequency, "Updated request frequency mismatch in database" + + # Verify the command in database has the updated frequency + db_commands = get_profiling_commands_from_db( + postgres_connection, + test_service_name, + test_hostname, + command_ids=updated_command_ids, + ) + + assert len(db_commands) >= 1, "No updated commands found in database" + + # Find the command that matches our updated command + matching_command = None + for db_command in db_commands: + if db_command["command_id"] == updated_command["command_id"]: + matching_command = db_command + break + + assert matching_command is not None, "Updated command not found in database" + assert ( + matching_command["combined_config"]["frequency"] == updated_frequency + ), "Command frequency mismatch in database" + + print("✅ Database consistency verified") + + # Step 8: Acknowledge the updated command + print("🔄 Step 8: Acknowledging updated command...") + final_ack_heartbeat = valid_heartbeat_data.copy() + final_ack_heartbeat["last_command_id"] = updated_command["command_id"] + + final_ack_response = send_heartbeat_and_verify( + heartbeat_url, + final_ack_heartbeat, + credentials, + postgres_connection, + expected_command_present=False, + ) + + assert final_ack_response is None, "Should not receive command after final acknowledgment" + + print( + f"✅ End-to-end frequency update test passed: Initial frequency {initial_frequency} → Updated frequency {updated_frequency}, command properly updated and delivered" + ) diff --git a/src/tests/requirements.txt b/src/tests/requirements.txt new file mode 100644 index 00000000..99d53360 --- /dev/null +++ b/src/tests/requirements.txt @@ -0,0 +1,3 @@ +pytest~=8.4.1 +requests~=2.32.4 +psycopg2-binary==2.9.9 \ No newline at end of file diff --git a/src/tests/run_tests.py b/src/tests/run_tests.py new file mode 100644 index 00000000..c91c0c3c --- /dev/null +++ b/src/tests/run_tests.py @@ -0,0 +1,227 @@ +import argparse +import os +import sys + +import pytest + +TEST_BACKEND = os.getenv("TEST_BACKEND", "True") +TEST_DB = os.getenv("TEST_DB", "True") + +TEST_MANAGED_BACKEND = os.getenv("TEST_MANAGED_BACKEND", "True") +TEST_MANAGED_DB = os.getenv("TEST_MANAGED_DB", "True") + +BACKEND_URL = os.getenv("BACKEND_URL", "https://localhost") +BACKEND_PORT = os.getenv("BACKEND_PORT", 4433) +BACKEND_USER = os.getenv("BACKEND_USER", "test-user") +BACKEND_PASSWORD = os.getenv("BACKEND_PASSWORD", "tester123") + +POSTGRES_USER = os.getenv("POSTGRES_USER", "performance_studio") +POSTGRES_PASSWORD = 
os.getenv("POSTGRES_PASSWORD", "performance_studio_password") +POSTGRES_DB = os.getenv("POSTGRES_DB", "performance_studio") +POSTGRES_PORT = os.getenv("POSTGRES_PORT", 5432) +POSTGRES_HOST = os.getenv("POSTGRES_HOST", "localhost") + +""" +This script is used to run tests with configurable parameters. +The test options can be set at the command line or through environment variables. +The fallback strategy for the configuration is: +1. Command line arguments +2. Environment variables +3. Default values defined in the script +""" + + +def parse_arguments(): + """Parse command line arguments for test configuration.""" + parser = argparse.ArgumentParser( + description="Run tests with configurable parameters", + formatter_class=argparse.ArgumentDefaultsHelpFormatter, + ) + + # Test flags + parser.add_argument( + "--test-backend", + default=TEST_BACKEND, + type=bool, + help="Enable or disable backend tests", + ) + + parser.add_argument("--test-db", default=TEST_DB, type=bool, help="Enable or disable database tests") + + parser.add_argument( + "--test-managed-backend", + default=TEST_MANAGED_BACKEND, + type=bool, + help="Enable or disable managed backend tests", + ) + + parser.add_argument( + "--test-managed-db", + default=TEST_MANAGED_DB, + type=bool, + help="Enable or disable managed database tests", + ) + + # Backend configuration + parser.add_argument("--backend-url", default=BACKEND_URL, help="Backend URL for testing") + + parser.add_argument( + "--backend-port", + type=int, + default=BACKEND_PORT, + help="Backend port for testing", + ) + + parser.add_argument( + "--backend-user", + default=BACKEND_USER, + help="Backend username for authentication", + ) + + parser.add_argument( + "--backend-password", + default=BACKEND_PASSWORD, + help="Backend password for authentication", + ) + + # PostgreSQL configuration + parser.add_argument("--postgres-user", default=POSTGRES_USER, help="PostgreSQL username") + + parser.add_argument("--postgres-password", default=POSTGRES_PASSWORD, help="PostgreSQL password") + + parser.add_argument("--postgres-db", default=POSTGRES_DB, help="PostgreSQL database name") + + parser.add_argument("--postgres-port", type=int, default=POSTGRES_PORT, help="PostgreSQL port") + + parser.add_argument("--postgres-host", default=POSTGRES_HOST, help="PostgreSQL host") + + # Pytest configuration + parser.add_argument("--test-path", default=".", help="Path to test files or directories") + + parser.add_argument( + "--verbose", + "-v", + action="count", + default=0, + help="Increase verbosity (can be used multiple times: -v, -vv, -vvv)", + ) + + parser.add_argument( + "--capture", + choices=["yes", "no", "all", "sys"], + default="no", + help="Capture output during test execution", + ) + + parser.add_argument("--markers", "-m", help="Run tests matching given mark expression") + + parser.add_argument("--keyword", "-k", help="Run tests matching given substring expression") + + parser.add_argument("--maxfail", type=int, help="Stop after first num failures or errors") + + parser.add_argument( + "--tb", + choices=["auto", "long", "short", "line", "native", "no"], + default="short", + help="Traceback print mode", + ) + + parser.add_argument("--junit-xml", help="Create junit-xml style report file at given path") + + parser.add_argument("--html", help="Create html report file at given path (requires pytest-html)") + + parser.add_argument("--extra-args", nargs="*", help="Additional arguments to pass to pytest") + + return parser.parse_args() + + +def build_pytest_args(args): + """Build pytest 
arguments from parsed command line arguments.""" + pytest_args = [] + + # Add test path + if args.test_path: + pytest_args.append(args.test_path) + + # Add verbosity + if args.verbose > 0: + pytest_args.append("-" + "v" * min(args.verbose, 3)) + + # Add capture mode + if args.capture == "no": + pytest_args.append("-s") + else: + pytest_args.extend(["--capture", args.capture]) + + # Add markers + if args.markers: + pytest_args.extend(["-m", args.markers]) + + # Add keyword filtering + if args.keyword: + pytest_args.extend(["-k", args.keyword]) + + # Add max failures + if args.maxfail is not None: + pytest_args.extend(["--maxfail", str(args.maxfail)]) + + # Add traceback mode + if args.tb: + pytest_args.extend(["--tb", args.tb]) + + # Add junit xml report + if args.junit_xml: + pytest_args.extend(["--junit-xml", args.junit_xml]) + + # Add html report + if args.html: + pytest_args.extend(["--html", args.html]) + + # Add any extra arguments + if args.extra_args: + pytest_args.extend(args.extra_args) + + # Add custom environment variables as pytest options + pytest_args.extend( + [ + f"--backend-url={args.backend_url}", + f"--backend-port={args.backend_port}", + f"--backend-user={args.backend_user}", + f"--backend-password={args.backend_password}", + f"--postgres-user={args.postgres_user}", + f"--postgres-password={args.postgres_password}", + f"--postgres-db={args.postgres_db}", + f"--postgres-port={args.postgres_port}", + f"--postgres-host={args.postgres_host}", + f"--test-backend={args.test_backend}", + f"--test-db={args.test_db}", + f"--test-managed-backend={args.test_managed_backend}", + f"--test-managed-db={args.test_managed_db}", + ] + ) + + return pytest_args + + +def main(): + """Main function to run tests with parsed arguments.""" + args = parse_arguments() + + print("🧪 Running tests with configuration:") + print("=" * 50) + + # Build pytest arguments + pytest_args = build_pytest_args(args) + + print(f"\n📋 Pytest command: pytest {' '.join(pytest_args)}") + print("=" * 50) + + # Run pytest + exit_code = pytest.main(pytest_args) + + # Exit with the same code as pytest + sys.exit(exit_code) + + +if __name__ == "__main__": + main() diff --git a/src/tests/unit/backend/test_metrics_command_completion.py b/src/tests/unit/backend/test_metrics_command_completion.py new file mode 100644 index 00000000..04e3d28d --- /dev/null +++ b/src/tests/unit/backend/test_metrics_command_completion.py @@ -0,0 +1,684 @@ +#!/usr/bin/env python3 +""" +Unit tests for the POST /api/metrics/command_completion endpoint. + +This module contains pytest-based unit tests that validate: +1. Valid command completion requests with all required fields +2. Invalid requests with missing required fields +3. Invalid requests with wrong data types +4. Edge cases and boundary conditions +5. Response structure validation +6. 
Command status updates and related request updates +""" + +import uuid +from typing import Any, Dict + +import pytest +import requests + + +@pytest.fixture +def command_completion_url(backend_base_url) -> str: + """Get the full URL for the command completion endpoint.""" + return f"{backend_base_url}/api/metrics/command_completion" + + +@pytest.fixture +def valid_completion_data() -> Dict[str, Any]: + """Provide valid command completion data for testing.""" + return { + "command_id": str(uuid.uuid4()), # Unique command ID + "hostname": "test-host", + "status": "completed", + "execution_time": 30, + "error_message": None, + "results_path": "/path/to/results", + } + + +class TestCommandCompletionEndpoint: + """Test class for the command completion endpoint.""" + + def test_valid_completion_request( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test a valid command completion request.""" + response = requests.post( + command_completion_url, + headers=credentials, + json=valid_completion_data, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert "success" in result, "Response should contain 'success' field" + assert "message" in result, "Response should contain 'message' field" + assert result["success"] is True, "Success should be True" + assert "Command completion recorded" in result["message"] + + def test_valid_failed_completion( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test a valid failed command completion request.""" + failed_completion = valid_completion_data.copy() + failed_completion.update( + { + "status": "failed", + "error_message": "Profiling process crashed", + "execution_time": 15, + "results_path": None, + } + ) + + response = requests.post( + command_completion_url, + headers=credentials, + json=failed_completion, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert result["success"] is True + assert "Command completion recorded" in result["message"] + + def test_missing_command_id( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing command_id field.""" + invalid_request = valid_completion_data.copy() + del invalid_request["command_id"] + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_missing_hostname( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing hostname field.""" + invalid_request = valid_completion_data.copy() + del invalid_request["hostname"] + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_missing_status( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing status field.""" + 
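+        # "status" is required by the completion payload, so omitting it is
+        # expected to be rejected by request validation (400/422 below).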
invalid_request = valid_completion_data.copy() + del invalid_request["status"] + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_empty_command_id( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with empty command_id.""" + invalid_request = valid_completion_data.copy() + invalid_request["command_id"] = "" + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_empty_hostname( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with empty hostname.""" + invalid_request = valid_completion_data.copy() + invalid_request["hostname"] = "" + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_status_value( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid status value.""" + invalid_request = valid_completion_data.copy() + invalid_request["status"] = "invalid_status" + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_execution_time_type( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid execution_time data type.""" + invalid_request = valid_completion_data.copy() + invalid_request["execution_time"] = "not_a_number" + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_negative_execution_time( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with negative execution_time value.""" + invalid_request = valid_completion_data.copy() + invalid_request["execution_time"] = -10 + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + # Negative execution time might be allowed or rejected depending on validation + assert response.status_code in [ + 200, + 400, + 422, + ], f"Unexpected status code: {response.status_code}" + + def test_zero_execution_time( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with zero execution_time value.""" + valid_request = valid_completion_data.copy() + valid_request["execution_time"] = 0 + + response = requests.post( + command_completion_url, + headers=credentials, + json=valid_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 
200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_none_values_for_optional_fields( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with None values for optional fields.""" + completion_data = valid_completion_data.copy() + completion_data.update({"execution_time": None, "error_message": None, "results_path": None}) + + response = requests.post( + command_completion_url, + headers=credentials, + json=completion_data, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_none_values_for_required_fields( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with None values for required fields.""" + invalid_request = valid_completion_data.copy() + invalid_request["command_id"] = None + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_large_execution_time( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with very large execution_time value.""" + large_time_request = valid_completion_data.copy() + large_time_request["execution_time"] = 86400 # 24 hours + + response = requests.post( + command_completion_url, + headers=credentials, + json=large_time_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_long_error_message( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with very long error message.""" + long_error_request = valid_completion_data.copy() + long_error_request.update({"status": "failed", "error_message": "A" * 1000}) # Very long error message + + response = requests.post( + command_completion_url, + headers=credentials, + json=long_error_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_malformed_json(self, command_completion_url: str, credentials: Dict[str, Any]): + """Test request with malformed JSON.""" + response = requests.post( + command_completion_url, + data="{'invalid': json}", # Invalid JSON + headers={**credentials, "Content-Type": "application/json"}, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_empty_request_body(self, command_completion_url: str, credentials: Dict[str, Any]): + """Test request with empty body.""" + response = requests.post( + command_completion_url, + headers=credentials, + json={}, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_additional_fields( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that additional fields are handled gracefully.""" + extended_request = valid_completion_data.copy() + extended_request["extra_field"] = "extra_value" + extended_request["metadata"] = {"key": "value"} + + response = 
requests.post( + command_completion_url, + headers=credentials, + json=extended_request, + timeout=10, + verify=False, + ) + + # Additional fields should be ignored, not cause errors + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_different_status_values( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test different valid status values.""" + valid_statuses = ["completed", "failed"] + + for status in valid_statuses: + status_request = valid_completion_data.copy() + status_request["status"] = status + status_request["command_id"] = str(uuid.uuid4()) # Unique command ID + + response = requests.post( + command_completion_url, + headers=credentials, + json=status_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Failed with status: {status}, code: {response.status_code}" + + def test_results_path_variations( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test different results path formats.""" + path_variations = [ + "/local/path/to/results", + "s3://bucket/key/results.tar.gz", + "gs://bucket/results/file.json", + "https://example.com/results", + "", # Empty string + ] + + for i, path in enumerate(path_variations): + path_request = valid_completion_data.copy() + path_request["results_path"] = path + path_request["command_id"] = str(uuid.uuid4()) # Unique command ID + + response = requests.post( + command_completion_url, + headers=credentials, + json=path_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Failed with path: {path}, code: {response.status_code}" + + def test_command_completion_without_optional_fields( + self, + command_completion_url: str, + credentials: Dict[str, Any], + ): + """Test minimal valid request with only required fields.""" + minimal_request = { + "command_id": str(uuid.uuid4()), # Unique command ID + "hostname": "test-minimal-host", + "status": "completed", + } + + response = requests.post( + command_completion_url, + headers=credentials, + json=minimal_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert result["success"] is True + + def test_multiple_completions_same_command( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test multiple completion reports for the same command.""" + # First completion + first_response = requests.post( + command_completion_url, + headers=credentials, + json=valid_completion_data, + timeout=10, + verify=False, + ) + + # Second completion (should still succeed) + second_completion = valid_completion_data.copy() + second_completion["execution_time"] = 45 # Different execution time + + second_response = requests.post( + command_completion_url, + headers=credentials, + json=second_completion, + timeout=10, + verify=False, + ) + + # Both should succeed (idempotent or update behavior) + assert first_response.status_code == 200 + assert second_response.status_code == 200 + + +class TestCommandCompletionResponseStructure: + """Test class for validating command completion response structure and content.""" + + def test_response_content_type( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that 
response has correct content type.""" + response = requests.post( + command_completion_url, + headers=credentials, + json=valid_completion_data, + timeout=10, + verify=False, + ) + + if response.status_code == 200: + assert "application/json" in response.headers.get("content-type", "") + + def test_response_structure_consistency( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that multiple completion requests return consistent response structure.""" + responses = [] + + for i in range(3): + completion_data = valid_completion_data.copy() + completion_data["command_id"] = str(uuid.uuid4()) # Unique command ID for each request + + response = requests.post( + command_completion_url, + headers=credentials, + json=completion_data, + timeout=10, + verify=False, + ) + + if response.status_code == 200: + responses.append(response.json()) + + # All successful responses should have the same structure + if len(responses) > 1: + first_keys = set(responses[0].keys()) + for response_data in responses[1:]: + assert set(response_data.keys()) == first_keys, "Response structure should be consistent" + + # All responses should contain required fields + for response_data in responses: + assert "success" in response_data, "All responses should contain 'success' field" + assert "message" in response_data, "All responses should contain 'message' field" + + def test_error_response_structure( + self, + command_completion_url: str, + credentials: Dict[str, Any], + ): + """Test error response structure for invalid requests.""" + invalid_request = { + "command_id": "", # Empty command ID should cause error + "hostname": "test-host", + "status": "completed", + } + + response = requests.post( + command_completion_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + # Should return error status + assert response.status_code in [400, 422, 500] + + def test_success_message_format( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that success message contains command ID.""" + response = requests.post( + command_completion_url, + headers=credentials, + json=valid_completion_data, + timeout=10, + verify=False, + ) + + if response.status_code == 200: + result = response.json() + assert result["success"] is True + assert valid_completion_data["command_id"] in result["message"] + + def test_completion_request_idempotency( + self, + command_completion_url: str, + valid_completion_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that completion requests are handled appropriately when repeated.""" + # Send initial completion + first_response = requests.post( + command_completion_url, + headers=credentials, + json=valid_completion_data, + timeout=10, + verify=False, + ) + + # Send identical completion + second_response = requests.post( + command_completion_url, + headers=credentials, + json=valid_completion_data, + timeout=10, + verify=False, + ) + + # Both should succeed (system should handle duplicates gracefully) + assert first_response.status_code == 200 + assert second_response.status_code == 200 + + if first_response.status_code == 200 and second_response.status_code == 200: + first_result = first_response.json() + second_result = second_response.json() + + # Both should indicate success + assert first_result["success"] is True + assert second_result["success"] is True diff --git 
a/src/tests/unit/backend/test_metrics_heartbeat.py b/src/tests/unit/backend/test_metrics_heartbeat.py new file mode 100644 index 00000000..1284f1b6 --- /dev/null +++ b/src/tests/unit/backend/test_metrics_heartbeat.py @@ -0,0 +1,638 @@ +#!/usr/bin/env python3 +""" +Unit tests for the POST /api/metrics/heartbeat endpoint. + +This module contains pytest-based unit tests that validate: +1. Valid heartbeat requests with all required fields +2. Invalid requests with missing required fields +3. Invalid requests with wrong data types +4. Edge cases and boundary conditions +5. Response structure validation +6. Command delivery through heartbeat responses +""" + +from datetime import datetime +from typing import Any, Dict + +import pytest +import requests + + +@pytest.fixture +def heartbeat_url(backend_base_url) -> str: + """Get the full URL for the heartbeat endpoint.""" + return f"{backend_base_url}/api/metrics/heartbeat" + + +@pytest.fixture +def valid_heartbeat_data() -> Dict[str, Any]: + """Provide valid heartbeat data for testing.""" + return { + "hostname": "test-host", + "ip_address": "127.0.0.1", + "service_name": "test-service", + "status": "active", + "timestamp": datetime.now().isoformat(), + "last_command_id": None, + } + + +class TestHeartbeatEndpoint: + """Test class for the heartbeat endpoint.""" + + def test_valid_heartbeat_request( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test a valid heartbeat request.""" + response = requests.post( + heartbeat_url, + headers=credentials, + json=valid_heartbeat_data, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert "message" in result, "Response should contain 'message' field" + + # Check for optional command fields + if "profiling_command" in result: + assert "command_id" in result, "Response with command should contain 'command_id' field" + + def test_heartbeat_with_last_command_id( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test heartbeat request with last_command_id.""" + heartbeat_data = valid_heartbeat_data.copy() + heartbeat_data["last_command_id"] = "test-command-123" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=heartbeat_data, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert "message" in result + + def test_missing_hostname( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing hostname field.""" + invalid_request = valid_heartbeat_data.copy() + del invalid_request["hostname"] + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_missing_ip_address( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing ip_address field.""" + invalid_request = valid_heartbeat_data.copy() + del invalid_request["ip_address"] + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 
422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_missing_service_name( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing service_name field.""" + invalid_request = valid_heartbeat_data.copy() + del invalid_request["service_name"] + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_missing_status( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing status field.""" + invalid_request = valid_heartbeat_data.copy() + del invalid_request["status"] + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + # Status might have a default value, so it could be valid + assert response.status_code in [ + 200, + 400, + 422, + ], f"Unexpected status code: {response.status_code}" + + def test_empty_hostname( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with empty hostname.""" + invalid_request = valid_heartbeat_data.copy() + invalid_request["hostname"] = "" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_empty_service_name( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with empty service_name.""" + invalid_request = valid_heartbeat_data.copy() + invalid_request["service_name"] = "" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_ip_address_format( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid IP address format.""" + invalid_request = valid_heartbeat_data.copy() + invalid_request["ip_address"] = "invalid.ip.address" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + # Depending on validation, this might be accepted or rejected + assert response.status_code in [ + 200, + 400, + 422, + ], f"Unexpected status code: {response.status_code}" + + def test_invalid_status_value( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid status value.""" + invalid_request = valid_heartbeat_data.copy() + invalid_request["status"] = "invalid_status" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + # Might be accepted or rejected depending on validation + assert response.status_code in [ + 200, + 400, + 422, + ], f"Unexpected status code: {response.status_code}" + + def test_none_values( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with None values for required fields.""" + invalid_request = 
valid_heartbeat_data.copy() + invalid_request["hostname"] = None + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_timestamp_format( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid timestamp format.""" + invalid_request = valid_heartbeat_data.copy() + invalid_request["timestamp"] = "invalid-timestamp" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + # Timestamp validation depends on implementation + assert response.status_code in [ + 200, + 400, + 422, + ], f"Unexpected status code: {response.status_code}" + + def test_missing_timestamp( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with missing timestamp (should use server time).""" + heartbeat_data = valid_heartbeat_data.copy() + del heartbeat_data["timestamp"] + + response = requests.post( + heartbeat_url, + headers=credentials, + json=heartbeat_data, + timeout=10, + verify=False, + ) + + # Missing timestamp should be handled gracefully + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_ipv6_address( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with IPv6 address.""" + ipv6_request = valid_heartbeat_data.copy() + ipv6_request["ip_address"] = "::1" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=ipv6_request, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_long_hostname( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with very long hostname.""" + long_hostname_request = valid_heartbeat_data.copy() + long_hostname_request["hostname"] = "a" * 255 # Very long hostname + + response = requests.post( + heartbeat_url, + headers=credentials, + json=long_hostname_request, + timeout=10, + verify=False, + ) + + # Should either accept or reject based on hostname length limits + assert response.status_code in [ + 200, + 400, + 422, + ], f"Unexpected status code: {response.status_code}" + + def test_malformed_json(self, heartbeat_url: str, credentials: Dict[str, Any]): + """Test request with malformed JSON.""" + response = requests.post( + heartbeat_url, + data="{'invalid': json}", # Invalid JSON + headers={**credentials, "Content-Type": "application/json"}, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_empty_request_body(self, heartbeat_url: str, credentials: Dict[str, Any]): + """Test request with empty body.""" + response = requests.post(heartbeat_url, headers=credentials, json={}, timeout=10, verify=False) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_additional_fields( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that additional fields are handled gracefully.""" + extended_request = valid_heartbeat_data.copy() + 
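+        # Simulate an agent sending fields the server does not know about; per the
+        # assertion below, unknown fields should be ignored rather than rejected.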
extended_request["extra_field"] = "extra_value" + extended_request["metadata"] = {"key": "value"} + + response = requests.post( + heartbeat_url, + headers=credentials, + json=extended_request, + timeout=10, + verify=False, + ) + + # Additional fields should be ignored, not cause errors + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + def test_status_variations( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test different valid status values.""" + valid_statuses = ["active", "inactive", "error", "pending"] + + for status in valid_statuses: + status_request = valid_heartbeat_data.copy() + status_request["status"] = status + status_request["hostname"] = f"test-host-{status}" # Unique hostname + + response = requests.post( + heartbeat_url, + headers=credentials, + json=status_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 200, + 400, + 422, + ], f"Failed with status: {status}, code: {response.status_code}" + + def test_multiple_heartbeats_same_host( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test multiple heartbeats from the same host.""" + for i in range(3): + heartbeat_data = valid_heartbeat_data.copy() + heartbeat_data["timestamp"] = datetime.now().isoformat() + + response = requests.post( + heartbeat_url, + headers=credentials, + json=heartbeat_data, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Heartbeat {i+1} failed: {response.status_code}: {response.text}" + + +class TestHeartbeatResponseStructure: + """Test class for validating heartbeat response structure and content.""" + + def test_response_content_type( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that response has correct content type.""" + response = requests.post( + heartbeat_url, + headers=credentials, + json=valid_heartbeat_data, + timeout=10, + verify=False, + ) + + if response.status_code == 200: + assert "application/json" in response.headers.get("content-type", "") + + def test_response_structure_consistency( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that multiple heartbeat requests return consistent response structure.""" + responses = [] + + for i in range(3): + heartbeat_data = valid_heartbeat_data.copy() + heartbeat_data["hostname"] = f"test-host-{i}" + + response = requests.post( + heartbeat_url, + headers=credentials, + json=heartbeat_data, + timeout=10, + verify=False, + ) + + if response.status_code == 200: + responses.append(response.json()) + + # All successful responses should have at least the message field + for response_data in responses: + assert "message" in response_data, "All responses should contain 'message' field" + + def test_command_response_structure( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test response structure when commands are present.""" + response = requests.post( + heartbeat_url, + headers=credentials, + json=valid_heartbeat_data, + timeout=10, + verify=False, + ) + + if response.status_code == 200: + result = response.json() + + assert "command_id" in result, "Response should contain 'command_id' field" + if result.get("command_id", None) is not None: + assert isinstance(result["command_id"], str), "command_id should be a string" + 
+ assert "profiling_command" in result, "Response should contain 'profiling_command' field" + if result.get("profiling_command", None) is not None: + assert isinstance(result["profiling_command"], dict), "profiling_command should be a dictionary" + + # Validate command structure + command = result["profiling_command"] + expected_fields = ["command_type"] # Minimum expected fields + for field in expected_fields: + if field in command: + assert command[field] is not None, f"Command field {field} should not be None" + + def test_heartbeat_idempotency( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test that heartbeat requests are idempotent.""" + # Send initial heartbeat + first_response = requests.post( + heartbeat_url, + headers=credentials, + json=valid_heartbeat_data, + timeout=10, + verify=False, + ) + + # Send identical heartbeat + second_response = requests.post( + heartbeat_url, + headers=credentials, + json=valid_heartbeat_data, + timeout=10, + verify=False, + ) + + assert first_response.status_code == second_response.status_code + + if first_response.status_code == 200 and second_response.status_code == 200: + first_result = first_response.json() + second_result = second_response.json() + + # Both should contain message field + assert "message" in first_result + assert "message" in second_result + + def test_heartbeat_timestamp_handling( + self, + heartbeat_url: str, + valid_heartbeat_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test various timestamp formats.""" + timestamp_formats = [ + datetime.now().isoformat(), + datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ"), + datetime.now().strftime("%Y-%m-%d %H:%M:%S"), + ] + + for timestamp in timestamp_formats: + heartbeat_data = valid_heartbeat_data.copy() + heartbeat_data["timestamp"] = timestamp + heartbeat_data["hostname"] = f"test-host-{timestamp}" # Unique hostname + + response = requests.post( + heartbeat_url, + headers=credentials, + json=heartbeat_data, + timeout=10, + verify=False, + ) + + # Should handle various timestamp formats gracefully + assert response.status_code in [ + 200, + 400, + 422, + ], f"Failed with timestamp format: {timestamp}" diff --git a/src/tests/unit/backend/test_metrics_profile_request.py b/src/tests/unit/backend/test_metrics_profile_request.py new file mode 100644 index 00000000..940a3671 --- /dev/null +++ b/src/tests/unit/backend/test_metrics_profile_request.py @@ -0,0 +1,585 @@ +#!/usr/bin/env python3 +""" +Unit tests for the POST /api/metrics/profile_request endpoint. + +This module contains pytest-based unit tests that validate: +1. Valid profiling requests with all required fields +2. Invalid requests with missing required fields +3. Invalid requests with wrong data types +4. Edge cases and boundary conditions +5. 
Response structure validation
+"""
+
+from typing import Any, Dict
+
+import pytest
+import requests
+
+
+@pytest.fixture
+def profile_request_url(backend_base_url) -> str:
+    """Get the full URL for the profile request endpoint."""
+    return f"{backend_base_url}/api/metrics/profile_request"
+
+
+@pytest.fixture
+def valid_request_data() -> Dict[str, Any]:
+    """Provide valid request data for testing."""
+    return {
+        "service_name": "test-service",
+        "request_type": "start",
+        "duration": 60,
+        "frequency": 11,
+        "profiling_mode": "cpu",
+        "target_hostnames": ["test-host"],
+        "additional_args": {"test": True},
+        "stop_level": "host",
+    }
+
+
+class TestProfileRequestEndpoint:
+    """Test class for the profile request endpoint."""
+
+    def test_valid_start_request(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test a valid start profiling request."""
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=valid_request_data,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert "message" in result, "Response should contain 'message' field"
+        assert "request_id" in result, "Response should contain 'request_id' field"
+        assert "command_ids" in result, "Response should contain 'command_ids' field"
+
+        # Validate that the identifiers are non-empty
+        assert isinstance(result["request_id"], str) and result["request_id"], "request_id should be a non-empty string"
+        assert (
+            isinstance(result["command_ids"], list) and result["command_ids"]
+        ), "command_ids should be a non-empty list"
+
+    def test_valid_stop_request(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test a valid stop profiling request."""
+        stop_request_data = valid_request_data.copy()
+        stop_request_data["request_type"] = "stop"
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=stop_request_data,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert "message" in result
+        assert "request_id" in result
+        assert "command_ids" in result
+
+    def test_missing_service_name(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with missing service_name field."""
+        invalid_request = valid_request_data.copy()
+        del invalid_request["service_name"]
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=invalid_request,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code in [
+            400,
+            422,
+        ], f"Expected 400 or 422, got {response.status_code}"
+
+    def test_missing_request_type(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with missing request_type field."""
+        invalid_request = valid_request_data.copy()
+        del invalid_request["request_type"]
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=invalid_request,
+            timeout=10,
+            verify=False,
+        )
+
+        # Unlike the other required fields, a missing request_type is tolerated here;
+        # it appears to fall back to a server-side default, so the request succeeds
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}"
+
+    def test_missing_target_hostnames(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with missing
target_hostnames field.""" + invalid_request = valid_request_data.copy() + del invalid_request["target_hostnames"] + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_empty_target_hostnames( + self, + profile_request_url: str, + valid_request_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with empty target_hostnames list.""" + invalid_request = valid_request_data.copy() + invalid_request["target_hostnames"] = [] + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_request_type( + self, + profile_request_url: str, + valid_request_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid request_type value.""" + invalid_request = valid_request_data.copy() + invalid_request["request_type"] = "invalid_type" + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_duration_type( + self, + profile_request_url: str, + valid_request_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid duration data type.""" + invalid_request = valid_request_data.copy() + invalid_request["duration"] = "not_a_number" + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_frequency_type( + self, + profile_request_url: str, + valid_request_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with invalid frequency data type.""" + invalid_request = valid_request_data.copy() + invalid_request["frequency"] = "not_a_number" + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_negative_duration( + self, + profile_request_url: str, + valid_request_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with negative duration value.""" + invalid_request = valid_request_data.copy() + invalid_request["duration"] = -1 + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_zero_duration( + self, + profile_request_url: str, + valid_request_data: Dict[str, Any], + credentials: Dict[str, Any], + ): + """Test request with zero duration value.""" + invalid_request = valid_request_data.copy() + invalid_request["duration"] = 0 + + response = requests.post( + profile_request_url, + headers=credentials, + json=invalid_request, + timeout=10, + verify=False, + ) + + assert response.status_code in [ + 400, + 422, + ], f"Expected 400 or 422, got {response.status_code}" + + def test_invalid_profiling_mode( 
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with invalid profiling_mode value."""
+        invalid_request = valid_request_data.copy()
+        invalid_request["profiling_mode"] = "invalid_mode"
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=invalid_request,
+            timeout=10,
+            verify=False,
+        )
+
+        # This might be valid depending on backend implementation
+        # Adjust assertion based on actual backend behavior
+        assert response.status_code in [
+            200,
+            400,
+            422,
+        ], f"Unexpected status code: {response.status_code}"
+
+    def test_multiple_target_hostnames(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with multiple target hostnames."""
+        multi_host_request = valid_request_data.copy()
+        multi_host_request["target_hostnames"] = ["host1", "host2", "host3"]
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=multi_host_request,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert "request_id" in result
+        assert "command_ids" in result
+
+    def test_large_duration_value(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with large duration value."""
+        large_duration_request = valid_request_data.copy()
+        large_duration_request["duration"] = 86400  # 24 hours
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=large_duration_request,
+            timeout=10,
+            verify=False,
+        )
+
+        # Should either accept or reject based on business rules
+        assert response.status_code in [
+            200,
+            400,
+            422,
+        ], f"Unexpected status code: {response.status_code}"
+
+    def test_high_frequency_value(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with high frequency value."""
+        high_freq_request = valid_request_data.copy()
+        high_freq_request["frequency"] = 1000
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=high_freq_request,
+            timeout=10,
+            verify=False,
+        )
+
+        # Should either accept or reject based on business rules
+        assert response.status_code in [
+            200,
+            400,
+            422,
+        ], f"Unexpected status code: {response.status_code}"
+
+    def test_empty_service_name(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with empty service_name."""
+        invalid_request = valid_request_data.copy()
+        invalid_request["service_name"] = ""
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=invalid_request,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code in [
+            400,
+            422,
+        ], f"Expected 400 or 422, got {response.status_code}"
+
+    def test_none_values(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test request with None values for required fields."""
+        invalid_request = valid_request_data.copy()
+        invalid_request["service_name"] = None
+
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=invalid_request,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code in [
+            400,
+            422,
+        ], f"Expected 400 or 422, got {response.status_code}"
+
+    def
test_malformed_json(self, profile_request_url: str, credentials: Dict[str, Any]):
+        """Test request with malformed JSON."""
+        response = requests.post(
+            profile_request_url,
+            data="{'invalid': json}",  # Invalid JSON
+            headers={**credentials, "Content-Type": "application/json"},
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code in [
+            400,
+            422,
+        ], f"Expected 400 or 422, got {response.status_code}"
+
+    def test_empty_request_body(self, profile_request_url: str, credentials: Dict[str, Any]):
+        """Test request with empty body."""
+        response = requests.post(profile_request_url, headers=credentials, json={}, timeout=10, verify=False)
+
+        assert response.status_code in [
+            400,
+            422,
+        ], f"Expected 400 or 422, got {response.status_code}"
+
+    def test_additional_args_structure(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test that additional_args accepts various data structures."""
+        test_cases = [
+            {"key": "value"},
+            {"nested": {"key": "value"}},
+            {"list": [1, 2, 3]},
+            {"boolean": True},
+            {"number": 42},
+        ]
+
+        for additional_args in test_cases:
+            request_data = valid_request_data.copy()
+            request_data["additional_args"] = additional_args
+
+            response = requests.post(
+                profile_request_url,
+                headers=credentials,
+                json=request_data,
+                timeout=10,
+                verify=False,
+            )
+
+            assert (
+                response.status_code == 200
+            ), f"Failed with additional_args: {additional_args}, status: {response.status_code}"
+
+
+class TestProfileRequestResponseStructure:
+    """Test class for validating response structure and content."""
+
+    def test_response_content_type(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test that response has correct content type."""
+        response = requests.post(
+            profile_request_url,
+            headers=credentials,
+            json=valid_request_data,
+            timeout=10,
+            verify=False,
+        )
+
+        if response.status_code == 200:
+            assert "application/json" in response.headers.get("content-type", "")
+
+    def test_response_structure_consistency(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test that multiple requests return consistent response structure."""
+        responses = []
+
+        for i in range(3):
+            request_data = valid_request_data.copy()
+            request_data["service_name"] = f"test-service-{i}"
+
+            response = requests.post(
+                profile_request_url,
+                headers=credentials,
+                json=request_data,
+                timeout=10,
+                verify=False,
+            )
+
+            if response.status_code == 200:
+                responses.append(response.json())
+
+        # All successful responses should have the same structure.
+        # Note: responses holds already-parsed JSON dicts, so compare keys directly.
+        if len(responses) > 1:
+            first_keys = set(responses[0].keys())
+            for parsed in responses[1:]:
+                assert set(parsed.keys()) == first_keys, "Response structure should be consistent"
+
+    def test_unique_identifiers(
+        self,
+        profile_request_url: str,
+        valid_request_data: Dict[str, Any],
+        credentials: Dict[str, Any],
+    ):
+        """Test that each request generates unique identifiers."""
+        request_id_set = set()
+        command_id_set = set()
+
+        for i in range(5):
+            request_data = valid_request_data.copy()
+            request_data["service_name"] = f"test-service-{i}"
+
+            response = requests.post(
+                profile_request_url,
+                headers=credentials,
+                json=request_data,
+                timeout=10,
+                verify=False,
+            )
+
+            if response.status_code == 200:
+                result = response.json()
+                request_id = result.get("request_id")
+                command_ids = result.get("command_ids")
+
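+                # The backend is assumed to return one request_id per request and a
+                # command_ids list (one entry per target host); both are checked for
+                # uniqueness across all five submissions in this loop.
+                assert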
request_id not in request_id_set, "request_id should be unique" + request_id_set.add(request_id) + + for command_id in command_ids: + assert command_id not in command_id_set, "command_id should be unique" + command_id_set.add(command_id) diff --git a/src/tests/unit/backend/test_profiling_host_status.py b/src/tests/unit/backend/test_profiling_host_status.py new file mode 100644 index 00000000..93aca19a --- /dev/null +++ b/src/tests/unit/backend/test_profiling_host_status.py @@ -0,0 +1,638 @@ +#!/usr/bin/env python3 +""" +Unit tests for the GET /api/metrics/profiling/host_status endpoint. + +This module contains pytest-based unit tests that validate: +1. N+1 query optimization (single query with LEFT JOIN) +2. Filter functionality for all parameters +3. Combined filters (AND logic) +4. Response structure validation +5. Performance improvements +6. NULL command handling (stopped status) +""" + +from typing import Any, Dict, List + +import pytest +import requests + + +@pytest.fixture +def profiling_host_status_url(backend_base_url) -> str: + """Get the full URL for the profiling host status endpoint.""" + return f"{backend_base_url}/api/metrics/profiling/host_status" + + +class TestProfilingHostStatusEndpoint: + """Test class for the profiling host status endpoint.""" + + def test_get_all_hosts_no_filters( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 1: Get all hosts without any filters + + Expected: Returns all active hosts with heartbeats + """ + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # Validate response structure + if len(result) > 0: + host = result[0] + required_fields = [ + "id", "service_name", "hostname", "ip_address", + "pids", "command_type", "profiling_status", "heartbeat_timestamp" + ] + for field in required_fields: + assert field in host, f"Response should contain '{field}' field" + + def test_filter_by_service_name( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 2: Filter by service name + + Expected: Returns only hosts matching service_name + """ + response = requests.get( + f"{profiling_host_status_url}?service_name=devapp", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # All returned hosts should match the filter + for host in result: + assert "devapp" in host["service_name"].lower(), f"Host service_name should contain 'devapp'" + + def test_filter_by_hostname_partial( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 3: Filter by hostname (partial match) + + Expected: Returns hosts with hostnames containing the search term + """ + response = requests.get( + f"{profiling_host_status_url}?hostname=restricted", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # All returned hosts should match the filter + for host in result: + assert "restricted" in host["hostname"].lower(), 
f"Host hostname should contain 'restricted'" + + def test_filter_by_ip_address_prefix( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 4: Filter by IP address prefix (subnet filtering) + + Expected: Returns hosts with IPs matching the prefix + Note: Tests inet::text casting for PostgreSQL compatibility + """ + response = requests.get( + f"{profiling_host_status_url}?ip_address=10.9", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # All returned hosts should match the filter + for host in result: + assert host["ip_address"].startswith("10.9"), f"Host IP should start with '10.9'" + + def test_filter_by_profiling_status_sent( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 5: Filter by profiling status (sent) + + Expected: Returns only hosts with sent commands + """ + response = requests.get( + f"{profiling_host_status_url}?profiling_status=sent", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # All returned hosts should have status 'sent' + for host in result: + assert host["profiling_status"] == "sent", f"Host status should be 'sent'" + + def test_filter_by_profiling_status_stopped( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 6: Filter by profiling status (stopped) + + Expected: Returns hosts with no active commands (NULL → stopped) + Note: Tests NULL handling in optimized query + """ + response = requests.get( + f"{profiling_host_status_url}?profiling_status=stopped", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # All returned hosts should have status 'stopped' and command_type 'N/A' + for host in result: + assert host["profiling_status"] == "stopped", f"Host status should be 'stopped'" + assert host["command_type"] == "N/A", f"Stopped hosts should have command_type 'N/A'" + + def test_filter_by_command_type( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 7: Filter by command type + + Expected: Returns only hosts with specified command type + """ + response = requests.get( + f"{profiling_host_status_url}?command_type=start", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + # All returned hosts should have command_type 'start' + for host in result: + assert host["command_type"] == "start", f"Host command_type should be 'start'" + + def test_combined_filters_service_and_hostname( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 8: Combined filters - service name + hostname + + Expected: Returns hosts matching BOTH filters (AND logic) + """ + response = requests.get( + f"{profiling_host_status_url}?service_name=devapp&hostname=restricted", + headers=credentials, + 
timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert isinstance(result, list), "Response should be a list"
+
+        # All returned hosts should match both filters
+        for host in result:
+            assert "devapp" in host["service_name"].lower(), f"Should match service filter"
+            assert "restricted" in host["hostname"].lower(), f"Should match hostname filter"
+
+    def test_combined_filters_service_ip_status(
+        self,
+        profiling_host_status_url: str,
+        credentials: Dict[str, Any],
+    ):
+        """
+        TEST 9: Combined filters - service + IP + status
+
+        Expected: Returns hosts matching ALL THREE filters (AND logic)
+        """
+        response = requests.get(
+            f"{profiling_host_status_url}?service_name=devapp&ip_address=10.9&profiling_status=sent",
+            headers=credentials,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert isinstance(result, list), "Response should be a list"
+
+        # All returned hosts should match all filters
+        for host in result:
+            assert "devapp" in host["service_name"].lower(), f"Should match service filter"
+            assert host["ip_address"].startswith("10.9"), f"Should match IP filter"
+            assert host["profiling_status"] == "sent", f"Should match status filter"
+
+    def test_exact_match_mode(
+        self,
+        profiling_host_status_url: str,
+        credentials: Dict[str, Any],
+    ):
+        """
+        TEST 10: Exact match mode
+
+        Expected: Returns only exact matches when exact_match=true
+        """
+        response = requests.get(
+            f"{profiling_host_status_url}?hostname=devrestricted-achatharajupalli&exact_match=true",
+            headers=credentials,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert isinstance(result, list), "Response should be a list"
+
+        # Should return exact match only
+        for host in result:
+            assert host["hostname"] == "devrestricted-achatharajupalli", f"Should be exact match"
+
+    def test_multiple_service_names(
+        self,
+        profiling_host_status_url: str,
+        credentials: Dict[str, Any],
+    ):
+        """
+        TEST 11: Multiple service names (OR logic)
+
+        Expected: Returns hosts matching ANY of the service names
+        """
+        response = requests.get(
+            f"{profiling_host_status_url}?service_name=devapp&service_name=web-service",
+            headers=credentials,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert isinstance(result, list), "Response should be a list"
+
+        # Hosts should match at least one of the service names
+        for host in result:
+            service = host["service_name"].lower()
+            assert "devapp" in service or "web-service" in service, \
+                f"Should match one of the service names"
+
+    def test_no_results_for_invalid_filter(
+        self,
+        profiling_host_status_url: str,
+        credentials: Dict[str, Any],
+    ):
+        """
+        TEST 12: No results for invalid filter
+
+        Expected: Returns empty array for non-existent filter value
+        """
+        response = requests.get(
+            f"{profiling_host_status_url}?service_name=nonexistent-service-12345",
+            headers=credentials,
+            timeout=10,
+            verify=False,
+        )
+
+        assert response.status_code == 200, f"Expected 200, got {response.status_code}: {response.text}"
+
+        result = response.json()
+        assert isinstance(result, list), "Response should be a list"
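+        # A filter that matches nothing should still yield HTTP 200 with an empty
+        # list (not an error status); the assertion below pins that behavior down.
+        assert len(result) ==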
0, "Should return empty array for non-existent filter" + + +class TestProfilingHostStatusPerformance: + """Test class for performance validation.""" + + def test_response_time_acceptable( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 13: Response time is acceptable + + Expected: Response time < 100ms (even with filters) + Note: With N+1 problem, this would be much slower + """ + import time + + start = time.time() + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + elapsed = (time.time() - start) * 1000 # Convert to ms + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + assert elapsed < 100, f"Response time should be < 100ms, got {elapsed:.2f}ms" + + def test_response_time_with_filters( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 14: Response time with multiple filters + + Expected: Response time < 100ms even with multiple filters + Note: Database-side filtering ensures constant performance + """ + import time + + start = time.time() + response = requests.get( + f"{profiling_host_status_url}?service_name=devapp&hostname=restricted&profiling_status=sent", + headers=credentials, + timeout=10, + verify=False, + ) + elapsed = (time.time() - start) * 1000 + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + assert elapsed < 100, f"Response time should be < 100ms with filters, got {elapsed:.2f}ms" + + +class TestProfilingHostStatusDataValidation: + """Test class for data validation.""" + + def test_pids_array_structure( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 15: PIDs array structure + + Expected: pids field is always an array (may be empty) + """ + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + result = response.json() + for host in result: + assert isinstance(host["pids"], list), f"pids should be an array" + # All elements should be integers + for pid in host["pids"]: + assert isinstance(pid, int), f"PIDs should be integers" + + def test_heartbeat_timestamp_format( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 16: Heartbeat timestamp format + + Expected: heartbeat_timestamp is valid ISO 8601 datetime + """ + from datetime import datetime + + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + result = response.json() + for host in result: + timestamp = host["heartbeat_timestamp"] + # Should be parseable as ISO 8601 + try: + datetime.fromisoformat(timestamp.replace('Z', '+00:00')) + except ValueError: + pytest.fail(f"Invalid timestamp format: {timestamp}") + + def test_profiling_status_valid_values( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 17: Profiling status contains only valid values + + Expected: profiling_status is one of: pending, sent, completed, failed, stopped + """ + valid_statuses = {"pending", "sent", "completed", "failed", "stopped"} + + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got 
{response.status_code}" + + result = response.json() + for host in result: + status = host["profiling_status"] + assert status in valid_statuses, \ + f"Invalid status '{status}'. Must be one of: {valid_statuses}" + + def test_command_type_valid_values( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 18: Command type contains only valid values + + Expected: command_type is one of: start, stop, N/A + """ + valid_command_types = {"start", "stop", "N/A"} + + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + result = response.json() + for host in result: + cmd_type = host["command_type"] + assert cmd_type in valid_command_types, \ + f"Invalid command_type '{cmd_type}'. Must be one of: {valid_command_types}" + + def test_stopped_hosts_have_na_command( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 19: Stopped hosts have N/A command type + + Expected: All hosts with status='stopped' have command_type='N/A' + Note: Tests NULL command handling + """ + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + result = response.json() + for host in result: + if host["profiling_status"] == "stopped": + assert host["command_type"] == "N/A", \ + f"Stopped hosts should have command_type='N/A'" + + def test_active_hosts_have_valid_command( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 20: Active hosts have valid command type + + Expected: Hosts with status != 'stopped' have command_type = 'start' or 'stop' + """ + response = requests.get( + profiling_host_status_url, + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + result = response.json() + for host in result: + if host["profiling_status"] != "stopped": + assert host["command_type"] in ["start", "stop"], \ + f"Active hosts should have valid command_type" + + +class TestProfilingHostStatusEdgeCases: + """Test class for edge cases.""" + + def test_empty_filter_value( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 21: Empty filter value + + Expected: Empty filter value returns all results (ignored) + """ + response = requests.get( + f"{profiling_host_status_url}?service_name=", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + result = response.json() + assert isinstance(result, list), "Response should be a list" + + def test_case_sensitivity( + self, + profiling_host_status_url: str, + credentials: Dict[str, Any], + ): + """ + TEST 22: Case insensitivity + + Expected: Filters are case-insensitive (LIKE with ILIKE behavior) + """ + # Test with lowercase + response_lower = requests.get( + f"{profiling_host_status_url}?service_name=devapp", + headers=credentials, + timeout=10, + verify=False, + ) + + # Test with uppercase + response_upper = requests.get( + f"{profiling_host_status_url}?service_name=DEVAPP", + headers=credentials, + timeout=10, + verify=False, + ) + + assert response_lower.status_code == 200 + assert response_upper.status_code == 200 + + result_lower = response_lower.json() + result_upper = 
response_upper.json()
+
+        # Both should return the same results
+        assert len(result_lower) == len(result_upper), \
+            "Case variations should return same results"
+
+
+
+
+
+
+
diff --git a/test/.env b/test/.env
new file mode 100644
index 00000000..a4b3ad7f
--- /dev/null
+++ b/test/.env
@@ -0,0 +1,54 @@
+COMMON_LOGS_DIR=/logs
+# Will be used in agent installation templates
+DOMAIN=http://localhost
+
+# AWS:
+
+AWS_REGION=us-east-1
+
+# AWS credentials, can be empty if running on EC2 with IAM role
+AWS_ACCESS_KEY_ID=
+AWS_SECRET_ACCESS_KEY=
+AWS_SESSION_TOKEN=
+
+# postgres:
+POSTGRES_USER=performance_studio
+POSTGRES_PASSWORD=performance_studio_password
+POSTGRES_DB=performance_studio
+POSTGRES_PORT=5432
+# docker-compose service name, can be changed to remote host
+POSTGRES_HOST=db_postgres
+
+# clickhouse:
+
+CLICKHOUSE_USER=dbuser
+CLICKHOUSE_PASSWORD=simplePassword
+# docker-compose service name, can be changed to remote host
+CLICKHOUSE_HOST=db_clickhouse
+
+# ch-rest-service:
+REST_USERNAME=user
+REST_PASSWORD=pass
+
+# webapp:
+
+BUCKET_NAME=performance_studio_bucket
+SQS_INDEXER_QUEUE_URL=performance_studio_queue
+WEBAPP_APP_LOG_FILE_PATH="webapp.log"
+
+# agents-logs:
+
+AGENTS_LOGS_APP_LOG_FILE_PATH="${COMMON_LOGS_DIR}/agents-logs-app.log"
+AGENTS_LOGS_LOG_FILE_PATH="${COMMON_LOGS_DIR}/agents-logs.log"
+
+# gprofile-tester:
+
+TEST_BACKEND=True
+TEST_DB=True
+TEST_MANAGED_BACKEND=True
+TEST_MANAGED_DB=True
+# docker-compose service name, can be changed to remote host
+BACKEND_URL=https://nginx-load-balancer
+BACKEND_PORT=443
+BACKEND_USER=test-user
+BACKEND_PASSWORD=tester123
\ No newline at end of file
diff --git a/test/README.md b/test/README.md
new file mode 100644
index 00000000..33363de1
--- /dev/null
+++ b/test/README.md
@@ -0,0 +1,80 @@
+# How to set up the test environment
+This README aims to guide the user on how to run tests against **Gprofiler Performance Studio** services.
+
+The tests that can be executed include:
+* Unit;
+* Integration;
+* E2E (end-to-end);
+
+The tested services can either:
+* Run locally on the host machine, and **be managed** by the test environment;
+* Run locally on the host machine, and **not be managed** by the test environment;
+* Run remotely on a different machine;
+
+By configuring the files in this folder, **the user should be able to run tests for any combination of the above options for all services that compose the application**.
+
+## Requirements
+* Docker;
+* Docker Compose;
+
+If running services locally and managed by the test environment:
+* You should be able to run **Gprofiler Performance Studio** locally: [tutorial here](../README.md#usage);
+
+## Configuring the test environment
+This configuration determines:
+* Which tests will be run;
+* Which services will be run and managed by the test environment;
+* Addresses, credentials, and fine-grained options for tests and services;
+
+Overview of configuration files:
+* [.env](./.env): This file configures all the **environment variables** that will be passed to the tests and services managed by the test environment (see the example after this list). The user can modify this file to choose:
+
+    * Addresses of services used by tests;
+    * Credentials used by tests to access services;
+    * Test options;
+    * Service-specific configuration (for services managed by the test environment);
+
+* [docker-compose.yml](./docker-compose.yml): This file configures the containers that will be managed by the test environment. The user can modify this file to choose:
+
+    * Tests to be run;
+    * Ports to expose to host for each service managed by the test environment (rarely needed);
+    * Network options for the test container (to run the test container in "host" network mode in case there are local gprofiler services not managed by the test environment);
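+
+For example, to run the tests against an already-deployed stack instead of locally managed containers, the tester entries in `.env` might look like the sketch below (the host is a placeholder, and the `TEST_MANAGED_*` flags are assumed to control whether those services are managed by this environment):
+```sh
+TEST_MANAGED_BACKEND=False
+TEST_MANAGED_DB=False
+BACKEND_URL=https://my-deployed-host.example.com
+BACKEND_PORT=443
+BACKEND_USER=test-user
+BACKEND_PASSWORD=<your-password>
+```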
+
+
+## Pre-deployment tests
+**Description**: This test case aims to spin up local containers for all services and run tests against them.
+
+**Use case**: Pre-deployment testing.
+
+Start all containers, then attach to the test container
+```sh
+docker compose --profile with-all up -d --build && docker attach gprofiler-tester
+```
+Stop all containers and remove persistent volumes
+```sh
+docker compose --profile with-all down -v
+```
+
+## Post-deployment tests
+**Description**: This test case aims to spin up only the test container and run tests against remote services.
+
+**Use case**: Post-deployment testing.
+
+Start the test container, then attach to it
+```sh
+docker compose up -d --build && docker attach gprofiler-tester
+```
+Stop the test container and remove persistent volumes
+```sh
+docker compose down -v
+```
\ No newline at end of file
diff --git a/test/docker-compose.yml b/test/docker-compose.yml
new file mode 100644
index 00000000..96c4a08b
--- /dev/null
+++ b/test/docker-compose.yml
@@ -0,0 +1,262 @@
+#
+# Copyright (C) 2023 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# + +services: + # --- + db_clickhouse: + image: clickhouse/clickhouse-server:22.8 + profiles: ["with-clickhouse", "with-all"] + container_name: gprofiler-ps-clickhouse + restart: always + # Uncomment the following lines to expose ClickHouse ports + # ports: + # - "8123:8123" + # - "9000:9000" + # - "9009:9009" + ulimits: + nofile: + soft: 262144 + hard: 262144 + environment: + - CLICKHOUSE_USER=$CLICKHOUSE_USER + - CLICKHOUSE_PASSWORD=$CLICKHOUSE_PASSWORD + volumes: + - ../src/gprofiler_indexer/sql/create_ch_schema.sql:/docker-entrypoint-initdb.d/create_schema.sql + - logs:/var/log/clickhouse-server/ + - db_clickhouse:/var/lib/clickhouse/ + + # --- + db_postgres: + image: postgres:15.1 + profiles: ["with-postgres", "with-all"] + container_name: gprofiler-ps-postgres + restart: always + # Uncomment the following lines to expose Postgres ports + # ports: + # - "5432:5432" + environment: + - POSTGRES_USER=$POSTGRES_USER + - POSTGRES_PASSWORD=$POSTGRES_PASSWORD + - POSTGRES_DB=$POSTGRES_DB + volumes: + - db_postgres:/var/lib/postgresql/data + - ../scripts/setup/postgres/gprofiler_recreate.sql:/docker-entrypoint-initdb.d/create_scheme.sql + + # --- + webapp: + build: + context: ../src + dockerfile: gprofiler/Dockerfile + profiles: ["with-webapp", "with-all"] + container_name: gprofiler-ps-webapp + restart: always + # Uncomment the following lines to expose webapp ports + # ports: + # - "80:80" + environment: + - BUCKET_NAME=$BUCKET_NAME + - QUERY_API_BASE_URL=https://ch-rest-service:4433 + - REST_VERIFY_TLS=FALSE + - REST_USERNAME=$REST_USERNAME + - REST_PASSWORD=$REST_PASSWORD + - SQS_INDEXER_QUEUE_URL=$SQS_INDEXER_QUEUE_URL + - GPROFILER_POSTGRES_DB_NAME=$POSTGRES_DB + - GPROFILER_POSTGRES_PORT=$POSTGRES_PORT + - GPROFILER_POSTGRES_HOST=$POSTGRES_HOST + - GPROFILER_POSTGRES_USERNAME=$POSTGRES_USER + - GPROFILER_POSTGRES_PASSWORD=$POSTGRES_PASSWORD + - APP_LOG_FILE_PATH=$WEBAPP_APP_LOG_FILE_PATH + - APP_LOG_LEVEL=INFO + - AWS_METADATA_SERVICE_NUM_ATTEMPTS=100 + - REDIRECT_DOMAIN=$DOMAIN + - SQS_ENDPOINT_URL=https://sqs.${AWS_REGION}.amazonaws.com + - AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID + - AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY + - AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN + - AWS_DEFAULT_REGION=$AWS_REGION + + # --- + ch-rest-service: + build: + context: ../src/gprofiler_flamedb_rest + dockerfile: Dockerfile + profiles: ["with-ch-rest-service", "with-all"] + container_name: gprofiler-ps-ch-rest-service + restart: always + # Uncomment the following lines to expose ClickHouse REST service ports + # ports: + # - "8080:8080" + environment: + - CLICKHOUSE_ADDR=${CLICKHOUSE_HOST}:9000?username=${CLICKHOUSE_USER}&password=${CLICKHOUSE_PASSWORD} + - CLICKHOUSE_STACKS_TABLE=flamedb.samples + - CLICKHOUSE_METRICS_TABLE=flamedb.metrics + - AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID + - AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY + - AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN + - CERT_FILE_PATH=/tls/ch_rest_cert.pem + - KEY_FILE_PATH=/tls/ch_rest_key.pem + - BASIC_AUTH_CREDENTIALS=${REST_USERNAME}:${REST_PASSWORD} + volumes: + - "../deploy/tls:/tls" + healthcheck: + test: echo 'SELECT database,name from system.tables' | curl -s 'http://$CLICKHOUSE_USER:$CLICKHOUSE_PASSWORD@$CLICKHOUSE_HOST:8123/?query=' --data-binary @- | grep -q "samples" + interval: 10s + retries: 10 + start_period: 20s + + + # --- + agents-logs-backend: + user: "888:888" + build: + context: ../src + dockerfile: gprofiler_logging/Dockerfile + profiles: ["with-agents-logs-backend", "with-all"] + container_name: gprofiler-ps-agents-logs-backend 
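+    # Note: this service runs as the fixed non-root uid/gid 888:888 (the same
+    # identity as periodic-tasks); both services mount the shared "logs" volume.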
+    restart: always
+    # Uncomment the following lines to expose agents logs backend ports
+    # ports:
+    # - "80:80"
+    environment:
+      - APP_LOG_FILE_PATH=$AGENTS_LOGS_APP_LOG_FILE_PATH
+      - ENV=open
+      - LOG_FILE_PATH=$AGENTS_LOGS_LOG_FILE_PATH
+      - GPROFILER_POSTGRES_DB_NAME=$POSTGRES_DB
+      - GPROFILER_POSTGRES_PORT=$POSTGRES_PORT
+      - GPROFILER_POSTGRES_HOST=$POSTGRES_HOST
+      - GPROFILER_POSTGRES_USERNAME=$POSTGRES_USER
+      - GPROFILER_POSTGRES_PASSWORD=$POSTGRES_PASSWORD
+    volumes:
+      - "logs:${COMMON_LOGS_DIR}"
+
+  # ---
+  periodic-tasks:
+    user: "888:888"
+    build:
+      context: ../deploy/periodic_tasks
+      dockerfile: Dockerfile
+    profiles: ["with-periodic-tasks", "with-all"]
+    container_name: gprofiler-ps-periodic-tasks
+    restart: always
+    environment:
+      - PGHOST=$POSTGRES_HOST
+      - PGPORT=$POSTGRES_PORT
+      - PGUSER=$POSTGRES_USER
+      - PGPASSWORD=$POSTGRES_PASSWORD
+      - PGDATABASE=$POSTGRES_DB
+      - LOGROTATE_PATTERN=${COMMON_LOGS_DIR}/*.log
+      - LOGROTATE_SIZE=15M
+    volumes:
+      - "logs:${COMMON_LOGS_DIR}"
+
+  # ---
+  ch-indexer:
+    build:
+      context: ../src/gprofiler_indexer
+      dockerfile: Dockerfile
+    profiles: ["with-ch-indexer", "with-all"]
+    container_name: gprofiler-ps-ch-indexer
+    restart: always
+    environment:
+      - SQS_QUEUE_URL=$SQS_INDEXER_QUEUE_URL
+      - AWS_REGION=$AWS_REGION
+      - S3_BUCKET=$BUCKET_NAME
+      - CLICKHOUSE_ADDR=$CLICKHOUSE_HOST:9000
+      - CLICKHOUSE_USER=${CLICKHOUSE_USER}
+      - CLICKHOUSE_USE_TLS=false
+      - CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD}
+      - CONCURRENCY=8
+      - CLICKHOUSE_STACKS_TABLE=flamedb.samples
+      - CLICKHOUSE_METRICS_TABLE=flamedb.metrics
+      - CLICKHOUSE_STACKS_BATCH_SIZE=100000
+      - CLICKHOUSE_METRICS_BATCH_SIZE=1000
+      - CACHE_SIZE=2048
+      - AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
+      - AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
+      - AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN
+
+  #---
+  nginx-load-balancer:
+    image: nginx:1.23.3
+    profiles: ["with-nginx-load-balancer", "with-all"]
+    container_name: gprofiler-ps-nginx-load-balancer
+    restart: always
+    # Uncomment the following lines to expose Nginx load balancer ports
+    # ports:
+    # - "80:80"
+    # - "443:443"
+    volumes:
+      - ../deploy/https_nginx.conf:/etc/nginx/nginx.conf
+      - ../deploy/.htpasswd:/etc/nginx/.htpasswd
+      - ../deploy/tls:/etc/nginx/tls
+    depends_on:
+      - agents-logs-backend
+      - webapp
+
+  #---
+  gprofile-tester:
+    build:
+      context: ../src/tests
+      dockerfile: Dockerfile
+    entrypoint: ["sh", "-c", "sleep 5 && python run_tests.py --test-path integration/"]
+    container_name: gprofiler-tester
+    # Uncomment the following line to run the tester in host network mode.
+    # This is useful for testing services running on the same host (but not managed by this docker compose file)
+    # without the need for extra network configuration
+    # network_mode: "host"
+    stdin_open: true
+    tty: true
+    # The following lines are commented out to avoid dependency errors when
+    # selecting different profile arrangements.
+    # If you do want to use this functionality, uncomment the lines below.
+    # Just make sure that the uncommented services will be available with the selected profiles
+    # when running docker compose.
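+    # For example, a hypothetical partial bring-up that manages only the databases
+    # locally while the tester targets them could be:
+    #   docker compose --profile with-postgres --profile with-clickhouse up -d --build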
+ # depends_on: + # - webapp + # - db_clickhouse + # - db_postgres + # - nginx-load-balancer + # - ch-indexer + # - periodic-tasks + # - ch-rest-service + # - agents-logs-backend + environment: + - TEST_BACKEND=$TEST_BACKEND + - TEST_DB=$TEST_DB + - TEST_MANAGED_BACKEND=$TEST_MANAGED_BACKEND + - TEST_MANAGED_DB=$TEST_MANAGED_DB + - BACKEND_URL=$BACKEND_URL + - BACKEND_PORT=$BACKEND_PORT + - BACKEND_USER=$BACKEND_USER + - BACKEND_PASSWORD=$BACKEND_PASSWORD + - POSTGRES_USER=$POSTGRES_USER + - POSTGRES_PASSWORD=$POSTGRES_PASSWORD + - POSTGRES_DB=$POSTGRES_DB + - POSTGRES_PORT=$POSTGRES_PORT + - POSTGRES_HOST=$POSTGRES_HOST + +volumes: + db_clickhouse: + driver: local + db_postgres: + driver: local + logs: + driver: local \ No newline at end of file