diff --git a/.gitattributes b/.gitattributes index 306fcba3..dfdb8b77 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1,3 +1 @@ -test/test_data/*.pkl filter=lfs diff=lfs merge=lfs -text -test/test_results/*.txt filter=lfs diff=lfs merge=lfs -text *.sh text eol=lf diff --git a/test/test_data/budget_spendings.ndjson b/.github/context/.gitkeep similarity index 100% rename from test/test_data/budget_spendings.ndjson rename to .github/context/.gitkeep diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 00000000..d5d54466 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,345 @@ +# Dynatrace Snowflake Observability Agent โ€” Project Instructions & Context + +## ๐Ÿค– Persona + +You are the **DSOA coding sidekick**. You are a senior data-platform engineer and observability expert specializing in Snowflake, OpenTelemetry, and the Dynatrace ecosystem. You are building and maintaining an observability agent that runs **inside** Snowflake as a set of stored procedures and pushes telemetry (metrics, logs, spans, events, business events) to Dynatrace. + +## ๐Ÿ›๏ธ Core Architecture + +DSOA follows a **plugin-based** architecture. Every observable aspect of Snowflake is captured by a self-contained plugin. + +### Agent lifecycle + +1. The Snowflake **task scheduler** invokes the main stored procedure. +2. `DynatraceSnowAgent.process()` iterates over enabled plugins. +3. Each plugin queries Snowflake views, transforms rows, and emits telemetry via the `OtelManager`. +4. Telemetry is delivered to Dynatrace over HTTPS (OTLP for logs/spans, Dynatrace API for metrics/events). 
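The four lifecycle steps above can be sketched as a minimal Python loop. Only `DynatraceSnowAgent.process()`, the `Plugin` base class with `PLUGIN_NAME` and `process()`, and the `OtelManager` role are taken from this document; everything else (`OtelManagerStub`, `QueryHistoryPlugin`, `_query_rows`) is a simplified stand-in for illustration, not the real DSOA API:

```python
# Hedged sketch of the DSOA agent lifecycle; stub names are illustrative only.
from typing import Dict, List


class OtelManagerStub:
    """Stand-in for the OTel exporters; collects emitted telemetry records."""

    def __init__(self) -> None:
        self.emitted: List[Dict] = []

    def emit(self, record: Dict) -> None:
        self.emitted.append(record)


class Plugin:
    PLUGIN_NAME = "base"

    def __init__(self, otel: OtelManagerStub) -> None:
        self._otel = otel

    def process(self) -> int:
        rows = self._query_rows()          # query Snowflake views (step 3)
        for row in rows:                   # transform rows and emit telemetry
            self._otel.emit({"plugin": self.PLUGIN_NAME, **row})
        return len(rows)

    def _query_rows(self) -> List[Dict]:
        raise NotImplementedError


class QueryHistoryPlugin(Plugin):
    """Hypothetical plugin; real plugins run Snowpark queries against views."""

    PLUGIN_NAME = "query_history"

    def _query_rows(self) -> List[Dict]:
        return [{"query_id": "q-1", "elapsed_ms": 120}]


class DynatraceSnowAgent:
    """Invoked by the Snowflake task scheduler (steps 1-2)."""

    def __init__(self, plugins: List[Plugin]) -> None:
        self._plugins = plugins

    def process(self) -> int:
        # Iterate over enabled plugins; delivery to Dynatrace (step 4)
        # would happen in the exporters behind the OtelManager.
        return sum(plugin.process() for plugin in self._plugins)
```

In the real agent the exporter layer then ships the collected telemetry over HTTPS (OTLP or the Dynatrace APIs); the stub here only accumulates records so the control flow is visible.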
+ +### Plugin anatomy (triad) + +Every plugin **must** consist of three co-located parts: + +| Component | Path pattern | Purpose | +| ---------------- | ------------------------------------ | -------------------------------------------------------------------- | +| Python module | `src/dtagent/plugins/{name}.py` | `{CamelCase}Plugin(Plugin)` class with `PLUGIN_NAME` and `process()` | +| SQL directory | `src/dtagent/plugins/{name}.sql/` | Views, functions, tasks (3-digit prefix ordering) | +| Config directory | `src/dtagent/plugins/{name}.config/` | `{name}-config.yml`, `bom.yml`, `instruments-def.yml`, `readme.md` | + +### Key modules + +| Module | Responsibility | +| ------------------------------- | ---------------------------------------------------------------- | +| `src/dtagent/agent.py` | Entry point โ€” `DynatraceSnowAgent` | +| `src/dtagent/config.py` | Reads configuration from Snowflake `CONFIG.CONFIGURATIONS` table | +| `src/dtagent/connector.py` | Ad-hoc telemetry sender (non-plugin) | +| `src/dtagent/util.py` | Shared helpers (escaping, JSON, timestamps) | +| `src/dtagent/otel/` | OTel exporters โ€” `Logs`, `Spans`, `Metrics`, events | +| `src/dtagent/otel/semantics.py` | Metric semantic definitions (auto-generated at compile time) | +| `src/_snowflake.py` | Secrets management (`read_secret()`) | + +## ๐Ÿ› ๏ธ Tech Stack & Implementation + +- **Runtime:** Python 3.9+ (CI uses 3.11). Runs inside Snowflake Snowpark. +- **Snowflake SDK:** `snowflake-snowpark-python`, `snowflake-core`, `snowflake-connector-python`. +- **Telemetry:** OpenTelemetry SDK (`opentelemetry-api/sdk/exporter-otlp 1.38.0`) + Dynatrace Metrics/Events APIs. +- **SQL dialect:** Snowflake SQL. All objects UPPERCASE. Conditional blocks via `--%PLUGIN:name:` / `--%OPTION:name:`. +- **Configuration:** YAML โ†’ flattened `PATH / VALUE / TYPE` rows stored in Snowflake. 
+- **Build:** Shell scripts (`scripts/dev/compile.sh`, `build.sh`) assemble single-file stored procedures via `##INSERT` directives and strip `COMPILE_REMOVE` regions. +- **Formatter / Linter:** `black` (line-length 140), `flake8`, `pylint` (must be **10.00/10**), `sqlfluff`, `yamllint`, `markdownlint`. + +## ๐Ÿ Python Environment + +**CRITICAL:** Always use `.venv/` virtual environment. Run `.venv/bin/python` or `source .venv/bin/activate` first. Never use system Python. + +## ๐Ÿ“ Code Style (MANDATORY) + +Every change must pass `make lint` before completion. No exceptions. + +### Python + +- **black** (`line-length = 140`), **flake8** (Google docstrings), **pylint** (must score **10.00/10**) +- Use `##region` / `##endregion` for section organization +- MIT copyright header required in all source files + +### SQL + +- **sqlfluff** (`dialect = snowflake`, `max_line_length = 140`) +- ALL UPPERCASE object names, 3-digit file prefixes +- Start with `use role/database/warehouse;`, grant to `DTAGENT_VIEWER` + +| Tool | Config file | Key rules | +| -------------- | -------------------- | -------------------------------------------- | +| `yamllint` | `.yamllint` | | +| `markdownlint` | `.markdownlint.json` | Blank lines required around lists (MD032) | + +- `MD029`: Ordered lists use `1.` for all items +- `MD031/MD032`: Blank lines around code blocks and lists +- `MD034`: Use `[text](url)`, not bare URLs +- `MD036`: Use `##`/`###` for headings, not bold/italic +- `MD040`: All code fences specify language (` ```python`, ` ```bash`, ` ```markdown`) +- `MD050`: Use `**bold**` not `__bold__` + +## ๐Ÿงช Testing (MANDATORY) + +Every change must include or update tests. Use `.venv/bin/pytest`. 
+ +### Test Infrastructure + +- **pytest** (`test/core/`, `test/otel/`, `test/plugins/`, test infrastructure in `test/_utils.py`, `test/_mocks/`) +- **Two modes**: Mocked (default, uses `test/test_data/*.ndjson`) vs Live (when `test/credentials.yml` exists) +- **Plugin pattern**: Subclass plugin, monkey-patch, call `execute_telemetry_test()` with multiple `disabled_telemetry` combos + +1. **Local / Mocked** (default โ€” no `test/credentials.yml`): + - Uses NDJSON fixture data from `test/test_data/*.ndjson`. + - Validates against golden results in `test/test_results/`. + - Fast, deterministic, CI-friendly. +2. **Live** (when `test/credentials.yml` exists): + - Connects to a real Snowflake + Dynatrace instance. + - Use `-p` flag to regenerate NDJSON fixtures from a live Snowflake environment. + +### Writing smart tests + +- **High signal, low boilerplate** โ€” Tests should validate behavior, not recite implementation details. +- **Proportional complexity** โ€” A 500-line test for a 20-line function is a smell. Keep tests concise. +- **Actually run tests** โ€” Never claim tests pass without running `.venv/bin/pytest` and seeing green output. +- **Iterate on failures** โ€” When tests fail, analyze the failure, fix the root cause, rerun, and repeat until green. +- **Never fake results** โ€” Don't update test fixtures with fabricated data. Capture real output from real executions. +- **Test multiple scenarios** โ€” For plugins, validate with different `disabled_telemetry` combinations. + +### Plugin test pattern + +```python +class TestMyPlugin: + FIXTURES = {"APP.V_MY_VIEW": "test/test_data/my_plugin.ndjson", ...} + + def test_my_plugin(self): + # 1. Subclass the plugin to return NDJSON fixture data from _get_table_rows() + # 2. Monkey-patch _get_plugin_class to return the test subclass + # 3. Call execute_telemetry_test() with multiple disabled_telemetry combos + # 4. 
Assert entry/log/metric/event counts +``` + +Tests are validated with **multiple disabled-telemetry combinations** (e.g., `[]`, `["metrics"]`, `["logs", "spans", "metrics", "events"]`). For new plugins, the implementation plan must include a dedicated test environment setup task โ€” see the checklist in [`docs/PLUGIN_DEVELOPMENT.md`](../docs/PLUGIN_DEVELOPMENT.md). + +### Running tests + +```bash +# Full suite +.venv/bin/pytest + +# Core only +scripts/dev/test_core.sh + +# Plugins only +scripts/dev/test.sh + +# Single test file +.venv/bin/pytest test/plugins/test_budgets.py -v +``` + +### Key test infrastructure + +| File | Purpose | +| -------------------------- | ----------------------------------------------------------------- | +| `test/__init__.py` | `TestDynatraceSnowAgent`, `TestConfiguration`, credential helpers | +| `test/_utils.py` | Fixture helpers, `execute_telemetry_test()`, logging findings | +| `test/_mocks/telemetry.py` | `MockTelemetryClient` โ€” captures and validates telemetry output | + +## ๐Ÿ“– Documentation (MANDATORY) + +Documentation is a first-class deliverable. Update relevant docs with every change. +**Important:** Always update documentation with `./scripts/update_docs.sh` when making changes to the codebase. +**Never** update `docs/PLUGINS.md` or `docs/SEMANTICS.md` directly; use plugin-specific files instead. 
+ +### What to Update + +| Change type | Update these | +| ---------------------- | -------------------------------------------------------------------------------------------------- | +| New plugin | `docs/USECASES.md`, plugin's `readme.md` + `config.md`, `instruments-def.yml`, `docs/SEMANTICS.md` | +| New metric / attribute | `instruments-def.yml`, `docs/SEMANTICS.md` | +| Architecture change | `docs/ARCHITECTURE.md` | +| New version / release | `docs/CHANGELOG.md` (user-facing highlights), `docs/DEVLOG.md` (technical details) | +| Config change | `conf/config-template.yml`, plugin's `{name}-config.yml` | + +**Note:** Do not update `docs/PLUGINS.md` or `docs/SEMANTICS.md` as those are generated automatically; +use `readme.md` and `config.md` files in plugin directories instead. + +### CHANGELOG vs DEVLOG + +**Two-tier release documentation:** + +- **`docs/CHANGELOG.md`** โ€” User-facing release notes. Keep it **concise**. Focus on: + - Major new features (new plugins, significant capabilities) + - Breaking changes that require user action + - Critical bug fixes that affect user experience + - High-level improvements (1-2 sentences max per item) + - Include reference: `> **Note**: Detailed technical changes and implementation notes are available in [DEVLOG.md](DEVLOG.md).` + +- **`docs/DEVLOG.md`** โ€” Technical developer log. Be **comprehensive**. 
Include: + - Implementation details (how features are built) + - Root cause analysis for bugs (what went wrong and why) + - Refactoring rationale (architectural decisions) + - Internal API changes (function signatures, removed/added utilities) + - Performance optimizations (before/after, techniques used) + - Test infrastructure changes + - Build system updates + +**When to log where:** + +| Change Type | CHANGELOG | DEVLOG | +| ---------------------------------------- | ------------------- | ------------------------- | +| New plugin | โœ… Name + 1 sentence | โœ… Full implementation | +| Breaking change | โœ… Impact on users | โœ… Migration path details | +| Critical bug fix | โœ… User impact | โœ… Root cause + fix | +| Internal refactoring | โŒ | โœ… Full details | +| Timestamp handling change (user-visible) | โœ… Behavior change | โœ… Implementation details | +| Test infrastructure update | โŒ | โœ… Full details | +| Build script improvement | Maybe (if user-facing) | โœ… Full details | +| Documentation update | โŒ (unless major) | โœ… If technically relevant | + +**Example pair:** + +CHANGELOG.md: + +```markdown +- **Timestamp Handling**: Unified timestamp handling with smart unit detection, eliminating wasteful conversions +``` + +DEVLOG.md: + +```markdown +#### Timestamp Handling Refactoring + +- **Motivation**: Eliminate wasteful nsโ†’msโ†’ns conversions and clarify API requirements +- **Approach**: Unified timestamp handling with smart unit detection +- **Implementation**: + - All SQL views produce nanoseconds via `extract(epoch_nanosecond ...)` + - Conversion to appropriate unit occurs only at API boundary + - `validate_timestamp()` works internally in nanoseconds to preserve precision + - Added `return_unit` parameter ("ms" or "ns") for explicit output control + ... 
+``` + +### Autogenerated Files + +**Documentation** (via `scripts/dev/build_docs.sh`): `docs/PLUGINS.md`, `docs/SEMANTICS.md`, `docs/APPENDIX.md`, `_readme_full.md` (source for PDF) +**Build artifacts** (via `scripts/dev/compile.sh`): `build/_dtagent.py`, `build/_send_telemetry.py`, `build/_semantics.py`, `build/_version.py`, `build/_metric_semantics.txt` + +**Never edit autogenerated files manually.** Edit source files (plugin `readme.md`, `instruments-def.yml`, config templates) and regenerate. + +### Other Documentation Requirements + +- **Docstrings**: Google style, required for all public modules/classes/functions in `src/`, table columns width aligned +- **BOM**: Each plugin ships `bom.yml` listing delivered/referenced Snowflake objects (validated against `test/src-bom.schema.json`) + +## ๐Ÿ”ง Build & CI/CD + +**Build pipeline**: `scripts/dev/compile.sh` (assemble), `scripts/dev/build.sh` (lint + compile + SQL), `scripts/dev/package.sh` (distribute) + +**Branch model**: `main` (stable), `devel` (integration), `feature/*`, `release/*`, `hotfix/*`, `dev/*` (personal) + +**CI workflows**: `.github/workflows/ci.yml` (lint, test), `.github/workflows/release.yml` (build, package, release) + +## ๐Ÿ“‚ Context & Gitignored Paths + +- `.github/context/` โ€” private planning, proposals, roadmaps +- `conf/` โ€” environment-specific configs +- `test/credentials.yml` โ€” for live testing + +## ๐Ÿš€ Delivery Process + +Delivering a new release or feature follows **three mandatory phases**. Do not skip or merge phases. + +### Phase 1 โ€” Proposal + +Before writing code, produce a **written proposal** covering: + +1. Problem statement, scope, acceptance criteria +1. Risks, trade-offs, backward compatibility +1. Explicitly list what's out of scope + +Store in `.github/context/proposals/` (gitignored). Must be reviewed and accepted before Phase 2. + +### Phase 2 โ€” Implementation Plan + +Create **implementation plan** with: + +1. 
Task breakdown (ordered, discrete, testable) +1. Affected files for each task +1. Test strategy (new/updated tests, prepare data fixtures, test environments) +1. Documentation plan +1. Migration/upgrade path if needed +1. Dependencies: external libraries, Snowflake version requirements, Dynatrace API changes + +Store alongside proposal. Must be reviewed and accepted before Phase 3. + +### Phase 3 โ€” Implementation + +**Iterate on tasks from the accepted plan:** + +1. **One task at a time**: implement, test, lint +1. **For each task**: + - Write/update code and tests + - Run `.venv/bin/pytest` โ€” iterate until green + - Run `make lint` โ€” fix all issues (pylint **10.00/10**) + - Update docs (docstrings, markdown, `instruments-def.yml`, `bom.yml`) + - **Commit** โ€” small, frequent commits per task +1. **After all tasks**: + - Run full test suite and `make lint` + - Run `scripts/dev/build_docs.sh` to test build and update documentation (PLUGINS.md, SEMANTICS.md) + - Update `docs/CHANGELOG.md` (highlights) and `docs/DEVLOG.md` (technical details) + - Review changeset, open PR + +### Phase 4 โ€” Validation & Verification + +**Human verifies** (you facilitate): + +- List modified files and purpose +- Highlight architectural/interface changes +- Document test coverage +- Note performance/security implications + +Human validates: correctness, architecture, tests, performance, security, scope, documentation. + +## โš ๏ธ Anti-Patterns & Pitfalls + +Avoid these common failure modes: + +### Scope Creep & Runaway Refactoring + +- **Don't refactor the entire codebase for a simple change.** Stop if touching many unrelated files. +- **Stay focused.** Note other issues separately; don't fix them now. +- **Resist over-engineering.** Don't create mega-abstractions for simple problems. + +### Test Quality + +- Never "fix" a failing test without fixing the underlying issue. +- Don't write 500-line tests for 20-line functions. 
+- Never fabricate test data or benchmarks โ€” always capture real output. +- Don't skip running tests. Must see green output. + +### Documentation & Output + +- Don't produce mega-documents with boilerplate. Be concise. +- Never share unreviewed AI content as if human-reviewed. +- Clean up dead code and redundant docs as you go. + +### Context & Commits + +- When stuck, ask for context before guessing. +- Don't make vague changes hoping they work. +- Don't create giant PRs. Break into small commits. +- Commit frequently. One logical change per commit. + +## ๐Ÿ“œ Coding Principles + +- **Plugin isolation** โ€” No cross-plugin imports. Shared logic โ†’ `src/dtagent/util.py` or `src/dtagent/otel/` +- **Code quality** โ€” `make lint` must pass. Pylint **10.00/10**. No exceptions +- **Test everything** โ€” Every change needs tests. Use dual-mode (mock/live) pattern +- **Document everything** โ€” Docstrings (Google), `instruments-def.yml`, `bom.yml`, markdown +- **Copyright** โ€” MIT header in all new files +- **Compile markers** โ€” `##region COMPILE_REMOVE` for dev-only, `##INSERT` for assembly +- **Conditional SQL** โ€” `--%PLUGIN:name:` / `--%OPTION:name:` for conditionals +- **Configuration** โ€” Never hard-code. Add to templates and YAML +- **Security** โ€” Never commit credentials. Use `.gitignore` and `_snowflake.read_secret()` +- **Backward compatibility** โ€” Upgrade scripts for object changes. 
Document breaking changes diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 1eeaac8f..020ff5d4 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -24,7 +24,7 @@ jobs: - name: Install dependencies run: | pip install flake8 black sqlfluff yamllint pylint check-jsonschema - npm install -g markdownlint-cli + npm install -g markdownlint-cli2 - name: Lint Python run: flake8 --config=.flake8 src/ test/ || exit 1 @@ -44,7 +44,7 @@ jobs: run: yamllint src || exit 1 - name: Lint Markdown - run: markdownlint '**/*.md' || exit 1 + run: markdownlint-cli2 '[^.]*/**/*.md' '*.md' --config .markdownlint.json || exit 1 - name: Lint BOM files run: find src -name "bom.yml" -exec sh -c 'printf "%-50s " "$$1"; check-jsonschema --schemafile test/src-bom.schema.json "$$1" || check-jsonschema --schemafile test/src-bom.schema.json "$$1"' _ {} \; @@ -81,7 +81,7 @@ jobs: - name: Install dependencies run: | pip install -r requirements.txt - pip install pytest-mock + pip install pytest-mock flake8 black pylint - name: Install system dependencies run: | @@ -91,6 +91,32 @@ jobs: - name: Run bash tests run: pytest test/core/test_bash_scripts.py -v + test-bash-slow: + runs-on: ubuntu-latest + if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/devel' || startsWith(github.ref, 'refs/heads/release/') + steps: + - name: Checkout code + uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: "3.11.12" + + - name: Install dependencies + run: | + pip install -r requirements.txt + pip install pytest-mock flake8 black pylint + + - name: Install system dependencies + run: | + sudo apt-get update + sudo apt-get install -y bats jq gawk pandoc zip + npm install -g markdownlint-cli2 + + - name: Run slow bash tests (build/package/compile) + run: pytest test/core/test_bash_scripts.py -v --run-slow + test-core: runs-on: ubuntu-latest steps: diff --git a/.gitignore b/.gitignore index 12a95367..063f73ba 
100644 --- a/.gitignore +++ b/.gitignore @@ -1,29 +1,47 @@ -connection.json +# === macOS === .DS_Store + +# === Python === +*.pyc +__pycache__/ .venv* -*.token + +# === Configuration Files === +connection.json config.json config-*.json config.yml config-*.yml conf/*.sql snowflake.local.yml -output/** +test/conf/*config-download.yml +# Safety net: prevent accidental re-introduction of binary pickle fixture files +test/test_data/*.pkl +!config-basic.json + +# === Credentials & Secrets === +*.token +*credentials*.* + +# === Logs === log.* *.log -*credentials*.* -*.pyc +.logs/* + +# === Build Artifacts === build/* package/* -.logs/* -instruments-def.* +output/** metrics/* -*.zip -!config-basic.json -LICENSE.md +instruments-def.* *-deploy-script* -*.pdf -test/conf/*config-download.yml dynatrace-snowflake-observability-agent + +# === Distribution & Archives === +*.zip +*.whl *.pkg +*.pdf + +# === AI & Development Context === .github/context/* \ No newline at end of file diff --git a/.vscode/settings.json b/.vscode/settings.json index 8821a32f..ac6d4ec7 100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -1,6 +1,4 @@ { - "python.envFile": "${workspaceFolder}/.venv/bin/python", - "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python", "python.testing.pytestEnabled": true, "python.testing.unittestEnabled": false, "python.testing.pytestArgs": [ @@ -216,5 +214,8 @@ }, "[yaml]": { "editor.defaultFormatter": "esbenp.prettier-vscode" + }, + "chat.promptFilesLocations": { + ".github/context/prompts": true } } \ No newline at end of file diff --git a/.windsurf/workflows/project-context.md b/.windsurf/workflows/project-context.md new file mode 100644 index 00000000..a6ac6dc3 --- /dev/null +++ b/.windsurf/workflows/project-context.md @@ -0,0 +1,16 @@ +--- +description: Load DSOA project instructions and coding standards for all work +--- + +Before starting any task, read and follow the project instructions: + +1. 
Read `.github/copilot-instructions.md` for complete project context including: + - Architecture (plugin-based, triad pattern) + - Code style requirements (black, flake8, pylint 10.00/10) + - Testing requirements (pytest, dual-mode mock/live) + - Documentation standards + - Delivery process (Proposal โ†’ Plan โ†’ Implementation) + +2. Always use the Python virtual environment at `.venv/` + +3. Run `make lint` before considering any change complete diff --git a/Makefile b/Makefile index 7d021b7a..30075d86 100644 --- a/Makefile +++ b/Makefile @@ -16,10 +16,13 @@ lint-yaml: yamllint src lint-markdown: - markdownlint '**/*.md' --config .markdownlint.json + markdownlint-cli2 '[^.]*/**/*.md' '*.md' --config .markdownlint.json lint-bom: find src -name "bom.yml" -exec sh -c 'printf "%-50s " "$$1"; .venv/bin/check-jsonschema --schemafile test/src-bom.schema.json "$$1" || check-jsonschema --schemafile test/src-bom.schema.json "$$1"' _ {} \; # Run all linting checks (stops on first failure, like CI) -lint: lint-python lint-format lint-pylint lint-sql lint-yaml lint-markdown lint-bom \ No newline at end of file +lint: lint-python lint-format lint-pylint lint-sql lint-yaml lint-markdown lint-bom + +docs: + ./scripts/dev/build_docs.sh \ No newline at end of file diff --git a/README.md b/README.md index 194990c5..a4a3ab5d 100644 --- a/README.md +++ b/README.md @@ -22,4 +22,5 @@ analyzing, and detecting anomalies in data processing. It delivers observability - [Version changelog](docs/CHANGELOG.md) - [Contribution guidelines](docs/CONTRIBUTING.md) - [Plugin development guide](docs/PLUGIN_DEVELOPMENT.md) +- [Development log](docs/DEVLOG.md) - [Appendix and reference](docs/APPENDIX.md) diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md index a76c3c21..12c0f4a5 100644 --- a/docs/CHANGELOG.md +++ b/docs/CHANGELOG.md @@ -2,6 +2,43 @@ All notable changes to this project will be documented in this file. 
+## Dynatrace Snowflake Observability Agent 0.9.4 + +Released on TBD + +> **Note**: Detailed technical changes and implementation notes are available in [DEVLOG.md](DEVLOG.md). + +### New in 0.9.4 + +- **New Plugins**: Added Pipes, Streams, Stage, and Data Lineage monitoring plugins +- **Configurable Lookback Time**: Per-plugin configuration for historical data catchup window +- **SNOWFLAKE.TELEMETRY.EVENTS Support**: Agent now correctly reads from the Snowflake-managed shared event table when it is configured as the account-level event table + +### Fixed in 0.9.4 + +- **Dynamic Tables โ€” Grant Granularity**: `P_GRANT_MONITOR_DYNAMIC_TABLES()` now derives grant scope from the `include` pattern. `DB.%.%` grants at database level, `DB.SCHEMA.%` at schema level, and `DB.SCHEMA.TABLE` on a specific named table only โ€” eliminating previous over-granting when a schema or table was explicitly specified. +- **Span Timestamp Handling**: Fixed spans being re-processed after agent restart due to incorrect timestamp being recorded as last-processed marker +- **OTLP Compliance**: Fixed log `observed_timestamp` field to use nanoseconds per OTLP specification + +### Changed in 0.9.4 + +- **Event Log Plugin โ€” Cross-Tenant Monitoring** *(behavior change)*: DSOA instances now report `WARN`/`ERROR` log entries, metrics, and spans from all other `DTAGENT_*_DB` instances by default. Use `plugins.event_log.cross_tenant_monitoring: false` to opt out. It is recommended to keep this enabled in only one primary DSOA tenant to avoid duplicate reporting across deployments. +- **Shares Plugin**: Fixed inbound shares with deleted databases not being properly reported. 
The `snowflake.share.has_details_reported` attribute now correctly shows `TRUE` for deleted-DB shares, and the `_MESSAGE` field provides clear context about database deletion status +- **Self-Monitoring**: Fixed database name filtering for self-monitoring logs + +### Improved in 0.9.4 + +- **Budgets Plugin**: Enhanced budget data collection using `SYSTEM$SHOW_BUDGETS_IN_ACCOUNT()`. +- **Query Hierarchy Validation**: Confirmed and validated span hierarchy for nested stored procedure call chains (`IS_ROOT`/`IS_PARENT` flags) with dedicated test coverage for OTel parent-child propagation. +- **Error Handling โ€” Two-Phase Commit**: Query telemetry is now marked as processed only after the OTLP flush succeeds, preventing silent data loss when trace export fails. +- **Event Log Lookback โ€” Configurable**: The event log lookback window (previously hardcoded to 24 h) is now driven by `plugins.event_log.lookback_hours` config key. +- **Test Infrastructure**: Refactored tests to use synthetic JSON fixtures for input/output validation instead of live Dynatrace API calls. +- **Test Fixtures**: Migrated all plugin test input data from binary Python pickle files (`.pkl`) to human-readable NDJSON format (`.ndjson`), improving transparency and enabling direct manual inspection and version control of test data. +- **Event Tables Cost Optimization**: Added guidance for fine-tuning Event Table usage to manage Snowflake costs. +- **Timestamp Handling**: Unified timestamp handling with smart unit detection, eliminating wasteful conversions +- **Build System**: Development scripts now auto-activate virtual environment +- **Test Infrastructure**: Refactored tests to use synthetic JSON fixtures instead of live API calls + ## Dynatrace Snowflake Observability Agent 0.9.3 Released on February 12, 2026 @@ -166,7 +203,7 @@ Released on May 20, 2025. - **Teardown Process**: Correctly tears down tagged instances. - **Span Event Reporting**: Removed the hard limit of 128 span events. 
The limit is now configurable via `OTEL.SPANS.MAX_EVENT_COUNT`. -- **Spans for Queries**: Fixed the problem with a hierarchy of query calls not being represented by a hierarchy of spans (_0.8.2 Hotfix 1_). +- **Spans for Queries**: Fixed the problem with a hierarchy of query calls not being represented by a hierarchy of spans (*0.8.2 Hotfix 1*). - **Self-Monitoring Configuration**: Plugin default configurations no longer overwrite self-monitoring settings. - **Self-Monitoring BizEvents**: BizEvents are now sent by default when Dynatrace Snowflake Observability Agent is deployed and executed. @@ -293,7 +330,7 @@ Released on Oct 8, 2024. ### Added in 0.7.2 -- Pickle for testing of the Users plugin. +- Fixtures for testing of the Users plugin. - Copyright statements to the code. ### Fixed in 0.7.2 diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md index ff81f51a..ccffcc86 100644 --- a/docs/CONTRIBUTING.md +++ b/docs/CONTRIBUTING.md @@ -203,8 +203,8 @@ pytest test/plugins/ ./scripts/dev/test.sh test_budgets ``` -**Regenerate test data (Pickles):** -If you modify a plugin's SQL logic, you may need to update the test data. +**Regenerate NDJSON fixtures:** +If you modify a plugin's SQL logic, you may need to regenerate its fixture data from a live Snowflake environment. ```bash ./scripts/dev/test.sh test_budgets -p @@ -212,11 +212,11 @@ If you modify a plugin's SQL logic, you may need to update the test data. ### Test Data -Tests use example test data from the `test/test_data` folder: +Tests use NDJSON fixture files from the `test/test_data/` folder. Each fixture file contains one JSON object per line, named `{plugin_name}[_{view_suffix}].ndjson`. -- Pickle (`*.pkl`) files are used for test execution -- ndJSON files are provided for reference only -- Test results are validated against expected data in `test_results` +Fixtures are version-controlled. 
To regenerate them from a live Snowflake environment, run the relevant plugin test with the `-p` flag (requires `test/credentials.yml`). + +Expected telemetry output is stored in `test/test_results/test_/` as JSON files and used for regression comparison. ### Setting Up Test Environment @@ -246,7 +246,7 @@ To run tests in live mode: 3. **Generate `test/conf/config-download.yml`** by running: ```bash - PYTHONPATH="./src" pytest -s -v "test/core/test_config.py::TestConfig::test_init" --pickle_conf y + PYTHONPATH="./src" pytest -s -v "test/core/test_config.py::TestConfig::test_init" --save_conf y ``` ### Running Tests in Local Mode @@ -373,7 +373,10 @@ Before submitting a PR, please ensure: - [ ] You have added tests for any new functionality - [ ] All tests pass locally (`pytest` and `./test/bash/run_tests.sh`) - [ ] Documentation (`README.md`, `PLUGIN_DEVELOPMENT.md`, etc.) is updated if needed +- [ ] User-facing changes are documented in `docs/CHANGELOG.md` (highlights only) +- [ ] Technical implementation details are documented in `docs/DEVLOG.md` - [ ] If adding a plugin, `instruments-def.yml` is defined and valid +- [ ] New use cases are documented in `docs/USECASES.md` under the appropriate Data Platform Observability theme(s) - [ ] Code follows the [Semantic Conventions](#semantic-conventions) - [ ] If changing SQL objects, all names are UPPERCASE - [ ] If adding new semantic fields, they follow naming rules diff --git a/docs/DEVLOG.md b/docs/DEVLOG.md new file mode 100644 index 00000000..c9a77e77 --- /dev/null +++ b/docs/DEVLOG.md @@ -0,0 +1,238 @@ +# Development Log + +This file documents detailed technical changes, internal refactorings, and development notes. For user-facing highlights, see [CHANGELOG.md](CHANGELOG.md). 
+ +## Version 0.9.4 โ€” Detailed Changes + +### New Features โ€” Technical Details + +#### Pipes Monitoring Plugin + +- Implemented `PipesPlugin` to monitor Snowpipe status and validation +- Uses `SYSTEM$PIPE_STATUS` function for real-time pipe monitoring +- Uses `VALIDATE_PIPE_LOAD` function for validation checks +- Delivers telemetry as logs, metrics, and events + +#### Streams Monitoring Plugin + +- Implemented `StreamsPlugin` to monitor Snowflake Streams +- Tracks stream staleness using `SHOW STREAMS` output +- Monitors pending changes and stream health +- Reports stale streams as warning events + +#### Stage Monitoring Plugin + +- Implemented `StagePlugin` to monitor staged data +- Tracks internal and external stages +- Monitors COPY INTO activities from `QUERY_HISTORY` and `COPY_HISTORY` views +- Reports on staged file sizes, counts, and load patterns + +#### Data Lineage Plugin + +- Implemented `DataLineagePlugin` combining static and dynamic lineage +- Static lineage from `OBJECT_DEPENDENCIES` view (DDL-based relationships) +- Dynamic lineage from `ACCESS_HISTORY` view (runtime data flow) +- Column-level lineage tracking with direct and indirect dependencies +- Lineage graphs delivered as structured events + +#### SNOWFLAKE.TELEMETRY.EVENTS Support (BDX-1172) + +- **Issue**: When a customer account had `EVENT_TABLE = snowflake.telemetry.events` (the Snowflake-managed shared event table), `SETUP_EVENT_TABLE()` listed it in `a_no_custom_event_t` โ€” the "not a real custom table" array โ€” and took the `IF` branch, creating DSOA's own `DTAGENT_DB.STATUS.EVENT_LOG` table and **ignoring** the Snowflake-managed table entirely. +- **Root cause**: `'snowflake.telemetry.events'` was excluded from the view-creation path because the original `ELSE` branch attempted `GRANT SELECT ON TABLE snowflake.telemetry.events TO ROLE DTAGENT_VIEWER`, which Snowflake rejects โ€” privileges cannot be granted on Snowflake-managed objects. 
+- **Fix**: Two-part change in `src/dtagent/plugins/event_log.sql/init/009_event_log_init.sql`: + 1. Removed `'snowflake.telemetry.events'` from `a_no_custom_event_t` so it falls through to the `ELSE` branch + 2. Wrapped the `GRANT SELECT` in a `BEGIN/EXCEPTION WHEN OTHER THEN SYSTEM$LOG_WARN()` block โ€” attempts the grant and logs warnings, ignoring failures for any read-only or Snowflake-managed table; more robust than a string comparison +- **Behaviour after fix**: When `EVENT_TABLE = snowflake.telemetry.events`, DSOA creates `DTAGENT_DB.STATUS.EVENT_LOG` as a **view** over it, exactly as for any other pre-existing customer event table. All three `event_log` SQL views continue to query `DTAGENT_DB.STATUS.EVENT_LOG` unchanged โ€” no Python changes needed. + +#### Configurable Lookback Time + +- **Motivation**: Lookback windows were hardcoded across SQL views in every plugin that uses `F_LAST_PROCESSED_TS`. This could not be tuned per deployment without modifying SQL files. +- **Approach**: Replace each literal with `CONFIG.F_GET_CONFIG_VALUE('plugins..lookback_hours', )` and add `lookback_hours` to each plugin's config YAML โ€” consistent with how `retention_hours` is already handled in `P_CLEANUP_EVENT_LOG`. +- **Pattern**: `timeadd(hour, -1*F_GET_CONFIG_VALUE('plugins..lookback_hours', ), current_timestamp)` โ€” the `-1*` multiplier converts the positive config value to a negative offset. +- **Note**: The `F_LAST_PROCESSED_TS` guard in each view's `GREATEST(...)` clause ensures normal incremental runs are unaffected; `lookback_hours` only bounds the fallback window when no prior timestamp exists. 
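The effective-window semantics described in the Note can be sketched in Python (an illustrative mirror of the SQL `GREATEST(...)` guard; `effective_start` is a hypothetical name, not part of the codebase):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def effective_start(last_processed: Optional[datetime], lookback_hours: int,
                    now: Optional[datetime] = None) -> datetime:
    """Illustrative mirror of GREATEST(F_LAST_PROCESSED_TS(...),
    timeadd(hour, -lookback_hours, current_timestamp)): lookback_hours
    only bounds the fallback window; incremental runs advance normally."""
    now = now or datetime.now(timezone.utc)
    floor = now - timedelta(hours=lookback_hours)
    if last_processed is None:
        # First run / after a reset: scan at most lookback_hours back.
        return floor
    # Normal incremental run: start from the last processed timestamp,
    # but never scan further back than the configured window.
    return max(last_processed, floor)
```

A recent `last_processed` wins; a missing or very old one is clamped to the lookback floor, matching the behaviour the Note describes.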
+- **Files changed** (SQL views + config YAMLs): + +| Plugin | SQL view(s) | Default | +| ----------------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------ | +| `event_log` | `051_v_event_log.sql`, `051_v_event_log_metrics_instrumented.sql`, `051_v_event_log_spans_instrumented.sql` | `24`h | +| `login_history` | `061_v_login_history.sql`, `061_v_sessions.sql` | `24`h | +| `warehouse_usage` | `070_v_warehouse_event_history.sql`, `071_v_warehouse_load_history.sql`, `072_v_warehouse_metering_history.sql` | `24`h | +| `tasks` | `061_v_serverless_tasks.sql` → `lookback_hours` (`4`h); `063_v_task_versions.sql` → `lookback_hours_versions` (`720`h = 1 month) | separate keys, original defaults preserved | +| `event_usage` | `051_v_event_usage.sql` | `6`h | +| `data_schemas` | `051_v_data_schemas.sql` | `4`h | + +### Bug Fixes — Technical Details + +#### Dynamic Tables Grant — Schema-Level Granularity (BDX-640) + +- **Issue**: `P_GRANT_MONITOR_DYNAMIC_TABLES()` always granted `MONITOR` at **database level**, even when the `include` pattern specified a particular schema (e.g. `PROD_DB.ANALYTICS.%`). This caused the procedure to over-grant: a user expecting grants only on `PROD_DB.ANALYTICS` received grants on all schemas in `PROD_DB`. +- **Root cause**: The CTE extracted only `split_part(value, '.', 1)` (the database part) and the schema part was never inspected. +- **Fix**: Three-pass approach in `032_p_grant_monitor_dynamic_tables.sql` (`SPLIT_PART` part numbers are 1-based: 1 = database, 2 = schema, 3 = table): + 1. **Database pass** — `split_part(value, '.', 2) = '%'` → `GRANT … IN DATABASE`. + 2. **Schema pass** — `split_part(value, '.', 2) != '%'` and `split_part(value, '.', 3) = '%'` → `GRANT … IN SCHEMA db.schema`. + 3. **Table pass** — `split_part(value, '.', 2) != '%'` and `split_part(value, '.', 3) != '%'` → `GRANT … ON DYNAMIC TABLE db.schema.table` (no FUTURE grant — not supported by Snowflake at individual table level). +- **Grant matrix**: + + | Include pattern | Grant level | + | ----------------------------- | ----------------------------------- | + | `%.%.%` | All databases | + | `PROD_DB.%.%` | Database `PROD_DB` | + | `PROD_DB.ANALYTICS.%` | Schema `PROD_DB.ANALYTICS` | + | `PROD_DB.ANALYTICS.ORDERS_DT` | Table `PROD_DB.ANALYTICS.ORDERS_DT` | + +- **Files changed**: `032_p_grant_monitor_dynamic_tables.sql`, `bom.yml`, `config.md` +- **Tests added**: `test/bash/test_grant_monitor_dynamic_tables.bats` — structural content checks covering both grant paths + +#### Log ObservedTimestamp Unit Correction + +- **Issue**: OTel log `observed_timestamp` field was sent in milliseconds +- **Root cause**: OTLP spec requires nanoseconds for `observed_timestamp`, but code was converting to milliseconds +- **Fix**: Modified `process_timestamps_for_telemetry()` to return `observed_timestamp_ns` in nanoseconds +- **Impact**: Logs now comply with OTLP spec +- **Note**: Dynatrace OTLP Logs API still requires milliseconds for `timestamp` field (deviation from spec) + +#### Inbound Shares Reporting Flag + +- **Issue**: `HAS_DB_DELETED` flag incorrectly reported for deleted shared databases in `TMP_SHARES` view +- **Root cause**: Logic error in SQL view predicate +- **Fix**: Corrected SQL logic in `shares.sql/` view definition +- **Impact**: Accurate reporting of deleted shared database status + +#### Self-Monitoring Log Filtering + +- **Issue**: Database name filtering logic failed to correctly identify DTAGENT_DB references +- **Root cause**: String matching logic didn't account for fully qualified names +- **Fix**: Updated filtering logic in self-monitoring plugin +- **Impact**: Self-monitoring logs now correctly exclude internal agent operations + +### Improvements — Technical
Details + +#### Timestamp Handling Refactoring + +- **Motivation**: Eliminate wasteful nsโ†’msโ†’ns conversions and clarify API requirements +- **Approach**: Unified timestamp handling with smart unit detection +- **Implementation**: + - All SQL views produce nanoseconds via `extract(epoch_nanosecond ...)` + - Conversion to appropriate unit occurs only at API boundary + - `validate_timestamp()` works internally in nanoseconds to preserve precision + - Added `return_unit` parameter ("ms" or "ns") for explicit output control + - Added `skip_range_validation` parameter for `observed_timestamp` (no time range check) + - Created `process_timestamps_for_telemetry()` utility for standard timestamp processing pattern +- **Changes to `validate_timestamp()`**: + - Works internally in nanoseconds throughout validation logic + - Converts to requested unit only at the end + - Raises `ValueError` if `return_unit` not in ["ms", "ns"] + - Added `skip_range_validation` for observed_timestamp (preserves original value without range checks) +- **Changes to `process_timestamps_for_telemetry()`**: + - New utility function implementing standard pattern for logs and events + - Extracts `timestamp` and `observed_timestamp` from data dict + - Falls back to `timestamp` value when `observed_timestamp` not provided + - Validates `timestamp` with range checking (returns milliseconds) + - Validates `observed_timestamp` without range checking (returns nanoseconds) + - Returns `(timestamp_ms, observed_timestamp_ns)` tuple + - Hardcoded units: always milliseconds for timestamp, nanoseconds for observed_timestamp +- **Removed obsolete functions**: + - `get_timestamp_in_ms()` โ€” replaced by `validate_timestamp(value, return_unit="ms")` + - `validate_timestamp_ms()` โ€” replaced by `validate_timestamp(value, return_unit="ms")` +- **Added new functions**: + - `get_timestamp()` โ€” returns nanoseconds from SQL query results +- **API Documentation**: + - Added comprehensive documentation links in all 
telemetry classes + - Documented Dynatrace OTLP Logs API deviation (milliseconds for `timestamp` field) + - Documented OTLP standard requirements (nanoseconds for most timestamp fields) +- **Fallback Logic**: + - `observed_timestamp` now correctly falls back to `timestamp` value when not provided + - Only `event_log` plugin provides explicit `observed_timestamp` values + - All other plugins rely on fallback mechanism + +#### Build System Virtual Environment + +- **Change**: All `scripts/dev/` scripts now auto-activate `.venv/` +- **Implementation**: Added `source .venv/bin/activate` to script preambles +- **Impact**: Eliminates common "wrong Python" errors during development + +#### Documentation โ€” Autogenerated Files + +- **Change**: Updated `.github/copilot-instructions.md` with autogenerated file documentation +- **Coverage**: + - Documentation files: `docs/PLUGINS.md`, `docs/SEMANTICS.md`, `docs/APPENDIX.md` + - Build artifacts: `build/_dtagent.py`, `build/_send_telemetry.py`, `build/_semantics.py`, `build/_version.py`, `build/_metric_semantics.txt` +- **Guidance**: Never edit autogenerated files manually; edit source files and regenerate + +#### Budgets Plugin Enhancement + +- **Change**: Enhanced budget data collection using `SYSTEM$SHOW_BUDGETS_IN_ACCOUNT()` +- **Previous**: Manual query construction +- **New**: Leverages Snowflake system function for comprehensive budget data +- **Impact**: More accurate and complete budget information + +#### Error Handling โ€” Two-Phase Commit for Query Telemetry (BDX-694 / BDX-706) + +- **Issue**: `STATUS.UPDATE_PROCESSED_QUERIES` was called regardless of whether the OTLP trace flush succeeded, meaning queries could be silently lost on export failures without being retried on the next cycle. +- **Root cause**: `_process_span_rows` in `src/dtagent/plugins/__init__.py` called `UPDATE_PROCESSED_QUERIES` unconditionally after `flush_traces()`. 
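The gating that resolves this unconditional call can be sketched in Python (hypothetical helper names; only the control flow is meaningful):

```python
def export_and_commit(rows, flush_traces, update_processed_queries,
                      report_status=True):
    """At-least-once delivery sketch: advance the processed-queries marker
    only when the OTLP flush reports success, so rows whose spans fail to
    export are picked up again on the next agent cycle."""
    for row in rows:
        pass  # emit a span per row (elided in this sketch)
    flush_succeeded = flush_traces()  # assumed to return True/False
    if report_status and flush_succeeded:
        update_processed_queries(rows)  # commit only after a successful export
    return flush_succeeded
```

With a failing flush, the commit step is skipped and the same rows remain eligible on the next run.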
+- **Fix**: Captured the boolean return value of `flush_traces()` into `flush_succeeded` and gated the `UPDATE_PROCESSED_QUERIES` call behind `if report_status and flush_succeeded`. +- **Impact**: Queries whose spans fail to export are re-queued on the next agent run, ensuring at-least-once delivery semantics for span telemetry. + +#### Event Log Lookback โ€” Configurable Window (BDX-706) + +- **Issue**: `V_EVENT_LOG` used a hardcoded `timeadd(hour, -24, current_timestamp)` lower bound, preventing operators from adjusting the lookback window without editing SQL. +- **Fix**: + - `src/dtagent/plugins/event_log.sql/051_v_event_log.sql`: replaced literal with `CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.lookback_hours', 24)::int`. + - `src/dtagent/plugins/event_log.config/event_log-config.yml`: added `lookback_hours: 24` (default preserves prior behaviour). +- **Impact**: Operators can increase the window for initial deployments or decrease it for high-volume environments without any SQL change. + +#### Query Hierarchy Validation (BDX-620) + +- **Goal**: Confirm that nested stored procedure call chains are correctly represented as OTel parent-child spans. +- **Validation approach**: + - `P_REFRESH_RECENT_QUERIES` sets `IS_ROOT=TRUE` for top-level calls (no `parent_query_id`) and `IS_PARENT=TRUE` for any query that has at least one child in the same batch. Leaf queries have `IS_ROOT=FALSE, IS_PARENT=FALSE`. + - `_process_span_rows` in `src/dtagent/plugins/__init__.py` iterates only `IS_ROOT=TRUE` rows as top-level spans; child spans are fetched recursively via `Spans._get_sub_rows` using `PARENT_QUERY_ID`. + - `ExistingIdGenerator` in `src/dtagent/otel/spans.py` propagates the root's `_TRACE_ID` and `_SPAN_ID` down the hierarchy so every sub-span shares the correct trace context. +- **New test fixture**: `test/test_data/query_history_nested_sp.ndjson` โ€” 3-row synthetic SP chain: outer SP (root) โ†’ inner SP (mid) โ†’ leaf SELECT. 
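The `IS_ROOT`/`IS_PARENT` flagging rule described above can be sketched in Python (a hypothetical mirror of the SQL in `P_REFRESH_RECENT_QUERIES`; names are illustrative):

```python
def flag_hierarchy(rows):
    """Roots have no parent_query_id; parents have at least one child in
    the same batch; leaf queries end up with both flags False."""
    parents = {r["parent_query_id"] for r in rows if r.get("parent_query_id")}
    return [
        {
            **r,
            "IS_ROOT": r.get("parent_query_id") is None,
            "IS_PARENT": r["query_id"] in parents,
        }
        for r in rows
    ]
```

Applied to the 3-row fixture shape (outer SP → inner SP → leaf SELECT), the outer row is root and parent, the inner row is parent only, and the leaf is neither.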
+- **New test file**: `test/plugins/test_query_history_span_hierarchy.py` + - `test_span_hierarchy`: integration test verifying 3 entries processed, 3 spans, 3 logs, 27 metrics across all `disabled_telemetry` combinations. + - `test_is_root_only_processes_top_level`: unit test confirming only 1 root row and 2 non-root rows in the fixture. + - `test_is_parent_flags_intermediate_nodes`: unit test asserting correct `IS_ROOT`/`IS_PARENT`/`PARENT_QUERY_ID` values for each level of the hierarchy. +- **Impact**: Span hierarchies for stored procedure chains are confirmed correct and regression-protected. + +#### Test Infrastructure Refactoring + +- **Change**: Refactored tests to use synthetic JSON fixtures +- **Previous**: Live Dynatrace API calls for validation +- **New**: Input/output validation against golden JSON files +- **Impact**: Faster, more reliable, deterministic tests + +#### Event Tables Cost Optimization Documentation (BDX-688) + +- **Change**: Expanded `event_log.config/config.md` from a minimal 5-line note to a full configuration reference +- **Content added**: + - Configuration options table covering all 7 plugin settings with types, defaults, and descriptions + - Cost optimization guidance section explaining the cost impact of `LOOKBACK_HOURS`, `MAX_ENTRIES`, `RETENTION_HOURS`, and `SCHEDULE` + - Key guidance: `retention_hours` should be `>= lookback_hours` to prevent cleanup from removing events before they are processed +- **Files changed**: + - `src/dtagent/plugins/event_log.config/config.md` โ€” full configuration reference + cost guidance + - `src/dtagent/plugins/event_log.config/readme.md` โ€” updated to mention configurable lookback window + +#### Span Timestamp Handling Fix (BDX-706) + +- **Issue**: `_process_span_rows()` in `src/dtagent/plugins/__init__.py` called `_report_execution()` with `current_timestamp()` (a Snowflake lazy column expression) instead of the actual last-row timestamp. 
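The timestamp-tracking pattern this fix adopts can be sketched in Python (hypothetical helper; the real logic lives in `_process_span_rows`):

```python
def track_last_processed(rows, report_execution):
    """Carry the newest row TIMESTAMP through the iteration (the same
    pattern _log_entries() uses) and report it as a plain string, never
    a lazy Snowpark column expression such as current_timestamp()."""
    last_processed_timestamp = None
    for row_dict in rows:
        last_processed_timestamp = row_dict.get("TIMESTAMP", last_processed_timestamp)
        # ... emit the span for this row (elided in this sketch) ...
    report_execution(str(last_processed_timestamp))
    return last_processed_timestamp
```

Rows without a `TIMESTAMP` key keep the previously seen value, so the reported marker is always the last real timestamp in the batch.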
+- **Root cause**: When `STATUS.LOG_PROCESSED_MEASUREMENTS` stored this value, it received the string `'Column[current_timestamp]'` rather than a real timestamp. On the next run, `F_LAST_PROCESSED_TS` would return a malformed value, causing the `GREATEST(...)` guard in each SQL view to use the fallback lookback window โ€” potentially re-processing spans already sent. +- **Fix**: Added `last_processed_timestamp` variable tracking `row_dict.get("TIMESTAMP", last_processed_timestamp)` within the row iteration loop, mirroring the identical pattern used by `_log_entries()`. Passed `str(last_processed_timestamp)` to `_report_execution()` instead of `current_timestamp()`. +- **Side effect removed**: Dropped the now-unused `from snowflake.snowpark.functions import current_timestamp` import โ€” pylint flagged this as unused after the fix. +- **Impact**: Spans and traces will no longer be re-processed after an agent restart. The `F_LAST_PROCESSED_TS('event_log_spans')` guard now advances correctly after each run. +- **Affects**: `event_log` plugin (`_process_span_entries`) and any future plugin using `_process_span_rows` with `log_completion=True` + +## Version 0.9.3 โ€” Detailed Changes + +Detailed technical changes for prior versions can be added here as needed. + +## Version 0.9.2 โ€” Detailed Changes + +Detailed technical changes for prior versions can be added here as needed. + +## Notes + +- This file is **not** auto-generated. Manual maintenance required. +- Focus on **technical implementation details**, root causes, and internal changes. +- For user-facing release notes, see [CHANGELOG.md](CHANGELOG.md). +- Entries should help future developers understand decisions and troubleshoot issues. 
diff --git a/docs/INSTALL.md b/docs/INSTALL.md index 9295a5db..5d93115b 100644 --- a/docs/INSTALL.md +++ b/docs/INSTALL.md @@ -779,7 +779,17 @@ The `plugins` section allows you to configure plugin behavior globally and indiv | `plugins.disabled_by_default` | Boolean | `false` | When set to `true`, all plugins are disabled by default unless explicitly enabled | | `plugins.deploy_disabled_plugins` | Boolean | `true` | Deploy plugin code even if the plugin is disabled. When `true`, disabled plugins' SQL objects and procedures are deployed but not scheduled to run | -Each individual plugin can be configured with plugin-specific options. See the plugin documentation for available configuration options per plugin. +Each individual plugin supports the following common configuration keys (set under `plugins.`): + +| Configuration Key | Type | Default | Description | +| ----------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `lookback_hours` | Integer | varies | Maximum lookback window (in hours) the plugin uses when scanning for new data on each run. The effective start time is the later of `current time - lookback_hours` and the stored last-processed timestamp, so this caps how far back the agent will scan when the marker is missing or older than the lookback window (for example, on first run, after a reset, or following a long outage). 
During normal operation the plugin advances from the last processed timestamp automatically. See each plugin's `config.md` for the default value and any additional per-context lookback keys (e.g., `lookback_hours_versions` for the `tasks` plugin). | +| `schedule` | String | varies | Cron or interval schedule for the plugin's Snowflake task. See [Plugin Scheduling](#plugin-scheduling) for supported formats. | +| `is_disabled` | Boolean | `false` | Set to `true` to disable this plugin. | +| `telemetry` | List | varies | List of telemetry types to emit (`logs`, `metrics`, `spans`, `events`, `biz_events`). Remove items to suppress specific signal types. | + +For plugin-specific options (e.g., `max_entries`, `retention_hours`, `include`/`exclude` filters), see the `config.md` file in each plugin's configuration directory. #### OpenTelemetry Configuration Options diff --git a/docs/PLUGINS.md b/docs/PLUGINS.md index e25608ca..9d502d1c 100644 --- a/docs/PLUGINS.md +++ b/docs/PLUGINS.md @@ -119,7 +119,7 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: active_queries: schedule: USING CRON */6 * * * * UTC @@ -131,7 +131,6 @@ plugins: - metrics - spans - biz_events - ``` > **IMPORTANT**: For the `query_history` and `active_queries` plugins to report telemetry for all queries, the `DTAGENT_VIEWER` role must be @@ -167,30 +166,57 @@ The following tables list the Snowflake objects that this plugin delivers data f This plugin enables monitoring of Snowflake budgets, resources linked to them, and their expenditures. It sets up and manages the Dynatrace Snowflake Observability Agent's own budget. -All budgets within the account are reported on as logs and metrics; this includes their details, spending limit, and recent expenditures.
-The plugin runs once a day and excludes already reported expenditures. +All budgets the agent has been granted access to are reported as logs and metrics; this includes their details, spending limit, and recent +expenditures. The plugin runs once a day and excludes already reported expenditures. + +> **Note**: This plugin is **disabled by default** because custom budget monitoring requires per-budget privilege grants. The account budget +> (visible via `SNOWFLAKE.BUDGET_VIEWER`) is accessible automatically once enabled. For custom budgets, use `P_GRANT_BUDGET_MONITORING()` +> (requires admin scope) or grant privileges manually โ€” see below. [Show semantics for this plugin](SEMANTICS.md#budgets_semantics_sec) ### Budgets default configuration -To disable this plugin, set `IS_DISABLED` to `true`. +This plugin is **disabled by default**; you need to explicitly set `IS_ENABLED` to `true` to enable it. -In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable -selected plugins; `IS_DISABLED` is not checked then. - -```json +```yaml plugins: budgets: + is_disabled: true quota: 10 schedule: USING CRON 30 0 * * * UTC - is_disabled: false + monitored_budgets: [] + schedule_grants: USING CRON 30 */12 * * * UTC telemetry: - logs - metrics - events - biz_events +``` + +| Parameter | Type | Default | Description | +| ------------------- | ------ | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `quota` | int | `10` | Credit quota for the agent's own `DTAGENT_BUDGET`. | +| `schedule` | string | `USING CRON 30 0 * * * UTC` | Cron schedule for the budgets collection task. | +| `monitored_budgets` | list | `[]` | Fully-qualified custom budget names to monitor, e.g. `["MY_DB.MY_SCHEMA.MY_BUDGET"]`. 
Names are automatically uppercased; only standard unquoted Snowflake identifiers are supported (`[A-Za-z_][A-Za-z0-9_$]*` per part). | +| `schedule_grants` | string | `USING CRON 30 */12 * * * UTC` | Cron schedule for `TASK_DTAGENT_BUDGETS_GRANTS` (admin scope only). | + +### Enabling the Budgets plugin + +1. Set `is_enabled` to `true` in your configuration file. +1. For **account budget only** (no custom budgets): no additional grants needed โ€” `SNOWFLAKE.BUDGET_VIEWER` is already granted. +1. For **custom budgets**: configure `monitored_budgets` and run `P_GRANT_BUDGET_MONITORING()` (admin scope required), or grant privileges + manually (see below). + +### Granting access to custom budgets manually +For each custom budget `..`, grant the following to `DTAGENT_VIEWER`: + +```sql +grant usage on database to role DTAGENT_VIEWER; +grant usage on schema . to role DTAGENT_VIEWER; +grant snowflake.core.budget role ..!VIEWER to role DTAGENT_VIEWER; +grant database role SNOWFLAKE.USAGE_VIEWER to role DTAGENT_VIEWER; ``` ### Budgets Bill of Materials @@ -199,44 +225,47 @@ The following tables list the Snowflake objects that this plugin delivers data f #### Objects delivered by the `Budgets` plugin -| Name | Type | -| --------------------------------------- | --------------------- | -| ACCOUNT_BUDGET_ADMIN | role | -| ACCOUNT_BUDGET_MONITOR | role | -| BUDGET_OWNER | role | -| DTAGENT_DB.APP.DTAGENT_BUDGET | snowflake.core.budget | -| DTAGENT_DB.APP.TMP_BUDGETS | transient table | -| DTAGENT_DB.APP.TMP_BUDGETS_LIMITS | transient table | -| DTAGENT_DB.APP.TMP_BUDGETS_RESOURCES | transient table | -| DTAGENT_DB.APP.TMP_BUDGET_SPENDING | transient table | -| DTAGENT_DB.APP.P_GET_BUDGETS() | procedure | -| DTAGENT_DB.APP.V_BUDGET_SPENDINGS | view | -| DTAGENT_DB.APP.V_BUDGET_DETAILS | view | -| DTAGENT_DB.CONFIG.UPDATE_BUDGETS_CONF() | procedure | -| DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS | task | +| Name | Type | Comment | +| ------------------------------------------ | 
--------------------- | ----------------------------------------------------------------------------------------- | +| ACCOUNT_BUDGET_ADMIN | role | | +| ACCOUNT_BUDGET_MONITOR | role | | +| BUDGET_OWNER | role | | +| DTAGENT_DB.APP.DTAGENT_BUDGET | snowflake.core.budget | | +| DTAGENT_DB.APP.TMP_BUDGETS | transient table | | +| DTAGENT_DB.APP.TMP_BUDGETS_LIMITS | transient table | | +| DTAGENT_DB.APP.TMP_BUDGETS_RESOURCES | transient table | | +| DTAGENT_DB.APP.TMP_BUDGET_SPENDING | transient table | | +| DTAGENT_DB.APP.P_GET_BUDGETS() | procedure | | +| DTAGENT_DB.APP.V_BUDGET_SPENDINGS | view | | +| DTAGENT_DB.APP.V_BUDGET_DETAILS | view | | +| DTAGENT_DB.CONFIG.UPDATE_BUDGETS_CONF() | procedure | | +| DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS | task | | +| DTAGENT_DB.APP.P_GRANT_BUDGET_MONITORING() | procedure | Optional (admin scope). Grants DTAGENT_VIEWER privileges on configured monitored_budgets. | +| DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS | task | Optional (admin scope). Periodically calls P_GRANT_BUDGET_MONITORING(). 
| #### Objects referenced by the `Budgets` plugin -| Name | Type | Privileges | Granted to | Comment | -| ---------------------------- | ----------- | ------------------------------- | ---------------------- | ---------------------------------------------------------- | -| SNOWFLAKE | application | IMPORTED PRIVILEGES ON DATABASE | ACCOUNT_BUDGET_ADMIN | | -| SNOWFLAKE.BUDGET_ADMIN | role | APPLICATION ROLE | ACCOUNT_BUDGET_ADMIN | | -| SNOWFLAKE.BUDGET_VIEWER | role | APPLICATION ROLE | ACCOUNT_BUDGET_MONITOR | | -| SNOWFLAKE.BUDGET_CREATOR | role | DATABASE ROLE | BUDGET_OWNER | | -| ACCOUNT_BUDGET_ADMIN | role | ROLE | DTAGENT_ADMIN | | -| ACCOUNT_BUDGET_MONITOR | role | ROLE | DTAGENT_VIEWER | | -| BUDGET_OWNER | role | ROLE | DTAGENT_ADMIN | | -| SNOWFLAKE.CORE.BUDGET | command | USAGE | | | -| $budget!GET_LINKED_RESOURCES | procedure | USAGE | | We call this procedure on each budget defined in Snowflake | -| $budget!GET_SPENDING_LIMIT | procedure | USAGE | | We call this procedure on each budget defined in Snowflake | -| $budget!GET_SPENDING_HISTORY | procedure | USAGE | | We call this procedure on each budget defined in Snowflake | +| Name | Type | Privileges | Granted to | Comment | +| ---------------------------- | ----------- | ------------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------- | +| SNOWFLAKE | application | IMPORTED PRIVILEGES ON DATABASE | ACCOUNT_BUDGET_ADMIN | | +| SNOWFLAKE.BUDGET_ADMIN | role | APPLICATION ROLE | ACCOUNT_BUDGET_ADMIN | | +| SNOWFLAKE.BUDGET_VIEWER | role | APPLICATION ROLE | ACCOUNT_BUDGET_MONITOR | | +| SNOWFLAKE.BUDGET_CREATOR | role | DATABASE ROLE | BUDGET_OWNER | | +| ACCOUNT_BUDGET_ADMIN | role | ROLE | DTAGENT_ADMIN | | +| ACCOUNT_BUDGET_MONITOR | role | ROLE | DTAGENT_VIEWER | | +| BUDGET_OWNER | role | ROLE | DTAGENT_ADMIN | | +| SNOWFLAKE.CORE.BUDGET | command | USAGE | | | +| $budget!GET_LINKED_RESOURCES | procedure | 
USAGE | | We call this procedure on each budget defined in Snowflake | +| $budget!GET_SPENDING_LIMIT | procedure | USAGE | | We call this procedure on each budget defined in Snowflake | +| $budget!GET_SPENDING_HISTORY | procedure | USAGE | | We call this procedure on each budget defined in Snowflake | +| SNOWFLAKE.USAGE_VIEWER | role | DATABASE ROLE | DTAGENT_VIEWER | Optional (admin scope). Required for custom budget monitoring via P_GRANT_BUDGET_MONITORING(). | ## The Data Schemas plugin Enables monitoring of data schema changes. Reports events on recent modifications to objects (tables, schemas, databases) made by DDL -queries, within the last 4 hours. +queries, within a configurable lookback window (default: 4 hours, see `plugins.data_schemas.lookback_hours`). [Show semantics for this plugin](SEMANTICS.md#data_schemas_semantics_sec) @@ -247,20 +276,29 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. 
-```json +```yaml plugins: data_schemas: + lookback_hours: 4 schedule: USING CRON 0 0,8,16 * * * UTC is_disabled: false exclude: [] include: - - '%' + - "%" telemetry: - events - biz_events - ``` +| Key | Type | Default | Description | +| ------------------------------------- | ------ | ------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.data_schemas.lookback_hours` | int | `4` | How far back (in hours) the plugin looks for DDL-based schema changes on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Default is `4`h to account for the up-to-3-hour data ingestion delay in `ACCESS_HISTORY`. | +| `plugins.data_schemas.schedule` | string | `USING CRON 0 0,8,16 * * * UTC` | Cron schedule for the data schemas collection task. | +| `plugins.data_schemas.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.data_schemas.include` | list | `["%"]` | List of object name patterns to include (SQL `LIKE` syntax). Default includes all objects. | +| `plugins.data_schemas.exclude` | list | `[]` | List of object name patterns to exclude (SQL `LIKE` syntax). Takes precedence over `include`. | +| `plugins.data_schemas.telemetry` | list | `["events", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. 
| + ### Data Schemas Bill of Materials The following tables list the Snowflake objects that this plugin delivers data from or references. @@ -303,21 +341,20 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: data_volume: include: - DTAGENT_DB.%.% - - '%.PUBLIC.%' + - "%.PUBLIC.%" exclude: - - '%.INFORMATION_SCHEMA.%' - - '%.%.TMP_%' + - "%.INFORMATION_SCHEMA.%" + - "%.%.TMP_%" schedule: USING CRON 30 0,4,8,12,16,20 * * * UTC is_disabled: false telemetry: - metrics - biz_events - ``` ### Data Volume Bill of Materials @@ -360,11 +397,11 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: dynamic_tables: include: - - '%.%.%' + - "%.%.%" exclude: - DTAGENT_DB.%.% schedule: USING CRON */30 * * * * UTC @@ -374,14 +411,24 @@ plugins: - metrics - logs - biz_events - ``` > **IMPORTANT**: For this plugin to function correctly, `MONITOR on DYNAMIC TABLES` must be granted to the `DTAGENT_VIEWER` role. By > default, when the `admin` scope is installed, this is handled by the `P_GRANT_MONITOR_DYNAMIC_TABLES()` procedure, which is executed with > the elevated privileges of the `DTAGENT_ADMIN` role (created only when the `admin` scope is installed), via the > `APP.TASK_DTAGENT_DYNAMIC_TABLES_GRANTS` task. The schedule for this task can be configured separately using the -> `PLUGINS.DYNAMIC_TABLES.SCHEDULE_GRANTS` configuration option. Alternatively, you may choose to: +> `PLUGINS.DYNAMIC_TABLES.SCHEDULE_GRANTS` configuration option. 
+ +The grant granularity is derived automatically from the `include` pattern: + +| Include pattern | Grant level | SQL issued | +| ----------------------------- | ----------- | ---------------------------------------------------------- | +| `%.%.%` or `PROD_DB.%.%` | Database | `GRANT MONITOR ON ALL/FUTURE DYNAMIC TABLES IN DATABASE โ€ฆ` | +| `PROD_DB.ANALYTICS.%` | Schema | `GRANT MONITOR ON ALL/FUTURE DYNAMIC TABLES IN SCHEMA โ€ฆ` | +| `PROD_DB.ANALYTICS.ORDERS_DT` | Table | `GRANT MONITOR ON DYNAMIC TABLE โ€ฆ` (no FUTURE grant) | + +Alternatively, you may choose to grant the required permissions manually, using the appropriate +`GRANT MONITOR ON ALL/FUTURE DYNAMIC TABLES IN โ€ฆ` statement, depending on the desired granularity. ### Dynamic Tables Bill of Materials @@ -401,14 +448,17 @@ The following tables list the Snowflake objects that this plugin delivers data f #### Objects referenced by the `Dynamic Tables` plugin -| Name | Type | Privileges | Granted to | Comment | -| ------------------------------------------------ | ------------- | ---------- | -------------- | -------------------------------------------------------------------------- | -| SHOW DATABASES | command | USAGE | | | -| ALL DYNAMIC TABLES IN DATABASE $database | dynamic table | MONITOR | DTAGENT_VIEWER | We grant that on every database selected in configuration or all (default) | -| ALL FUTURE TABLES IN DATABASE $database | table | MONITOR | DTAGENT_VIEWER | We grant that on every database selected in configuration or all (default) | -| INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY | view | USAGE | DTAGENT_VIEWER | | -| INFORMATION_SCHEMA.DYNAMIC_TABLE_GRAPH_HISTORY | view | USAGE | DTAGENT_VIEWER | | -| INFORMATION_SCHEMA.DYNAMIC_TABLES | view | USAGE | DTAGENT_VIEWER | | +| Name | Type | Privileges | Granted to | Comment | +| ----------------------------------------------------- | ------------- | ---------- | -------------- | 
------------------------------------------------------------------------------------------------------------------------ | +| SHOW DATABASES | command | USAGE | | | +| ALL DYNAMIC TABLES IN DATABASE $database | dynamic table | MONITOR | DTAGENT_VIEWER | Granted when include pattern has wildcard schema (e.g. DB.%.%) | +| ALL FUTURE DYNAMIC TABLES IN DATABASE $database | dynamic table | MONITOR | DTAGENT_VIEWER | Granted when include pattern has wildcard schema (e.g. DB.%.%) | +| ALL DYNAMIC TABLES IN SCHEMA $database.$schema | dynamic table | MONITOR | DTAGENT_VIEWER | Granted when include pattern has specific schema (e.g. DB.ANALYTICS.%) | +| ALL FUTURE DYNAMIC TABLES IN SCHEMA $database.$schema | dynamic table | MONITOR | DTAGENT_VIEWER | Granted when include pattern has specific schema (e.g. DB.ANALYTICS.%) | +| DYNAMIC TABLE $database.$schema.$table | dynamic table | MONITOR | DTAGENT_VIEWER | Granted when include pattern specifies an exact table name (e.g. DB.ANALYTICS.ORDERS_DT); no FUTURE grant at table level | +| INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY | view | USAGE | DTAGENT_VIEWER | | +| INFORMATION_SCHEMA.DYNAMIC_TABLE_GRAPH_HISTORY | view | USAGE | DTAGENT_VIEWER | | +| INFORMATION_SCHEMA.DYNAMIC_TABLES | view | USAGE | DTAGENT_VIEWER | | @@ -416,13 +466,13 @@ The following tables list the Snowflake objects that this plugin delivers data f This plugin delivers to Dynatrace data reported by Snowflake Trail in the `EVENT TABLE`. 
-By default, it runs every 30 minutes and registers entries from the last 12 hours, omitting the ones, which: +By default, it runs every 30 minutes and processes only new entries since the last run (bounded by a configurable lookback window of 24 +hours), omitting entries that: -- where already delivered, -- with scope set to `DTAGENT_OTLP` as they are internal log recording entries sent over the OpenTelemetry protocol -- related to execution of other instances of Dynatrace Snowflake Observability Agent, or -- with importance below the level set as `CORE.LOG_LEVEL`, i.e., only warnings or errors from the given Dynatrace Snowflake Observability - Agent instance are reported. +- were already delivered, +- have scope set to `DTAGENT_OTLP` (internal log recording entries sent over the OpenTelemetry protocol), or +- have importance below `WARN` for any `DTAGENT_*_DB` instance, i.e., only warnings or errors from Dynatrace Snowflake Observability Agent + instances are reported. By default, it produces log entries containing the following information: @@ -453,30 +503,86 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. 
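The incremental-processing rule above — start from the more recent of the last processed timestamp and `now - lookback` — can be sketched as follows. This is an illustrative model of the window computation, not the plugin's actual code:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def query_start(
    last_processed: Optional[datetime],
    lookback_hours: int,
    now: Optional[datetime] = None,
) -> datetime:
    """Compute the start of the query window for an incremental run.

    Never reads further back than the lookback window, and never re-reads
    entries already processed on a previous run.
    """
    now = now or datetime.now(timezone.utc)
    floor = now - timedelta(hours=lookback_hours)
    if last_processed is None:
        # First run, or after a reset: start at the edge of the window.
        return floor
    # Normal operation: resume where we left off, capped by the window
    # so catch-up after a long gap stays bounded.
    return max(last_processed, floor)
```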
-```json +```yaml plugins: event_log: max_entries: 10000 - retention_hours: 12 + lookback_hours: 24 + retention_hours: 24 schedule: USING CRON */30 * * * * UTC schedule_cleanup: USING CRON 0 * * * * UTC is_disabled: false + cross_tenant_monitoring: true + databases: [] telemetry: - metrics - logs - biz_events - spans - ``` +| Key | Type | Default | Description | +| ------------------------------------ | ------ | -------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.event_log.max_entries` | int | `10000` | Maximum number of event log entries fetched per run. Acts as a safety cap to avoid long-running queries. | +| `plugins.event_log.lookback_hours` | int | `24` | How far back (in hours) the plugin looks for new events on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Increase for initial setup; decrease to reduce query cost. | +| `plugins.event_log.retention_hours` | int | `24` | How long (in hours) the cleanup task retains entries in `STATUS.EVENT_LOG`. Only applies if this agent instance owns the event table. | +| `plugins.event_log.schedule` | string | `USING CRON */30 * * * * UTC` | Cron schedule for the main event log processing task. | +| `plugins.event_log.schedule_cleanup` | string | `USING CRON 0 * * * * UTC` | Cron schedule for the cleanup task that removes old entries from `STATUS.EVENT_LOG`. 
| +| `plugins.event_log.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.event_log.telemetry` | list | `["metrics", "logs", "biz_events", "spans"]` | Telemetry types to emit. Remove items to suppress specific output types. | + +### Cost Optimization Guidance + +The event log plugin queries `STATUS.EVENT_LOG` on every run. The following settings directly affect compute cost: + +- **`lookback_hours`**: This window defines how far back the plugin reads on each run. If no prior processed timestamp is available (first + run, or after a reset), the plugin starts from `now - lookback_hours`. During normal operation the plugin starts from the more recent of + the last processed timestamp and `now - lookback_hours`, capping catch-up after long gaps. A large lookback window can cause heavy queries + after a reset โ€” consider starting with `12` or `24` and increasing only if needed. +- **`max_entries`**: Hard cap on rows processed per run. The default (`10000`) protects against runaway queries. If your Snowflake account + generates very high event volumes, lower this value and rely on the schedule frequency to catch up incrementally. +- **`retention_hours`**: Shorter retention reduces the size of `STATUS.EVENT_LOG`, which improves scan performance. Set this higher than + `lookback_hours` to avoid situations where the cleanup removes events before the plugin can process them. The recommended ratio is + `retention_hours >= lookback_hours`. +- **`schedule`**: Running more frequently (e.g., every 5 minutes) increases credit usage. The default every-30-minutes cadence balances + freshness against cost. For high-volume accounts, consider running less frequently with higher `max_entries`. + > **IMPORTANT**: A dedicated cleanup task, `APP.TASK_DTAGENT_EVENT_LOG_CLEANUP`, ensures that the `EVENT_LOG` table contains only data no -> older than the duration you define with the `PLUGINS.EVENT_LOG.RETENTION_HOURS` configuration option. 
-> You can schedule this task separately using the `PLUGINS.EVENT_LOG.SCHEDULE_CLEANUP` configuration option, run the cleanup procedure
-> `APP.P_CLEANUP_EVENT_LOG()` manually, or manage the retention of data in the `EVENT_LOG` table yourself.
+> older than the duration you define with the `plugins.event_log.retention_hours` configuration option. You can schedule this task
+> separately using the `plugins.event_log.schedule_cleanup` configuration option, run the cleanup procedure `APP.P_CLEANUP_EVENT_LOG()`
+> manually, or manage the retention of data in the `EVENT_LOG` table yourself.
 
 > **INFO**: The `EVENT_LOG` table cleanup process works only if this specific instance of Dynatrace Snowflake Observability Agent set up the
 > table.
 
+### Cross-Tenant Monitoring
+
+By default (`plugins.event_log.cross_tenant_monitoring: true`) the plugin also reports `WARN`/`ERROR` log entries, metrics, and spans
+originating from **other** `DTAGENT_*_DB` instances visible in the same event table. This allows one DSOA deployment to surface health
+issues from sibling deployments without logging into Snowflake directly.
+
+To enable cross-tenant monitoring on **only one DSOA tenant**, e.g., to avoid duplicate reporting across deployments, set
+`cross_tenant_monitoring: false` on all other tenants.
+
+```yaml
+plugins:
+  event_log:
+    cross_tenant_monitoring: false # disable on tenants that should report only their own WARN/ERROR self-monitoring entries
+```
+
+### Database Filtering
+
+Use `plugins.event_log.databases` to restrict event log monitoring to specific databases. The list accepts SQL `LIKE` patterns (`%` matches
+any sequence of characters, `_` matches any single character). When the list is absent or empty, **all databases** are included.
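The `%`/`_` semantics described above translate mechanically into regular expressions. The following is a hedged, illustrative sketch of such a matcher — the function names are hypothetical and this is not the actual filter implementation used by the agent:

```python
import re


def like_to_regex(pattern: str) -> "re.Pattern":
    """Translate a SQL LIKE pattern into an anchored, case-insensitive regex.

    '%' matches any sequence of characters; '_' matches any single character;
    everything else is matched literally.
    """
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)


def database_included(name: str, patterns: list) -> bool:
    # An absent or empty list means: include all databases.
    return not patterns or any(like_to_regex(p).match(name) for p in patterns)
```

Note that, as in SQL `LIKE`, an underscore in a pattern such as `MYAPP_DB` is itself a single-character wildcard, so it also matches names like `MYAPPXDB`.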
+
+```yaml
+plugins:
+  event_log:
+    databases:
+      - MYAPP_DB # exact match
+      - ANALYTICS% # any database whose name starts with ANALYTICS
+```
+
 ### Event Log Bill of Materials
 
 The following tables list the Snowflake objects that this plugin delivers data from or references.
 
@@ -488,6 +594,7 @@ The following tables list the Snowflake objects that this plugin delivers data f
 | DTAGENT_DB.STATUS.EVENT_LOG | table/view | Dynatrace Snowflake Observability Agent can setup an event table if one does not exist. It creates a view over an existing event log table if that table was not setup by the actual Dynatrace Snowflake Observability Agent instance. |
 | DTAGENT_DB.APP.SETUP_EVENT_TABLE() | procedure | |
 | DTAGENT_DB.APP.P_CLEANUP_EVENT_LOG() | procedure | |
+| DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(VARCHAR) | function | |
 | DTAGENT_DB.APP.V_EVENT_LOG | view | |
 | DTAGENT_DB.APP.V_EVENT_LOG_SPANS_INSTRUMENTED | view | |
 | DTAGENT_DB.APP.V_EVENT_LOG_METRICS_INSTRUMENTED | view | |
@@ -525,18 +632,25 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property
 `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins;
 `IS_DISABLED` is not checked then.
-```json +```yaml plugins: event_usage: + lookback_hours: 6 schedule: USING CRON 0 * * * * UTC is_disabled: false telemetry: - metrics - logs - biz_events - ``` +| Key | Type | Default | Description | +| ------------------------------------ | ------ | ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.event_usage.lookback_hours` | int | `6` | How far back (in hours) the plugin looks for event usage history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Default is `6`h to account for the up-to-3-hour data ingestion delay in `EVENT_USAGE_HISTORY`. | +| `plugins.event_usage.schedule` | string | `USING CRON 0 * * * * UTC` | Cron schedule for the event usage collection task. | +| `plugins.event_usage.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.event_usage.telemetry` | list | `["metrics", "logs", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. | + ### Event Usage Bill of Materials The following tables list the Snowflake objects that this plugin delivers data from or references. @@ -582,17 +696,24 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. 
-```json +```yaml plugins: login_history: + lookback_hours: 24 schedule: USING CRON */30 * * * * UTC is_disabled: false telemetry: - logs - biz_events - ``` +| Key | Type | Default | Description | +| -------------------------------------- | ------ | ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.login_history.lookback_hours` | int | `24` | How far back (in hours) the plugin looks for login and session events on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. | +| `plugins.login_history.schedule` | string | `USING CRON */30 * * * * UTC` | Cron schedule for the login history collection task. | +| `plugins.login_history.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.login_history.telemetry` | list | `["logs", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. | + ### Login History Bill of Materials The following tables list the Snowflake objects that this plugin delivers data from or references. @@ -650,7 +771,7 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. 
-```json +```yaml plugins: query_history: schedule_grants: USING CRON */30 * * * * UTC @@ -663,7 +784,6 @@ plugins: - logs - biz_events - spans - ``` The plugin can be configured to retrieve query plan and acceleration estimates for the slowest queries. This analysis uses telemetry from @@ -745,7 +865,7 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: resource_monitors: schedule: USING CRON */30 * * * * UTC @@ -755,7 +875,6 @@ plugins: - metrics - events - biz_events - ``` ### Resource Monitors Bill of Materials @@ -806,7 +925,7 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: shares: schedule: USING CRON */30 * * * * UTC @@ -815,12 +934,11 @@ plugins: exclude: - "" include: - - '%.%.%' + - "%.%.%" telemetry: - logs - events - biz_events - ``` ### Shares Bill of Materials @@ -885,9 +1003,11 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. 
-```json +```yaml plugins: tasks: + lookback_hours: 4 + lookback_hours_versions: 720 schedule: USING CRON 30 * * * * UTC is_disabled: false telemetry: @@ -895,9 +1015,20 @@ plugins: - metrics - events - biz_events - ``` +| Key | Type | Default | Description | +| --------------------------------------- | ------ | --------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.tasks.lookback_hours` | int | `4` | How far back (in hours) the plugin looks for serverless task history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Default is `4`h to account for the up-to-3-hour data ingestion delay in `SERVERLESS_TASK_HISTORY`. | +| `plugins.tasks.lookback_hours_versions` | int | `720` | How far back (in hours) the plugin looks for task version history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours_versions`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours_versions`, so it never reads data older than the lookback window. Default is `720`h (30 days) โ€” task graph versions change infrequently and a longer window ensures new deployments catch all recent version changes. 
| +| `plugins.tasks.schedule` | string | `USING CRON 30 * * * * UTC` | Cron schedule for the tasks collection task. | +| `plugins.tasks.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.tasks.telemetry` | list | `["logs", "metrics", "events", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. | + +> **Note**: `lookback_hours` and `lookback_hours_versions` serve different data sources with different update frequencies. +> `SERVERLESS_TASK_HISTORY` is updated frequently (per task run), while `TASK_VERSIONS` only changes when a task graph is modified โ€” hence +> the much longer default for versions. + ### Tasks Bill of Materials The following tables list the Snowflake objects that this plugin delivers data from or references. @@ -942,7 +1073,7 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: trust_center: schedule: USING CRON 30 */12 * * * UTC @@ -953,7 +1084,6 @@ plugins: - logs - events - biz_events - ``` ### Trust Center Bill of Materials @@ -1011,7 +1141,7 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. -```json +```yaml plugins: users: schedule: USING CRON 0 0 * * * UTC @@ -1023,7 +1153,6 @@ plugins: - logs - events - biz_events - ``` ### Users Bill of Materials @@ -1078,18 +1207,25 @@ To disable this plugin, set `IS_DISABLED` to `true`. In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then. 
-```json +```yaml plugins: warehouse_usage: + lookback_hours: 24 schedule: USING CRON 0 * * * * UTC is_disabled: false telemetry: - logs - metrics - biz_events - ``` +| Key | Type | Default | Description | +| ---------------------------------------- | ------ | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.warehouse_usage.lookback_hours` | int | `24` | How far back (in hours) the plugin looks for warehouse events, load, and metering history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Applies to all three views (`WAREHOUSE_EVENTS_HISTORY`, `WAREHOUSE_LOAD_HISTORY`, `WAREHOUSE_METERING_HISTORY`). | +| `plugins.warehouse_usage.schedule` | string | `USING CRON 0 * * * * UTC` | Cron schedule for the warehouse usage collection task. | +| `plugins.warehouse_usage.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.warehouse_usage.telemetry` | list | `["logs", "metrics", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. | + ### Warehouse Usage Bill of Materials The following tables list the Snowflake objects that this plugin delivers data from or references. 
diff --git a/docs/PLUGIN_DEVELOPMENT.md b/docs/PLUGIN_DEVELOPMENT.md index 107e05e8..c981d650 100644 --- a/docs/PLUGIN_DEVELOPMENT.md +++ b/docs/PLUGIN_DEVELOPMENT.md @@ -691,6 +691,16 @@ references: ### 8. Create Plugin Tests +#### Test environment checklist + +Before writing the test file, plan and complete the following steps โ€” they define the scope of your tests and drive `docs/USECASES.md`: + +- **Identify core use cases** โ€” List the key observability scenarios your plugin enables (e.g., cost analysis, security monitoring). These become the basis for fixture data *and* entries in `docs/USECASES.md`. +- **Capture representative fixture data** โ€” Run the plugin against a live Snowflake instance (or craft fixtures manually) that cover each core use case. Store NDJSON fixtures in `test/test_data/{name}[_{view_suffix}].ndjson`. +- **Define golden results** โ€” Record expected telemetry counts (logs, metrics, spans, events) per scenario in `test/test_results/test_{name}/` so regressions are caught automatically. +- **Validate all `disabled_telemetry` combinations** โ€” At minimum: `[]`, `["metrics"]`, `["logs"]`, and `["logs", "spans", "metrics", "events"]`. +- **Document use cases** โ€” Add the new use cases to `docs/USECASES.md` under the appropriate [Data Platform Observability](DPO.md) theme(s). 
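NDJSON fixtures are plain text with one JSON object per line, which keeps them diffable and reviewable. A minimal sketch of how such a fixture could be consumed — the function name here is hypothetical; the project's real helpers (e.g. `_safe_get_fixture_entries`) live in the test utils:

```python
import json
from typing import Dict, Generator, Optional


def read_ndjson_fixture(path: str, limit: Optional[int] = None) -> Generator[Dict, None, None]:
    """Yield one dict per non-blank line of an NDJSON fixture file.

    An optional limit caps the number of rows, the way a mocked
    _get_table_rows() might cap fixture data in tests.
    """
    with open(path, encoding="utf-8") as fixture:
        yielded = 0
        for line in fixture:
            if limit is not None and yielded >= limit:
                return
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
                yielded += 1
```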
+ Create `test/plugins/test_example_plugin.py`: ```python @@ -702,9 +712,9 @@ Create `test/plugins/test_example_plugin.py`: class TestExamplePlugin: import pytest - # Define pickle files for test data - PICKLES = { - "APP.V_EXAMPLE_PLUGIN_INSTRUMENTED": "test/test_data/example_plugin.pkl" + # Define NDJSON fixture files for test data + FIXTURES = { + "APP.V_EXAMPLE_PLUGIN_INSTRUMENTED": "test/test_data/example_plugin.ndjson" } @pytest.mark.xdist_group(name="test_telemetry") @@ -718,13 +728,13 @@ class TestExamplePlugin: # ====================================================================== # Generate/load test data - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) - # Mock the plugin to use pickled data instead of querying Snowflake + # Mock the plugin to use fixture data instead of querying Snowflake class TestExamplePluginPlugin(ExamplePluginPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries( - TestExamplePlugin.PICKLES, t_data, limit=2 + return utils._safe_get_fixture_entries( + TestExamplePlugin.FIXTURES, t_data, limit=2 ) def __local_get_plugin_class(source: str): @@ -769,29 +779,30 @@ if __name__ == "__main__": 1. **Test Structure:** - Create one test class per plugin - Name it `Test{PluginName}` - - Define `PICKLES` dict mapping queries to pickle files - - Override `_get_table_rows()` to return mocked data + - Define `FIXTURES` dict mapping Snowflake view names to `test/test_data/.ndjson` paths + - Override `_get_table_rows()` to return fixture data 2. 
**Generating Test Data:** - - First run generates pickle files from actual Snowflake queries + - Fixture files are NDJSON (one JSON object per line), stored in `test/test_data/` + - First run with `-p` generates fixtures from actual Snowflake queries - Requires valid test credentials (see [CONTRIBUTING.md](CONTRIBUTING.md)) - - Run: `./scripts/dev/test.sh test_example_plugin -p` + - Run: `pytest test/plugins/test_example_plugin.py -p` 3. **Running Tests:** ```bash # Run single plugin test - ./scripts/dev/test.sh test_example_plugin + pytest test/plugins/test_example_plugin.py -v - # Run with pickling (regenerate test data) - ./scripts/dev/test.sh test_example_plugin -p + # Regenerate fixture data from live Snowflake + pytest test/plugins/test_example_plugin.py -p # Run all plugin tests pytest test/plugins/ ``` 4. **Test Modes:** - - **Local mode** (no credentials): Uses mocked APIs, doesn't send data + - **Local mode** (no credentials): Uses NDJSON fixtures, doesn't send data - **Live mode** (with credentials): Connects to Snowflake and Dynatrace ### 9. Build and Deploy @@ -1019,7 +1030,7 @@ After creating all the plugin files: ### Testing Best Practices 1. **Test with realistic data:** - - Generate pickle files from actual Snowflake queries + - Generate NDJSON fixtures from actual Snowflake queries (use `-p` flag) - Include edge cases (nulls, special characters, etc.) - Test with varying data volumes @@ -1030,7 +1041,7 @@ After creating all the plugin files: 3. **Mock external dependencies:** - Override `_get_table_rows()` in tests - - Use `_safe_get_unpickled_entries()` for consistent test data + - Use `_safe_get_fixture_entries()` for consistent test data - Don't rely on live Snowflake connections in unit tests --- @@ -1535,8 +1546,8 @@ def process(self, run_id: str, run_proc: bool = True) -> Dict[str, Dict[str, int **Solutions:** -1. Regenerate test data: `./scripts/dev/test.sh test_plugin -p` -2. Check pickle file exists in `test/test_data/` +1. 
Regenerate fixture data: `pytest test/plugins/test_plugin.py -p` +2. Check NDJSON fixture exists in `test/test_data/` 3. Verify base_count matches expected output 4. Check that `affecting_types_for_entries` includes all relevant types 5. Run with verbose output: `pytest -s -v test/plugins/test_plugin.py` @@ -1717,7 +1728,11 @@ When creating a new plugin, ensure you have completed all these steps: - [ ] Documented plugin in readme.md - [ ] Created BOM file (bom.yml) - [ ] Written plugin tests -- [ ] Generated test data (pickle files) +- [ ] Identified core use cases and added them to `docs/USECASES.md` under the appropriate theme(s) +- [ ] Captured representative NDJSON fixture data covering core use cases +- [ ] Defined golden results in `test/test_results/test_{name}/` +- [ ] Validated all `disabled_telemetry` combinations (at minimum: `[]`, `["metrics"]`, `["logs"]`, all-disabled) +- [ ] Generated NDJSON fixture data (`pytest test/plugins/test_.py -p`) - [ ] Verified tests pass - [ ] Built the agent (`./scripts/dev/build.sh`) - [ ] Deployed to test environment diff --git a/docs/SEMANTICS.md b/docs/SEMANTICS.md index f311efcc..db1a7fb8 100644 --- a/docs/SEMANTICS.md +++ b/docs/SEMANTICS.md @@ -589,7 +589,7 @@ check the `Context Name` column below. | ---------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | ------------------------------- | | snowflake.​event.​trigger | Additionally to sending logs, each entry in `EVENT_TIMESTAMPS` is sent as event with key set to `snowflake.event.trigger`, value to key from `EVENT_TIMESTAMPS` and `timestamp` set to the key value. | snowflake.grant.created_on | outbound_shares, inbound_shares | | snowflake.​grant.​created_on | The timestamp when the grant was created. 
| 1639051180946000000 | outbound_shares | -| snowflake.​share.​created_on | The timestamp when the share was created. | 1639051180714000000 | outbound_shares, inbound_shares | +| snowflake.​share.​created_on | The timestamp when the share was created. | 1639051180714000000 | shares | | snowflake.​table.​created_on | The timestamp when the table was created. | 1649940827875000000 | inbound_shares | | snowflake.​table.​ddl | The timestamp of the last DDL operation performed on the table or view. | 1639940327875000000 | inbound_shares | | snowflake.​table.​update | The timestamp when the object was last altered by a DML, DDL, or background metadata operation. | 1649962827875000000 | inbound_shares | @@ -733,7 +733,11 @@ check the `Context Name` column below. | snowflake.​user.​expires_at | The expiration date of the user account. | 1620213179885000000 | users | | snowflake.​user.​ext_authn.​duo | Indicates if Duo authentication is enabled for the user. | true | users | | snowflake.​user.​ext_authn.​uid | The external authentication UID for the user. | ext123 | users | +| snowflake.​user.​has_mfa | Indicates if the user is enrolled for multi
-factor authentication. | true | users | | snowflake.​user.​has_password | Indicates if the user has a password set. | true | users | +| snowflake.​user.​has_pat | Indicates if a programmatic access token has been generated for the user. | true | users | +| snowflake.​user.​has_rsa | Indicates if RSA public key authentication is configured for the user. | true | users | +| snowflake.​user.​has_workload_identity | Indicates if workload identity federation is configured for the user. | true | users | | snowflake.​user.​id | The unique identifier for the user. | 12345 | users | | snowflake.​user.​is_disabled | Indicates if the user account is disabled. | false | users | | snowflake.​user.​is_from_organization | Indicates if the user was imported from a global organization. | true | users | diff --git a/pytest.ini b/pytest.ini index bee27adb..975babe7 100644 --- a/pytest.ini +++ b/pytest.ini @@ -4,4 +4,5 @@ addopts = --ignore-glob=**/otel_*_test.py log_cli = false log_cli_level = DEBUG markers = - xdist_group: mark test to be executed in named group \ No newline at end of file + xdist_group: mark test to be executed in named group + slow: mark test as slow (build/package integration tests, skipped by default) \ No newline at end of file diff --git a/requirements.txt b/requirements.txt index b3126ab7..76367d48 100644 --- a/requirements.txt +++ b/requirements.txt @@ -21,34 +21,68 @@ # SOFTWARE. 
# # -snowflake>=1.5.1 # IMPORTANT: Requires Python >=3.9 and <3.12 (due to snowflake-legacy dependency) + +# === Snowflake SDK === +# IMPORTANT: Requires Python >=3.9 and <3.12 (due to snowflake-legacy dependency) +snowflake>=1.11.0 +snowflake-core==1.11.0 +snowflake-snowpark-python>=1.45.0 +snowflake-connector-python[secure-local-storage]>=4.3.0 +streamlit + +# === OpenTelemetry === +opentelemetry-api==1.39.1 +opentelemetry-sdk==1.39.1 +opentelemetry-exporter-otlp-proto-http==1.39.1 +opentelemetry-proto==1.39.1 + +# === Data Processing & Utilities === pandas +numpy toml tzlocal -#%DEV: -streamlit -snowflake-snowpark-python==1.40.0 -snowflake-connector-python[secure-local-storage] -opentelemetry-api==1.38.0 -opentelemetry-sdk==1.38.0 -opentelemetry-exporter-otlp-proto-http==1.38.0 -opentelemetry-proto==1.38.0 -snowflake-core==1.5.1 uuid jsonstrip -numpy +pyyaml + +# === Testing === pytest pytest-tap -inflect -markdown2 -weasyprint -pyyaml -# github tests check-jsonschema + +# === Linting & Formatting === pylint flake8 +flake8-docstrings black==26.1.0 sqlfluff yamllint -flake8-docstrings -#%:DEV + +# === Documentation === +inflect +markdown2 +weasyprint + +# === Security: Pin vulnerable dependencies to fixed versions === +# CVE-2025-6176: Brotli DoS vulnerability +brotli>=1.2.0 +# CVE-2026-26007: Cryptography SECT curve validation issue +cryptography>=46.0.5 +# CVE-2025-68146, CVE-2026-22701: Filelock race condition +filelock>=3.20.3 +# CVE-2025-66034: Fonttools buffer overflow +fonttools>=4.60.2 +# CVE-2026-23949: Jaraco.context privilege escalation +jaraco-context>=6.1.0 +# CVE-2026-25990: Pillow buffer overflow +pillow>=12.1.1 +# CVE-2026-0994: Protobuf DoS in json_format.ParseDict() +# Using 5.x branch to maintain compatibility with snowflake-snowpark-python +protobuf>=5.29.6,<6.0.0 +# CVE-2025-66418, CVE-2025-66471, CVE-2026-21441: urllib3 security issues +urllib3>=2.6.3 +# CVE-2026-24049: Wheel path traversal vulnerability +wheel>=0.46.2 + +# dependencies 
+chardet>=3.0.2,<6.0.0 \ No newline at end of file diff --git a/scripts/dev/build.sh b/scripts/dev/build.sh index 23fc20a6..ce516784 100755 --- a/scripts/dev/build.sh +++ b/scripts/dev/build.sh @@ -29,6 +29,11 @@ set -euo pipefail +# Activate virtual environment +if [ -f ".venv/bin/activate" ]; then + source .venv/bin/activate +fi + # Check for required commands if ! command -v gawk &> /dev/null; then echo "Error: Required command 'gawk' is not installed." diff --git a/scripts/dev/build_docs.sh b/scripts/dev/build_docs.sh index ae8cca99..0cb27f6f 100755 --- a/scripts/dev/build_docs.sh +++ b/scripts/dev/build_docs.sh @@ -25,6 +25,11 @@ # # This script is used to build target documentation into README.md file +# Activate virtual environment +if [ -f ".venv/bin/activate" ]; then + source .venv/bin/activate +fi + ./scripts/dev/build.sh VERSION=$(grep 'VERSION =' build/_version.py | awk -F'"' '{print $2}') diff --git a/scripts/dev/compile.sh b/scripts/dev/compile.sh index 6230ca7c..97f4ff00 100755 --- a/scripts/dev/compile.sh +++ b/scripts/dev/compile.sh @@ -23,6 +23,11 @@ # # +# Activate virtual environment +if [ -f ".venv/bin/activate" ]; then + source .venv/bin/activate +fi + # Updating build number TS=$(date +%s%3) diff --git a/scripts/dev/package.sh b/scripts/dev/package.sh index 5cc5042b..646931d9 100755 --- a/scripts/dev/package.sh +++ b/scripts/dev/package.sh @@ -23,6 +23,11 @@ # # +# Activate virtual environment +if [ -f ".venv/bin/activate" ]; then + source .venv/bin/activate +fi + # this is an internal script for packaging Dynatrace Snowflake Observability Agent for distribution # Args: # * PARAM [OPTIONAL] - can be either diff --git a/scripts/dev/sanitize_fixtures.py b/scripts/dev/sanitize_fixtures.py new file mode 100644 index 00000000..a68609f4 --- /dev/null +++ b/scripts/dev/sanitize_fixtures.py @@ -0,0 +1,215 @@ +#!/usr/bin/env python3 +# +# +# Copyright (c) 2025 Dynatrace Open Source +# +# Permission is hereby granted, free of charge, to any person 
obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# +# +"""PII sanitization script for NDJSON fixture files and golden result files. + +Applies deterministic find-and-replace of known PII values (real usernames, +IPs, tenant URLs, database names) with synthetic test values, using the +shared deny-list at ``test/test_data/_deny_patterns.json``. + +Operates on plain text so changes are fully reviewable. 
+ +Usage:: + + # Sanitize all fixture and golden result files (in-place) + python scripts/dev/sanitize_fixtures.py + + # Preview changes without writing (dry run) + python scripts/dev/sanitize_fixtures.py --dry-run + + # Report only โ€” show which files would change, no replacements + python scripts/dev/sanitize_fixtures.py --report +""" + +import argparse +import json +import os +import re +import sys + + +DENY_PATTERNS_PATH = "test/test_data/_deny_patterns.json" + +# Directories / glob patterns to sanitize +TARGET_DIRS = [ + "test/test_data", + "test/test_results", +] + +# File extensions to process (case-insensitive) +TARGET_EXTENSIONS = {".ndjson", ".json", ".txt"} + + +def load_deny_patterns(path: str) -> list: + """Load PII patterns from the shared deny-list JSON file. + + Args: + path: Path to the deny-list JSON file. + + Returns: + List of compiled (pattern, replacement) tuples. + """ + with open(path, "r", encoding="utf-8") as fh: + data = json.load(fh) + + compiled = [] + for entry in data["patterns"]: + regex = re.compile(entry["pattern"]) + compiled.append((regex, entry["replacement"], entry["id"])) + return compiled + + +def sanitize_text(text: str, patterns: list) -> tuple: + """Apply all PII patterns to *text* and return (modified_text, change_count). + + Args: + text: Input text to sanitize. + patterns: List of (compiled_regex, replacement, pattern_id) tuples. + + Returns: + Tuple of (sanitized_text, total_replacements_made). + """ + total = 0 + for regex, replacement, _ in patterns: + new_text, n = regex.subn(replacement, text) + text = new_text + total += n + return text, total + + +def find_target_files(dirs: list) -> list: + """Walk *dirs* and collect all files matching TARGET_EXTENSIONS. + + The deny-list file (``_deny_patterns.json``) is excluded from sanitization + since it intentionally contains the PII patterns as regex strings. + + Args: + dirs: List of directory paths to search. + + Returns: + Sorted list of matching file paths. 
+ """ + # Resolve the absolute path of the deny-list so we can exclude it + deny_list_abs = os.path.abspath(DENY_PATTERNS_PATH) + + found = [] + for root_dir in dirs: + if not os.path.isdir(root_dir): + continue + for dirpath, _, filenames in os.walk(root_dir): + for fname in filenames: + ext = os.path.splitext(fname)[1].lower() + if ext not in TARGET_EXTENSIONS: + continue + full_path = os.path.join(dirpath, fname) + if os.path.abspath(full_path) == deny_list_abs: + continue # Never sanitize the deny-list itself + found.append(full_path) + return sorted(found) + + +def sanitize_files( + target_files: list, + patterns: list, + dry_run: bool = False, + report_only: bool = False, +) -> dict: + """Sanitize all *target_files* using *patterns*. + + Args: + target_files: List of file paths to process. + patterns: Compiled PII patterns. + dry_run: If True, compute changes but do not write. + report_only: If True, only report which files have PII matches. + + Returns: + Dict mapping file path to number of replacements made. 
+ """ + results = {} + + for fpath in target_files: + try: + with open(fpath, "r", encoding="utf-8") as fh: + original = fh.read() + except (OSError, UnicodeDecodeError) as exc: + print(f" [SKIP] {fpath}: {exc}") + continue + + sanitized, total = sanitize_text(original, patterns) + + if total == 0: + continue + + results[fpath] = total + + if report_only: + print(f" [HIT] {fpath}: {total} replacement(s)") + elif dry_run: + print(f" [DRY] {fpath}: {total} replacement(s) would be made") + else: + with open(fpath, "w", encoding="utf-8") as fh: + fh.write(sanitized) + print(f" [OK] {fpath}: {total} replacement(s) applied") + + return results + + +def main(): + """Entry point for the sanitization script.""" + parser = argparse.ArgumentParser(description="Sanitize PII from NDJSON fixtures and golden result files.") + parser.add_argument("--deny-patterns", default=DENY_PATTERNS_PATH, help="Path to _deny_patterns.json") + parser.add_argument("--dry-run", action="store_true", help="Preview changes without writing files") + parser.add_argument("--report", action="store_true", help="Report matching files without showing replacements") + args = parser.parse_args() + + if not os.path.exists(args.deny_patterns): + print(f"ERROR: Deny-patterns file not found: {args.deny_patterns}") + sys.exit(1) + + patterns = load_deny_patterns(args.deny_patterns) + print(f"Loaded {len(patterns)} PII pattern(s) from {args.deny_patterns}") + print() + + target_files = find_target_files(TARGET_DIRS) + print(f"Found {len(target_files)} file(s) to scan in {TARGET_DIRS}") + print() + + results = sanitize_files(target_files, patterns, dry_run=args.dry_run, report_only=args.report) + + if not results: + print("No PII found in any file.") + else: + total_files = len(results) + total_replacements = sum(results.values()) + mode = "would be made" if (args.dry_run or args.report) else "applied" + print(f"\n{total_files} file(s) with {total_replacements} total replacement(s) {mode}.") + + if args.dry_run or 
args.report:
+        sys.exit(1 if results else 0)  # non-zero when PII was found, so --dry-run/--report can gate CI
+
+    sys.exit(0)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/scripts/dev/test.sh b/scripts/dev/test.sh
index 23ca863c..504a4d9a 100755
--- a/scripts/dev/test.sh
+++ b/scripts/dev/test.sh
@@ -23,19 +23,23 @@
 #
 #
-# to pickle new test data call as
-# ./test.sh $test_name -p
-# You can also pickle for all tests at once, by running
-# ./test.sh -a -p
-# If You wish to omit running code quality checks on plugin plugin test file and python plugin file set -n as the third param
-# ./test.sh $test_name "" -n
-# ./test.sh -a -p -n
-# this script is not written to handle test/core/test_config test. it is intented to perfrom and validate plugin tests from test/plugins/
+# Generate NDJSON fixtures from live Snowflake and run plugin tests.
+#
+# Usage:
+#   ./test.sh            Run plugin tests (uses existing NDJSON fixtures)
+#   ./test.sh -p         Regenerate NDJSON fixtures from Snowflake, then run tests
+#   ./test.sh -a -p      Regenerate fixtures for ALL plugins, then run all tests
+#   ./test.sh "" -n      Skip code-quality checks
+
+# Activate virtual environment
+if [ -f ".venv/bin/activate" ]; then
+    source .venv/bin/activate
+fi
+
 TEST_NAME=$1
-TO_PICKLE=$2
+SAFE_TEST_FIXTURE=$2
 RUN_QUALITY_CHECK=$3
-EXEMPLARY_RESULT_FILE="test/test_results/${TEST_NAME}_results.txt"
 TEST_FILE_PYTHON_PATH="test.plugins.$TEST_NAME"
 TEST_FILE_PATH="test/plugins/$TEST_NAME.py"
@@ -83,47 +87,37 @@ code_quality_checks() {
     fi
 }
-if [ "$TO_PICKLE" == "-p" ]; then
+if [ "$SAFE_TEST_FIXTURE" == "-p" ]; then
     if [ "$TEST_NAME" == "-a" ]; then
         code_quality_checks src/dtagent/plugins test/plugins
-        echo "Pickling for all plugin tests"
+        echo "Generating NDJSON fixtures for all plugin tests"
         for file in test/plugins/test_*; do
-            if [ $(basename "${file}") == "test_1_validate.py" ]; then
-                continue
-            fi
-
            TEST_NAME=$(basename "${file%.*}")
            TEST_FILE_PYTHON_PATH="test.plugins.${TEST_NAME}"
-            EXEMPLARY_RESULT_FILE="test/test_results/${TEST_NAME}_results.txt"
-
-            echo "Pickling
for ${TEST_NAME}" - PYTHONPATH="$PYTHONPATH:./src" python -m $TEST_FILE_PYTHON_PATH $TO_PICKLE &> $EXEMPLARY_RESULT_FILE - - pytest -s -v --result="$EXEMPLARY_RESULT_FILE" test/plugins/test_1_validate.py + echo "Generating fixtures for ${TEST_NAME}" + PYTHONPATH="$PYTHONPATH:./src" python -m $TEST_FILE_PYTHON_PATH $SAFE_TEST_FIXTURE done + echo "Running all plugin tests" + pytest -s -v test/plugins/ + else code_quality_checks $PLUGIN_FILE $TEST_FILE_PATH - echo "Pickling for ${TEST_NAME}." + echo "Generating NDJSON fixtures for ${TEST_NAME}." + PYTHONPATH="$PYTHONPATH:./src" python -m $TEST_FILE_PYTHON_PATH $SAFE_TEST_FIXTURE - PYTHONPATH="$PYTHONPATH:./src" python -m $TEST_FILE_PYTHON_PATH $TO_PICKLE &> $EXEMPLARY_RESULT_FILE - pytest -s -v --result="$EXEMPLARY_RESULT_FILE" test/plugins/test_1_validate.py + echo "Running tests for ${TEST_NAME}." + pytest -s -v "$TEST_FILE_PATH" fi else code_quality_checks $PLUGIN_FILE $TEST_FILE_PATH - echo "Executing test and verification for ${TEST_NAME}." - LOG_FILE_NAME=".logs/dtagent-${TEST_NAME}-$(date '+%Y%m%d-%H%M%S').log" - PYTHONPATH="$PYTHONPATH:./src" python -m $TEST_FILE_PYTHON_PATH $LOG_FILE_NAME &> $LOG_FILE_NAME - echo "Test result file - ${LOG_FILE_NAME}" - - # it looks like calling pytest from python with parameters would quite a hassle, so I decided to make the call in this script, not at the end of test classes - # it also excludes calling pytest when pickling which would be pointless as both files (current result and exemplary) would point to the same file - - pytest -s -v --result="$LOG_FILE_NAME" --exemplary_result="$EXEMPLARY_RESULT_FILE" test/plugins/test_1_validate.py + echo "Executing tests for ${TEST_NAME}." 
+ pytest -s -v "$TEST_FILE_PATH" fi + diff --git a/scripts/dev/test_core.sh b/scripts/dev/test_core.sh index 6810c63f..1b0b2dc0 100755 --- a/scripts/dev/test_core.sh +++ b/scripts/dev/test_core.sh @@ -23,10 +23,15 @@ # # +# Activate virtual environment +if [ -f ".venv/bin/activate" ]; then + source .venv/bin/activate +fi + if [ "$1" == 'y' ]; then - PICKLE_CONF='--pickle_conf -y' -else - PICKLE_CONF='' + CONF_FLAG='--save_conf -y' +else + CONF_FLAG='' fi iter_dir() { @@ -39,5 +44,5 @@ iter_dir() { done } -iter_dir core $PICKLE_CONF +iter_dir core $CONF_FLAG iter_dir otel '' \ No newline at end of file diff --git a/src/build/update_docs.py b/src/build/update_docs.py index 9383e30a..932f6a7b 100644 --- a/src/build/update_docs.py +++ b/src/build/update_docs.py @@ -233,7 +233,8 @@ def _generate_plugins_info(dtagent_plugins_path: str, dtagent_conf_path: str) -> f_info_md = os.path.join(plugin_path, "readme.md") f_config_md = os.path.join(plugin_path, "config.md") f_bom_yml = os.path.join(plugin_path, "bom.yml") - config_file_name = f"{plugin_folder.split('.')[0]}-config.yml" + plugin_name = plugin_folder.split('.')[0] + config_file_name = f"{plugin_name}-config.yml" config_file_path = os.path.join(plugin_path, config_file_name) if os.path.isfile(f_info_md) or os.path.isfile(f_config_md) or os.path.isfile(config_file_path) or os.path.isfile(f_bom_yml): @@ -250,14 +251,22 @@ def _generate_plugins_info(dtagent_plugins_path: str, dtagent_conf_path: str) -> __content += f"[Show semantics for this plugin](#{plugin_name}_semantics_sec)\n\n" if os.path.isfile(config_file_path) or os.path.isfile(f_config_md): + config_data = yaml.safe_load(_read_file(config_file_path)) __content += f"### {plugin_title} default configuration\n\n" - __content += ( - "To disable this plugin, set `IS_DISABLED` to `true`.\n\n" - "In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, " - "you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` 
is not checked then." - "\n\n" - ) - __content += "```json\n" + _read_file(config_file_path) + "\n```\n\n" + + if config_data.get("plugins", {}).get(plugin_name, {}).get("is_disabled"): + __content += ( + "This plugin is **disabled by default**;\n" + "you need to explicitly set `IS_ENABLED` to `true` to enable it.\n\n" + ) + else: + __content += ( + "To disable this plugin, set `IS_DISABLED` to `true`.\n\n" + "In case the global property `PLUGINS.DISABLED_BY_DEFAULT` is set to `true`, " + "you need to explicitly set `IS_ENABLED` to `true` to enable selected plugins; `IS_DISABLED` is not checked then." + "\n\n" + ) + __content += "```yaml\n" + _read_file(config_file_path) + "\n```\n\n" if os.path.isfile(f_config_md): __content += _read_file(f_config_md) + "\n" diff --git a/src/dtagent.sql/setup/036_update_plugin_schedule.sql b/src/dtagent.sql/setup/036_update_plugin_schedule.sql index 21e29093..ef178bcf 100644 --- a/src/dtagent.sql/setup/036_update_plugin_schedule.sql +++ b/src/dtagent.sql/setup/036_update_plugin_schedule.sql @@ -126,7 +126,7 @@ begin IS_ENABLED := NVL((select VALUE::boolean from DTAGENT_DB.CONFIG.CONFIGURATIONS where PATH = 'plugins.' 
|| :PLUGIN_NAME || '.is_enabled'), false); -- if the plugin is disabled, we suspend all tasks related to the plugin and return a message - if (IS_DISABLED or (IS_DISABLED_BY_DEFAULT and not IS_ENABLED)) then + if ((IS_DISABLED or IS_DISABLED_BY_DEFAULT) and not IS_ENABLED) then -- if the plugin is disabled, we suspend the task and return a message for i in 0 to array_size(:ALL_TASK_NAMES) - 1 do execute immediate concat('alter task if exists ', :ALL_TASK_NAMES[i], ' suspend;'); diff --git a/src/dtagent/agent.py b/src/dtagent/agent.py index 5e8d8eab..94543b97 100644 --- a/src/dtagent/agent.py +++ b/src/dtagent/agent.py @@ -46,7 +46,7 @@ import datetime from types import NoneType -from typing import Tuple, Dict, List, Callable, Generator, Any, Union, Optional +from typing import Tuple, Dict, List, Callable, Generator, Any, Union, Optional, Literal from enum import Enum from abc import ABC, abstractmethod import pandas as pd diff --git a/src/dtagent/connector.py b/src/dtagent/connector.py index 1fd18109..be5242c8 100644 --- a/src/dtagent/connector.py +++ b/src/dtagent/connector.py @@ -56,7 +56,7 @@ import datetime from types import NoneType -from typing import Tuple, Dict, List, Callable, Generator, Any, Union, Optional +from typing import Tuple, Dict, List, Callable, Generator, Any, Union, Optional, Literal from enum import Enum from abc import ABC, abstractmethod import pandas as pd diff --git a/src/dtagent/otel/events/__init__.py b/src/dtagent/otel/events/__init__.py index c242b765..99ded55c 100644 --- a/src/dtagent/otel/events/__init__.py +++ b/src/dtagent/otel/events/__init__.py @@ -39,7 +39,7 @@ from dtagent.context import RUN_CONTEXT_KEY from dtagent.otel import _log_warning from dtagent.otel.otel_manager import OtelManager -from dtagent.util import StringEnum, get_timestamp_in_ms +from dtagent.util import StringEnum, get_timestamp from dtagent.version import VERSION ##endregion COMPILE_REMOVE diff --git a/src/dtagent/otel/events/bizevents.py 
b/src/dtagent/otel/events/bizevents.py index b0b60a5d..b8507d85 100644 --- a/src/dtagent/otel/events/bizevents.py +++ b/src/dtagent/otel/events/bizevents.py @@ -45,10 +45,13 @@ class BizEvents(AbstractEvents): - """Class parsing and sending bizevents via BizEvents API - https://docs.dynatrace.com/docs/observe/business-observability/bo-api-ingest + """Class parsing and sending bizevents. - NOTE: BizEvents are delivered to Dynatrace using CloudEvents batch format. + API Specifications: + - Dynatrace BizEvents API: https://docs.dynatrace.com/docs/ingest-from/business-analytics/ba-api-ingest + - CloudEvents Spec: https://cloudevents.io/ + + Note: Timestamps are provided as ISO 8601 strings in the CloudEvents `time` field. """ from dtagent.config import Configuration # COMPILE_REMOVE diff --git a/src/dtagent/otel/events/davis.py b/src/dtagent/otel/events/davis.py index 72185b18..1d855b3b 100644 --- a/src/dtagent/otel/events/davis.py +++ b/src/dtagent/otel/events/davis.py @@ -40,7 +40,7 @@ from dtagent.otel.otel_manager import OtelManager from dtagent.otel.events import EventType, AbstractEvents from dtagent.otel.events.generic import GenericEvents -from dtagent.util import StringEnum, get_timestamp_in_ms +from dtagent.util import StringEnum, get_timestamp from dtagent.version import VERSION ##endregion COMPILE_REMOVE @@ -49,8 +49,11 @@ class DavisEvents(GenericEvents): - """Allows for parsing and sending (Davis) Events payloads via Events v2 API - https://docs.dynatrace.com/docs/dynatrace-api/environment-api/events-v2/post-event + """Allows for parsing and sending (Davis) Events payloads via Events v2 API. + + API Specifications: + - Dynatrace Events API v2: + https://docs.dynatrace.com/docs/discover-dynatrace/references/dynatrace-api/environment-api/events-v2/post-event Note: Events API does not support sending multiple events at the same time, as a bulk, like in BizEvents or OpenPipelineEvents. 
diff --git a/src/dtagent/otel/events/generic.py b/src/dtagent/otel/events/generic.py
index 3c3def0c..66ca8849 100644
--- a/src/dtagent/otel/events/generic.py
+++ b/src/dtagent/otel/events/generic.py
@@ -39,7 +39,7 @@
 from dtagent.otel import _log_warning
 from dtagent.otel.otel_manager import OtelManager
 from dtagent.otel.events import EventType, AbstractEvents
-from dtagent.util import StringEnum, get_timestamp_in_ms, validate_timestamp_ms
+from dtagent.util import StringEnum, get_timestamp, validate_timestamp, process_timestamps_for_telemetry
 from dtagent.version import VERSION
 import datetime
@@ -50,10 +50,14 @@
 class GenericEvents(AbstractEvents):
-    """Enables for parsing and sending Events via OpenPipeline Events API
-    https://docs.dynatrace.com/docs/platform/openpipeline/reference/openpipeline-ingest-api/generic-events/events-generic-builtin
+    """Enables parsing and sending Events via OpenPipeline Events API.

-    Note: OpenPipeline Events API does support sending multiple events at the same time, similar to BizEvents.
+    API Specifications:
+    - Dynatrace OpenPipeline Events:
+      https://docs.dynatrace.com/docs/platform/openpipeline/reference/openpipeline-ingest-api/generic-events/events-generic-builtin
+    - CloudEvents Spec: https://cloudevents.io/
+
+    Note: Timestamps are expected in milliseconds. OpenPipeline Events API supports sending multiple events at once.
""" from dtagent.config import Configuration # COMPILE_REMOVE @@ -104,11 +108,18 @@ def _pack_event_data( k: v for k, v in event_data.items() if k not in ("_MESSAGE", "_message") } - start_ts = get_timestamp_in_ms(event_data, kwargs.get("start_time_key", "START_TIME"), 1e6, None) - end_ts = get_timestamp_in_ms(event_data, kwargs.get("end_time_key", "END_TIME"), 1e6, None) + # Get timestamps in nanoseconds from SQL, convert to milliseconds for Dynatrace Events API + start_ts_ns = get_timestamp(event_data, kwargs.get("start_time_key", "START_TIME")) + end_ts_ns = get_timestamp(event_data, kwargs.get("end_time_key", "END_TIME")) + + # Validate and convert to milliseconds for Dynatrace Events API + start_ts = validate_timestamp(start_ts_ns, return_unit="ms") if start_ts_ns else None + end_ts = validate_timestamp(end_ts_ns, return_unit="ms") if end_ts_ns else None - observed_timestamp = get_timestamp_in_ms(event_data, "timestamp") - timestamp = validate_timestamp_ms(observed_timestamp) if observed_timestamp else None + # Process timestamp and observed_timestamp using standard pattern: + # - timestamp in milliseconds (Dynatrace Events API requirement) + # - observed_timestamp in nanoseconds (per OTLP standard) + timestamp, observed_timestamp_ns = process_timestamps_for_telemetry(event_data) # we have map non-simple types to string, as events are not capable of mapping lists # for key, value in event_data_extended.items(): @@ -140,8 +151,9 @@ def _pack_event_data( if timestamp: event_payload["timestamp"] = timestamp - if observed_timestamp and observed_timestamp != timestamp: - event_payload["observed_timestamp"] = observed_timestamp + # Add observed_timestamp if available (in nanoseconds per OTLP standard) + if observed_timestamp_ns: + event_payload["observed_timestamp"] = observed_timestamp_ns return event_payload diff --git a/src/dtagent/otel/logs.py b/src/dtagent/otel/logs.py index edaaf2a5..4513e6a0 100644 --- a/src/dtagent/otel/logs.py +++ b/src/dtagent/otel/logs.py 
@@ -29,7 +29,7 @@ from typing import Dict, Optional, Any from opentelemetry.sdk.resources import Resource from opentelemetry.sdk._logs import LoggerProvider -from dtagent.util import get_timestamp_in_ms, validate_timestamp_ms +from dtagent.util import get_timestamp, validate_timestamp, process_timestamps_for_telemetry from dtagent.otel.otel_manager import CustomLoggingSession, OtelManager ##endregion COMPILE_REMOVE @@ -38,7 +38,16 @@ class Logs: - """Main Logs class""" + """Main Logs class for sending logs via Dynatrace OTLP Logs API. + + API Specifications: + - Dynatrace OTLP Logs: https://docs.dynatrace.com/docs/ingest-from/opentelemetry/otlp-api/ingest-logs + - OTLP Logs Standard: https://opentelemetry.io/docs/specs/otel/logs/data-model/ + + Note: Dynatrace requires timestamps in milliseconds (UTC milliseconds, RFC3339, or RFC3164), + which differs from the OTLP standard that specifies nanoseconds. However, `observed_timestamp` + must be in nanoseconds per OTLP standard to preserve original timestamp precision. 
+ """ from dtagent.config import Configuration # COMPILE_REMOVE @@ -85,38 +94,38 @@ class CustomOTelTimestampFilter(logging.Filter): def filter(self, record: logging.LogRecord) -> bool: # Handle timestamp field (for log record timing) - ts_ms = getattr(record, "timestamp", None) - if ts_ms is not None: + ts_attr = getattr(record, "timestamp", None) + if ts_attr is not None: delattr(record, "timestamp") - # Validate timestamp is positive and reasonable (not before 1970 or far in the future) try: - ts_ms = int(ts_ms) - # Ensure timestamp is positive and within reasonable range - # Min: 0 (epoch), Max: year 2100 (approx 4102444800000 ms) - if 0 < ts_ms <= 4102444800000: - record.created = ts_ms / 1_000 - record.msecs = ts_ms % 1_000 + ts_val = int(ts_attr) + # Validate with auto-detection and convert to milliseconds using standard validation + validated_ts_ms = validate_timestamp(ts_val, return_unit="ms") + if validated_ts_ms: + record.created = validated_ts_ms / 1_000 + record.msecs = validated_ts_ms % 1_000 except (ValueError, TypeError, OverflowError): # If conversion fails, use default timestamp pass - # Handle observed_timestamp field (for OTEL payload, expected in nanoseconds) - observed_ts = getattr(record, "observed_timestamp", None) - if observed_ts is not None: + # Handle observed_timestamp field (must be in nanoseconds per OTLP standard) + observed_ts_attr = getattr(record, "observed_timestamp", None) + if observed_ts_attr is not None: try: - observed_ts_ms = int(observed_ts) + observed_ts_val = int(observed_ts_attr) except (ValueError, TypeError, OverflowError): - # Invalid value; remove the attribute so we do not send bad data delattr(record, "observed_timestamp") else: - # Use shared validation for millisecond timestamps - validated_ts_ms = validate_timestamp_ms(observed_ts_ms) - if validated_ts_ms: - # Convert milliseconds to nanoseconds for OTEL - setattr(record, "observed_timestamp", validated_ts_ms * 1_000_000) + # Validate with auto-detection and 
return nanoseconds; skip range validation to preserve original observed_timestamp + validated_ts_ns = validate_timestamp( + observed_ts_val, + return_unit="ns", + skip_range_validation=True, + ) + if validated_ts_ns: + setattr(record, "observed_timestamp", validated_ts_ns) else: - # If invalid, remove the attribute delattr(record, "observed_timestamp") return True @@ -163,17 +172,15 @@ def __adjust_log_attribute(key: str, value: Any) -> Any: # the following conversions through JSON are necessary to ensure certain objects like datetime are properly serialized, # otherwise OTEL seems to be sending objects cannot be deserialized on the Dynatrace side - o_extra = {k: __adjust_log_attribute(k, v) for k, v in _cleanup_data(extra).items() if v} if extra else {} - - # first we record original timestamp in milliseconds as observed_timestamp attribute - timestamp = None - observed_timestamp = get_timestamp_in_ms(o_extra, "timestamp") - if observed_timestamp: - # we validate the original timestamp and record value that is correct for ingest - _timestamp = validate_timestamp_ms(observed_timestamp) - if _timestamp: - o_extra["timestamp"] = _timestamp - timestamp = _timestamp + o_extra = {k: __adjust_log_attribute(k, v) for k, v in _cleanup_data(extra).items() if v is not None} if extra else {} + + # Process timestamps using standard pattern: + # - timestamp in milliseconds (Dynatrace OTLP Logs API deviation from spec) + # - observed_timestamp in nanoseconds (per OTLP standard) + validated_timestamp_ms, validated_observed_timestamp_ns = process_timestamps_for_telemetry(o_extra) + + if validated_timestamp_ms: + o_extra["timestamp"] = validated_timestamp_ms LOG.log(LL_TRACE, o_extra) @@ -183,12 +190,9 @@ def __adjust_log_attribute(key: str, value: Any) -> Any: ): # remove telemetry.sdk.language="python" which is added by OTEL by default as resource attribute del raw_payload["telemetry.sdk.language"] - # Only include observed_timestamp if it's valid and different from the validated 
timestamp - # The validate_timestamp_ms call in CustomOTelTimestampFilter will catch any that slip through - if observed_timestamp and observed_timestamp != timestamp: - validated_observed = validate_timestamp_ms(observed_timestamp) - if validated_observed: - raw_payload["observed_timestamp"] = validated_observed + # Add observed_timestamp if available (in nanoseconds per OTLP standard) + if validated_observed_timestamp_ns: + raw_payload["observed_timestamp"] = validated_observed_timestamp_ns payload = _cleanup_dict(raw_payload) diff --git a/src/dtagent/otel/metrics.py b/src/dtagent/otel/metrics.py index 08d637d9..58edd9dc 100644 --- a/src/dtagent/otel/metrics.py +++ b/src/dtagent/otel/metrics.py @@ -30,7 +30,7 @@ from typing import Dict, Union, Optional, Tuple from dtagent.otel.otel_manager import OtelManager -from dtagent.util import get_timestamp_in_ms, get_now_timestamp, validate_timestamp_ms +from dtagent.util import get_timestamp, get_now_timestamp, validate_timestamp from dtagent.otel import _log_warning ##endregion COMPILE_REMOVE @@ -39,7 +39,14 @@ class Metrics: - """Allows for parsing and sending metrics data.""" + """Allows for parsing and sending metrics data. + + API Specifications: + - Dynatrace Metrics API v2: + https://docs.dynatrace.com/docs/ingest-from/extend-dynatrace/extend-metrics/reference/metric-ingestion-protocol + + Note: Timestamps must be in UTC milliseconds. 
+ """ from dtagent.config import Configuration # COMPILE_REMOVE from dtagent.otel.semantics import Semantics # COMPILE_REMOVE @@ -203,8 +210,8 @@ def __payload_lines(dimensions: str, metric_name: str, metric_value: Union[str, + self._semantics.get_metric_definition(metric_name, local_metrics_def) ) - timestamp = get_timestamp_in_ms(query_data, start_time, 1e6, int(get_now_timestamp().timestamp() * 1000)) - timestamp = validate_timestamp_ms(timestamp, allowed_past_minutes=55, allowed_future_minutes=10) + timestamp_ns = get_timestamp(query_data, start_time, int(get_now_timestamp().timestamp() * 1_000_000_000)) + timestamp = validate_timestamp(timestamp_ns, allowed_past_minutes=55, allowed_future_minutes=10, return_unit="ms") payload_lines = [] # list all dimensions with their values from the provided data diff --git a/src/dtagent/otel/spans.py b/src/dtagent/otel/spans.py index 7926efb6..f908994f 100644 --- a/src/dtagent/otel/spans.py +++ b/src/dtagent/otel/spans.py @@ -90,7 +90,14 @@ def generate_trace_id(self) -> int: class Spans: - """Main Spans class""" + """Main Spans class for sending traces via Dynatrace OTLP Traces API. + + API Specifications: + - Dynatrace OTLP Traces: https://docs.dynatrace.com/docs/ingest-from/opentelemetry/otlp-api/ingest-traces + - OTLP Traces Standard: https://opentelemetry.io/docs/specs/otel/trace/api/ + + Note: Timestamps must be in nanoseconds per OTLP standard. 
+ """ from dtagent.config import Configuration # COMPILE_REMOVE diff --git a/src/dtagent/plugins/__init__.py b/src/dtagent/plugins/__init__.py index 57a29d03..4a80411a 100644 --- a/src/dtagent/plugins/__init__.py +++ b/src/dtagent/plugins/__init__.py @@ -31,7 +31,6 @@ from typing import Tuple, Dict, List, Callable, Union, Generator, Optional, Any from abc import ABC, abstractmethod from snowflake import snowpark -from snowflake.snowpark.functions import current_timestamp from dtagent import LOG, LL_TRACE from dtagent.config import Configuration from dtagent.util import ( @@ -214,6 +213,7 @@ def _process_span_rows( # pylint: disable=R0913 logs_sent = 0 metrics_sent = 0 metrics_present = False + last_processed_timestamp = None __context = get_context_name_and_run_id(plugin_name=self._plugin_name, context_name=context_name, run_id=run_uuid) @@ -222,6 +222,7 @@ def _process_span_rows( # pylint: disable=R0913 if query_id is None: LOG.warning("Problem with given row in %s: %r", context_name, row_dict) else: + last_processed_timestamp = row_dict.get("TIMESTAMP", last_processed_timestamp) LOG.log(LL_TRACE, "Processing %s for %r", context_name, query_id) _span_events_added, _spans_sent, _logs_sent, _metrics_sent, _metrics_present = self._process_row( row=row_dict, @@ -245,7 +246,9 @@ def _process_span_rows( # pylint: disable=R0913 if metrics_sent == 0: processing_errors.append("Problem sending metrics - metrics were discovered but none were sent") - if not self._spans.flush_traces(): + spans_disabled = getattr(self._spans, "NOT_ENABLED", False) + flush_succeeded = spans_disabled or self._spans.flush_traces() + if not flush_succeeded: processing_errors.append("Problem flushing traces") processing_errors_count = len(processing_errors) @@ -257,7 +260,7 @@ def _process_span_rows( # pylint: disable=R0913 if log_completion: self._report_execution( context_name, - current_timestamp(), + str(last_processed_timestamp), None, { context_name: { @@ -272,7 +275,7 @@ def 
_process_span_rows( # pylint: disable=R0913 run_id=run_uuid, ) - if report_status: + if report_status and flush_succeeded: self._session.call( "STATUS.UPDATE_PROCESSED_QUERIES", joint_processed_query_ids, diff --git a/src/dtagent/plugins/budgets.config/bom.yml b/src/dtagent/plugins/budgets.config/bom.yml index 236abf82..cf56979d 100644 --- a/src/dtagent/plugins/budgets.config/bom.yml +++ b/src/dtagent/plugins/budgets.config/bom.yml @@ -25,6 +25,12 @@ delivers: type: procedure - name: DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS type: task + - name: DTAGENT_DB.APP.P_GRANT_BUDGET_MONITORING() + type: procedure + comment: Optional (admin scope). Grants DTAGENT_VIEWER privileges on configured monitored_budgets. + - name: DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS + type: task + comment: Optional (admin scope). Periodically calls P_GRANT_BUDGET_MONITORING(). references: - name: SNOWFLAKE @@ -70,3 +76,8 @@ references: type: procedure privileges: USAGE comment: We call this procedure on each budget defined in Snowflake + - name: SNOWFLAKE.USAGE_VIEWER + type: role + privileges: DATABASE ROLE + granted to: DTAGENT_VIEWER + comment: Optional (admin scope). Required for custom budget monitoring via P_GRANT_BUDGET_MONITORING(). 
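The gating added above (skip flushing when spans are disabled; call `STATUS.UPDATE_PROCESSED_QUERIES` only after a successful flush) can be sketched in a few lines. The stub span classes below are illustrative stand-ins, not DSOA's real `Spans` implementation:

```python
# Sketch of the flush-gating logic added to _process_span_rows().
# DisabledSpans / FailingSpans are hypothetical stand-ins for test purposes.

class DisabledSpans:
    """Simulates a Spans instance created while span telemetry is turned off."""
    NOT_ENABLED = True

    def flush_traces(self) -> bool:
        raise AssertionError("flush_traces() must not be called when spans are disabled")


class FailingSpans:
    """Simulates an OTLP export failure on flush."""

    def flush_traces(self) -> bool:
        return False


def should_update_processed_queries(spans, report_status: bool = True) -> bool:
    # Flushing is skipped entirely when spans are disabled; a failed flush
    # blocks the processed-queries update so the same rows are retried next run.
    spans_disabled = getattr(spans, "NOT_ENABLED", False)
    flush_succeeded = spans_disabled or spans.flush_traces()
    return report_status and flush_succeeded
```

The short-circuit on `spans_disabled` matters: it both avoids calling `flush_traces()` on a disabled exporter and keeps a disabled-spans run from being reported as a flush failure.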
diff --git a/src/dtagent/plugins/budgets.config/budgets-config.yml b/src/dtagent/plugins/budgets.config/budgets-config.yml index 39218827..864f3397 100644 --- a/src/dtagent/plugins/budgets.config/budgets-config.yml +++ b/src/dtagent/plugins/budgets.config/budgets-config.yml @@ -1,8 +1,10 @@ plugins: budgets: + is_disabled: true quota: 10 schedule: USING CRON 30 0 * * * UTC - is_disabled: false + monitored_budgets: [] + schedule_grants: USING CRON 30 */12 * * * UTC telemetry: - logs - metrics diff --git a/src/dtagent/plugins/budgets.config/config.md b/src/dtagent/plugins/budgets.config/config.md new file mode 100644 index 00000000..f9606c46 --- /dev/null +++ b/src/dtagent/plugins/budgets.config/config.md @@ -0,0 +1,24 @@ +| Parameter | Type | Default | Description | +| ------------------- | ------ | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `quota` | int | `10` | Credit quota for the agent's own `DTAGENT_BUDGET`. | +| `schedule` | string | `USING CRON 30 0 * * * UTC` | Cron schedule for the budgets collection task. | +| `monitored_budgets` | list | `[]` | Fully-qualified custom budget names to monitor, e.g. `["MY_DB.MY_SCHEMA.MY_BUDGET"]`. Names are automatically uppercased; only standard unquoted Snowflake identifiers are supported (`[A-Za-z_][A-Za-z0-9_$]*` per part). | +| `schedule_grants` | string | `USING CRON 30 */12 * * * UTC` | Cron schedule for `TASK_DTAGENT_BUDGETS_GRANTS` (admin scope only). | + +### Enabling the Budgets plugin + +1. Set `is_disabled` to `false` in your configuration file. +1. For **account budget only** (no custom budgets): no additional grants needed, since `SNOWFLAKE.BUDGET_VIEWER` is already granted. +1.
For **custom budgets**: configure `monitored_budgets` and run `P_GRANT_BUDGET_MONITORING()` (admin scope required), or grant + privileges manually (see below). + +### Granting access to custom budgets manually + +For each custom budget `<database>.<schema>.<budget>`, grant the following to `DTAGENT_VIEWER`: + +```sql +grant usage on database <database> to role DTAGENT_VIEWER; +grant usage on schema <database>.<schema> to role DTAGENT_VIEWER; +grant snowflake.core.budget role <database>.<schema>.<budget>!VIEWER to role DTAGENT_VIEWER; +grant database role SNOWFLAKE.USAGE_VIEWER to role DTAGENT_VIEWER; +``` diff --git a/src/dtagent/plugins/budgets.config/readme.md b/src/dtagent/plugins/budgets.config/readme.md index 6d431333..8677eafb 100644 --- a/src/dtagent/plugins/budgets.config/readme.md +++ b/src/dtagent/plugins/budgets.config/readme.md @@ -1,4 +1,8 @@ This plugin enables monitoring of Snowflake budgets, resources linked to them, and their expenditures. It sets up and manages the Dynatrace Snowflake Observability Agent's own budget. -All budgets within the account are reported on as logs and metrics; this includes their details, spending limit, and recent expenditures. -The plugin runs once a day and excludes already reported expenditures. +All budgets the agent has been granted access to are reported as logs and metrics; this includes their details, spending limit, and recent +expenditures. The plugin runs once a day and excludes already reported expenditures. + +> **Note**: This plugin is **disabled by default** because custom budget monitoring requires per-budget privilege grants. +> The account budget (visible via `SNOWFLAKE.BUDGET_VIEWER`) is accessible automatically once enabled. For custom budgets, +> use `P_GRANT_BUDGET_MONITORING()` (requires admin scope) or grant privileges manually (see below).
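The identifier safety check that `P_GRANT_BUDGET_MONITORING()` applies to each `monitored_budgets` entry (uppercase the FQN, then require each of the three parts to match the unquoted-identifier pattern) can be mirrored in Python. This helper is illustrative only; it is not part of the agent:

```python
import re

# Same pattern as safe_identifier_re in P_GRANT_BUDGET_MONITORING():
# a standard unquoted Snowflake identifier.
SAFE_IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_$]*$")


def validate_budget_fqn(fqn: str):
    """Return the uppercased (db, schema, budget) triple, or None if unsafe.

    Entries that fail validation are skipped by the stored procedure with a
    SYSTEM$LOG_WARN entry rather than interpolated into GRANT statements.
    """
    parts = fqn.upper().split(".")
    if len(parts) != 3 or not all(SAFE_IDENTIFIER.match(p) for p in parts):
        return None
    return tuple(parts)
```

Rejecting anything outside this pattern before string-building the `GRANT` statements is what keeps the procedure safe from SQL injection via configuration values.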
diff --git a/src/dtagent/plugins/budgets.sql/040_p_get_budgets.sql b/src/dtagent/plugins/budgets.sql/040_p_get_budgets.sql index aeee64a5..d52c88bb 100644 --- a/src/dtagent/plugins/budgets.sql/040_p_get_budgets.sql +++ b/src/dtagent/plugins/budgets.sql/040_p_get_budgets.sql @@ -53,7 +53,7 @@ execute as owner as $$ DECLARE - q_get_budgets TEXT DEFAULT 'show SNOWFLAKE.CORE.BUDGET ->> insert into DTAGENT_DB.APP.TMP_BUDGETS select * from $1;'; + v_budgets_json VARIANT; tr_budgets TEXT DEFAULT 'truncate table DTAGENT_DB.APP.TMP_BUDGETS;'; tr_linked_resources TEXT DEFAULT 'truncate table DTAGENT_DB.APP.TMP_BUDGETS_RESOURCES;'; @@ -73,7 +73,18 @@ BEGIN EXECUTE IMMEDIATE :tr_limits; EXECUTE IMMEDIATE :tr_spendings; - EXECUTE IMMEDIATE :q_get_budgets; + v_budgets_json := (SELECT PARSE_JSON(SYSTEM$SHOW_BUDGETS_IN_ACCOUNT())); + INSERT INTO DTAGENT_DB.APP.TMP_BUDGETS (created_on, name, database_name, schema_name, current_version, comment, owner, owner_role_type) + SELECT + TO_TIMESTAMP_LTZ(b.value:"CREATED_ON"::NUMBER / 1000) AS created_on, + b.value:"NAME"::TEXT AS name, + b.value:"DATABASE"::TEXT AS database_name, + b.value:"SCHEMA"::TEXT AS schema_name, + b.value:"CURRENT_VERSION"::TEXT AS current_version, + b.value:"COMMENT"::TEXT AS comment, + b.value:"OWNER"::TEXT AS owner, + b.value:"OWNER_ROLE_TYPE"::TEXT AS owner_role_type + FROM TABLE(FLATTEN(input => :v_budgets_json)) b; FOR budget IN c_budgets DO budget_name := budget.name; diff --git a/src/dtagent/plugins/budgets.sql/901_update_budgets_conf.sql b/src/dtagent/plugins/budgets.sql/901_update_budgets_conf.sql index 99487398..f4ff0940 100644 --- a/src/dtagent/plugins/budgets.sql/901_update_budgets_conf.sql +++ b/src/dtagent/plugins/budgets.sql/901_update_budgets_conf.sql @@ -33,7 +33,7 @@ declare SPENDING_LIMIT int default 10; PLUGIN_NAME varchar default 'budgets'; begin - call DTAGENT_DB.CONFIG.UPDATE_PLUGIN_SCHEDULE(:PLUGIN_NAME); + call DTAGENT_DB.CONFIG.UPDATE_PLUGIN_SCHEDULE(:PLUGIN_NAME, 
array_construct('grants')); SPENDING_LIMIT := (select VALUE from CONFIG.CONFIGURATIONS where PATH = 'plugins.budgets.quota'); call DTAGENT_DB.APP.DTAGENT_BUDGET!SET_SPENDING_LIMIT(:SPENDING_LIMIT); diff --git a/src/dtagent/plugins/budgets.sql/admin/050_p_grant_budget_monitoring.sql b/src/dtagent/plugins/budgets.sql/admin/050_p_grant_budget_monitoring.sql new file mode 100644 index 00000000..e5d267d6 --- /dev/null +++ b/src/dtagent/plugins/budgets.sql/admin/050_p_grant_budget_monitoring.sql @@ -0,0 +1,107 @@ +-- +-- +-- Copyright (c) 2025 Dynatrace Open Source +-- +-- Permission is hereby granted, free of charge, to any person obtaining a copy +-- of this software and associated documentation files (the "Software"), to deal +-- in the Software without restriction, including without limitation the rights +-- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +-- copies of the Software, and to permit persons to whom the Software is +-- furnished to do so, subject to the following conditions: +-- +-- The above copyright notice and this permission notice shall be included in all +-- copies or substantial portions of the Software. +-- +-- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +-- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +-- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +-- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +-- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +-- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +-- SOFTWARE. +-- +-- +-- +-- APP.P_GRANT_BUDGET_MONITORING() grants DTAGENT_VIEWER the necessary privileges +-- to monitor custom budgets configured in CONFIG.CONFIGURATIONS. +-- +-- !! Must be invoked after creation of CONFIG.CONFIGURATIONS table +-- !! 
Requires DTAGENT_ADMIN role (admin deployment scope) +-- +--%OPTION:dtagent_admin: +use role DTAGENT_OWNER; use database DTAGENT_DB; use warehouse DTAGENT_WH; + +create or replace procedure DTAGENT_DB.APP.P_GRANT_BUDGET_MONITORING() +returns text +language sql +execute as caller +as +$$ +DECLARE + c_budgets CURSOR FOR + SELECT ci.VALUE::TEXT AS budget_fqn + FROM CONFIG.CONFIGURATIONS c, TABLE(FLATTEN(c.VALUE)) ci + WHERE c.PATH = 'plugins.budgets.monitored_budgets'; + + budget_fqn TEXT DEFAULT ''; + budget_db TEXT DEFAULT ''; + budget_schema TEXT DEFAULT ''; + budget_name TEXT DEFAULT ''; + + q_grant_usage_db TEXT DEFAULT ''; + q_grant_usage_schema TEXT DEFAULT ''; + q_grant_budget_viewer TEXT DEFAULT ''; + q_grant_usage_viewer TEXT DEFAULT ''; + + budget_db_q TEXT DEFAULT ''; + budget_schema_q TEXT DEFAULT ''; + budget_fqn_q TEXT DEFAULT ''; + + safe_identifier_re TEXT DEFAULT '^[A-Za-z_][A-Za-z0-9_$]*$'; + + grants_count INT DEFAULT 0; +BEGIN + q_grant_usage_viewer := 'grant database role SNOWFLAKE.USAGE_VIEWER to role DTAGENT_VIEWER;'; + EXECUTE IMMEDIATE :q_grant_usage_viewer; + + FOR r_budget IN c_budgets DO + budget_fqn := r_budget.budget_fqn; + budget_db := UPPER(SPLIT_PART(:budget_fqn, '.', 1)); + budget_schema := UPPER(SPLIT_PART(:budget_fqn, '.', 2)); + budget_name := UPPER(SPLIT_PART(:budget_fqn, '.', 3)); + + IF (NOT REGEXP_LIKE(:budget_db, :safe_identifier_re) + OR NOT REGEXP_LIKE(:budget_schema, :safe_identifier_re) + OR NOT REGEXP_LIKE(:budget_name, :safe_identifier_re)) THEN + SYSTEM$LOG_WARN('P_GRANT_BUDGET_MONITORING: skipping invalid budget FQN (unsafe identifier): ' || :budget_fqn); + CONTINUE; + END IF; + + budget_db_q := '"' || :budget_db || '"'; + budget_schema_q := '"' || :budget_schema || '"'; + budget_fqn_q := :budget_db_q || '.' 
|| :budget_schema_q || '."' || :budget_name || '"'; + + q_grant_usage_db := 'grant usage on database ' || :budget_db_q || ' to role DTAGENT_VIEWER;'; + q_grant_usage_schema := 'grant usage on schema ' || :budget_db_q || '.' || :budget_schema_q || ' to role DTAGENT_VIEWER;'; + q_grant_budget_viewer := 'grant snowflake.core.budget role ' || :budget_fqn_q || '!VIEWER to role DTAGENT_VIEWER;'; + + EXECUTE IMMEDIATE :q_grant_usage_db; + EXECUTE IMMEDIATE :q_grant_usage_schema; + EXECUTE IMMEDIATE :q_grant_budget_viewer; + + grants_count := :grants_count + 1; + END FOR; + + RETURN 'granted budget monitoring privileges for ' || :grants_count || ' budget(s) to DTAGENT_VIEWER'; + +EXCEPTION + when statement_error then + SYSTEM$LOG_WARN(SQLERRM); + + return SQLERRM; +END; +$$ +; + +grant usage on procedure DTAGENT_DB.APP.P_GRANT_BUDGET_MONITORING() to role DTAGENT_ADMIN; +--%:OPTION:dtagent_admin diff --git a/src/dtagent/plugins/budgets.sql/admin/801_budgets_grants_task.sql b/src/dtagent/plugins/budgets.sql/admin/801_budgets_grants_task.sql new file mode 100644 index 00000000..3505ea88 --- /dev/null +++ b/src/dtagent/plugins/budgets.sql/admin/801_budgets_grants_task.sql @@ -0,0 +1,44 @@ +-- +-- +-- Copyright (c) 2025 Dynatrace Open Source +-- +-- Permission is hereby granted, free of charge, to any person obtaining a copy +-- of this software and associated documentation files (the "Software"), to deal +-- in the Software without restriction, including without limitation the rights +-- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +-- copies of the Software, and to permit persons to whom the Software is +-- furnished to do so, subject to the following conditions: +-- +-- The above copyright notice and this permission notice shall be included in all +-- copies or substantial portions of the Software. 
+-- +-- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +-- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +-- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +-- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +-- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +-- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +-- SOFTWARE. +-- +-- +-- +-- This task periodically calls P_GRANT_BUDGET_MONITORING() to keep budget +-- monitoring grants in sync with the configured monitored_budgets list. +-- +--%OPTION:dtagent_admin: +use role DTAGENT_OWNER; use database DTAGENT_DB; use warehouse DTAGENT_WH; + +create or replace task DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS + warehouse = DTAGENT_WH + schedule = 'USING CRON 30 */12 * * * UTC' -- every 12 hours at 00:30, 12:30 UTC + allow_overlapping_execution = FALSE +as + call DTAGENT_DB.APP.P_GRANT_BUDGET_MONITORING(); + +grant ownership on task DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS to role DTAGENT_ADMIN revoke current grants; +grant monitor on task DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS to role DTAGENT_VIEWER; + +-- alter task if exists DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS resume; + +-- alter task if exists DTAGENT_DB.APP.TASK_DTAGENT_BUDGETS_GRANTS suspend; +--%:OPTION:dtagent_admin diff --git a/src/dtagent/plugins/data_schemas.config/config.md b/src/dtagent/plugins/data_schemas.config/config.md new file mode 100644 index 00000000..e1bd7f31 --- /dev/null +++ b/src/dtagent/plugins/data_schemas.config/config.md @@ -0,0 +1,8 @@ +| Key | Type | Default | Description | +| ------------------------------------- | ------ | ------------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.data_schemas.lookback_hours` | int | `4` | How far back (in hours) the plugin looks for DDL-based schema changes on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Default is `4`h to account for the up-to-3-hour data ingestion delay in `ACCESS_HISTORY`. | +| `plugins.data_schemas.schedule` | string | `USING CRON 0 0,8,16 * * * UTC` | Cron schedule for the data schemas collection task. | +| `plugins.data_schemas.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.data_schemas.include` | list | `["%"]` | List of object name patterns to include (SQL `LIKE` syntax). Default includes all objects. | +| `plugins.data_schemas.exclude` | list | `[]` | List of object name patterns to exclude (SQL `LIKE` syntax). Takes precedence over `include`. | +| `plugins.data_schemas.telemetry` | list | `["events", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. 
| diff --git a/src/dtagent/plugins/data_schemas.config/data_schemas-config.yml b/src/dtagent/plugins/data_schemas.config/data_schemas-config.yml index 3222ba56..f87646bc 100644 --- a/src/dtagent/plugins/data_schemas.config/data_schemas-config.yml +++ b/src/dtagent/plugins/data_schemas.config/data_schemas-config.yml @@ -1,5 +1,6 @@ plugins: data_schemas: + lookback_hours: 4 schedule: USING CRON 0 0,8,16 * * * UTC is_disabled: false exclude: [] diff --git a/src/dtagent/plugins/data_schemas.config/readme.md b/src/dtagent/plugins/data_schemas.config/readme.md index 31d3d5b5..ba04b120 100644 --- a/src/dtagent/plugins/data_schemas.config/readme.md +++ b/src/dtagent/plugins/data_schemas.config/readme.md @@ -1 +1 @@ -Enables monitoring of data schema changes. Reports events on recent modifications to objects (tables, schemas, databases) made by DDL queries, within the last 4 hours. +Enables monitoring of data schema changes. Reports events on recent modifications to objects (tables, schemas, databases) made by DDL queries, within a configurable lookback window (default: 4 hours, see `plugins.data_schemas.lookback_hours`). 
diff --git a/src/dtagent/plugins/data_schemas.sql/051_v_data_schemas.sql b/src/dtagent/plugins/data_schemas.sql/051_v_data_schemas.sql index 6dea88ff..6415d984 100644 --- a/src/dtagent/plugins/data_schemas.sql/051_v_data_schemas.sql +++ b/src/dtagent/plugins/data_schemas.sql/051_v_data_schemas.sql @@ -42,7 +42,7 @@ with cte_includes as ( , cte_all AS ( select * from SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY ah where object_modified_by_ddl:"objectDomain" in ('Table', 'Schema', 'Database') - and query_start_time > GREATEST(timeadd(hour, -4, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('data_schemas')) -- max data delay is 180 min + and query_start_time > GREATEST(timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.data_schemas.lookback_hours', 4), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('data_schemas')) -- max data delay is 180 min and object_modified_by_ddl:"objectName" LIKE ANY (select object_name from cte_includes) and not object_modified_by_ddl:"objectName" LIKE ANY (select object_name from cte_excludes) ) diff --git a/src/dtagent/plugins/dynamic_tables.config/bom.yml b/src/dtagent/plugins/dynamic_tables.config/bom.yml index 006091e3..7828529a 100644 --- a/src/dtagent/plugins/dynamic_tables.config/bom.yml +++ b/src/dtagent/plugins/dynamic_tables.config/bom.yml @@ -23,12 +23,27 @@ references: type: dynamic table privileges: MONITOR granted to: DTAGENT_VIEWER - comment: We grant that on every database selected in configuration or all (default) - - name: ALL FUTURE TABLES IN DATABASE $database - type: table + comment: Granted when include pattern has wildcard schema (e.g. DB.%.%) + - name: ALL FUTURE DYNAMIC TABLES IN DATABASE $database + type: dynamic table + privileges: MONITOR + granted to: DTAGENT_VIEWER + comment: Granted when include pattern has wildcard schema (e.g. 
DB.%.%) + - name: ALL DYNAMIC TABLES IN SCHEMA $database.$schema + type: dynamic table + privileges: MONITOR + granted to: DTAGENT_VIEWER + comment: Granted when include pattern has specific schema (e.g. DB.ANALYTICS.%) + - name: ALL FUTURE DYNAMIC TABLES IN SCHEMA $database.$schema + type: dynamic table + privileges: MONITOR + granted to: DTAGENT_VIEWER + comment: Granted when include pattern has specific schema (e.g. DB.ANALYTICS.%) + - name: DYNAMIC TABLE $database.$schema.$table + type: dynamic table privileges: MONITOR granted to: DTAGENT_VIEWER - comment: We grant that on every database selected in configuration or all (default) + comment: Granted when include pattern specifies an exact table name (e.g. DB.ANALYTICS.ORDERS_DT); no FUTURE grant at table level - name: INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY type: view privileges: USAGE diff --git a/src/dtagent/plugins/dynamic_tables.config/config.md b/src/dtagent/plugins/dynamic_tables.config/config.md index 702b15d1..ca06d495 100644 --- a/src/dtagent/plugins/dynamic_tables.config/config.md +++ b/src/dtagent/plugins/dynamic_tables.config/config.md @@ -1,5 +1,13 @@ > **IMPORTANT**: For this plugin to function correctly, `MONITOR on DYNAMIC TABLES` must be granted to the `DTAGENT_VIEWER` role. > By default, when the `admin` scope is installed, this is handled by the `P_GRANT_MONITOR_DYNAMIC_TABLES()` procedure, which is executed with the elevated privileges of the `DTAGENT_ADMIN` role (created only when the `admin` scope is installed), via the `APP.TASK_DTAGENT_DYNAMIC_TABLES_GRANTS` task. > The schedule for this task can be configured separately using the `PLUGINS.DYNAMIC_TABLES.SCHEDULE_GRANTS` configuration option. 
-> Alternatively, you may choose to: -> + +The grant granularity is derived automatically from the `include` pattern: + +| Include pattern | Grant level | SQL issued | +| ----------------------------- | ----------- | ---------------------------------------------------------- | +| `%.%.%` or `PROD_DB.%.%` | Database | `GRANT MONITOR ON ALL/FUTURE DYNAMIC TABLES IN DATABASE …` | +| `PROD_DB.ANALYTICS.%` | Schema | `GRANT MONITOR ON ALL/FUTURE DYNAMIC TABLES IN SCHEMA …` | +| `PROD_DB.ANALYTICS.ORDERS_DT` | Table | `GRANT MONITOR ON DYNAMIC TABLE …` (no FUTURE grant) | + +Alternatively, you may choose to grant the required permissions manually, using the appropriate `GRANT MONITOR ON ALL/FUTURE DYNAMIC TABLES IN …` statement, depending on the desired granularity. diff --git a/src/dtagent/plugins/dynamic_tables.sql/admin/032_p_grant_monitor_dynamic_tables.sql b/src/dtagent/plugins/dynamic_tables.sql/admin/032_p_grant_monitor_dynamic_tables.sql index ea6436a4..c1f4cc68 100644 --- a/src/dtagent/plugins/dynamic_tables.sql/admin/032_p_grant_monitor_dynamic_tables.sql +++ b/src/dtagent/plugins/dynamic_tables.sql/admin/032_p_grant_monitor_dynamic_tables.sql @@ -22,7 +22,12 @@ -- -- -- --- APP.P_GRANT_MONITOR_DYNAMIC_TABLES() returns metadata for all dynamic tables defined in Snowflake. +-- APP.P_GRANT_MONITOR_DYNAMIC_TABLES() grants MONITOR privileges on dynamic tables to DTAGENT_VIEWER. +-- +-- Grant granularity is derived from the include pattern: +-- - DB.%.% (wildcard schema, wildcard table) -> GRANT ... IN DATABASE db_name +-- - DB.SCHEMA.% (specific schema, wildcard table) -> GRANT ... IN SCHEMA db_name.schema_name +-- - DB.SCHEMA.TABLE (specific schema, specific table) -> GRANT ... ON DYNAMIC TABLE db_name.schema_name.table_name -- -- !!
Must be invoked after creation of CONFIG.CONFIGURATIONS table (031_configuration_table) -- @@ -37,18 +42,22 @@ as $$ DECLARE rs_database_names RESULTSET; + rs_schema_names RESULTSET; + rs_table_names RESULTSET; q_grant_monitor_all TEXT DEFAULT ''; q_grant_monitor_future TEXT DEFAULT ''; BEGIN + -- Grant at DATABASE level for patterns where schema part is a wildcard (e.g. DB.%.%) rs_database_names := (SHOW DATABASES ->> with cte_includes as ( - select distinct split_part(ci.VALUE, '.', 0) as db_pattern + select distinct split_part(ci.VALUE, '.', 1) as db_pattern from CONFIG.CONFIGURATIONS c, table(flatten(c.VALUE)) ci where c.PATH = 'plugins.dynamic_tables.include' + and split_part(ci.VALUE, '.', 2) = '%' ) , cte_excludes as ( - select distinct split_part(ce.VALUE, '.', 0) as db_pattern + select distinct split_part(ce.VALUE, '.', 1) as db_pattern from CONFIG.CONFIGURATIONS c, table(flatten(c.VALUE)) ce where c.PATH = 'plugins.dynamic_tables.exclude' ) @@ -60,13 +69,62 @@ BEGIN ; LET c_database_names CURSOR FOR rs_database_names; - -- iterate over warehouses FOR r_db IN c_database_names DO - q_grant_monitor_all := 'grant monitor on all dynamic tables in database ' || r_db.name || ' to role DTAGENT_VIEWER;'; - q_grant_monitor_future := 'grant monitor on future dynamic tables in database ' || r_db.name || ' to role DTAGENT_VIEWER;'; + q_grant_monitor_all := 'grant monitor on all dynamic tables in database identifier(?) to role DTAGENT_VIEWER'; + q_grant_monitor_future := 'grant monitor on future dynamic tables in database identifier(?) to role DTAGENT_VIEWER'; + + EXECUTE IMMEDIATE :q_grant_monitor_all USING (r_db.name); + EXECUTE IMMEDIATE :q_grant_monitor_future USING (r_db.name); + END FOR; + + -- Grant at SCHEMA level for patterns where schema is specific and table is a wildcard (e.g. 
DB.ANALYTICS.%) + rs_schema_names := (SHOW DATABASES ->> + with cte_includes as ( + select distinct + split_part(ci.VALUE, '.', 1) as db_pattern, + split_part(ci.VALUE, '.', 2) as schema_name + from CONFIG.CONFIGURATIONS c, table(flatten(c.VALUE)) ci + where c.PATH = 'plugins.dynamic_tables.include' + and split_part(ci.VALUE, '.', 2) != '%' + and split_part(ci.VALUE, '.', 3) = '%' + ) + , cte_excludes as ( + select distinct split_part(ce.VALUE, '.', 1) as db_pattern + from CONFIG.CONFIGURATIONS c, table(flatten(c.VALUE)) ce + where c.PATH = 'plugins.dynamic_tables.exclude' + ) + select "name" as db_name, ci.schema_name + from $1 + join cte_includes ci on "name" LIKE ci.db_pattern + where "kind" = 'STANDARD' + and not "name" LIKE ANY (select db_pattern from cte_excludes)) + ; + LET c_schema_names CURSOR FOR rs_schema_names; + + FOR r_schema IN c_schema_names DO + q_grant_monitor_all := 'grant monitor on all dynamic tables in schema IDENTIFIER(?) to role DTAGENT_VIEWER'; + q_grant_monitor_future := 'grant monitor on future dynamic tables in schema IDENTIFIER(?) to role DTAGENT_VIEWER'; + + EXECUTE IMMEDIATE :q_grant_monitor_all USING (r_schema.db_name || '.' || r_schema.schema_name); + EXECUTE IMMEDIATE :q_grant_monitor_future USING (r_schema.db_name || '.' || r_schema.schema_name); + END FOR; + + -- Grant at TABLE level for patterns where both schema and table parts are specific (e.g. 
DB.ANALYTICS.ORDERS_DT) + -- Note: FUTURE grants are not applicable at the individual table level + rs_table_names := (select distinct + split_part(ci.VALUE, '.', 1) as db_name, + split_part(ci.VALUE, '.', 2) as schema_name, + split_part(ci.VALUE, '.', 3) as table_name + from CONFIG.CONFIGURATIONS c, table(flatten(c.VALUE)) ci + where c.PATH = 'plugins.dynamic_tables.include' + and split_part(ci.VALUE, '.', 2) != '%' + and split_part(ci.VALUE, '.', 3) != '%'); + LET c_table_names CURSOR FOR rs_table_names; + + FOR r_table IN c_table_names DO + q_grant_monitor_all := 'grant monitor on dynamic table IDENTIFIER(?) to role DTAGENT_VIEWER'; - EXECUTE IMMEDIATE :q_grant_monitor_all; - EXECUTE IMMEDIATE :q_grant_monitor_future; + EXECUTE IMMEDIATE :q_grant_monitor_all USING (r_table.db_name || '.' || r_table.schema_name || '.' || r_table.table_name); END FOR; RETURN 'granted monitor for future and dynamic tables to DTAGENT_VIEWER'; diff --git a/src/dtagent/plugins/event_log.config/bom.yml b/src/dtagent/plugins/event_log.config/bom.yml index a819917d..5cf3521b 100644 --- a/src/dtagent/plugins/event_log.config/bom.yml +++ b/src/dtagent/plugins/event_log.config/bom.yml @@ -9,6 +9,8 @@ delivers: type: procedure - name: DTAGENT_DB.APP.P_CLEANUP_EVENT_LOG() type: procedure + - name: DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(VARCHAR) + type: function - name: DTAGENT_DB.APP.V_EVENT_LOG type: view - name: DTAGENT_DB.APP.V_EVENT_LOG_SPANS_INSTRUMENTED diff --git a/src/dtagent/plugins/event_log.config/config.md b/src/dtagent/plugins/event_log.config/config.md index adc0c12c..51bd3cd1 100644 --- a/src/dtagent/plugins/event_log.config/config.md +++ b/src/dtagent/plugins/event_log.config/config.md @@ -1,4 +1,48 @@ -> **IMPORTANT**: A dedicated cleanup task, `APP.TASK_DTAGENT_EVENT_LOG_CLEANUP`, ensures that the `EVENT_LOG` table contains only data no older than the duration you define with the `PLUGINS.EVENT_LOG.RETENTION_HOURS` configuration option. 
-> You can schedule this task separately using the `PLUGINS.EVENT_LOG.SCHEDULE_CLEANUP` configuration option, run the cleanup procedure `APP.P_CLEANUP_EVENT_LOG()` manually, or manage the retention of data in the `EVENT_LOG` table yourself. +| Key | Type | Default | Description | +| ------------------------------------ | ------ | -------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.event_log.max_entries` | int | `10000` | Maximum number of event log entries fetched per run. Acts as a safety cap to avoid long-running queries. | +| `plugins.event_log.lookback_hours` | int | `24` | How far back (in hours) the plugin looks for new events on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Increase for initial setup; decrease to reduce query cost. | +| `plugins.event_log.retention_hours` | int | `24` | How long (in hours) the cleanup task retains entries in `STATUS.EVENT_LOG`. Only applies if this agent instance owns the event table. | +| `plugins.event_log.schedule` | string | `USING CRON */30 * * * * UTC` | Cron schedule for the main event log processing task. | +| `plugins.event_log.schedule_cleanup` | string | `USING CRON 0 * * * * UTC` | Cron schedule for the cleanup task that removes old entries from `STATUS.EVENT_LOG`. | +| `plugins.event_log.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. 
| +| `plugins.event_log.telemetry` | list | `["metrics", "logs", "biz_events", "spans"]` | Telemetry types to emit. Remove items to suppress specific output types. | + +### Cost Optimization Guidance + +The event log plugin queries `STATUS.EVENT_LOG` on every run. The following settings directly affect compute cost: + +- **`lookback_hours`**: This window defines how far back the plugin reads on each run. If no prior processed timestamp is available (first run, or after a reset), the plugin starts from `now - lookback_hours`. During normal operation the plugin starts from the more recent of the last processed timestamp and `now - lookback_hours`, capping catch-up after long gaps. A large lookback window can cause heavy queries after a reset, so consider starting with `12` or `24` and increasing only if needed. +- **`max_entries`**: Hard cap on rows processed per run. The default (`10000`) protects against runaway queries. If your Snowflake account generates very high event volumes, lower this value and rely on the schedule frequency to catch up incrementally. +- **`retention_hours`**: Shorter retention reduces the size of `STATUS.EVENT_LOG`, which improves scan performance. Set this higher than `lookback_hours` to avoid situations where the cleanup removes events before the plugin can process them. The recommended ratio is `retention_hours >= lookback_hours`. +- **`schedule`**: Running more frequently (e.g., every 5 minutes) increases credit usage. The default every-30-minutes cadence balances freshness against cost. For high-volume accounts, consider running less frequently with higher `max_entries`. + +> **IMPORTANT**: A dedicated cleanup task, `APP.TASK_DTAGENT_EVENT_LOG_CLEANUP`, ensures that the `EVENT_LOG` table contains only data no older than the duration you define with the `plugins.event_log.retention_hours` configuration option.
+> You can schedule this task separately using the `plugins.event_log.schedule_cleanup` configuration option, run the cleanup procedure `APP.P_CLEANUP_EVENT_LOG()` manually, or manage the retention of data in the `EVENT_LOG` table yourself. > **INFO**: The `EVENT_LOG` table cleanup process works only if this specific instance of Dynatrace Snowflake Observability Agent set up the table. + +### Cross-Tenant Monitoring + +By default (`plugins.event_log.cross_tenant_monitoring: true`) the plugin also reports `WARN`/`ERROR` log entries, metrics, and spans originating from **other** `DTAGENT_*_DB` instances visible in the same event table. This allows one DSOA deployment to surface health issues from sibling deployments without logging into Snowflake directly. + +In case you would like to enable cross-tenant monitoring on **only one DSOA tenant**, e.g., to avoid duplicate reporting across deployments, +you need to set `cross_tenant_monitoring: false` in all other tenants. + +```yaml +plugins: + event_log: + cross_tenant_monitoring: false # disable on tenants that should report only their own WARN/ERROR self-monitoring entries +``` + +### Database Filtering + +Use `plugins.event_log.databases` to restrict event log monitoring to specific databases. The list accepts SQL `LIKE` patterns (`%` matches any sequence of characters, `_` matches any single character). When the list is absent or empty, **all databases** are included. 
+
+```yaml
+plugins:
+  event_log:
+    databases:
+      - MYAPP_DB # exact match
+      - ANALYTICS% # all databases starting with ANALYTICS
+```
diff --git a/src/dtagent/plugins/event_log.config/event_log-config.yml b/src/dtagent/plugins/event_log.config/event_log-config.yml
index 7d2ffe13..88d127c2 100644
--- a/src/dtagent/plugins/event_log.config/event_log-config.yml
+++ b/src/dtagent/plugins/event_log.config/event_log-config.yml
@@ -1,10 +1,13 @@
 plugins:
   event_log:
     max_entries: 10000
-    retention_hours: 12
+    lookback_hours: 24
+    retention_hours: 24
     schedule: USING CRON */30 * * * * UTC
     schedule_cleanup: USING CRON 0 * * * * UTC
     is_disabled: false
+    cross_tenant_monitoring: true
+    databases: []
     telemetry:
       - metrics
       - logs
diff --git a/src/dtagent/plugins/event_log.config/readme.md b/src/dtagent/plugins/event_log.config/readme.md
index 3e3af11a..d581802f 100644
--- a/src/dtagent/plugins/event_log.config/readme.md
+++ b/src/dtagent/plugins/event_log.config/readme.md
@@ -1,11 +1,10 @@
 This plugin delivers to Dynatrace data reported by Snowflake Trail in the `EVENT TABLE`.
-By default, it runs every 30 minutes and registers entries from the last 12 hours, omitting the ones, which:
+By default, it runs every 30 minutes and processes only new entries since the last run (bounded by a configurable lookback window of 24 hours), omitting entries that:
-- where already delivered,
-- with scope set to `DTAGENT_OTLP` as they are internal log recording entries sent over the OpenTelemetry protocol
-- related to execution of other instances of Dynatrace Snowflake Observability Agent, or
-- with importance below the level set as `CORE.LOG_LEVEL`, i.e., only warnings or errors from the given Dynatrace Snowflake Observability Agent instance are reported.
+- were already delivered, +- have scope set to `DTAGENT_OTLP` (internal log recording entries sent over the OpenTelemetry protocol), or +- have importance below `WARN` for any `DTAGENT_*_DB` instance, i.e., only warnings or errors from Dynatrace Snowflake Observability Agent instances are reported. By default, it produces log entries containing the following information: diff --git a/src/dtagent/plugins/event_log.sql/050_f_event_log_include.sql b/src/dtagent/plugins/event_log.sql/050_f_event_log_include.sql new file mode 100644 index 00000000..0129643c --- /dev/null +++ b/src/dtagent/plugins/event_log.sql/050_f_event_log_include.sql @@ -0,0 +1,84 @@ +-- +-- +-- Copyright (c) 2025 Dynatrace Open Source +-- +-- Permission is hereby granted, free of charge, to any person obtaining a copy +-- of this software and associated documentation files (the "Software"), to deal +-- in the Software without restriction, including without limitation the rights +-- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +-- copies of the Software, and to permit persons to whom the Software is +-- furnished to do so, subject to the following conditions: +-- +-- The above copyright notice and this permission notice shall be included in all +-- copies or substantial portions of the Software. +-- +-- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +-- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +-- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +-- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +-- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +-- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +-- SOFTWARE. 
+-- +-- +-- +-- APP.F_EVENT_LOG_INCLUDE(db_name) decides whether a given event log entry should be included +-- in reporting, based on two config options: +-- +-- plugins.event_log.databases (array, default empty = all DBs included) +-- Optional allow-list of database name patterns (SQL LIKE syntax, e.g. 'MYAPP_%'). +-- When absent or empty every database passes; when non-empty only matching databases pass. +-- +-- plugins.event_log.cross_tenant_monitoring (boolean, default true) +-- Controls whether WARN/ERROR entries from other DTAGENT_*_DB instances are reported. +-- When false, only the local instance (DTAGENT_DB) is reported for DTAGENT-family DBs. +-- DTAGENT_DB is replaced with DTAGENT_$TAG_DB during deployment. +-- +-- Severity filtering (DEBUG/INFO exclusion) for DTAGENT-family entries is handled inside +-- the calling views because it only applies to LOG record types, not to METRICs or SPANs. +-- +use role DTAGENT_OWNER; use database DTAGENT_DB; use warehouse DTAGENT_WH; + +create or replace function DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(db_name VARCHAR) +returns BOOLEAN +language sql +as +$$ + -- Step 1: apply optional database allow-list (empty / absent = all databases pass) + iff( + array_size(DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.databases', [])::array) > 0 + and not exists ( + select 1 + from table(flatten(DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.databases', [])::array)) f + where db_name like f.VALUE::varchar + ), + false, + -- Step 2: for non-DTAGENT DBs always include + iff( + db_name not like 'DTAGENT%_DB', + true, + -- Step 3: for DTAGENT-family DBs include self always (severity handled in view); + -- include other tenants only when cross_tenant_monitoring is true (default) + iff( + db_name = 'DTAGENT_DB' -- DTAGENT_DB will be replaced with DTAGENT_$TAG_DB during deploy + or coalesce( + DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.cross_tenant_monitoring', true::variant)::boolean, + true + ), + true, + 
false + ) + ) + ) +$$; + +grant usage on function DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(VARCHAR) to role DTAGENT_VIEWER; + + +-- example calls +/* +use role DTAGENT_VIEWER; use database DTAGENT_DB; use warehouse DTAGENT_WH; +select DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE('MYAPP_DB'); -- true (non-DTAGENT) +select DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE('DTAGENT_DB'); -- true (self) +select DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE('DTAGENT_TNB_DB'); -- true (cross_tenant_monitoring=true by default) + */ diff --git a/src/dtagent/plugins/event_log.sql/051_v_event_log.sql b/src/dtagent/plugins/event_log.sql/051_v_event_log.sql index 7b5cf571..e821e3b2 100644 --- a/src/dtagent/plugins/event_log.sql/051_v_event_log.sql +++ b/src/dtagent/plugins/event_log.sql/051_v_event_log.sql @@ -57,13 +57,14 @@ from DTAGENT_DB.STATUS.EVENT_LOG l where not regexp_like(SCOPE['name'], 'DTAGENT(_\\S*)?_OTLP') -- we do not log what was sent via OTLP and VALUE not like 'Sent log%Sent log%' and RECORD_TYPE not in ('METRIC', 'SPAN') + and DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(nvl(_resource_attributes['snow.database.name']::varchar, '')) and ( - -- we log everything for all non-DTAGENT DBs + -- for non-DTAGENT DBs report all severity levels nvl(_resource_attributes['snow.database.name']::varchar, '') not like 'DTAGENT%_DB' - -- only report status other than DEBUG/INFO for DBs that are related to this particular dtagent, - or (_RECORD['severity_text']::varchar not in ('DEBUG', 'INFO') and nvl(_resource_attributes['snow.database.name']::varchar, '') = 'DTAGENT_DB') -- DTAGENT_DB will be replaced with DTAGENT_$TAG_DB during deploy + -- for DTAGENT-family DBs (self and cross-tenant) only report WARN/ERROR + or _RECORD['severity_text']::varchar not in ('DEBUG', 'INFO') ) - and TIMESTAMP > GREATEST( timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_log') ) + and TIMESTAMP > GREATEST( timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.lookback_hours', 24), 
current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_log') ) and (RESOURCE_ATTRIBUTES:"application"::varchar is null or RESOURCE_ATTRIBUTES:"application"::varchar not in ('openflow')) -- exclude known high volume applications order by TIMESTAMP asc limit 10000 -- safety limit to avoid long running queries diff --git a/src/dtagent/plugins/event_log.sql/051_v_event_log_metrics_instrumented.sql b/src/dtagent/plugins/event_log.sql/051_v_event_log_metrics_instrumented.sql index 332bc93b..cca7c089 100644 --- a/src/dtagent/plugins/event_log.sql/051_v_event_log_metrics_instrumented.sql +++ b/src/dtagent/plugins/event_log.sql/051_v_event_log_metrics_instrumented.sql @@ -32,13 +32,8 @@ with cte_event_log as ( select * from DTAGENT_DB.STATUS.EVENT_LOG l where RECORD_TYPE = 'METRIC' - and ( - -- we log everything for all non-DTAGENT DBs - nvl(l.resource_attributes['snow.database.name']::varchar, '') not like 'DTAGENT%_DB' - -- only report metrics for DBs that are related to this particular dtagent, - or nvl(l.resource_attributes['snow.database.name']::varchar, '') = 'DTAGENT_DB' -- DTAGENT_DB will be replaced with DTAGENT_$TAG_DB during deploy - ) - and TIMESTAMP > GREATEST( timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_log_metrics') ) + and DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(nvl(l.resource_attributes['snow.database.name']::varchar, '')) + and TIMESTAMP > GREATEST( timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_log_metrics') ) and (RESOURCE_ATTRIBUTES:"application"::varchar is null or RESOURCE_ATTRIBUTES:"application"::varchar not in ('openflow')) -- exclude known high volume applications order by TIMESTAMP asc limit 10000 -- safety limit to avoid long running queries diff --git a/src/dtagent/plugins/event_log.sql/051_v_event_log_spans_instrumented.sql 
b/src/dtagent/plugins/event_log.sql/051_v_event_log_spans_instrumented.sql index 8dca381f..7ddc71d5 100644 --- a/src/dtagent/plugins/event_log.sql/051_v_event_log_spans_instrumented.sql +++ b/src/dtagent/plugins/event_log.sql/051_v_event_log_spans_instrumented.sql @@ -33,13 +33,8 @@ with cte_event_log as ( select * from DTAGENT_DB.STATUS.EVENT_LOG l where RECORD_TYPE = 'SPAN' - and ( - -- we log everything for all non-DTAGENT DBs - nvl(l.resource_attributes['snow.database.name']::varchar, '') not like 'DTAGENT%_DB' - -- only report metrics for DBs that are related to this particular dtagent, - or nvl(l.resource_attributes['snow.database.name']::varchar, '') = 'DTAGENT_DB' -- DTAGENT_DB will be replaced with DTAGENT_$TAG_DB during deploy - ) - and TIMESTAMP > GREATEST( timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_log_spans') ) + and DTAGENT_DB.APP.F_EVENT_LOG_INCLUDE(nvl(l.resource_attributes['snow.database.name']::varchar, '')) + and TIMESTAMP > GREATEST( timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_log.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_log_spans') ) and (RESOURCE_ATTRIBUTES:"application"::varchar is null or RESOURCE_ATTRIBUTES:"application"::varchar not in ('openflow')) -- exclude known high volume applications order by TIMESTAMP asc limit 10000 -- safety limit to avoid long running queries diff --git a/src/dtagent/plugins/event_log.sql/init/009_event_log_init.sql b/src/dtagent/plugins/event_log.sql/init/009_event_log_init.sql index 962fbe7a..bbb54245 100644 --- a/src/dtagent/plugins/event_log.sql/init/009_event_log_init.sql +++ b/src/dtagent/plugins/event_log.sql/init/009_event_log_init.sql @@ -35,7 +35,8 @@ as $$ DECLARE s_event_table_name TEXT DEFAULT ''; - a_no_custom_event_t ARRAY DEFAULT ARRAY_CONSTRUCT('', 'snowflake.telemetry.events', 'DTAGENT_DB.STATUS.EVENT_LOG'); + -- names of event log tables which would mean we deal with one created by 
this DSOA instance or there is no custom event table at all + a_no_custom_event_t ARRAY DEFAULT ARRAY_CONSTRUCT('', 'DTAGENT_DB.STATUS.EVENT_LOG'); is_event_log_table BOOLEAN DEFAULT FALSE; BEGIN show PARAMETERS like 'EVENT_TABLE' in ACCOUNT; @@ -43,8 +44,9 @@ BEGIN select TABLE_TYPE like '%TABLE' into is_event_log_table from DTAGENT_DB.INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA = 'STATUS' and TABLE_NAME = 'EVENT_LOG'; IF (ARRAY_CONTAINS(:s_event_table_name::variant, :a_no_custom_event_t)) THEN - -- there is an event table defined or there is Dynatrace Snowflake Observability Agent one present + -- there is NO event table defined or there is Dynatrace Snowflake Observability Agent one present IF (NOT :is_event_log_table) THEN + -- in case there is a view we need to get rid of it before creating the event table drop view if exists DTAGENT_DB.STATUS.EVENT_LOG; END IF; @@ -59,14 +61,25 @@ BEGIN RETURN 'Dynatrace Snowflake Observability Agent has setup own Event table'; ELSE - -- there is a an event table defined already, not by this Dynatrace Snowflake Observability Agent + -- there is an event table defined already, not by this Dynatrace Snowflake Observability Agent + -- (including SNOWFLAKE.TELEMETRY.EVENTS - the Snowflake-managed shared event table) IF (:is_event_log_table) THEN + -- in case there is a table with this name we need to get rid of it before creating the view on top of the custom event table drop table if exists DTAGENT_DB.STATUS.EVENT_LOG; END IF; - EXECUTE IMMEDIATE concat('create view if not exists DTAGENT_DB.STATUS.EVENT_LOG as select * from ', :s_event_table_name); - EXECUTE IMMEDIATE concat('grant select on table ', :s_event_table_name, ' to role DTAGENT_VIEWER'); + -- attempt to grant select on the source table; ignore failures for read-only or Snowflake-managed tables + BEGIN + EXECUTE IMMEDIATE concat('grant select on table ', :s_event_table_name, ' to role DTAGENT_VIEWER'); + EXCEPTION + WHEN OTHER THEN + -- ignore failures for read-only 
or Snowflake-managed event tables + -- leaves warning in the logs + SYSTEM$LOG_WARN(concat('Could not grant select on table ', :s_event_table_name, ' to role DTAGENT_VIEWER: ', SQLERRM)); + END; + -- create a view on top of the existing event table, so we can use it in the event_log plugin + EXECUTE IMMEDIATE concat('create view if not exists DTAGENT_DB.STATUS.EVENT_LOG as select * from ', :s_event_table_name); grant ownership on view DTAGENT_DB.STATUS.EVENT_LOG to role DTAGENT_OWNER revoke current grants; grant select on view DTAGENT_DB.STATUS.EVENT_LOG to role DTAGENT_VIEWER; diff --git a/src/dtagent/plugins/event_usage.config/config.md b/src/dtagent/plugins/event_usage.config/config.md new file mode 100644 index 00000000..8815602b --- /dev/null +++ b/src/dtagent/plugins/event_usage.config/config.md @@ -0,0 +1,6 @@ +| Key | Type | Default | Description | +| ------------------------------------ | ------ | ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.event_usage.lookback_hours` | int | `6` | How far back (in hours) the plugin looks for event usage history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Default is `6`h to account for the up-to-3-hour data ingestion delay in `EVENT_USAGE_HISTORY`. | +| `plugins.event_usage.schedule` | string | `USING CRON 0 * * * * UTC` | Cron schedule for the event usage collection task. 
| +| `plugins.event_usage.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.event_usage.telemetry` | list | `["metrics", "logs", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. | diff --git a/src/dtagent/plugins/event_usage.config/event_usage-config.yml b/src/dtagent/plugins/event_usage.config/event_usage-config.yml index 8d3084b6..f47fb5bb 100644 --- a/src/dtagent/plugins/event_usage.config/event_usage-config.yml +++ b/src/dtagent/plugins/event_usage.config/event_usage-config.yml @@ -1,5 +1,6 @@ plugins: event_usage: + lookback_hours: 6 schedule: USING CRON 0 * * * * UTC is_disabled: false telemetry: diff --git a/src/dtagent/plugins/event_usage.sql/051_v_event_usage.sql b/src/dtagent/plugins/event_usage.sql/051_v_event_usage.sql index 550d5095..4ddbb07b 100644 --- a/src/dtagent/plugins/event_usage.sql/051_v_event_usage.sql +++ b/src/dtagent/plugins/event_usage.sql/051_v_event_usage.sql @@ -37,7 +37,7 @@ select from SNOWFLAKE.ACCOUNT_USAGE.EVENT_USAGE_HISTORY euh where - euh.end_time > GREATEST( timeadd(hour, -6, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_usage')) -- there can be 180 minutes latency + euh.end_time > GREATEST( timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.event_usage.lookback_hours', 6), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('event_usage')) -- there can be 180 minutes latency order by euh.end_time asc; diff --git a/src/dtagent/plugins/login_history.config/config.md b/src/dtagent/plugins/login_history.config/config.md new file mode 100644 index 00000000..641ac204 --- /dev/null +++ b/src/dtagent/plugins/login_history.config/config.md @@ -0,0 +1,6 @@ +| Key | Type | Default | Description | +| -------------------------------------- | ------ | ----------------------------- | 
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.login_history.lookback_hours` | int | `24` | How far back (in hours) the plugin looks for login and session events on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. | +| `plugins.login_history.schedule` | string | `USING CRON */30 * * * * UTC` | Cron schedule for the login history collection task. | +| `plugins.login_history.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.login_history.telemetry` | list | `["logs", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. 
| diff --git a/src/dtagent/plugins/login_history.config/login_history-config.yml b/src/dtagent/plugins/login_history.config/login_history-config.yml index a04597e4..9296a477 100644 --- a/src/dtagent/plugins/login_history.config/login_history-config.yml +++ b/src/dtagent/plugins/login_history.config/login_history-config.yml @@ -1,5 +1,6 @@ plugins: login_history: + lookback_hours: 24 schedule: USING CRON */30 * * * * UTC is_disabled: false telemetry: diff --git a/src/dtagent/plugins/login_history.sql/061_v_login_history.sql b/src/dtagent/plugins/login_history.sql/061_v_login_history.sql index 3954ea09..26508b4d 100644 --- a/src/dtagent/plugins/login_history.sql/061_v_login_history.sql +++ b/src/dtagent/plugins/login_history.sql/061_v_login_history.sql @@ -52,7 +52,7 @@ select from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY lh where - lh.event_timestamp > GREATEST( timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('login_history') ) + lh.event_timestamp > GREATEST( timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.login_history.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('login_history') ) order by lh.event_timestamp asc limit 1000 diff --git a/src/dtagent/plugins/login_history.sql/061_v_sessions.sql b/src/dtagent/plugins/login_history.sql/061_v_sessions.sql index beb9c86c..d68f6b8c 100644 --- a/src/dtagent/plugins/login_history.sql/061_v_sessions.sql +++ b/src/dtagent/plugins/login_history.sql/061_v_sessions.sql @@ -49,7 +49,7 @@ select from SNOWFLAKE.ACCOUNT_USAGE.SESSIONS s where - s.created_on > GREATEST( timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('sessions') ) + s.created_on > GREATEST( timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.login_history.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('sessions') ) order by s.created_on asc limit 1000 diff --git 
a/src/dtagent/plugins/query_history.sql/061_p_get_acc_estimates.sql b/src/dtagent/plugins/query_history.sql/061_p_get_acc_estimates.sql index dcbf3c41..79e15fac 100644 --- a/src/dtagent/plugins/query_history.sql/061_p_get_acc_estimates.sql +++ b/src/dtagent/plugins/query_history.sql/061_p_get_acc_estimates.sql @@ -48,12 +48,19 @@ DECLARE order by execution_time desc; query_id VARCHAR DEFAULT ''; + safe_query_id_re TEXT DEFAULT '^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$'; BEGIN -- initializing TMP_QUERY_ACCELERATION_ESTIMATES EXECUTE IMMEDIATE :truncate_tmp; FOR query IN c_queries_to_analyze DO query_id := query.query_id; + + IF (NOT REGEXP_LIKE(:query_id, :safe_query_id_re)) THEN + SYSTEM$LOG_WARN('P_GET_ACCELERATION_ESTIMATES: skipping query with unexpected query_id format: ' || :query_id); + CONTINUE; + END IF; + EXECUTE IMMEDIATE 'select PARSE_JSON(SYSTEM$ESTIMATE_QUERY_ACCELERATION(''' || :query_id || ''')) as json;'; INSERT INTO DTAGENT_DB.APP.TMP_QUERY_ACCELERATION_ESTIMATES(QUERY_ID, ATTRIBUTES) select diff --git a/src/dtagent/plugins/shares.config/instruments-def.yml b/src/dtagent/plugins/shares.config/instruments-def.yml index 6cbe2052..bf34e2af 100644 --- a/src/dtagent/plugins/shares.config/instruments-def.yml +++ b/src/dtagent/plugins/shares.config/instruments-def.yml @@ -214,8 +214,7 @@ event_timestamps: __description: The timestamp when the grant was created. snowflake.share.created_on: __context_names: - - outbound_shares - - inbound_shares + - shares __example: 1639051180714000000 __description: The timestamp when the share was created. 
snowflake.table.created_on: diff --git a/src/dtagent/plugins/shares.sql/051_p_grant_imported_privileges.sql b/src/dtagent/plugins/shares.sql/051_p_grant_imported_privileges.sql index 6755d73c..c385ef64 100644 --- a/src/dtagent/plugins/shares.sql/051_p_grant_imported_privileges.sql +++ b/src/dtagent/plugins/shares.sql/051_p_grant_imported_privileges.sql @@ -29,8 +29,17 @@ language sql execute as owner as $$ +DECLARE + safe_identifier_re TEXT DEFAULT '^[A-Za-z_][A-Za-z0-9_$]*$'; + db_name_q TEXT DEFAULT ''; BEGIN - EXECUTE IMMEDIATE concat('GRANT IMPORTED PRIVILEGES on DATABASE ', :db_name, ' TO ROLE DTAGENT_VIEWER'); + IF (NOT REGEXP_LIKE(UPPER(:db_name), :safe_identifier_re)) THEN + SYSTEM$LOG_WARN('P_GRANT_IMPORTED_PRIVILEGES: skipping invalid database name (unsafe identifier): ' || :db_name); + RETURN 'skipped: unsafe database name ' || :db_name; + END IF; + + db_name_q := '"' || UPPER(:db_name) || '"'; + EXECUTE IMMEDIATE concat('GRANT IMPORTED PRIVILEGES on DATABASE ', :db_name_q, ' TO ROLE DTAGENT_VIEWER'); RETURN 'imported privileges granted on ' || :db_name; EXCEPTION diff --git a/src/dtagent/plugins/shares.sql/052_p_list_inbound_tables.sql b/src/dtagent/plugins/shares.sql/052_p_list_inbound_tables.sql index 1ced9c31..dce75264 100644 --- a/src/dtagent/plugins/shares.sql/052_p_list_inbound_tables.sql +++ b/src/dtagent/plugins/shares.sql/052_p_list_inbound_tables.sql @@ -34,18 +34,37 @@ execute as caller as $$ DECLARE - query TEXT; - rs RESULTSET; - rs_repeat RESULTSET; - rs_empty RESULTSET DEFAULT (SELECT NULL:text as SHARE_NAME, FALSE:boolean as IS_REPORTED, OBJECT_CONSTRUCT() as DETAILS WHERE 1=0); - rs_unavailable RESULTSET DEFAULT (SELECT :share_name as SHARE_NAME, TRUE:boolean as IS_REPORTED, OBJECT_CONSTRUCT('SHARE_STATUS', 'UNAVAILABLE', 'SHARE_NAME', :share_name, 'DATABASE_NAME', :db_name, 'ERROR_MESSAGE', 'Shared database is no longer available') as DETAILS); - error_msg TEXT; + query TEXT; + rs RESULTSET; + rs_repeat RESULTSET; + rs_empty RESULTSET 
DEFAULT (SELECT NULL:text as SHARE_NAME, + FALSE:boolean as IS_REPORTED, + OBJECT_CONSTRUCT() as DETAILS + WHERE 1=0); + rs_unavailable RESULTSET DEFAULT (SELECT :share_name as SHARE_NAME, + TRUE:boolean as IS_REPORTED, + OBJECT_CONSTRUCT('SHARE_STATUS', 'UNAVAILABLE', + 'SHARE_NAME', :share_name, + 'DATABASE_NAME', :db_name, + 'ERROR_MESSAGE', 'Shared database is no longer available') as DETAILS); + error_msg TEXT; + safe_identifier_re TEXT DEFAULT '^[A-Za-z_][A-Za-z0-9_$]*$'; + db_name_q TEXT DEFAULT ''; + share_name_safe TEXT DEFAULT ''; BEGIN + IF (NOT REGEXP_LIKE(UPPER(:db_name), :safe_identifier_re)) THEN + SYSTEM$LOG_WARN('P_LIST_INBOUND_TABLES: skipping invalid database name (unsafe identifier): ' || :db_name); + RETURN TABLE(rs_empty); + END IF; + + db_name_q := '"' || UPPER(:db_name) || '"'; + share_name_safe := REPLACE(:share_name, '''', ''); + IF (:with_grant) THEN call DTAGENT_DB.APP.P_GRANT_IMPORTED_PRIVILEGES(:db_name); END IF; - query := concat('select ''', :share_name, ''' as SHARE_NAME, TRUE as IS_REPORTED, OBJECT_CONSTRUCT(t.*) from ', :db_name, '.INFORMATION_SCHEMA.TABLES t where TABLE_SCHEMA != ''INFORMATION_SCHEMA'''); + query := concat('select ''', :share_name_safe, ''' as SHARE_NAME, TRUE as IS_REPORTED, OBJECT_CONSTRUCT(t.*) from ', :db_name_q, '.INFORMATION_SCHEMA.TABLES t where TABLE_SCHEMA != ''INFORMATION_SCHEMA'''); rs := (EXECUTE IMMEDIATE :query); RETURN TABLE(rs); @@ -67,7 +86,7 @@ EXCEPTION END IF; ELSE -- If the query fails and we are not granting privileges, we try to repeat the query asking for privileges to be granted first - rs_repeat := (EXECUTE IMMEDIATE concat('call DTAGENT_DB.APP.P_LIST_INBOUND_TABLES(''', :share_name, ''', ''', :db_name, ''', TRUE)')); + rs_repeat := (EXECUTE IMMEDIATE concat('call DTAGENT_DB.APP.P_LIST_INBOUND_TABLES(''', :share_name_safe, ''', ''', UPPER(:db_name), ''', TRUE)')); RETURN TABLE(rs_repeat); END IF; END; diff --git a/src/dtagent/plugins/shares.sql/053_p_get_shares.sql 
b/src/dtagent/plugins/shares.sql/053_p_get_shares.sql index 624c09a5..8d09c66b 100644 --- a/src/dtagent/plugins/shares.sql/053_p_get_shares.sql +++ b/src/dtagent/plugins/shares.sql/053_p_get_shares.sql @@ -92,9 +92,6 @@ BEGIN select :share_name, FALSE; else - insert into DTAGENT_DB.APP.TMP_INBOUND_SHARES(SHARE_NAME, IS_REPORTED) - select :share_name, TRUE; - if ((SELECT count(*) > 0 from SNOWFLAKE.ACCOUNT_USAGE.DATABASES where DATABASE_NAME = :db_name and DELETED is null)) then call DTAGENT_DB.APP.P_LIST_INBOUND_TABLES(:share_name, :db_name); @@ -102,8 +99,8 @@ BEGIN select SHARE_NAME, IS_REPORTED, DETAILS from TABLE(result_scan(last_query_id())); else - insert into DTAGENT_DB.APP.TMP_INBOUND_SHARES(SHARE_NAME, DETAILS) - select :share_name, OBJECT_CONSTRUCT('HAS_DB_DELETED', TRUE); + insert into DTAGENT_DB.APP.TMP_INBOUND_SHARES(SHARE_NAME, IS_REPORTED, DETAILS) + select :share_name, TRUE, OBJECT_CONSTRUCT('HAS_DB_DELETED', TRUE); end if; end if; diff --git a/src/dtagent/plugins/shares.sql/061_v_inbound_shares.sql b/src/dtagent/plugins/shares.sql/061_v_inbound_shares.sql index df8dc435..e697e3f6 100644 --- a/src/dtagent/plugins/shares.sql/061_v_inbound_shares.sql +++ b/src/dtagent/plugins/shares.sql/061_v_inbound_shares.sql @@ -26,6 +26,8 @@ create or replace view DTAGENT_DB.APP.V_INBOUND_SHARE_TABLES as select case + when ins.DETAILS:"HAS_DB_DELETED" = TRUE then + concat('Inbound share "', s.name, '" has a deleted database - data is no longer accessible') when ins.DETAILS:"SHARE_STATUS" = 'UNAVAILABLE' then concat('Inbound share "', s.name, '" is no longer available - access may have been revoked by the publisher') when LEN(NVL(s.comment, '')) > 0 then s.comment @@ -63,7 +65,7 @@ select 'snowflake.share.shared_from', s.owner_account, 'snowflake.share.shared_to', s.given_to, 'snowflake.share.owner', s.owner, - 'snowflake.share.is_secure_objects_only', s.secure_objects_only, + 'snowflake.share.is_secure_objects_only', TRY_TO_BOOLEAN(s.secure_objects_only), 
'snowflake.share.listing_global_name', s.listing_global_name, 'snowflake.error.message', ins.DETAILS:"ERROR_MESSAGE" ) as ATTRIBUTES, diff --git a/src/dtagent/plugins/shares.sql/061_v_outbound_shares.sql b/src/dtagent/plugins/shares.sql/061_v_outbound_shares.sql index 9a355636..f7af6875 100644 --- a/src/dtagent/plugins/shares.sql/061_v_outbound_shares.sql +++ b/src/dtagent/plugins/shares.sql/061_v_outbound_shares.sql @@ -48,7 +48,7 @@ select 'snowflake.share.shared_from', s.owner_account, 'snowflake.share.shared_to', s.given_to, 'snowflake.share.owner', s.owner, - 'snowflake.share.is_secure_objects_only', s.secure_objects_only, + 'snowflake.share.is_secure_objects_only', TRY_TO_BOOLEAN(s.secure_objects_only), 'snowflake.share.listing_global_name', s.listing_global_name ) as ATTRIBUTES, OBJECT_CONSTRUCT( diff --git a/src/dtagent/plugins/shares.sql/061_v_share_events.sql b/src/dtagent/plugins/shares.sql/061_v_share_events.sql index a84e66de..e8628501 100644 --- a/src/dtagent/plugins/shares.sql/061_v_share_events.sql +++ b/src/dtagent/plugins/shares.sql/061_v_share_events.sql @@ -42,7 +42,7 @@ select 'snowflake.share.shared_from', s.owner_account, 'snowflake.share.shared_to', s.given_to, 'snowflake.share.owner', s.owner, - 'snowflake.share.is_secure_objects_only', s.secure_objects_only, + 'snowflake.share.is_secure_objects_only', TRY_TO_BOOLEAN(s.secure_objects_only), 'snowflake.share.listing_global_name', s.listing_global_name ) as ATTRIBUTES, diff --git a/src/dtagent/plugins/tasks.config/config.md b/src/dtagent/plugins/tasks.config/config.md new file mode 100644 index 00000000..aa138f11 --- /dev/null +++ b/src/dtagent/plugins/tasks.config/config.md @@ -0,0 +1,9 @@ +| Key | Type | Default | Description | +| --------------------------------------- | ------ | --------------------------------------------- | 
-------------------- | +| `plugins.tasks.lookback_hours` | int | `4` | How far back (in hours) the plugin looks for serverless task history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Default is `4`h to account for the up-to-3-hour data ingestion delay in `SERVERLESS_TASK_HISTORY`. | +| `plugins.tasks.lookback_hours_versions` | int | `720` | How far back (in hours) the plugin looks for task version history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours_versions`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours_versions`, so it never reads data older than the lookback window. Default is `720`h (30 days) - task graph versions change infrequently and a longer window ensures new deployments catch all recent version changes. | +| `plugins.tasks.schedule` | string | `USING CRON 30 * * * * UTC` | Cron schedule for the tasks collection task. | +| `plugins.tasks.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.tasks.telemetry` | list | `["logs", "metrics", "events", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. 
| + +> **Note**: `lookback_hours` and `lookback_hours_versions` serve different data sources with different update frequencies. `SERVERLESS_TASK_HISTORY` is updated frequently (per task run), while `TASK_VERSIONS` only changes when a task graph is modified - hence the much longer default for versions. diff --git a/src/dtagent/plugins/tasks.config/tasks-config.yml b/src/dtagent/plugins/tasks.config/tasks-config.yml index 6992cc7f..ee7ec7df 100644 --- a/src/dtagent/plugins/tasks.config/tasks-config.yml +++ b/src/dtagent/plugins/tasks.config/tasks-config.yml @@ -1,5 +1,7 @@ plugins: tasks: + lookback_hours: 4 + lookback_hours_versions: 720 schedule: USING CRON 30 * * * * UTC is_disabled: false telemetry: diff --git a/src/dtagent/plugins/tasks.sql/061_v_serverless_tasks.sql b/src/dtagent/plugins/tasks.sql/061_v_serverless_tasks.sql index 84cb4c03..1433f04f 100644 --- a/src/dtagent/plugins/tasks.sql/061_v_serverless_tasks.sql +++ b/src/dtagent/plugins/tasks.sql/061_v_serverless_tasks.sql @@ -46,7 +46,7 @@ select from SNOWFLAKE.ACCOUNT_USAGE.SERVERLESS_TASK_HISTORY sth where - sth.end_time > GREATEST(timeadd(hour, -4, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('serverless_tasks')) -- max data delay is 180 min + sth.end_time > GREATEST(timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.tasks.lookback_hours', 4), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('serverless_tasks')) -- max data delay is 180 min order by sth.end_time asc; diff --git a/src/dtagent/plugins/tasks.sql/063_v_task_versions.sql b/src/dtagent/plugins/tasks.sql/063_v_task_versions.sql index cec4ad78..8907ae22 100644 --- a/src/dtagent/plugins/tasks.sql/063_v_task_versions.sql +++ b/src/dtagent/plugins/tasks.sql/063_v_task_versions.sql @@ -59,7 +59,7 @@ select from SNOWFLAKE.ACCOUNT_USAGE.TASK_VERSIONS tv where - GREATEST_IGNORE_NULLS(tv.GRAPH_VERSION_CREATED_ON, tv.LAST_COMMITTED_ON, tv.LAST_SUSPENDED_ON) > GREATEST(timeadd(month, -1, current_timestamp()),
DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('task_versions')) + GREATEST_IGNORE_NULLS(tv.GRAPH_VERSION_CREATED_ON, tv.LAST_COMMITTED_ON, tv.LAST_SUSPENDED_ON) > GREATEST(timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.tasks.lookback_hours_versions', 720), current_timestamp()), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('task_versions')) order by tv.GRAPH_VERSION_CREATED_ON asc; diff --git a/src/dtagent/plugins/users.config/instruments-def.yml b/src/dtagent/plugins/users.config/instruments-def.yml index 6edc1942..1b8e5f2d 100644 --- a/src/dtagent/plugins/users.config/instruments-def.yml +++ b/src/dtagent/plugins/users.config/instruments-def.yml @@ -77,11 +77,31 @@ attributes: __context_names: - users __description: The external authentication UID for the user. + snowflake.user.has_mfa: + __example: "true" + __context_names: + - users + __description: Indicates if the user is enrolled for multi-factor authentication. snowflake.user.has_password: __example: "true" __context_names: - users __description: Indicates if the user has a password set. + snowflake.user.has_pat: + __example: "true" + __context_names: + - users + __description: Indicates if a programmatic access token has been generated for the user. + snowflake.user.has_rsa: + __example: "true" + __context_names: + - users + __description: Indicates if RSA public key authentication is configured for the user. + snowflake.user.has_workload_identity: + __example: "true" + __context_names: + - users + __description: Indicates if workload identity federation is configured for the user. 
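Stepping back from the SQL above: every lookback-gated view selects rows newer than `GREATEST(now - lookback_hours, last processed timestamp)`. A minimal Python sketch of that window computation, assuming nothing beyond the semantics described in the config docs (`window_start` is a hypothetical illustration helper, not part of the agent):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def window_start(lookback_hours: int, last_processed: Optional[datetime] = None) -> datetime:
    """Compute the earliest timestamp a plugin run will read.

    Mirrors the SQL pattern
    GREATEST(timeadd(hour, -lookback, current_timestamp), F_LAST_PROCESSED_TS(...)):
    never older than the lookback floor, never re-reading already-processed rows.
    """
    floor = datetime.now(timezone.utc) - timedelta(hours=lookback_hours)
    # First run: no prior processed timestamp, start from the lookback floor.
    if last_processed is None:
        return floor
    # Subsequent runs: take the more recent of the floor and the last processed timestamp.
    return max(floor, last_processed)
```

This is why a stale `F_LAST_PROCESSED_TS` value (for example after a long agent outage) cannot drag a run back further than the configured window.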
snowflake.user.id: __example: "12345" __context_names: diff --git a/src/dtagent/plugins/users.sql/051_p_get_users.sql b/src/dtagent/plugins/users.sql/051_p_get_users.sql index cedcf3e0..56d1c065 100644 --- a/src/dtagent/plugins/users.sql/051_p_get_users.sql +++ b/src/dtagent/plugins/users.sql/051_p_get_users.sql @@ -34,6 +34,7 @@ create or replace transient table DTAGENT_DB.APP.TMP_USERS ( ext_authn_duo boolean, ext_authn_uid text, has_mfa boolean, bypass_mfa_until timestamp_ltz, last_success_login timestamp_ltz, expires_at timestamp_ltz, locked_until_time timestamp_ltz, has_rsa_public_key boolean, password_last_set_time timestamp_ltz, + has_pat boolean, has_workload_identity boolean, owner text, default_secondary_role text, type text, database_name text, database_id number, schema_name text, schema_id number, is_from_organization_user boolean) @@ -50,6 +51,7 @@ create or replace transient table DTAGENT_DB.APP.TMP_USERS_HELPER ( ext_authn_duo boolean, ext_authn_uid text, has_mfa boolean, bypass_mfa_until timestamp_ltz, last_success_login timestamp_ltz, expires_at timestamp_ltz, locked_until_time timestamp_ltz, has_rsa_public_key boolean, password_last_set_time timestamp_ltz, + has_pat boolean, has_workload_identity boolean, owner text, default_secondary_role text, type text, database_name text, database_id number, schema_name text, schema_id number, is_from_organization_user boolean) @@ -78,6 +80,7 @@ DECLARE ext_authn_duo, ext_authn_uid, has_mfa, bypass_mfa_until, last_success_login, expires_at, locked_until_time, has_rsa_public_key, password_last_set_time, + has_pat, has_workload_identity, owner, default_secondary_role, type, database_name, database_id, schema_name, schema_id, is_from_organization_user diff --git a/src/dtagent/plugins/users.sql/071_v_users_instrumented.sql b/src/dtagent/plugins/users.sql/071_v_users_instrumented.sql index 96134962..20f5819b 100644 --- a/src/dtagent/plugins/users.sql/071_v_users_instrumented.sql +++ 
b/src/dtagent/plugins/users.sql/071_v_users_instrumented.sql @@ -53,6 +53,10 @@ select 'snowflake.user.default.role', u.default_role, 'snowflake.user.ext_authn.duo', u.ext_authn_duo, 'snowflake.user.ext_authn.uid', u.ext_authn_uid, + 'snowflake.user.has_rsa', u.has_rsa_public_key, + 'snowflake.user.has_mfa', u.has_mfa, + 'snowflake.user.has_pat', u.has_pat, + 'snowflake.user.has_workload_identity', u.has_workload_identity, 'snowflake.user.owner', u.owner, 'snowflake.user.default.secondary_role', u.default_secondary_role, 'snowflake.user.type', u.type, diff --git a/src/dtagent/plugins/warehouse_usage.config/config.md b/src/dtagent/plugins/warehouse_usage.config/config.md new file mode 100644 index 00000000..4c59f3ce --- /dev/null +++ b/src/dtagent/plugins/warehouse_usage.config/config.md @@ -0,0 +1,6 @@ +| Key | Type | Default | Description | +| ---------------------------------------- | ------ | ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `plugins.warehouse_usage.lookback_hours` | int | `24` | How far back (in hours) the plugin looks for warehouse events, load, and metering history on each run. If no prior processed timestamp exists, the plugin starts from `now - lookback_hours`. If a prior timestamp exists, the plugin starts from the more recent of that timestamp and `now - lookback_hours`, so it never reads data older than the lookback window. Applies to all three views (`WAREHOUSE_EVENTS_HISTORY`, `WAREHOUSE_LOAD_HISTORY`, `WAREHOUSE_METERING_HISTORY`). 
| +| `plugins.warehouse_usage.schedule` | string | `USING CRON 0 * * * * UTC` | Cron schedule for the warehouse usage collection task. | +| `plugins.warehouse_usage.is_disabled` | bool | `false` | Set to `true` to disable this plugin entirely. | +| `plugins.warehouse_usage.telemetry` | list | `["logs", "metrics", "biz_events"]` | Telemetry types to emit. Remove items to suppress specific output types. | diff --git a/src/dtagent/plugins/warehouse_usage.config/warehouse_usage-config.yml b/src/dtagent/plugins/warehouse_usage.config/warehouse_usage-config.yml index a4026f70..096d9071 100644 --- a/src/dtagent/plugins/warehouse_usage.config/warehouse_usage-config.yml +++ b/src/dtagent/plugins/warehouse_usage.config/warehouse_usage-config.yml @@ -1,5 +1,6 @@ plugins: warehouse_usage: + lookback_hours: 24 schedule: USING CRON 0 * * * * UTC is_disabled: false telemetry: diff --git a/src/dtagent/plugins/warehouse_usage.sql/070_v_warehouse_event_history.sql b/src/dtagent/plugins/warehouse_usage.sql/070_v_warehouse_event_history.sql index 075481c9..9b752ea8 100644 --- a/src/dtagent/plugins/warehouse_usage.sql/070_v_warehouse_event_history.sql +++ b/src/dtagent/plugins/warehouse_usage.sql/070_v_warehouse_event_history.sql @@ -46,7 +46,7 @@ select ) as ATTRIBUTES from SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_EVENTS_HISTORY weh where - weh.timestamp > GREATEST(timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('warehouse_usage')) + weh.timestamp > GREATEST(timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.warehouse_usage.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('warehouse_usage')) order by TIMESTAMP asc; grant select on view DTAGENT_DB.APP.V_WAREHOUSE_EVENT_HISTORY to role DTAGENT_VIEWER; diff --git a/src/dtagent/plugins/warehouse_usage.sql/071_v_warehouse_load_history.sql b/src/dtagent/plugins/warehouse_usage.sql/071_v_warehouse_load_history.sql index b99e9d91..57965b63 100644 --- 
a/src/dtagent/plugins/warehouse_usage.sql/071_v_warehouse_load_history.sql +++ b/src/dtagent/plugins/warehouse_usage.sql/071_v_warehouse_load_history.sql @@ -44,7 +44,7 @@ select ) as METRICS from SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_LOAD_HISTORY wlh where - wlh.start_time > GREATEST(timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('warehouse_usage_load')) + wlh.start_time > GREATEST(timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.warehouse_usage.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('warehouse_usage_load')) order by TIMESTAMP asc; grant select on view DTAGENT_DB.APP.V_WAREHOUSE_LOAD_HISTORY to role DTAGENT_VIEWER; diff --git a/src/dtagent/plugins/warehouse_usage.sql/072_v_warehouse_metering_history.sql b/src/dtagent/plugins/warehouse_usage.sql/072_v_warehouse_metering_history.sql index a25b0246..2127a9d8 100644 --- a/src/dtagent/plugins/warehouse_usage.sql/072_v_warehouse_metering_history.sql +++ b/src/dtagent/plugins/warehouse_usage.sql/072_v_warehouse_metering_history.sql @@ -44,7 +44,7 @@ select ) as METRICS from SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY wmh where - wmh.start_time > GREATEST(timeadd(hour, -24, current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('warehouse_usage_metering')) + wmh.start_time > GREATEST(timeadd(hour, -1*DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.warehouse_usage.lookback_hours', 24), current_timestamp), DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('warehouse_usage_metering')) order by TIMESTAMP asc; grant select on view DTAGENT_DB.APP.V_WAREHOUSE_METERING_HISTORY to role DTAGENT_VIEWER; diff --git a/src/dtagent/util.py b/src/dtagent/util.py index 99c66c32..1a26f9d6 100644 --- a/src/dtagent/util.py +++ b/src/dtagent/util.py @@ -30,7 +30,7 @@ import os import re from enum import Enum -from typing import Any, Dict, List, Optional, Union, Generator, Tuple +from typing import Any, Dict, List, Literal, Optional, Union, Generator, Tuple import 
pandas as pd @@ -293,90 +293,115 @@ def __adjust_time(time_key: str) -> None: return row_dict -def validate_timestamp_ms(timestamp_ms: int, allowed_past_minutes: int = 24 * 60 - 5, allowed_future_minutes: int = 10) -> Optional[int]: +def validate_timestamp( + timestamp: int, + allowed_past_minutes: int = 24 * 60 - 5, + allowed_future_minutes: int = 10, + return_unit: Literal["ms", "ns"] = "ms", + skip_range_validation: bool = False, +) -> Optional[int]: """Validates and normalizes timestamps with configurable time windows and automatic unit conversion. This function performs multiple validation steps: 1. Rejects negative timestamps (e.g., sentinel values like -1000000) - 2. Auto-converts timestamps that are too large by detecting the likely time unit: - - Femtoseconds (> 4.1e21): divides by 1e12 - - Picoseconds (> 4.1e18): divides by 1e9 - - Nanoseconds (> 4.1e15): divides by 1e6 - - Microseconds (> 4.1e12): divides by 1e3 - 3. Validates the timestamp is within the allowed time range from current time + 2. Auto-detects input unit and converts to nanoseconds (preserving full precision): + - Uses threshold of 4.1e15 (year 2100 in milliseconds * 1000) + - Values > 4.1e15: assumed to be nanoseconds or higher precision + - Values <= 4.1e15: assumed to be milliseconds or microseconds + - Automatically converts from femtoseconds/picoseconds/nanoseconds/microseconds โ†’ nanoseconds + 3. Optionally validates the timestamp is within the allowed time range from current time + 4. 
Returns in requested unit (milliseconds or nanoseconds) + + Detection thresholds (based on year 2100): + - Femtoseconds: > 4.1e21 โ†’ divide by 1e6 to get nanoseconds + - Picoseconds: > 4.1e18 โ†’ divide by 1e3 to get nanoseconds + - Nanoseconds: > 4.1e15 โ†’ use as-is + - Microseconds: > 4.1e12 โ†’ multiply by 1e3 to get nanoseconds + - Milliseconds: <= 4.1e12 โ†’ multiply by 1e6 to get nanoseconds Args: - timestamp_ms (int): timestamp in ms to check (or higher precision units to be auto-converted) - allowed_past_minutes (int, optional): allowed past range in minutes. Defaults to 24*60 - 5 (about 1435 minutes, or ~24 hours). - For logs and events, use defaults; for metrics, use 55. - allowed_future_minutes (int, optional): allowed future range in minutes. Defaults to 10. + timestamp: timestamp to validate (in any supported precision unit) + allowed_past_minutes: allowed past range in minutes (default: ~24 hours) + allowed_future_minutes: allowed future range in minutes (default: 10) + return_unit: unit to return validated timestamp in - "ms" or "ns" (default: "ms") + skip_range_validation: if True, skip time range validation (useful for observed_timestamp) Returns: - Optional[int]: validated timestamp in milliseconds, or None if timestamp is out of range or invalid + Optional[int]: validated timestamp in requested unit, or None if invalid + + Raises: + ValueError: if return_unit is neither "ms" nor "ns" Examples: - >>> validate_timestamp_ms(1707494400000) # Valid milliseconds timestamp + >>> validate_timestamp(1707494400000, return_unit="ms") # Milliseconds 1707494400000 - >>> validate_timestamp_ms(1707494400000000) # Microseconds, auto-converted + >>> validate_timestamp(1707494400000000000, return_unit="ns") # Nanoseconds + 1707494400000000000 + >>> validate_timestamp(1707494400000000000, return_unit="ms") # ns input, ms output 1707494400000 - >>> validate_timestamp_ms(-1000000) # Negative sentinel value + >>> validate_timestamp(-1000000, return_unit="ms") # 
Negative sentinel None - >>> validate_timestamp_ms(1770224954840999937441792) # Picoseconds, auto-converted - 1770224954840 + >>> validate_timestamp(old_timestamp, skip_range_validation=True, return_unit="ns") # For observed_timestamp + 1234567890000000000 """ + # Validate return_unit parameter + if return_unit not in ("ms", "ns"): + raise ValueError(f"return_unit must be 'ms' or 'ns', got '{return_unit}'") + # Pre-validation: reject negative timestamps (sentinel values like -1000000) - if timestamp_ms < 0: + if timestamp < 0: return None - # Pre-validation: reject timestamps that are clearly too large (e.g., nanoseconds instead of milliseconds) + # Auto-detect and convert to nanoseconds (preserves precision) # Year 2100 in milliseconds is approximately 4.1e12 - # Values larger than this are likely incorrectly converted from higher precision time units - # Attempt to auto-convert from femtoseconds, picoseconds, nanoseconds, or microseconds - # Thresholds based on year 2100 in each unit: - # - Milliseconds: 4.1e12 - # - Microseconds: 4.1e12 * 1e3 = 4.1e15 - # - Nanoseconds: 4.1e12 * 1e6 = 4.1e18 - # - Picoseconds: 4.1e12 * 1e9 = 4.1e21 - # - Femtoseconds: 4.1e12 * 1e12 = 4.1e24 - if timestamp_ms > 4_100_000_000_000: - - # Try femtoseconds (divide by 1e12 using integer arithmetic) - if timestamp_ms > 4_100_000_000_000_000_000_000: - converted_ts = timestamp_ms // 1_000_000_000_000 - - # Try picoseconds (divide by 1e9 using integer arithmetic) - elif timestamp_ms > 4_100_000_000_000_000_000: - converted_ts = timestamp_ms // 1_000_000_000 - - # Try nanoseconds (divide by 1e6 using integer arithmetic) - elif timestamp_ms > 4_100_000_000_000_000: - converted_ts = timestamp_ms // 1_000_000 - - # Try microseconds (divide by 1e3 using integer arithmetic) - elif timestamp_ms > 4_100_000_000_000: - converted_ts = timestamp_ms // 1_000 + # Values larger than this are likely higher precision time units + timestamp_ns = timestamp + + if timestamp > 4_100_000_000_000: + # Try 
femtoseconds (divide by 1e6 to get nanoseconds) + if timestamp > 4_100_000_000_000_000_000_000: + converted_ts = timestamp // 1_000_000 + # Try picoseconds (divide by 1e3 to get nanoseconds) + elif timestamp > 4_100_000_000_000_000_000: + converted_ts = timestamp // 1_000 + # Try nanoseconds (use as-is) + elif timestamp > 4_100_000_000_000_000: + converted_ts = timestamp + # Try microseconds (multiply by 1e3 to get nanoseconds) + elif timestamp > 4_100_000_000_000: + converted_ts = timestamp * 1_000 else: converted_ts = -1 # Invalid value - if 0 < converted_ts <= 4_100_000_000_000: - timestamp_ms = converted_ts + # Validate the converted timestamp is reasonable (within year 2100 range in nanoseconds) + if 0 < converted_ts <= 4_100_000_000_000_000_000: + timestamp_ns = converted_ts else: return None + else: + # Input is in milliseconds, convert to nanoseconds + timestamp_ns = timestamp * 1_000_000 - try: - timestamp = datetime.datetime.fromtimestamp(timestamp_ms / 1e3, tz=datetime.timezone.utc) - except (ValueError, OSError, OverflowError): - # Handle any errors from fromtimestamp (invalid values, overflow, etc.) 
- return None + # Optionally validate timestamp is within allowed time range + if not skip_range_validation: + try: + timestamp_dt = datetime.datetime.fromtimestamp(timestamp_ns / 1e9, tz=datetime.timezone.utc) + except (ValueError, OSError, OverflowError): + return None - now = get_now_timestamp() - min_past = now - datetime.timedelta(minutes=allowed_past_minutes) - max_future = now + datetime.timedelta(minutes=allowed_future_minutes) + now = get_now_timestamp() + min_past = now - datetime.timedelta(minutes=allowed_past_minutes) + max_future = now + datetime.timedelta(minutes=allowed_future_minutes) - if timestamp < min_past or timestamp > max_future: - return None + if timestamp_dt < min_past or timestamp_dt > max_future: + return None + + # Return in requested unit + if return_unit == "ns": + return timestamp_ns - return timestamp_ms + # return_unit == "ms" - convert from nanoseconds to milliseconds + return timestamp_ns // 1_000_000 def _get_timestamp_in_sec(ts: float = 0, conversion_unit: float = 1, timezone=datetime.timezone.utc) -> datetime.datetime: @@ -511,26 +536,90 @@ def _chunked_iterable(iterable, size: int) -> Generator[List, None, None]: yield chunk -def get_timestamp_in_ms(query_data: Dict, ts_key: str, conversion_unit: int = 1e6, default_ts=None): - """Returns timestamp in milliseconds by converting value retrieved from query_data under given ts_key""" +def get_timestamp(query_data: Dict, ts_key: str, default_ts=None) -> Optional[int]: + """Returns timestamp in nanoseconds from query_data. 
+ + Handles multiple input formats: + - datetime objects โ†’ converted to nanoseconds + - ISO 8601 strings โ†’ parsed and converted to nanoseconds + - Numeric values โ†’ assumed to be nanoseconds (from SQL extract(epoch_nanosecond ...)) + + Args: + query_data: Dictionary containing the timestamp data + ts_key: Key to retrieve the timestamp from query_data + default_ts: Default value to return if timestamp not found or invalid + + Returns: + Optional[int]: timestamp in nanoseconds, or None if not found + + Examples: + >>> get_timestamp({"ts": 1707494400000000000}, "ts") # Nanoseconds + 1707494400000000000 + >>> get_timestamp({"ts": datetime(2024, 2, 9)}, "ts") # datetime object + 1707436800000000000 + """ ts = query_data.get(ts_key, None) if ts is not None and not pd.isna(ts): if isinstance(ts, datetime.datetime): - # Ensure timezone awareness before converting to timestamp ts = ensure_timezone_aware(ts) - return int(ts.timestamp() * 1000) + return int(ts.timestamp() * 1_000_000_000) # Convert to nanoseconds if isinstance(ts, str): try: - # Parse ISO format datetime string (replace Z with +00:00 for fromisoformat) ts = datetime.datetime.fromisoformat(ts.replace("Z", "+00:00")) ts = ensure_timezone_aware(ts) - return int(ts.timestamp() * 1000) + return int(ts.timestamp() * 1_000_000_000) except ValueError: - pass # Fall through to numeric conversion if parsing fails - return int(int(ts) / conversion_unit) + pass + return int(ts) # Already in nanoseconds from SQL return default_ts +def process_timestamps_for_telemetry(data: Dict) -> Tuple[Optional[int], Optional[int]]: + """Processes timestamp and observed_timestamp for telemetry APIs with proper validation. + + This utility function implements the standard pattern for handling timestamps in telemetry: + 1. Gets timestamp from "timestamp" key in nanoseconds + 2. Gets observed_timestamp from "observed_timestamp" key, falls back to timestamp if not present + 3. 
Validates timestamp with range checking (for API acceptance) and returns in milliseconds + 4. Validates observed_timestamp WITHOUT range checking (preserves original) and returns in nanoseconds + + Args: + data: Dictionary containing timestamp data (typically from SQL query results) + + Returns: + Tuple[Optional[int], Optional[int]]: (timestamp_ms, observed_timestamp_ns) + - timestamp_ms: validated timestamp in milliseconds (for Dynatrace APIs) + - observed_timestamp_ns: validated observed_timestamp in nanoseconds (per OTLP standard) + Either can be None if validation fails or data not present + + Examples: + >>> process_timestamps_for_telemetry({"timestamp": 1707494400000000000}) + (1707494400000, 1707494400000000000) # timestamp in ms, observed_timestamp in ns + + >>> process_timestamps_for_telemetry( + ... {"timestamp": 1707494400000000000, "observed_timestamp": 1707494300000000000} + ... ) + (1707494400000, 1707494300000000000) # Uses explicit observed_timestamp + """ + # Get timestamp in nanoseconds from SQL + timestamp_ns = get_timestamp(data, "timestamp") + + # Get observed_timestamp if provided, otherwise fallback to timestamp value + observed_timestamp_ns = get_timestamp(data, "observed_timestamp", default_ts=timestamp_ns) + + # Validate main timestamp with range checking (for API acceptance) - return in milliseconds + validated_timestamp_ms = None + if timestamp_ns: + validated_timestamp_ms = validate_timestamp(timestamp_ns, return_unit="ms") + + # Validate observed_timestamp WITHOUT range checking (preserve original) - return in nanoseconds + validated_observed_timestamp_ns = None + if observed_timestamp_ns: + validated_observed_timestamp_ns = validate_timestamp(observed_timestamp_ns, return_unit="ns", skip_range_validation=True) + + return validated_timestamp_ms, validated_observed_timestamp_ns + + def ensure_timezone_aware(dt: datetime.datetime) -> datetime.datetime: """Ensures a datetime object is timezone-aware by adding UTC timezone for naive 
datetimes. diff --git a/src/dtagent/version.py b/src/dtagent/version.py index aa38e954..a52ecad0 100644 --- a/src/dtagent/version.py +++ b/src/dtagent/version.py @@ -29,7 +29,7 @@ ##region --------------------------- VERSION INFO ------------------------------------ -VERSION = "0.9.3" +VERSION = "0.9.4" BUILD = 0 ##endregion diff --git a/test/.pylintrc b/test/.pylintrc index 57f0132e..ea14097c 100644 --- a/test/.pylintrc +++ b/test/.pylintrc @@ -1,5 +1,5 @@ [MESSAGES CONTROL] -disable=C0415,C0301,W0621,W0611,E0401,C0411,R0914,R0902,R0903,R1737,R0912,W0107,C0103,W1203,R0915,W0511,W0212,E0611,W0613,R1702,R1718,W0150,C0114,C0115,C0116 +disable=C0415,C0301,W0621,W0611,E0401,C0411,R0914,R0902,R0903,R1737,R0912,W0107,C0103,W1203,R0915,W0511,W0212,E0611,W0613,R1702,R1718,W0150,C0114,C0115,C0116,R0911 [FORMAT] max-line-length=140 diff --git a/test/_utils.py b/test/_utils.py index f62f0e8b..fee47b94 100644 --- a/test/_utils.py +++ b/test/_utils.py @@ -44,28 +44,76 @@ TEST_CONFIG_FILE_NAME = "./test/conf/config-download.yml" -def _pickle_all(session: snowpark.Session, pickles: dict, force: bool = False): - """Pickle all tables provided in the pickles dictionary if necessary or forced. +def _fixture_json_default(obj): + """JSON encoder fallback for numpy/pandas types from Snowflake DataFrames. Args: - session (snowpark.Session): The Snowflake session used to access tables. - pickles (dict): A dictionary mapping table names to pickle file names. - force (bool, optional): If True, force pickling even if not necessary. Defaults to False. + obj: Object that is not natively JSON-serializable. Returns: - None - """ - if force or should_pickle(pickles.values()): - for table_name, pickle_name in pickles.items(): - _pickle_data_history(session, table_name, pickle_name) + A JSON-serializable Python primitive. + Raises: + TypeError: When the type cannot be coerced. 
+ """ + import math -def _pickle_data_history( - session: snowpark.Session, t_data: str, pickle_name: str, operation: Optional[Callable] = None -) -> Generator[Dict, None, None]: - if is_select_for_table(t_data): + try: + import numpy as np import pandas as pd + except ImportError: + return str(obj) + + if obj is pd.NaT: + return None + if isinstance(obj, np.integer): + return int(obj) + if isinstance(obj, np.floating): + v = float(obj) + return None if (math.isnan(v) or math.isinf(v)) else v + if isinstance(obj, np.bool_): + return bool(obj) + if isinstance(obj, pd.Timestamp): + return obj.isoformat() + if isinstance(obj, (datetime.datetime, datetime.date)): + return obj.isoformat() + if isinstance(obj, (bytes, bytearray)): + import base64 + + return base64.b64encode(obj).decode("ascii") + return str(obj) + + +def _dump_fixture_row(row_dict: dict) -> str: + """Serialise a single row dict to a JSON string, replacing NaN/Inf with null. + + Args: + row_dict: Row dictionary to serialise. + + Returns: + JSON string representation of the row. + """ + import math + + cleaned = {k: (None if isinstance(v, float) and (math.isnan(v) or math.isinf(v)) else v) for k, v in row_dict.items()} + return json.dumps(cleaned, default=_fixture_json_default) + + +def _generate_fixture(session: snowpark.Session, t_data: str, fixture_path: str, operation: Optional[Callable] = None) -> None: + """Generate an NDJSON fixture file from a live Snowflake table or SQL query. + The output path must follow the ``{plugin_name}[_{view_suffix}].ndjson`` convention. + + Args: + session (snowpark.Session): Active Snowflake Snowpark session. + t_data (str): Table name or SELECT statement. + fixture_path (str): Destination ``.ndjson`` file path. + operation (Optional[Callable]): Optional DataFrame transform applied + before serialisation (e.g. sorting). 
+ """ + import pandas as pd # noqa: PLC0415 + + if is_select_for_table(t_data): df_data = session.sql(t_data).collect() pd_data = pd.DataFrame(df_data) else: @@ -74,8 +122,23 @@ def _pickle_data_history( df_data = operation(df_data) pd_data = df_data.to_pandas() - pd_data.to_pickle(pickle_name) - print("Pickled " + str(pickle_name)) + rows = [_dump_fixture_row(row.to_dict()) for _, row in pd_data.iterrows()] + with open(fixture_path, "w", encoding="utf-8") as fh: + fh.write("\n".join(rows) + ("\n" if rows else "")) + print(f"Generated fixture {fixture_path} ({len(rows)} rows)") + + +def _generate_all_fixtures(session: snowpark.Session, fixtures: dict, force: bool = False) -> None: + """Generate NDJSON fixture files for all tables in the fixtures dictionary. + + Args: + session (snowpark.Session): Active Snowflake Snowpark session. + fixtures (dict): Mapping of table names to fixture file paths. + force (bool, optional): Re-generate even when files already exist. Defaults to False. + """ + if force or should_generate_fixtures(fixtures.values()): + for table_name, fixture_path in fixtures.items(): + _generate_fixture(session, table_name, fixture_path) def _logging_findings( @@ -104,89 +167,98 @@ def _logging_findings( return results -def _safe_get_unpickled_entries(pickles: dict, table_name: str, *args, **kwargs) -> Generator[Dict, None, None]: - """Safely get unpickled entries for the given table name from the pickles dictionary. - - Args: - pickles (dict): Dictionary mapping table names to pickle file paths. - table_name (str): The name of the table to retrieve unpickled entries for. - - Returns: - Generator[Dict, None, None]: A generator yielding dictionaries representing unpickled entries for the specified table. - - Raises: - ValueError: If the table name is not found in the pickles dictionary. 
- """ - if table_name not in pickles: - raise ValueError(f"Unknown table name: {table_name}") - return _get_unpickled_entries(pickles[table_name], *args, **kwargs) - - -def _get_unpickled_entries( - pickle_name: str, +def _get_fixture_entries( + fixture_path: str, limit: int = None, adjust_ts: bool = True, start_time: str = "START_TIME", end_time: str = "END_TIME", ) -> Generator[Dict, None, None]: - import pandas as pd + """Read fixture rows from an NDJSON file, optionally applying timestamp adjustment. - ndjson_name = os.path.splitext(pickle_name)[0] + ".ndjson" - # if os.path.exists(ndjson_name): - # # Read from safer NDJSON format - # pandas_df = pd.read_json(ndjson_name, lines=True) - # print(f"Read from NDJSON {ndjson_name}") - # else: - # Fallback to pickle and generate NDJSON - pandas_df = pd.read_pickle(pickle_name) - print(f"Unpickled {pickle_name}") + Rows are repeated or truncated to satisfy *limit*. Timestamps are adjusted + via ``_adjust_timestamp`` so they fall within OTel ingestion bounds. - collected_rows = [] + No pandas dependency โ€” uses stdlib ``json`` only. - if limit is not None: - if 0 < len(pandas_df) < limit: - n_repeats = limit // len(pandas_df) - is_remainder = limit % len(pandas_df) > 0 + Args: + fixture_path (str): Path to the ``.ndjson`` fixture file. + limit (int, optional): Maximum number of rows to yield; rows are + repeated when the fixture has fewer rows than *limit*. + adjust_ts (bool, optional): Whether to adjust timestamps. Defaults to True. + start_time (str, optional): Name of the start-time column. Defaults to ``START_TIME``. + end_time (str, optional): Name of the end-time column. Defaults to ``END_TIME``. + + Yields: + Dict: Row dictionaries from the fixture file. 
+ """ + from dtagent.util import _adjust_timestamp - dfs_to_concat = [pandas_df] * (n_repeats + (1 if is_remainder else 0)) + with open(fixture_path, "r", encoding="utf-8") as fh: + raw_rows = [json.loads(line) for line in fh if line.strip()] - # Concatenate them and reset the index - pandas_df = pd.concat(dfs_to_concat, ignore_index=True) + if not raw_rows: + return - pandas_df = pandas_df.head(limit) + if limit is not None and 0 < len(raw_rows) < limit: + n_full = limit // len(raw_rows) + remainder = limit % len(raw_rows) + raw_rows = raw_rows * n_full + raw_rows[:remainder] - for _, row in pandas_df.iterrows(): - from dtagent.util import _adjust_timestamp + if limit is not None: + raw_rows = raw_rows[:limit] - row_dict = row.to_dict() + print(f"Loaded fixture {fixture_path} ({len(raw_rows)} rows)") + + for row_dict in raw_rows: if adjust_ts: _adjust_timestamp(row_dict, start_time=start_time, end_time=end_time) - - collected_rows.append(row_dict) yield row_dict - if not os.path.exists(ndjson_name): - with open(ndjson_name, "w", encoding="utf-8") as f: - for row in collected_rows: - f.write(json.dumps(row) + "\n") +def _safe_get_fixture_entries(fixtures: dict, table_name: str, *args, **kwargs) -> Generator[Dict, None, None]: + """Safely read fixture entries for *table_name* from the fixtures dictionary. -def should_pickle(pickle_files: list) -> bool: + Args: + fixtures (dict): Mapping of table names to ``.ndjson`` fixture file paths. + table_name (str): Table name key to look up. - return (len(sys.argv) > 1 and sys.argv[1] == "-p") or any(not os.path.exists(file_name) for file_name in pickle_files) + Returns: + Generator[Dict, None, None]: Fixture rows for the requested table. + + Raises: + ValueError: If *table_name* is not present in *fixtures*. 
+ """ + if table_name not in fixtures: + raise ValueError(f"Unknown table name: {table_name}") + return _get_fixture_entries(fixtures[table_name], *args, **kwargs) + + +def should_generate_fixtures(fixture_files) -> bool: + """Return True when fixture files need to be (re-)generated from Snowflake. + + Generation is requested when the ``-p`` CLI flag is present or when any of + the listed fixture files do not exist yet. + + Args: + fixture_files: Iterable of fixture file paths to check. + + Returns: + True if fixture regeneration is needed. + """ + return (len(sys.argv) > 1 and sys.argv[1] == "-p") or any(not os.path.exists(f) for f in fixture_files) -def _merge_pickles_from_tests() -> Dict[str, str]: - """Merges all PICKLES dictionaries from test_*.py files in the plugins directory into a single dictionary. +def _merge_fixtures_from_tests() -> Dict[str, str]: + """Merge all FIXTURES dictionaries from test_*.py files in the plugins directory. Returns: - Dict: A dictionary containing all merged PICKLES dictionaries, - mapping all table names to their corresponding pickle file paths. + Dict mapping all table names to their corresponding ``.ndjson`` fixture paths. 
""" import importlib import inspect - pickles = {} + fixtures: Dict[str, str] = {} plugins_dir = os.path.join(os.path.dirname(__file__), "plugins") for filename in os.listdir(plugins_dir): if filename.startswith("test_") and filename.endswith(".py"): @@ -194,15 +266,15 @@ def _merge_pickles_from_tests() -> Dict[str, str]: try: module = importlib.import_module(module_name) for _, member in inspect.getmembers(module): - if inspect.isclass(member) and hasattr(member, "PICKLES"): - pickles.update(member.PICKLES) - except ImportError as e: - print(f"Could not import {module_name}: {e}") - return pickles + if inspect.isclass(member) and hasattr(member, "FIXTURES"): + fixtures.update(member.FIXTURES) + except ImportError as exc: + print(f"Could not import {module_name}: {exc}") + return fixtures class LocalTelemetrySender(TelemetrySender): - PICKLES = _merge_pickles_from_tests() + FIXTURES = _merge_fixtures_from_tests() def __init__(self, session: snowpark.Session, params: dict, exec_id: str, limit_results: int = 2, config: TestConfiguration = None): @@ -220,8 +292,8 @@ def _get_config(self, session: snowpark.Session) -> Configuration: return self._local_config if self._local_config else TelemetrySender._get_config(self, session) def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - if t_data in self.PICKLES: - return _get_unpickled_entries(self.PICKLES[t_data], limit=self.limit_results) + if t_data in self.FIXTURES: + return _get_fixture_entries(self.FIXTURES[t_data], limit=self.limit_results) return TelemetrySender._get_table_rows(self, t_data) @@ -232,7 +304,7 @@ def _flush_logs(self) -> None: def telemetry_test_sender( session: snowpark.Session, sources: str, params: dict, limit_results: int = 2, config: TestConfiguration = None, test_source: str = None ) -> Tuple[int, int, int, int, int]: - """Invokes send_data function on a LocalTelemetrySender instance, which uses pickled data for testing purposes + """Invoke send_data on a LocalTelemetrySender 
instance using NDJSON fixture data for testing. Args: session (snowpark.Session): The Snowflake session used to access tables. @@ -266,6 +338,7 @@ def execute_telemetry_test( base_count: Dict[str, Dict[str, int]], test_name: str, affecting_types_for_entries: List[str] = None, + config: TestConfiguration = None, ): """Generalized test function for telemetry plugins. @@ -283,7 +356,7 @@ def execute_telemetry_test( affecting_types_for_entries = affecting_types_for_entries or ["logs", "metrics", "spans"] - config = get_config() + config = config if config is not None else get_config() session = _get_session() for telemetry_type in ("spans", "logs", "metrics", "events"): @@ -319,9 +392,9 @@ def execute_telemetry_test( assert results[test_name][RUN_RESULTS_KEY][plugin_key].get("events", 0) == events_expected -def get_config(pickle_conf: str = None) -> TestConfiguration: +def get_config(save_conf: str = None) -> TestConfiguration: conf = {} - if pickle_conf == "y": # recreate the config file + if save_conf == "y": # recreate the config file from test import _get_session session = _get_session() diff --git a/test/bash/test_build_scripts.bats b/test/bash/test_build_scripts.bats index 5675a1d9..6cf00820 100644 --- a/test/bash/test_build_scripts.bats +++ b/test/bash/test_build_scripts.bats @@ -6,6 +6,11 @@ setup() { setup_file() { cd "$BATS_TEST_DIRNAME/../.." + if [ -z "${BATS_SLOW_TESTS:-}" ]; then + export BUILD_DOCS_STATUS=0 + export BUILD_OUTPUT="" + return + fi # Run build_docs.sh once for all tests BUILD_OUTPUT=$(timeout 240 ./scripts/dev/build_docs.sh 2>&1) export BUILD_DOCS_STATUS=$? 
@@ -13,6 +18,9 @@ setup_file() { } @test "build.sh runs without immediate errors" { + if [ -z "${BATS_SLOW_TESTS:-}" ]; then + skip "slow test โ€” set BATS_SLOW_TESTS=1 to run" + fi # This test assumes dependencies like pylint are installed # In a real environment, this would pass if build tools are available run timeout 120 ./scripts/dev/build.sh @@ -197,6 +205,9 @@ setup_file() { } @test "package.sh creates a valid package zip with build files and documentation" { + if [ -z "${BATS_SLOW_TESTS:-}" ]; then + skip "slow test โ€” set BATS_SLOW_TESTS=1 to run" + fi run timeout 300 ./scripts/dev/package.sh if [ "$status" -ne 0 ]; then echo "package.sh failed with status $status" @@ -357,6 +368,9 @@ setup_file() { } @test "markdownlint passes for all documentation" { + if [ -z "${BATS_SLOW_TESTS:-}" ]; then + skip "slow test โ€” set BATS_SLOW_TESTS=1 to run" + fi if ! command -v markdownlint &> /dev/null; then skip "markdownlint not installed" fi diff --git a/test/bash/test_compile.bats b/test/bash/test_compile.bats index 253e5e58..82978755 100644 --- a/test/bash/test_compile.bats +++ b/test/bash/test_compile.bats @@ -1,23 +1,34 @@ #!/usr/bin/env bats -setup() { +setup_file() { cd "$BATS_TEST_DIRNAME/../.." + # Ensure build directory exists (not present in a fresh CI checkout) + mkdir -p build/30_plugins build/09_upgrade # Backup original files if exist [ -f build/_version.py ] && cp build/_version.py build/_version.py.bak [ -f build/_dtagent.py ] && cp build/_dtagent.py build/_dtagent.py.bak [ -f build/_send_telemetry.py ] && cp build/_send_telemetry.py build/_send_telemetry.py.bak + # Run compile.sh once for all tests; communicate result via BATS_FILE_TMPDIR + # (export from setup_file does not propagate into individual test subshells) + ./scripts/dev/compile.sh + echo $? > "${BATS_FILE_TMPDIR}/compile_status" } -teardown() { +teardown_file() { + cd "$BATS_TEST_DIRNAME/../.." 
# Restore or clean up [ -f build/_version.py.bak ] && mv build/_version.py.bak build/_version.py || rm -f build/_version.py [ -f build/_dtagent.py.bak ] && mv build/_dtagent.py.bak build/_dtagent.py || rm -f build/_dtagent.py [ -f build/_send_telemetry.py.bak ] && mv build/_send_telemetry.py.bak build/_send_telemetry.py || rm -f build/_send_telemetry.py } +setup() { + cd "$BATS_TEST_DIRNAME/../.." + COMPILE_STATUS=$(cat "${BATS_FILE_TMPDIR}/compile_status" 2>/dev/null || echo 1) +} + @test "compile.sh creates compiled files" { - run ./scripts/dev/compile.sh - [ "$status" -eq 0 ] + [ "$COMPILE_STATUS" -eq 0 ] [ -f build/_version.py ] [ -f build/_semantics.py ] [ -f build/_metric_semantics.txt ] @@ -28,8 +39,7 @@ teardown() { } @test "compile.sh removes docstrings from compiled files" { - run ./scripts/dev/compile.sh - [ "$status" -eq 0 ] + [ "$COMPILE_STATUS" -eq 0 ] # Check that compiled files exist [ -f build/_dtagent.py ] diff --git a/test/bash/test_custom_object_names.bats b/test/bash/test_custom_object_names.bats index c0bc9ae0..c5797528 100755 --- a/test/bash/test_custom_object_names.bats +++ b/test/bash/test_custom_object_names.bats @@ -5,25 +5,63 @@ setup() { TEST_SQL_FILE=$(mktemp) TEST_CONFIG_FILE=$(mktemp) - # Ensure build directory has necessary files + # Create self-contained build fixtures with real DTAGENT_* placeholder names mkdir -p build/30_plugins build/09_upgrade - # Copy files from package/build if they don't exist - for file in 00_init.sql 10_admin.sql 20_setup.sql 40_config.sql 70_agents.sql; do - if [ ! 
-f "build/$file" ] && [ -f "package/build/$file" ]; then - cp "package/build/$file" "build/$file" - fi - done - - # Copy plugins if needed - if [ -d "package/build/30_plugins" ]; then - cp -r package/build/30_plugins/* build/30_plugins/ 2>/dev/null || true - fi + cat > build/00_init.sql << 'EOSQL' +-- Init +use role DTAGENT_OWNER; +create database if not exists DTAGENT_DB; +--%OPTION:resource_monitor: +create resource monitor if not exists DTAGENT_RS; +--%:OPTION:resource_monitor +EOSQL + + cat > build/10_admin.sql << 'EOSQL' +-- Admin +use role DTAGENT_OWNER; +--%OPTION:dtagent_admin: +create role if not exists DTAGENT_ADMIN; +grant role DTAGENT_ADMIN to role DTAGENT_OWNER; +grant manage grants on account to role DTAGENT_ADMIN; +--%:OPTION:dtagent_admin +EOSQL + + cat > build/20_setup.sql << 'EOSQL' +-- Setup +use role DTAGENT_OWNER; +create warehouse if not exists DTAGENT_WH; +create role if not exists DTAGENT_VIEWER; +use database DTAGENT_DB; +--%OPTION:resource_monitor: +create or replace procedure CONFIG.P_UPDATE_RESOURCE_MONITOR(credit_quota int) +returns string as begin + alter resource monitor DTAGENT_RS set credit_quota = :credit_quota; + return 'ok'; +end; +--%:OPTION:resource_monitor +EOSQL + + cat > build/40_config.sql << 'EOSQL' +-- Config +use role DTAGENT_OWNER; +use database DTAGENT_DB; +SELECT 'config'; +EOSQL + + cat > build/70_agents.sql << 'EOSQL' +-- Agents +use role DTAGENT_OWNER; +use database DTAGENT_DB; +use warehouse DTAGENT_WH; +SELECT 'agents'; +EOSQL } teardown() { rm -f "$TEST_SQL_FILE" "$TEST_CONFIG_FILE" - # Don't remove build files as they might be needed by other tests + rm -f build/00_init.sql build/10_admin.sql build/20_setup.sql build/40_config.sql build/70_agents.sql + rm -rf build/30_plugins build/09_upgrade unset BUILD_CONFIG_FILE DTAGENT_TOKEN } @@ -528,13 +566,13 @@ EOF cp "$TEST_SQL_FILE" "$temp_file" done rm -f "$temp_file" - + # Verify the block was removed ! 
grep -q "CREATE RESOURCE MONITOR" "$TEST_SQL_FILE" - + # Verify code before and after remains grep -q "SELECT 1" "$TEST_SQL_FILE" grep -q "SELECT 2" "$TEST_SQL_FILE" - + rm -f "$TEST_INPUT_FILE" } diff --git a/test/bash/test_list_plugins_to_exclude.bats b/test/bash/test_list_plugins_to_exclude.bats index c9845b2c..8b28149c 100644 --- a/test/bash/test_list_plugins_to_exclude.bats +++ b/test/bash/test_list_plugins_to_exclude.bats @@ -37,7 +37,7 @@ teardown() { ] EOF - run ./package/list_plugins_to_exclude.sh + run ./scripts/deploy/list_plugins_to_exclude.sh [ "$status" -eq 0 ] [[ "$output" =~ "test_plugin" ]] ! [[ "$output" =~ "active_plugin" ]] @@ -69,7 +69,7 @@ EOF ] EOF - run ./package/list_plugins_to_exclude.sh + run ./scripts/deploy/list_plugins_to_exclude.sh [ "$status" -eq 0 ] # not_enabled_plugin should be excluded (not explicitly enabled when disabled_by_default=true) @@ -94,7 +94,7 @@ EOF ] EOF - run ./package/list_plugins_to_exclude.sh + run ./scripts/deploy/list_plugins_to_exclude.sh [ "$status" -eq 0 ] # Should not exclude any plugins when deploy_disabled_plugins=true [ -z "$output" ] @@ -131,7 +131,7 @@ EOF ] EOF - run ./package/list_plugins_to_exclude.sh + run ./scripts/deploy/list_plugins_to_exclude.sh [ "$status" -eq 0 ] [[ "$output" =~ "plugin_one" ]] [[ "$output" =~ "plugin_two" ]] @@ -174,7 +174,7 @@ EOF ] EOF - run ./package/list_plugins_to_exclude.sh + run ./scripts/deploy/list_plugins_to_exclude.sh [ "$status" -eq 0 ] # explicitly_disabled should be excluded (explicitly disabled) [[ "$output" =~ (^|[[:space:]])explicitly_disabled([[:space:]]|$) ]] diff --git a/test/core/conftest.py b/test/core/conftest.py index 3835ca6c..a9fa2416 100644 --- a/test/core/conftest.py +++ b/test/core/conftest.py @@ -26,12 +26,18 @@ def pytest_addoption(parser): parser.addoption( - "--pickle_conf", + "--save_conf", action="store", - help="Indicator if we want to download new config from Snowflake.", + help="Download and save config from Snowflake to local file (pass 
'y' to enable).", + ) + parser.addoption( + "--run-slow", + action="store_true", + default=False, + help="Run slow build/package integration tests (skipped by default).", ) @fixture(scope="session") -def pickle_conf(request): - return request.config.getoption("--pickle_conf") +def save_conf(request): + return request.config.getoption("--save_conf") diff --git a/test/core/readme.md b/test/core/readme.md index bf4f89e0..1d861fc1 100644 --- a/test/core/readme.md +++ b/test/core/readme.md @@ -30,7 +30,7 @@ pytest test/core/ -v - Configuration file loading and parsing - Environment variable handling - Configuration validation -- Pickle configuration for live testing +- Saved configuration for live testing ### Utility Tests (`test_util.py`) diff --git a/test/core/test_admin_role_usage.py b/test/core/test_admin_role_usage.py index 9efb9d21..a43897ac 100644 --- a/test/core/test_admin_role_usage.py +++ b/test/core/test_admin_role_usage.py @@ -178,7 +178,19 @@ def test_accountadmin_only_in_init_upgrade_scopes(self): continue with open(file_path, "r", encoding="utf-8") as f: + in_option_block = False for line_num, line in enumerate(f, 1): + # Track deploy-time OPTION blocks (--%OPTION:name: ...
--%:OPTION:name) + # These are stripped by prepare_deploy_script.sh and must not be flagged here + if line.startswith("--%OPTION:"): + in_option_block = True + continue + if line.startswith("--%:OPTION:"): + in_option_block = False + continue + if in_option_block: + continue + # Skip comments if line.strip().startswith("--"): continue diff --git a/test/core/test_bash_scripts.py b/test/core/test_bash_scripts.py index a1c1407c..2d756de3 100644 --- a/test/core/test_bash_scripts.py +++ b/test/core/test_bash_scripts.py @@ -1,3 +1,5 @@ +import os +import shutil import subprocess import pytest from pathlib import Path @@ -8,14 +10,44 @@ BATS_DIR = Path(__file__).parent.parent / "bash" BATS_FILES = sorted(BATS_DIR.glob("*.bats")) +# Bats files that contain slow build/package integration tests +SLOW_BATS_FILES = {"test_build_scripts"} + +# Resolve bats executable path once at import time (handles Homebrew on macOS where +# /opt/homebrew/bin may not be in the subprocess PATH inherited by pytest) +BATS_EXECUTABLE = shutil.which("bats") + + +def _is_slow(bats_file: Path) -> bool: + return bats_file.stem in SLOW_BATS_FILES + @pytest.mark.parametrize("bats_file", BATS_FILES, ids=[f.stem for f in BATS_FILES]) -def test_bash_script(bats_file): +def test_bash_script(request, bats_file): """Run individual bash script test file using Bats framework. Uses pytest-tap to parse TAP output and report individual test cases. + Slow build/package integration tests (test_build_scripts) are skipped by + default; pass --run-slow to include them. 
""" - result = subprocess.run(["bats", str(bats_file)], capture_output=True, text=True, check=False) + if not BATS_EXECUTABLE: + pytest.skip("bats not found in PATH โ€” install via 'brew install bats-core' or 'npm install -g bats'") + + run_slow = request.config.getoption("--run-slow", default=False) + if _is_slow(bats_file) and not run_slow: + pytest.skip("slow build/package integration test โ€” pass --run-slow to enable") + + env = os.environ.copy() + if run_slow and _is_slow(bats_file): + env["BATS_SLOW_TESTS"] = "1" + + result = subprocess.run( + [BATS_EXECUTABLE, str(bats_file)], + capture_output=True, + text=True, + check=False, + env=env, + ) # Parse TAP output using pytest-tap parser = Parser() @@ -32,6 +64,8 @@ def test_bash_script(bats_file): failure_msg += f"\n โœ— {test.description}\n" if test.directive: failure_msg += f" Directive: {test.directive.text}\n" + if result.stdout: + failure_msg += f"\nSTDOUT:\n{result.stdout}" if result.stderr: failure_msg += f"\nSTDERR:\n{result.stderr}" pytest.fail(failure_msg) diff --git a/test/core/test_config.py b/test/core/test_config.py index 737dfe71..8a99968f 100644 --- a/test/core/test_config.py +++ b/test/core/test_config.py @@ -141,10 +141,10 @@ def test_plugin_conf(self): for key in d_config_extra_keys.get(plugin_key, []): assert key in d_conf["plugins"][plugin_key], f"{key} is missing from {plugin_key} config" - def test_init(self, pickle_conf: str): + def test_init(self, save_conf: str): from test._utils import get_config - c = get_config(pickle_conf) + c = get_config(save_conf) assert c.get("logs.http") is not None assert c.get("spans.http") is not None diff --git a/test/core/test_fixture_sanitization.py b/test/core/test_fixture_sanitization.py new file mode 100644 index 00000000..98f2f78c --- /dev/null +++ b/test/core/test_fixture_sanitization.py @@ -0,0 +1,133 @@ +# +# +# Copyright (c) 2025 Dynatrace Open Source +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this 
software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in all +# copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +# SOFTWARE. +# +# +"""Guard tests: validate NDJSON fixture files. + +Covers: +- Every line in every ``.ndjson`` fixture is valid JSON (no ``NaN``, no bare ``Infinity``). +- Fixture file names follow the ``{plugin_name}[_{view_suffix}].ndjson`` convention. +- No ``.pkl`` binary fixture files are committed to the repository. +""" + +import json +import os + +import pytest + +FIXTURE_DIR = "test/test_data" +PLUGINS_DIR = "src/dtagent/plugins" + + +def _discover_fixture_prefixes() -> set: + """Auto-discover valid fixture name prefixes from the plugins directory. + + Each ``.py`` file in the plugins directory (excluding ``__init__.py`` and + private modules) defines a plugin whose name is a valid fixture prefix. + Plugins whose names end in ``s`` also contribute the singular form as a + valid prefix (e.g. ``dynamic_tables`` โ†’ ``dynamic_table``) because some + views within a plugin use the singular noun. + + Returns: + Set of valid fixture-name prefix strings. 
+ """ + prefixes = set() + for fname in os.listdir(PLUGINS_DIR): + if not fname.endswith(".py") or fname.startswith("_"): + continue + plugin_name = fname[:-3] # strip .py + prefixes.add(plugin_name) + # Add singular form so fixtures like dynamic_table_refresh_history.ndjson + # are accepted even though the plugin is called dynamic_tables. + if plugin_name.endswith("s"): + prefixes.add(plugin_name[:-1]) + return prefixes + + +# Valid plugin name prefixes โ€” auto-discovered at import time so the list +# never needs to be manually maintained when plugins are added or removed. +VALID_FIXTURE_PREFIXES = _discover_fixture_prefixes() + +# Internal files that are not fixture data โ€” excluded from naming checks. +_NON_FIXTURE_FILES = {"_deny_patterns.json", "telemetry_structured.json", "telemetry_unstructured.json"} + + +##region Fixture enumeration + + +def _get_ndjson_fixtures(): + """Return list of ``.ndjson`` fixture file paths under FIXTURE_DIR.""" + return sorted(os.path.join(FIXTURE_DIR, f) for f in os.listdir(FIXTURE_DIR) if f.endswith(".ndjson") and f not in _NON_FIXTURE_FILES) + + +##endregion + +##region Tests + + +class TestFixtureValidation: + + @pytest.fixture(scope="class") + def ndjson_fixtures(self): + return _get_ndjson_fixtures() + + # ------------------------------------------------------------------ + # JSON validity + + def test_ndjson_fixtures_are_valid_json(self, ndjson_fixtures): + """Every line in every NDJSON fixture file must be valid JSON (no NaN, no Infinity).""" + errors = [] + for fpath in ndjson_fixtures: + with open(fpath, "r", encoding="utf-8") as fh: + for lineno, line in enumerate(fh, 1): + stripped = line.strip() + if not stripped: + continue + try: + json.loads(stripped) + except json.JSONDecodeError as exc: + errors.append(f"{fpath}:{lineno}: {exc}") + assert not errors, "Invalid JSON lines in NDJSON fixtures:\n" + "\n".join(errors) + + # ------------------------------------------------------------------ + # Naming convention + + 
def test_ndjson_fixture_naming_convention(self, ndjson_fixtures): + """Fixture files must follow the ``{plugin_name}[_{view_suffix}].ndjson`` convention.""" + violations = [] + for fpath in ndjson_fixtures: + basename = os.path.basename(fpath) + name_without_ext = os.path.splitext(basename)[0] + if not any(name_without_ext == prefix or name_without_ext.startswith(prefix + "_") for prefix in VALID_FIXTURE_PREFIXES): + violations.append(f"{basename}: does not start with a known plugin prefix {sorted(VALID_FIXTURE_PREFIXES)}") + assert not violations, "Fixture files with non-standard names:\n" + "\n".join(violations) + + # ------------------------------------------------------------------ + # No lingering pkl files + + def test_no_pkl_files_in_fixture_dir(self): + """Binary .pkl fixture files must not exist in the fixture directory.""" + pkl_files = [f for f in os.listdir(FIXTURE_DIR) if f.endswith(".pkl")] + assert not pkl_files, f".pkl files found in {FIXTURE_DIR} (must be removed): {pkl_files}" + + +##endregion diff --git a/test/core/test_util_timestamp.py b/test/core/test_util_timestamp.py index f80a83f9..abb47be7 100644 --- a/test/core/test_util_timestamp.py +++ b/test/core/test_util_timestamp.py @@ -1,94 +1,102 @@ import datetime import time -from dtagent.util import get_timestamp_in_ms, validate_timestamp_ms +import pytest +from dtagent.util import get_timestamp, validate_timestamp, process_timestamps_for_telemetry -class TestUtilTimestamp: - def test_get_timestamp_in_ms_datetime(self): +class TestGetTimestamp: + def test_get_timestamp_datetime(self): dt = datetime.datetime(2025, 11, 20, 12, 0, 0, tzinfo=datetime.timezone.utc) # timestamp for this date is 1763640000.0 - # in ms it should be 1763640000000 (int) + # in ns it should be 1763640000000000000 (int) query_data = {"ts": dt} - ts = get_timestamp_in_ms(query_data, "ts") + ts = get_timestamp(query_data, "ts") assert isinstance(ts, int) - assert ts == 1763640000000 + assert ts == 1763640000000000000 - def 
test_get_timestamp_in_ms_int(self): + def test_get_timestamp_int(self): ts_ns = 1763640000000000000 query_data = {"ts": ts_ns} - ts = get_timestamp_in_ms(query_data, "ts", conversion_unit=1e6) + ts = get_timestamp(query_data, "ts") assert isinstance(ts, int) - assert ts == 1763640000000 + assert ts == 1763640000000000000 -class TestValidateTimestampMs: - """Tests for validate_timestamp_ms function""" +class TestValidateTimestamp: + """Tests for validate_timestamp function""" def test_validate_current_timestamp(self): """Test that current timestamp is valid""" current_ms = int(time.time() * 1000) - result = validate_timestamp_ms(current_ms) + result = validate_timestamp(current_ms, return_unit="ms") assert result == current_ms + def test_validate_current_timestamp_ns(self): + """Test that current timestamp is valid and can return nanoseconds""" + current_ns = int(time.time() * 1_000_000_000) + result = validate_timestamp(current_ns, return_unit="ns") + # Allow for small precision differences + assert abs(result - current_ns) < 1_000_000 # Within 1ms + def test_validate_negative_timestamp_rejected(self): """Test that negative timestamps (like -1000000) are rejected""" - result = validate_timestamp_ms(-1000000) + result = validate_timestamp(-1000000, return_unit="ms") assert result is None def test_validate_zero_timestamp_rejected(self): """Test that zero timestamp is rejected""" - result = validate_timestamp_ms(0) + result = validate_timestamp(0, return_unit="ms") assert result is None def test_validate_very_large_timestamp_rejected(self): """Test that nanosecond-scale timestamps (10x too large) are rejected""" # This is approximately year 2026 in nanoseconds (milliseconds * 1e6) invalid_ns = 1770224954840999937441792 - result = validate_timestamp_ms(invalid_ns) + result = validate_timestamp(invalid_ns, return_unit="ms") assert result is None def test_validate_future_timestamp_in_allowed_range(self): """Test that future timestamps within allowed range are accepted""" 
# 5 minutes in the future (within default 10 minute limit) future_ms = int(time.time() * 1000) + (5 * 60 * 1000) - result = validate_timestamp_ms(future_ms) + result = validate_timestamp(future_ms, return_unit="ms") assert result == future_ms def test_validate_past_timestamp_in_allowed_range(self): """Test that past timestamps within allowed range are accepted""" # 1 hour in the past (within default 24 hour limit) past_ms = int(time.time() * 1000) - (60 * 60 * 1000) - result = validate_timestamp_ms(past_ms) + result = validate_timestamp(past_ms, return_unit="ms") assert result == past_ms def test_validate_too_far_in_future_rejected(self): """Test that timestamps too far in the future are rejected""" # 1 hour in the future (beyond default 10 minute limit) future_ms = int(time.time() * 1000) + (60 * 60 * 1000) - result = validate_timestamp_ms(future_ms) + result = validate_timestamp(future_ms, return_unit="ms") assert result is None def test_validate_too_far_in_past_rejected(self): """Test that timestamps too far in the past are rejected""" # 2 days in the past (beyond default 24 hour limit) past_ms = int(time.time() * 1000) - (2 * 24 * 60 * 60 * 1000) - result = validate_timestamp_ms(past_ms) + result = validate_timestamp(past_ms, return_unit="ms") assert result is None def test_validate_custom_allowed_ranges(self): - """Test validate_timestamp_ms with custom allowed ranges""" + """Test validate_timestamp with custom allowed ranges""" # 90 minutes in the past past_ms = int(time.time() * 1000) - (90 * 60 * 1000) # Should be accepted with default range (24 hours) - result_default = validate_timestamp_ms(past_ms) + result_default = validate_timestamp(past_ms, return_unit="ms") assert result_default == past_ms # Within 24 hours # Should be rejected with metrics range (55 minutes) - result_metrics = validate_timestamp_ms(past_ms, allowed_past_minutes=55) + result_metrics = validate_timestamp(past_ms, allowed_past_minutes=55, return_unit="ms") assert result_metrics is
None # Beyond 55 minutes def test_validate_year_2100_boundary(self): @@ -96,17 +104,76 @@ def test_validate_year_2100_boundary(self): # Year 2099 should be valid if within time range year_2099_ms = 4_070_908_800_000 # Jan 1, 2099 in ms # This will likely fail the time range check (too far in future), which is correct - result = validate_timestamp_ms(year_2099_ms) + result = validate_timestamp(year_2099_ms, return_unit="ms") assert result is None # Too far in the future # Year 2100 should be rejected (beyond our max threshold) year_2100_ms = 4_102_444_800_000 # Jan 1, 2100 in ms - result = validate_timestamp_ms(year_2100_ms) + result = validate_timestamp(year_2100_ms, return_unit="ms") + assert result is None + + def test_validate_invalid_return_unit(self): + """Test that invalid return_unit parameter raises ValueError""" + current_ms = int(time.time() * 1000) + with pytest.raises(ValueError, match="return_unit must be 'ms' or 'ns'"): + validate_timestamp(current_ms, return_unit="seconds") + + def test_old_timestamp_rejected_by_default(self): + """Test that old timestamps are rejected when skip_range_validation is False (default)""" + now_ms = int(time.time() * 1000) + ten_years_ms = 10 * 365 * 24 * 60 * 60 * 1000 + old_ms = now_ms - ten_years_ms + + result = validate_timestamp(old_ms, return_unit="ms") + assert result is None + + def test_old_timestamp_accepted_when_skipping_range_validation(self): + """Test that old timestamps are accepted when skip_range_validation is True""" + now_ms = int(time.time() * 1000) + ten_years_ms = 10 * 365 * 24 * 60 * 60 * 1000 + old_ms = now_ms - ten_years_ms + + result = validate_timestamp(old_ms, return_unit="ms", skip_range_validation=True) + assert result is not None + assert result == old_ms + + def test_future_timestamp_rejected_by_default(self): + """Test that future timestamps are rejected when skip_range_validation is False (default)""" + now_ms = int(time.time() * 1000) + ten_years_ms = 10 * 365 * 24 * 60 * 60 * 1000 + 
future_ms = now_ms + ten_years_ms + + result = validate_timestamp(future_ms, return_unit="ms") assert result is None + def test_future_timestamp_accepted_when_skipping_range_validation(self): + """Test that future timestamps are accepted when skip_range_validation is True""" + now_ms = int(time.time() * 1000) + ten_years_ms = 10 * 365 * 24 * 60 * 60 * 1000 + future_ms = now_ms + ten_years_ms -class TestValidateTimestampMsAutoConversion: - """Tests for validate_timestamp_ms auto-conversion from higher precision time units""" + result = validate_timestamp(future_ms, return_unit="ms", skip_range_validation=True) + assert result is not None + assert result == future_ms + + def test_skip_range_validation_with_nanoseconds(self): + """Test that skip_range_validation works correctly with nanosecond return unit""" + now_ns = int(time.time() * 1_000_000_000) + ten_years_ns = 10 * 365 * 24 * 60 * 60 * 1_000_000_000 + old_ns = now_ns - ten_years_ns + + # Should be rejected without skip_range_validation + result_default = validate_timestamp(old_ns, return_unit="ns") + assert result_default is None + + # Should be accepted with skip_range_validation + result_skip = validate_timestamp(old_ns, return_unit="ns", skip_range_validation=True) + assert result_skip is not None + assert result_skip == old_ns + + +class TestValidateTimestampAutoConversion: + """Tests for validate_timestamp auto-conversion from higher precision time units""" def test_auto_convert_microseconds(self): """Test auto-conversion from microseconds to milliseconds""" @@ -114,7 +181,7 @@ def test_auto_convert_microseconds(self): current_ms = int(time.time() * 1000) microseconds = current_ms * 1000 - result = validate_timestamp_ms(microseconds) + result = validate_timestamp(microseconds, return_unit="ms") # Should be converted back to milliseconds and validated assert result is not None assert result == current_ms @@ -125,7 +192,7 @@ def test_auto_convert_nanoseconds_from_csv(self): # 1770598800000000030932992 
nanoseconds nanoseconds = 1770598800000000030932992 - result = validate_timestamp_ms(nanoseconds) + result = validate_timestamp(nanoseconds, return_unit="ms") # Should be converted to milliseconds: 1770598800000 # But will likely be rejected as too far in future (year 2026) # The important thing is it doesn't crash with ValueError @@ -137,7 +204,7 @@ def test_auto_convert_valid_nanoseconds(self): current_ms = int(time.time() * 1000) nanoseconds = current_ms * 1_000_000 - result = validate_timestamp_ms(nanoseconds) + result = validate_timestamp(nanoseconds, return_unit="ms") # Should be converted back to milliseconds and validated assert result is not None # Allow for 1ms precision loss due to floating point arithmetic @@ -149,7 +216,7 @@ def test_auto_convert_picoseconds_from_csv(self): # 1770224954840999937441792 appears to be in picoseconds picoseconds = 1770224954840999937441792 - result = validate_timestamp_ms(picoseconds) + result = validate_timestamp(picoseconds, return_unit="ms") # Should attempt conversion: 1770224954840999937441792 / 1e9 = 1770224954840 # This would be Feb 4, 2026 which might be within range # The important thing is it doesn't crash with ValueError @@ -161,7 +228,7 @@ def test_auto_convert_valid_picoseconds(self): current_ms = int(time.time() * 1000) picoseconds = current_ms * 1_000_000_000 - result = validate_timestamp_ms(picoseconds) + result = validate_timestamp(picoseconds, return_unit="ms") # Should be converted back to milliseconds and validated assert result is not None assert result == current_ms @@ -172,7 +239,7 @@ def test_auto_convert_valid_femtoseconds(self): current_ms = int(time.time() * 1000) femtoseconds = current_ms * 1_000_000_000_000 - result = validate_timestamp_ms(femtoseconds) + result = validate_timestamp(femtoseconds, return_unit="ms") # Should be converted back to milliseconds and validated assert result is not None # Allow for 1ms precision loss due to floating point arithmetic @@ -183,7 +250,7 @@ def 
test_auto_convert_boundary_microseconds(self): # Just above 4.1e12 (millisecond threshold) microseconds = 4_100_000_000_001_000 # This should trigger microsecond conversion - result = validate_timestamp_ms(microseconds) + result = validate_timestamp(microseconds, return_unit="ms") # After conversion: 4_100_000_000_001 ms (year 2099+) # Will be rejected as too far in the future assert result is None @@ -193,7 +260,7 @@ def test_no_conversion_for_valid_milliseconds(self): # Current time in milliseconds - should NOT be converted current_ms = int(time.time() * 1000) - result = validate_timestamp_ms(current_ms) + result = validate_timestamp(current_ms, return_unit="ms") # Should remain unchanged and be valid assert result == current_ms @@ -203,8 +270,185 @@ def test_conversion_preserves_precision(self): base_ms = 1707494400000 # Feb 9, 2024, 20:00:00 UTC microseconds = base_ms * 1000 + 123 # Add some microseconds - result = validate_timestamp_ms(microseconds) + result = validate_timestamp(microseconds, return_unit="ms") # Should convert to milliseconds (drops the extra microseconds) # May be rejected if outside time window, but if valid, should be base_ms if result is not None: assert result == base_ms + + def test_return_unit_ns(self): + """Test that return_unit='ns' returns nanoseconds""" + current_ms = int(time.time() * 1000) + result = validate_timestamp(current_ms, return_unit="ns") + # Should return in nanoseconds + assert result is not None + assert result == current_ms * 1_000_000 + + def test_return_unit_ms_from_ns_input(self): + """Test that return_unit='ms' converts nanoseconds input to milliseconds""" + current_ns = int(time.time() * 1_000_000_000) + result = validate_timestamp(current_ns, return_unit="ms") + # Should return in milliseconds + assert result is not None + expected_ms = current_ns // 1_000_000 + assert abs(result - expected_ms) <= 1 # Allow for rounding + + +class TestProcessTimestampsForTelemetry: + """Tests for 
process_timestamps_for_telemetry utility function""" + + def test_process_with_only_timestamp(self): + """Test processing with only timestamp field""" + current_ns = int(time.time() * 1_000_000_000) + data = {"timestamp": current_ns} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # Should return timestamp in milliseconds + assert timestamp_ms is not None + expected_ms = current_ns // 1_000_000 + assert abs(timestamp_ms - expected_ms) <= 1 + + # Should fallback observed_timestamp to timestamp value (in nanoseconds) + assert observed_timestamp_ns is not None + assert observed_timestamp_ns == current_ns + + def test_process_with_timestamp_and_observed_timestamp(self): + """Test processing with both timestamp and observed_timestamp fields""" + current_ns = int(time.time() * 1_000_000_000) + # observed_timestamp is 5 minutes earlier + observed_ns = current_ns - (5 * 60 * 1_000_000_000) + + data = {"timestamp": current_ns, "observed_timestamp": observed_ns} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # Should return timestamp in milliseconds + assert timestamp_ms is not None + expected_ms = current_ns // 1_000_000 + assert abs(timestamp_ms - expected_ms) <= 1 + + # Should use explicit observed_timestamp (in nanoseconds) + assert observed_timestamp_ns is not None + assert observed_timestamp_ns == observed_ns + + def test_fallback_when_observed_timestamp_not_provided(self): + """Test that observed_timestamp falls back to timestamp value when not provided""" + current_ns = int(time.time() * 1_000_000_000) + data = {"timestamp": current_ns} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # Both should be based on the same timestamp value + assert timestamp_ms is not None + assert observed_timestamp_ns is not None + # observed_timestamp_ns should equal the original timestamp value + assert observed_timestamp_ns == current_ns + # timestamp_ms should be the 
converted value + assert timestamp_ms == current_ns // 1_000_000 + + def test_validation_with_range_checking_for_timestamp(self): + """Test that timestamp is validated with range checking""" + # Create a timestamp that's too old (10 years in the past) + now_ns = int(time.time() * 1_000_000_000) + ten_years_ns = 10 * 365 * 24 * 60 * 60 * 1_000_000_000 + old_ns = now_ns - ten_years_ns + + data = {"timestamp": old_ns} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # timestamp should be rejected (None) due to range validation + assert timestamp_ms is None + # observed_timestamp should still be accepted (skips range validation) + assert observed_timestamp_ns is not None + assert observed_timestamp_ns == old_ns + + def test_validation_without_range_checking_for_observed_timestamp(self): + """Test that observed_timestamp is validated WITHOUT range checking""" + # Current time + current_ns = int(time.time() * 1_000_000_000) + # Very old observed_timestamp (10 years in the past) + ten_years_ns = 10 * 365 * 24 * 60 * 60 * 1_000_000_000 + old_ns = current_ns - ten_years_ns + + data = {"timestamp": current_ns, "observed_timestamp": old_ns} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # timestamp should be valid (current time) + assert timestamp_ms is not None + # observed_timestamp should be accepted despite being old (skip_range_validation=True) + assert observed_timestamp_ns is not None + assert observed_timestamp_ns == old_ns + + def test_return_format(self): + """Test that return format is (timestamp_ms, observed_timestamp_ns)""" + current_ns = int(time.time() * 1_000_000_000) + data = {"timestamp": current_ns} + + result = process_timestamps_for_telemetry(data) + + # Should return a tuple + assert isinstance(result, tuple) + assert len(result) == 2 + + timestamp_ms, observed_timestamp_ns = result + + # timestamp_ms should be in milliseconds (verify by converting back) + assert timestamp_ms is 
not None + assert timestamp_ms == current_ns // 1_000_000 + # observed_timestamp_ns should be in nanoseconds (same as input) + assert observed_timestamp_ns is not None + assert observed_timestamp_ns == current_ns + + def test_handling_invalid_timestamp(self): + """Test handling of invalid timestamps""" + data = {"timestamp": -1000} # Invalid negative timestamp + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # Both should be None for invalid input + assert timestamp_ms is None + assert observed_timestamp_ns is None + + def test_handling_missing_timestamp(self): + """Test handling when timestamp field is missing""" + data = {} # No timestamp field + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # Both should be None when timestamp is missing + assert timestamp_ms is None + assert observed_timestamp_ns is None + + def test_handling_datetime_objects(self): + """Test that datetime objects are properly converted""" + # Use a recent datetime within the valid range (e.g., 1 hour ago) + dt = datetime.datetime.now(tz=datetime.timezone.utc) - datetime.timedelta(hours=1) + data = {"timestamp": dt} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # Should convert datetime to proper units + assert timestamp_ms is not None + assert observed_timestamp_ns is not None + # Verify the conversion is approximately correct (allow for microsecond precision loss in milliseconds) + # When datetime has microseconds, the nanosecond value will have sub-millisecond precision + # that gets truncated when converting to milliseconds + assert abs(observed_timestamp_ns - (timestamp_ms * 1_000_000)) < 1_000_000 # Within 1ms difference + + def test_observed_timestamp_preserves_precision(self): + """Test that observed_timestamp preserves nanosecond precision""" + # Use a timestamp with specific nanosecond precision + current_ns = int(time.time() * 1_000_000_000) + precise_ns = current_ns + 
123456789 # Add specific nanoseconds + + data = {"timestamp": current_ns, "observed_timestamp": precise_ns} + + timestamp_ms, observed_timestamp_ns = process_timestamps_for_telemetry(data) + + # timestamp should be valid + assert timestamp_ms is not None + # observed_timestamp should preserve exact nanosecond value + assert observed_timestamp_ns == precise_ns diff --git a/test/otel/test_events.py b/test/otel/test_events.py index 26aed880..5b133c7c 100644 --- a/test/otel/test_events.py +++ b/test/otel/test_events.py @@ -207,8 +207,8 @@ def test_send_results_as_events(self): with mock_client.mock_telemetry_sending(): events = self._dtagent._get_events() - PICKLE_NAME = "test/test_data/data_volume.pkl" - for row_dict in _utils._get_unpickled_entries(PICKLE_NAME, limit=2): + FIXTURE_NAME = "test/test_data/data_volume.ndjson" + for row_dict in _utils._get_fixture_entries(FIXTURE_NAME, limit=2): events_sent = events.report_via_api( query_data=row_dict, event_type=EventType.CUSTOM_INFO, @@ -225,10 +225,10 @@ def test_send_results_as_bizevents(self): with mock_client.mock_telemetry_sending(): bizevents = self._dtagent._get_biz_events() - PICKLE_NAME = "test/test_data/data_volume.pkl" + FIXTURE_NAME = "test/test_data/data_volume.ndjson" events_sent = bizevents.report_via_api( - query_data=_utils._get_unpickled_entries(PICKLE_NAME, limit=2), + query_data=_utils._get_fixture_entries(FIXTURE_NAME, limit=2), event_type=str(EventType.CUSTOM_INFO), context=get_context_name_and_run_id( plugin_name="test_send_results_as_bizevents", context_name="data_volume", run_id=str(uuid.uuid4().hex) diff --git a/test/otel/test_logs.py b/test/otel/test_logs.py index 82c0f415..3d5bd991 100644 --- a/test/otel/test_logs.py +++ b/test/otel/test_logs.py @@ -413,7 +413,7 @@ def test_filter_with_picosecond_observed_timestamp(self, logs_instance): assert record.observed_timestamp == expected_ns def test_filter_with_out_of_range_picosecond_timestamp(self, logs_instance): - """Test that out-of-range 
picosecond timestamp (from bug report) is removed""" + """Test that out-of-range femtosecond/picosecond timestamp is preserved (skip_range_validation for observed_timestamp)""" import logging filter_instance = self.get_filter(logs_instance) @@ -421,15 +421,17 @@ def test_filter_with_out_of_range_picosecond_timestamp(self, logs_instance): record = logging.LogRecord(name="test", level=logging.INFO, pathname="", lineno=0, msg="test message", args=(), exc_info=None) - # This specific value from the bug report is too far in the past/future after conversion + # This specific value from the bug report is detected as femtoseconds (> 4.1e21) + # With skip_range_validation=True, observed_timestamp preserves original timestamps record.observed_timestamp = 1770224954840999937441792 filter_instance.filter(record) - # Verify the out-of-range observed_timestamp was removed - # After conversion to ms, this value may be outside the allowed range - # validate_timestamp_ms will return None, and the attribute should be deleted - assert not hasattr(record, "observed_timestamp") + # Verify the observed_timestamp was converted (from femtoseconds to nanoseconds) and preserved + # even though it may be out of typical range (skip_range_validation=True) + assert hasattr(record, "observed_timestamp") + # Should be converted from femtoseconds to nanoseconds (divided by 1_000_000) + assert record.observed_timestamp == 1770224954840999937441792 // 1_000_000 def test_filter_with_none_observed_timestamp(self, logs_instance): """Test that None observed_timestamp doesn't cause issues""" diff --git a/test/plugins/conftest.py b/test/plugins/conftest.py deleted file mode 100644 index 2190a25e..00000000 --- a/test/plugins/conftest.py +++ /dev/null @@ -1,47 +0,0 @@ -# -# -# Copyright (c) 2025 Dynatrace Open Source -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without 
restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# -# -from pytest import fixture - - -def pytest_addoption(parser): - parser.addoption( - "--result", - action="store", - help="File name of the test result file.", - ) - parser.addoption( - "--exemplary_result", - action="store", - help="File name of the test exemplary result file.", - ) - - -@fixture(scope="session") -def result(request): - return request.config.getoption("--result") - - -@fixture(scope="session") -def exemplary_result(request): - return request.config.getoption("--exemplary_result") diff --git a/test/plugins/readme.md b/test/plugins/readme.md index e9f475a0..2531b61c 100644 --- a/test/plugins/readme.md +++ b/test/plugins/readme.md @@ -45,7 +45,7 @@ Plugin tests support two execution modes: ### 1. 
Local Mode (Mocked APIs) - Runs without live Snowflake/Dynatrace connections -- Uses pickled test data from `test/test_data/` +- Uses saved test data from `test/test_data/` - Validates against expected results in `test/test_results/` - **Default mode** when `test/credentials.yml` is not present @@ -69,11 +69,11 @@ When adding new plugins or changing data collection logic: ./test.sh -a -p ``` -This creates new pickle files in `test/test_data/` and result files in `test/test_results/`. +This creates new test fixture files in `test/test_data/` and result files in `test/test_results/`. ### Test Data Structure -- **Input data**: Pickle files (`.pkl`) in `test/test_data/` +- **Input data**: Test fixture files (`.ndjson`) in `test/test_data/` - **Expected results**: Text files in `test/test_results/` - **Reference data**: NDJSON files for human-readable inspection diff --git a/test/plugins/test_1_validate.py b/test/plugins/test_1_validate.py deleted file mode 100644 index 2405b930..00000000 --- a/test/plugins/test_1_validate.py +++ /dev/null @@ -1,96 +0,0 @@ -# -# -# Copyright (c) 2025 Dynatrace Open Source -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# -# -# the name isn't pretty but I wanted this file to be at the top of other tests, and it needed to start with test_, hence this one - -# if You wish to run validation manually from command line, do it as -# pytest -s -v --result=$LOG_FILE_NAME --exemplary_result=$EXEMPLARY_RESULT_FILE test/plugins/test_1_validate.py - -from typing import Union - - -def get_lines_with_substr(lines, substr): - return [line for line in lines if substr in line] - - -def prepare_result_text(result: str) -> Union[str, None]: - def __remove_false_positives(text: str, false_positives: str) -> str: - import re - - p = re.compile(re.escape(false_positives), re.IGNORECASE) - return p.sub("", text) - - def __check_issue_tag(text: str, tag: str) -> bool: - """Checks whether a line in text does not start with a tag - - Args: - text (str): text to check - tag (str): tag to discover at the beginning of a line - - Returns: - bool: True if such line was discovered - """ - for line in text.splitlines(): - if line.startswith(tag) and "_OTLP" not in line: - return True - return False - - with open(result, "r", encoding="utf-8") as f: - result_text = f.read() - - # I have no clue how to get rid of this error, tests run properly despite it showing up - result_text = __remove_false_positives(result_text, "error:root:no such file or directory") - result_text = __remove_false_positives(result_text, "warning:dtagent:setting log level") - - # removing 'error.code' as it is expected as part of content generated in test_trust_center - result_text = result_text.replace("'error.code'", "") - - assert not __check_issue_tag(result_text, "ERROR") - assert not __check_issue_tag(result_text, "WARN") - assert not __check_issue_tag(result_text, 
"Traceback") - - return result_text - - -def test_compare_results(result: str, exemplary_result: str): - if result is not None: - result_text = prepare_result_text(result) - results_lines = get_lines_with_substr(result_text.splitlines(), "!!!!") - result_last_line = results_lines[-1].lower() - - if exemplary_result is None: - assert len(results_lines) > 0, f"File empty?\n{result_text}" - - assert not any(substring in result_last_line for substring in [" 0", "(0"]), f"0 results\n{result_last_line}" - - print(f"\n!!! No results given to compare. Only checked {result} file for errors") - else: - print(f"\n!!!! Verifying {result} with {exemplary_result}") - with open(exemplary_result, "r", encoding="utf-8") as f: - exemplary_result_text = f.read().lower() - - test_lines = get_lines_with_substr(exemplary_result_text.splitlines(), "!!!!") - - assert len(test_lines) > 0, f"No results?\n{exemplary_result_text}" - - assert result_last_line == test_lines[-1].lower() diff --git a/test/plugins/test_active_queries.py b/test/plugins/test_active_queries.py index 74d5a114..85d4f02e 100644 --- a/test/plugins/test_active_queries.py +++ b/test/plugins/test_active_queries.py @@ -24,7 +24,7 @@ class TestActiveQueries: import pytest - PICKLES = {"SELECT * FROM TABLE(DTAGENT_DB.APP.F_ACTIVE_QUERIES_INSTRUMENTED())": "test/test_data/active_queries.pkl"} + FIXTURES = {"SELECT * FROM TABLE(DTAGENT_DB.APP.F_ACTIVE_QUERIES_INSTRUMENTED())": "test/test_data/active_queries.ndjson"} @pytest.mark.xdist_group(name="test_telemetry") def test_active_queries(self): @@ -38,12 +38,12 @@ def test_active_queries(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestActiveQueriesPlugin(ActiveQueriesPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestActiveQueries.PICKLES, t_data, 
limit=2) + return utils._safe_get_fixture_entries(TestActiveQueries.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestActiveQueriesPlugin diff --git a/test/plugins/test_budgets.py b/test/plugins/test_budgets.py index 5f91f890..a0ae8f3d 100644 --- a/test/plugins/test_budgets.py +++ b/test/plugins/test_budgets.py @@ -24,35 +24,35 @@ class TestBudgets: import pytest - PICKLES = { - "APP.V_BUDGET_DETAILS": "test/test_data/budgets.pkl", - "APP.V_BUDGET_SPENDINGS": "test/test_data/budget_spendings.pkl", + FIXTURES = { + "APP.V_BUDGET_DETAILS": "test/test_data/budgets.ndjson", + "APP.V_BUDGET_SPENDINGS": "test/test_data/budgets_spendings.ndjson", } - @pytest.mark.xdist_group(name="test_telemetry") - def test_budgets(self): - from unittest.mock import patch + def _make_plugin_class(self): from typing import Dict, Generator from dtagent.plugins.budgets import BudgetsPlugin - from test import _get_session, TestDynatraceSnowAgent import test._utils as utils - if utils.should_pickle(self.PICKLES.values()): - session = _get_session() - session.call("APP.P_GET_BUDGETS", log_on_exception=True) - utils._pickle_all(session, self.PICKLES, force=True) - class TestBudgetsPlugin(BudgetsPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestBudgets.PICKLES, t_data) + return utils._safe_get_fixture_entries(TestBudgets.FIXTURES, t_data) - def __local_get_plugin_class(source: str): - return TestBudgetsPlugin + return TestBudgetsPlugin + @pytest.mark.xdist_group(name="test_telemetry") + def test_budgets(self): + from test import _get_session, TestDynatraceSnowAgent from dtagent import plugins + import test._utils as utils + + if utils.should_generate_fixtures(self.FIXTURES.values()): + session = _get_session() + session.call("APP.P_GET_BUDGETS", log_on_exception=True) + utils._generate_all_fixtures(session, self.FIXTURES, force=True) - plugins._get_plugin_class = __local_get_plugin_class + 
plugins._get_plugin_class = lambda source: self._make_plugin_class() # ====================================================================== disabled_combinations = [ @@ -76,6 +76,45 @@ def __local_get_plugin_class(source: str): }, ) + def test_budgets_disabled_by_default(self): + """Verify that the default config has is_disabled set to True.""" + import test._utils as utils + + config = utils.get_config() + assert config._config["plugins"]["budgets"]["is_disabled"] is True + + def test_budgets_monitored_budgets_default_empty(self): + """Verify that the default monitored_budgets is an empty list.""" + import test._utils as utils + + config = utils.get_config() + assert config._config["plugins"]["budgets"]["monitored_budgets"] == [] + + @pytest.mark.xdist_group(name="test_telemetry") + def test_budgets_with_monitored_budgets_configured(self): + """Verify plugin runs correctly when monitored_budgets is populated (grants already applied).""" + from test import TestDynatraceSnowAgent + from dtagent import plugins + import test._utils as utils + + plugins._get_plugin_class = lambda source: self._make_plugin_class() + + config = utils.get_config() + config._config["plugins"]["budgets"]["monitored_budgets"] = ["MY_DB.MY_SCHEMA.MY_BUDGET"] + config._config["plugins"]["budgets"]["is_disabled"] = False + + utils.execute_telemetry_test( + TestDynatraceSnowAgent, + test_name="test_budget", + disabled_telemetry=[], + affecting_types_for_entries=["logs", "metrics", "events"], + config=config, + base_count={ + "budgets": {"entries": 1, "log_lines": 1, "metrics": 1, "events": 1}, + "spendings": {"entries": 0, "log_lines": 0, "metrics": 0, "events": 0}, + }, + ) + if __name__ == "__main__": test_class = TestBudgets() diff --git a/test/plugins/test_data_schemas.py b/test/plugins/test_data_schemas.py index 2081fcb4..51591b82 100644 --- a/test/plugins/test_data_schemas.py +++ b/test/plugins/test_data_schemas.py @@ -24,7 +24,7 @@ class TestDataSchemas: import pytest - PICKLES = 
{"APP.V_DATA_SCHEMAS": "test/test_data/data_schemas.pkl"} + FIXTURES = {"APP.V_DATA_SCHEMAS": "test/test_data/data_schemas.ndjson"} @pytest.mark.xdist_group(name="test_telemetry") def test_data_schemas(self): @@ -35,12 +35,12 @@ def test_data_schemas(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestDataSchemasPlugin(DataSchemasPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestDataSchemas.PICKLES, t_data, limit=2) + return utils._safe_get_fixture_entries(TestDataSchemas.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestDataSchemasPlugin diff --git a/test/plugins/test_data_volume.py b/test/plugins/test_data_volume.py index 52fe653a..5f10d5a2 100644 --- a/test/plugins/test_data_volume.py +++ b/test/plugins/test_data_volume.py @@ -24,7 +24,7 @@ class TestDataVol: import pytest - PICKLES = {"APP.V_DATA_VOLUME": "test/test_data/data_volume.pkl"} + FIXTURES = {"APP.V_DATA_VOLUME": "test/test_data/data_volume.ndjson"} @pytest.mark.xdist_group(name="test_telemetry") def test_data_vol(self): @@ -35,12 +35,12 @@ def test_data_vol(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestDataVolumePlugin(DataVolumePlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestDataVol.PICKLES, t_data, limit=2) + return utils._safe_get_fixture_entries(TestDataVol.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestDataVolumePlugin diff --git a/test/plugins/test_dynamic_tables.py b/test/plugins/test_dynamic_tables.py index 235259f4..a2d1fba4 100644 --- a/test/plugins/test_dynamic_tables.py 
+++ b/test/plugins/test_dynamic_tables.py @@ -24,10 +24,10 @@ class TestDynamicTables: import pytest - PICKLES = { - "APP.V_DYNAMIC_TABLES_INSTRUMENTED": "test/test_data/dynamic_tables.pkl", - "APP.V_DYNAMIC_TABLE_REFRESH_HISTORY_INSTRUMENTED": "test/test_data/dynamic_table_refresh_history.pkl", - "APP.V_DYNAMIC_TABLE_GRAPH_HISTORY_INSTRUMENTED": "test/test_data/dynamic_table_graph_history.pkl", + FIXTURES = { + "APP.V_DYNAMIC_TABLES_INSTRUMENTED": "test/test_data/dynamic_tables.ndjson", + "APP.V_DYNAMIC_TABLE_REFRESH_HISTORY_INSTRUMENTED": "test/test_data/dynamic_table_refresh_history.ndjson", + "APP.V_DYNAMIC_TABLE_GRAPH_HISTORY_INSTRUMENTED": "test/test_data/dynamic_table_graph_history.ndjson", } @pytest.mark.xdist_group(name="test_telemetry") @@ -42,12 +42,12 @@ def test_dynamic_tables(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestDynamicTablesPlugin(DynamicTablesPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestDynamicTables.PICKLES, t_data, limit=2) + return utils._safe_get_fixture_entries(TestDynamicTables.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestDynamicTablesPlugin diff --git a/test/plugins/test_event_log.py b/test/plugins/test_event_log.py index dde8bd8b..c3e4b41a 100644 --- a/test/plugins/test_event_log.py +++ b/test/plugins/test_event_log.py @@ -24,10 +24,10 @@ class TestEventLog: import pytest - PICKLES = { - "APP.V_EVENT_LOG": "test/test_data/event_log.pkl", - "APP.V_EVENT_LOG_METRICS_INSTRUMENTED": "test/test_data/event_log_metrics.pkl", - "APP.V_EVENT_LOG_SPANS_INSTRUMENTED": "test/test_data/event_log_spans.pkl", + FIXTURES = { + "APP.V_EVENT_LOG": "test/test_data/event_log.ndjson", + "APP.V_EVENT_LOG_METRICS_INSTRUMENTED": "test/test_data/event_log_metrics.ndjson", + 
"APP.V_EVENT_LOG_SPANS_INSTRUMENTED": "test/test_data/event_log_spans.ndjson", } @pytest.mark.xdist_group(name="test_telemetry") @@ -42,14 +42,14 @@ def test_event_log(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestEventLogPlugin(EventLogPlugin): def _get_events(self) -> Generator[Dict, None, None]: return self._get_table_rows("APP.V_EVENT_LOG") def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestEventLog.PICKLES, t_data, limit=2) + return utils._safe_get_fixture_entries(TestEventLog.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestEventLogPlugin diff --git a/test/plugins/test_event_usage.py b/test/plugins/test_event_usage.py index 2b708d63..a0c43a53 100644 --- a/test/plugins/test_event_usage.py +++ b/test/plugins/test_event_usage.py @@ -24,7 +24,7 @@ class TestEventUsage: import pytest - PICKLES = {"APP.V_EVENT_USAGE_HISTORY": "test/test_data/event_usage.pkl"} + FIXTURES = {"APP.V_EVENT_USAGE_HISTORY": "test/test_data/event_usage.ndjson"} @pytest.mark.xdist_group(name="test_telemetry") def test_event_usage(self): @@ -38,12 +38,12 @@ def test_event_usage(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestEventUsagePlugin(EventUsagePlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestEventUsage.PICKLES, t_data, limit=2) + return utils._safe_get_fixture_entries(TestEventUsage.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestEventUsagePlugin diff --git a/test/plugins/test_login_history.py b/test/plugins/test_login_history.py index 92463eae..69a0ccc8 100644 --- 
a/test/plugins/test_login_history.py +++ b/test/plugins/test_login_history.py @@ -24,7 +24,10 @@ class TestLoginHist: import pytest - PICKLES = {"APP.V_LOGIN_HISTORY": "test/test_data/login_history.pkl", "APP.V_SESSIONS": "test/test_data/sessions.pkl"} + FIXTURES = { + "APP.V_LOGIN_HISTORY": "test/test_data/login_history.ndjson", + "APP.V_SESSIONS": "test/test_data/login_history_sessions.ndjson", + } @pytest.mark.xdist_group(name="test_telemetry") def test_login_hist(self): @@ -37,12 +40,12 @@ def test_login_hist(self): # ====================================================================== - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) class TestLoginHistoryPlugin(LoginHistoryPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestLoginHist.PICKLES, t_data, limit=2) + return utils._safe_get_fixture_entries(TestLoginHist.FIXTURES, t_data, limit=2) def __local_get_plugin_class(source: str): return TestLoginHistoryPlugin diff --git a/test/plugins/test_query_history.py b/test/plugins/test_query_history.py index dad2df9b..12111120 100644 --- a/test/plugins/test_query_history.py +++ b/test/plugins/test_query_history.py @@ -24,7 +24,7 @@ class TestQueryHist: import pytest - PICKLES = {"APP.V_RECENT_QUERIES": "test/test_data/recent_queries2.pkl"} + FIXTURES = {"APP.V_RECENT_QUERIES": "test/test_data/query_history.ndjson"} @pytest.mark.xdist_group(name="test_telemetry") def test_query_hist(self): @@ -34,7 +34,7 @@ def test_query_hist(self): from typing import Dict, Generator from snowflake import snowpark - import pandas as pd + import json as _json import test._utils as utils from test import TestDynatraceSnowAgent, _get_session @@ -42,10 +42,10 @@ def test_query_hist(self): # ====================================================================== - if utils.should_pickle(self.PICKLES.values()): + if 
utils.should_generate_fixtures(self.FIXTURES.values()): session = _get_session() session.call("APP.P_REFRESH_RECENT_QUERIES", log_on_exception=True) - utils._pickle_all(_get_session(), self.PICKLES) + utils._generate_all_fixtures(_get_session(), self.FIXTURES) from dtagent.otel.spans import Spans @@ -58,22 +58,22 @@ def _get_sub_rows( parent_row_id_col: str, row_id: str, ) -> Generator[Dict, None, None]: - pandas_df = pd.read_pickle(TestQueryHist.PICKLES[view_name]) - print(f"Unpickled for {view_name} at {parent_row_id_col} = {row_id}") + fixture_path = TestQueryHist.FIXTURES[view_name] + print(f"Loaded fixture for {view_name} at {parent_row_id_col} = {row_id}") + with open(fixture_path, "r", encoding="utf-8") as _fh: + all_rows = [_json.loads(line) for line in _fh if line.strip()] - pandas_df = pandas_df[pandas_df[parent_row_id_col] == row_id] + from dtagent.util import _adjust_timestamp - for _, row in pandas_df.iterrows(): - from dtagent.util import _adjust_timestamp - - row_dict = row.to_dict() - _adjust_timestamp(row_dict) - yield row_dict + for row_dict in all_rows: + if row_dict.get(parent_row_id_col) == row_id: + _adjust_timestamp(row_dict) + yield row_dict class TestQueryHistoryPlugin(QueryHistoryPlugin): def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]: - return utils._safe_get_unpickled_entries(TestQueryHist.PICKLES, t_data, limit=3) + return utils._safe_get_fixture_entries(TestQueryHist.FIXTURES, t_data, limit=3) class TestSpanDynatraceSnowAgent(TestDynatraceSnowAgent): from opentelemetry.sdk.resources import Resource diff --git a/test/plugins/test_query_history_span_hierarchy.py b/test/plugins/test_query_history_span_hierarchy.py new file mode 100644 index 00000000..88c70940 --- /dev/null +++ b/test/plugins/test_query_history_span_hierarchy.py @@ -0,0 +1,194 @@ +# +# +# Copyright (c) 2025 Dynatrace Open Source +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated 
documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+#
+#
+"""Tests for span hierarchy validation in query_history plugin (BDX-620).
+
+Verifies that nested stored procedure call chains (outer SP → inner SP → leaf SELECT)
+are correctly represented using IS_PARENT / IS_ROOT flags and produce the expected
+parent-child span structure via OpenTelemetry propagation.
+
+Fixture layout (query_history_nested_sp.ndjson):
+    sp-root (IS_ROOT=True, IS_PARENT=True, PARENT_QUERY_ID=null)
+    sp-mid1 (IS_ROOT=False, IS_PARENT=True, PARENT_QUERY_ID=sp-root)
+    sp-leaf (IS_ROOT=False, IS_PARENT=False, PARENT_QUERY_ID=sp-mid1)
+"""
+
+
+class TestQueryHistSpanHierarchy:
+    import pytest
+
+    FIXTURES = {"APP.V_RECENT_QUERIES": "test/test_data/query_history_nested_sp.ndjson"}
+
+    @pytest.mark.xdist_group(name="test_telemetry")
+    def test_span_hierarchy(self):
+        """Validates nested stored procedure span hierarchy (BDX-620).
+
+        Checks that:
+        - Only the IS_ROOT=True row is processed as a top-level span entry.
+        - IS_PARENT=True rows recurse into sub-spans via _get_sub_rows.
+        - All 3 queries in the chain are recorded as processed_ids.
+        - IS_PARENT / IS_ROOT flags computed by P_REFRESH_RECENT_QUERIES are
+          correctly consumed by the OTel span generation path.
+        """
+        import json as _json
+        from typing import Dict, Generator
+
+        from snowflake import snowpark
+        import test._utils as utils
+
+        from test import TestDynatraceSnowAgent, _get_session
+        from dtagent.plugins.query_history import QueryHistoryPlugin
+        from dtagent.otel.spans import Spans
+        from dtagent.util import _adjust_timestamp
+
+        # ------------------------------------------------------------------
+        # Sub-class Spans so _get_sub_rows reads from the fixture instead of
+        # querying a live Snowflake table.
+        # ------------------------------------------------------------------
+        class TestSpans(Spans):
+
+            def _get_sub_rows(
+                self,
+                session: snowpark.Session,
+                view_name: str,
+                parent_row_id_col: str,
+                row_id: str,
+            ) -> Generator[Dict, None, None]:
+                fixture_path = TestQueryHistSpanHierarchy.FIXTURES[view_name]
+                with open(fixture_path, "r", encoding="utf-8") as fh:
+                    all_rows = [_json.loads(line) for line in fh if line.strip()]
+
+                for row_dict in all_rows:
+                    if row_dict.get(parent_row_id_col) == row_id:
+                        _adjust_timestamp(row_dict)
+                        yield row_dict
+
+        # ------------------------------------------------------------------
+        # Sub-class QueryHistoryPlugin so _get_table_rows reads the fixture.
+        # ------------------------------------------------------------------
+        class TestQueryHistoryPlugin(QueryHistoryPlugin):
+
+            def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
+                return utils._safe_get_fixture_entries(TestQueryHistSpanHierarchy.FIXTURES, t_data)
+
+        class TestSpanDynatraceSnowAgent(TestDynatraceSnowAgent):
+            from opentelemetry.sdk.resources import Resource
+
+            def _get_spans(self, resource: Resource) -> Spans:
+                return TestSpans(resource, self._configuration)
+
+        def __local_get_plugin_class(source: str):
+            return TestQueryHistoryPlugin
+
+        from dtagent import plugins
+
+        plugins._get_plugin_class = __local_get_plugin_class
+
+        # ------------------------------------------------------------------
+        # Verify hierarchy: the fixture has 1 root, so spans=1 at the top level
+        # but sub-spans bring the total spans count to 3 (root + mid + leaf).
+        # ------------------------------------------------------------------
+        disabled_combinations = [
+            [],
+            ["logs"],
+            ["spans"],
+            ["metrics"],
+            ["logs", "metrics"],
+            ["metrics", "spans"],
+            ["logs", "spans", "metrics", "events"],
+        ]
+
+        for disabled_telemetry in disabled_combinations:
+            utils.execute_telemetry_test(
+                TestSpanDynatraceSnowAgent,
+                test_name="test_query_history_span_hierarchy",
+                disabled_telemetry=disabled_telemetry,
+                base_count={
+                    "query_history": {
+                        "entries": 3,
+                        "log_lines": 3,
+                        "metrics": 27,
+                        "spans": 3,
+                    }
+                },
+            )
+
+    def test_is_root_only_processes_top_level(self):
+        """Unit test: _process_span_rows skips rows where IS_ROOT=False at top level.
+
+        Ensures that only rows with IS_ROOT=True (or missing IS_ROOT) are passed
+        to generate_span as top-level spans, consistent with the OTel parent-child
+        model produced by P_REFRESH_RECENT_QUERIES.
+        """
+        import json as _json
+        from dtagent.util import _adjust_timestamp
+
+        fixture_path = self.FIXTURES["APP.V_RECENT_QUERIES"]
+        with open(fixture_path, "r", encoding="utf-8") as fh:
+            rows = [_json.loads(line) for line in fh if line.strip()]
+
+        root_rows = [r for r in rows if r.get("IS_ROOT", True)]
+        non_root_rows = [r for r in rows if not r.get("IS_ROOT", True)]
+
+        assert len(root_rows) == 1, f"Expected 1 root row, got {len(root_rows)}"
+        assert len(non_root_rows) == 2, f"Expected 2 non-root rows, got {len(non_root_rows)}"
+
+        root = root_rows[0]
+        assert root["QUERY_ID"] == "sp-root-0001-0000-0000-000000000001"
+        assert root["PARENT_QUERY_ID"] is None
+        assert root["IS_PARENT"] is True
+
+    def test_is_parent_flags_intermediate_nodes(self):
+        """Unit test: IS_PARENT=True on intermediate nodes, False on leaves.
+
+        Validates that the fixture correctly represents the 3-level SP hierarchy
+        where only leaf nodes have IS_PARENT=False.
+        """
+        import json as _json
+
+        fixture_path = self.FIXTURES["APP.V_RECENT_QUERIES"]
+        with open(fixture_path, "r", encoding="utf-8") as fh:
+            rows = [_json.loads(line) for line in fh if line.strip()]
+
+        by_id = {r["QUERY_ID"]: r for r in rows}
+
+        root = by_id["sp-root-0001-0000-0000-000000000001"]
+        mid = by_id["sp-mid1-0001-0000-0000-000000000002"]
+        leaf = by_id["sp-leaf-0001-0000-0000-000000000003"]
+
+        assert root["IS_ROOT"] is True
+        assert root["IS_PARENT"] is True
+        assert root["PARENT_QUERY_ID"] is None
+
+        assert mid["IS_ROOT"] is False
+        assert mid["IS_PARENT"] is True
+        assert mid["PARENT_QUERY_ID"] == root["QUERY_ID"]
+
+        assert leaf["IS_ROOT"] is False
+        assert leaf["IS_PARENT"] is False
+        assert leaf["PARENT_QUERY_ID"] == mid["QUERY_ID"]
+
+
+if __name__ == "__main__":
+    test_class = TestQueryHistSpanHierarchy()
+    test_class.test_span_hierarchy()
diff --git a/test/plugins/test_resource_monitors.py b/test/plugins/test_resource_monitors.py
index e668e241..dfdc880c 100644
---
a/test/plugins/test_resource_monitors.py
+++ b/test/plugins/test_resource_monitors.py
@@ -26,7 +26,7 @@ class TestResMon:
     T_DATA_RESMON = "APP.V_RESOURCE_MONITORS"
     T_DATA_WHS = "APP.V_WAREHOUSES"
 
-    PICKLES = {T_DATA_RESMON: "test/test_data/resource_monitors.pkl", T_DATA_WHS: "test/test_data/warehouses.pkl"}
+    FIXTURES = {T_DATA_RESMON: "test/test_data/resource_monitors.ndjson", T_DATA_WHS: "test/test_data/resource_monitors_warehouses.ndjson"}
 
     @pytest.mark.xdist_group(name="test_telemetry")
     def test_res_mon(self):
@@ -41,18 +41,18 @@ def test_res_mon(self):
 
         # ======================================================================
 
-        if utils.should_pickle(self.PICKLES.values()):
+        if utils.should_generate_fixtures(self.FIXTURES.values()):
             session = _get_session()
             session.call("APP.P_REFRESH_RESOURCE_MONITORS", log_on_exception=True)
-            utils._pickle_data_history(
-                session, self.T_DATA_RESMON, self.PICKLES[self.T_DATA_RESMON], lambda df: df.sort("IS_ACCOUNT_LEVEL", ascending=False)
-            )
-            utils._pickle_data_history(session, self.T_DATA_WHS, self.PICKLES[self.T_DATA_WHS])
+            utils._generate_fixture(
+                session, self.T_DATA_RESMON, self.FIXTURES[self.T_DATA_RESMON], lambda df: df.sort("IS_ACCOUNT_LEVEL", ascending=False)
+            )
+            utils._generate_fixture(session, self.T_DATA_WHS, self.FIXTURES[self.T_DATA_WHS])
 
         class TestResourceMonitorsPlugin(ResourceMonitorsPlugin):
             def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
-                return utils._safe_get_unpickled_entries(TestResMon.PICKLES, t_data, limit=2)
+                return utils._safe_get_fixture_entries(TestResMon.FIXTURES, t_data, limit=2)
 
         def __local_get_plugin_class(source: str):
             return TestResourceMonitorsPlugin
diff --git a/test/plugins/test_shares.py b/test/plugins/test_shares.py
index 2d0fd4a9..7ef87c42 100644
--- a/test/plugins/test_shares.py
+++ b/test/plugins/test_shares.py
@@ -24,10 +24,10 @@ class TestShares:
     import pytest
 
-    PICKLES = {
-        "APP.V_INBOUND_SHARE_TABLES": "test/test_data/inbound_shares.pkl",
-        "APP.V_OUTBOUND_SHARE_TABLES": "test/test_data/outbound_shares.pkl",
-        "APP.V_SHARE_EVENTS": "test/test_data/shares.pkl",
+    FIXTURES = {
+        "APP.V_INBOUND_SHARE_TABLES": "test/test_data/shares_inbound.ndjson",
+        "APP.V_OUTBOUND_SHARE_TABLES": "test/test_data/shares_outbound.ndjson",
+        "APP.V_SHARE_EVENTS": "test/test_data/shares_events.ndjson",
     }
 
     @pytest.mark.xdist_group(name="test_telemetry")
@@ -43,15 +43,16 @@ def test_shares(self):
 
         # ======================================================================
 
-        if utils.should_pickle(self.PICKLES.values()):
+        if utils.should_generate_fixtures(self.FIXTURES.values()):
             session = _get_session()
             session.call("APP.P_GET_SHARES", log_on_exception=True)
-            utils._pickle_all(session, self.PICKLES, force=True)
+            utils._generate_all_fixtures(session, self.FIXTURES, force=True)
 
         class TestSharesPlugin(SharesPlugin):
             def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
-                return utils._safe_get_unpickled_entries(TestShares.PICKLES, t_data, limit=2)
+                limit = 3 if t_data == "APP.V_INBOUND_SHARE_TABLES" else 2
+                return utils._safe_get_fixture_entries(TestShares.FIXTURES, t_data, limit=limit)
 
         def __local_get_plugin_class(source: str):
             return TestSharesPlugin
@@ -77,7 +78,7 @@ def __local_get_plugin_class(source: str):
             affecting_types_for_entries=["logs", "events"],  # there is not test data for events
             base_count={
                 "outbound_shares": {"entries": 2, "log_lines": 2, "metrics": 0, "events": 2},
-                "inbound_shares": {"entries": 2, "log_lines": 2, "metrics": 0, "events": 0},
+                "inbound_shares": {"entries": 3, "log_lines": 3, "metrics": 0, "events": 0},
                 "shares": {"entries": 2, "log_lines": 0, "metrics": 0, "events": 2},
             },
         )
diff --git a/test/plugins/test_tasks.py b/test/plugins/test_tasks.py
index f8807cca..b71123cd 100644
--- a/test/plugins/test_tasks.py
+++ b/test/plugins/test_tasks.py
@@ -24,10 +24,10 @@ class TestTasks:
     import pytest
 
-    PICKLES = {
-        "APP.V_SERVERLESS_TASKS": "test/test_data/tasks_serverless.pkl",
-        "APP.V_TASK_HISTORY": "test/test_data/tasks_history.pkl",
-        "APP.V_TASK_VERSIONS": "test/test_data/tasks_versions.pkl",
+    FIXTURES = {
+        "APP.V_SERVERLESS_TASKS": "test/test_data/tasks_serverless.ndjson",
+        "APP.V_TASK_HISTORY": "test/test_data/tasks_history.ndjson",
+        "APP.V_TASK_VERSIONS": "test/test_data/tasks_versions.ndjson",
     }
 
     @pytest.mark.xdist_group(name="test_telemetry")
@@ -43,12 +43,12 @@ def test_tasks(self):
 
         # -----------------------------------------------------
 
-        utils._pickle_all(_get_session(), self.PICKLES)
+        utils._generate_all_fixtures(_get_session(), self.FIXTURES)
 
         class TestTasksPlugin(TasksPlugin):
             def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
-                return utils._safe_get_unpickled_entries(TestTasks.PICKLES, t_data, limit=2)
+                return utils._safe_get_fixture_entries(TestTasks.FIXTURES, t_data, limit=2)
 
         def __local_get_plugin_class(source: str):
             return TestTasksPlugin
diff --git a/test/plugins/test_trust_center.py b/test/plugins/test_trust_center.py
index 21ca5948..8d1439b2 100644
--- a/test/plugins/test_trust_center.py
+++ b/test/plugins/test_trust_center.py
@@ -24,9 +24,9 @@ class TestTrustCenter:
    import pytest
 
-    PICKLES = {
-        "APP.V_TRUST_CENTER_METRICS": "test/test_data/trust_center_metrics.pkl",
-        "APP.V_TRUST_CENTER_INSTRUMENTED": "test/test_data/trust_center_instr.pkl",
+    FIXTURES = {
+        "APP.V_TRUST_CENTER_METRICS": "test/test_data/trust_center_metrics.ndjson",
+        "APP.V_TRUST_CENTER_INSTRUMENTED": "test/test_data/trust_center_instrumented.ndjson",
     }
 
     @pytest.mark.xdist_group(name="test_telemetry")
@@ -42,12 +42,12 @@ def test_trust_center(self):
 
         # -----------------------------------------------------
 
-        utils._pickle_all(_get_session(), self.PICKLES)
+        utils._generate_all_fixtures(_get_session(), self.FIXTURES)
 
         class TestTrustCenterPlugin(TrustCenterPlugin):
             def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
-                return utils._safe_get_unpickled_entries(TestTrustCenter.PICKLES, t_data, limit=2)
+                return utils._safe_get_fixture_entries(TestTrustCenter.FIXTURES, t_data, limit=2)
 
         def __local_get_plugin_class(source: str):
             return TestTrustCenterPlugin
diff --git a/test/plugins/test_users.py b/test/plugins/test_users.py
index 1e3732da..5ecca65a 100644
--- a/test/plugins/test_users.py
+++ b/test/plugins/test_users.py
@@ -24,12 +24,12 @@ class TestUsers:
     import pytest
 
-    PICKLES = {
-        "APP.V_USERS_INSTRUMENTED": "test/test_data/users_hist.pkl",
-        "APP.V_USERS_ALL_PRIVILEGES_INSTRUMENTED": "test/test_data/users_all_privileges.pkl",
-        "APP.V_USERS_ALL_ROLES_INSTRUMENTED": "test/test_data/users_all_roles.pkl",
-        "APP.V_USERS_DIRECT_ROLES_INSTRUMENTED": "test/test_data/users_roles_direct.pkl",
-        "APP.V_USERS_REMOVED_DIRECT_ROLES_INSTRUMENTED": "test/test_data/users_roles_direct_removed.pkl",
+    FIXTURES = {
+        "APP.V_USERS_INSTRUMENTED": "test/test_data/users_history.ndjson",
+        "APP.V_USERS_ALL_PRIVILEGES_INSTRUMENTED": "test/test_data/users_all_privileges.ndjson",
+        "APP.V_USERS_ALL_ROLES_INSTRUMENTED": "test/test_data/users_all_roles.ndjson",
+        "APP.V_USERS_DIRECT_ROLES_INSTRUMENTED": "test/test_data/users_roles_direct.ndjson",
+        "APP.V_USERS_REMOVED_DIRECT_ROLES_INSTRUMENTED": "test/test_data/users_roles_direct_removed.ndjson",
     }
 
     @pytest.mark.xdist_group(name="test_telemetry")
@@ -46,15 +46,15 @@ def test_users(self):
 
         # -----------------------------------------------------
 
-        if utils.should_pickle(self.PICKLES.values()):
+        if utils.should_generate_fixtures(self.FIXTURES.values()):
             session = _get_session()
             session.call("APP.P_GET_USERS", log_on_exception=True)
-            utils._pickle_all(session, self.PICKLES, force=True)
+            utils._generate_all_fixtures(session, self.FIXTURES, force=True)
 
         class TestUsersPlugin(UsersPlugin):
             def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
-                for r in utils._safe_get_unpickled_entries(TestUsers.PICKLES, t_data, limit=2):
+                for r in utils._safe_get_fixture_entries(TestUsers.FIXTURES, t_data, limit=2):
                     print(f"USER DATA at {t_data}: {r}")
                     yield r
diff --git a/test/plugins/test_warehouse_usage.py b/test/plugins/test_warehouse_usage.py
index 838248bd..94435fc7 100644
--- a/test/plugins/test_warehouse_usage.py
+++ b/test/plugins/test_warehouse_usage.py
@@ -24,10 +24,10 @@ class TestWhUsage:
     import pytest
 
-    PICKLES = {
-        "APP.V_WAREHOUSE_EVENT_HISTORY": "test/test_data/wh_usage_events.pkl",
-        "APP.V_WAREHOUSE_LOAD_HISTORY": "test/test_data/wh_usage_loads.pkl",
-        "APP.V_WAREHOUSE_METERING_HISTORY": "test/test_data/wh_usage_metering.pkl",
+    FIXTURES = {
+        "APP.V_WAREHOUSE_EVENT_HISTORY": "test/test_data/warehouse_usage_events.ndjson",
+        "APP.V_WAREHOUSE_LOAD_HISTORY": "test/test_data/warehouse_usage_loads.ndjson",
+        "APP.V_WAREHOUSE_METERING_HISTORY": "test/test_data/warehouse_usage_metering.ndjson",
    }
 
     @pytest.mark.xdist_group(name="test_telemetry")
@@ -42,12 +42,12 @@ def test_wh_usage(self):
 
         # -----------------------------------------------------
 
-        utils._pickle_all(_get_session(), self.PICKLES)
+        utils._generate_all_fixtures(_get_session(), self.FIXTURES)
 
         class TestWarehouseUsagePlugin(WarehouseUsagePlugin):
             def _get_table_rows(self, t_data: str) -> Generator[Dict, None, None]:
-                return utils._safe_get_unpickled_entries(TestWhUsage.PICKLES, t_data, limit=2)
+                return utils._safe_get_fixture_entries(TestWhUsage.FIXTURES, t_data, limit=2)
 
         def __local_get_plugin_class(source: str):
             return TestWarehouseUsagePlugin
diff --git a/test/readme.md b/test/readme.md
index 2864aa3d..3a4bb363 100644
--- a/test/readme.md
+++ b/test/readme.md
@@ -108,21 +108,30 @@ pytest test/plugins/
 
 ## Test Data Management
 
-### Regenerating Plugin Test Data
+### Fixture Files (NDJSON)
+
+Plugin test data is stored as NDJSON files (one JSON object per line) in `test/test_data/`. Each plugin has one or more fixture files following the naming convention `{plugin_name}[_{view_suffix}].ndjson`.
+
+Fixtures are version-controlled alongside the test code. To regenerate them from a live Snowflake environment:
 
 ```bash
-# Single plugin
-./test.sh test_plugin_name -p
+# Single plugin (requires test/credentials.yml)
+.venv/bin/pytest test/plugins/test_.py -p
 
 # All plugins
-./test.sh -a -p
+.venv/bin/pytest test/plugins/ -p
 ```
+
+The `-p` flag triggers live mode — it connects to Snowflake, collects fresh data, and writes new NDJSON files to `test/test_data/`. After regenerating, review the diff and ensure no PII (names, tenant IDs, IP addresses) is present before committing.
+
+### Golden Result Files
+
+Expected telemetry output (metrics, logs, spans, events) is stored as structured JSON in `test/test_results/test_/`. These are generated automatically on the first live test run and used for regression comparison in subsequent local runs.
+
 ### Test Data Locations
 
-- **Input data**: `test/test_data/` (pickle files)
-- **Expected results**: `test/test_results/` (text files)
-- **Reference data**: NDJSON files for inspection
+- **Input fixtures**: `test/test_data/*.ndjson`
+- **Expected outputs**: `test/test_results/test_/`
 
 ## Dependencies
 
@@ -143,7 +152,7 @@ pytest test/plugins/
 ### Test Environment Setup
 
 1. Copy `test/credentials.template.yaml` to `test/credentials.yml` (for live mode)
-2. Generate config: `pytest test/core/test_config.py::TestConfig::test_init --pickle_conf y`
+2. Generate config: `pytest test/core/test_config.py::TestConfig::test_init --save_conf y`
 3. Run tests in local mode (recommended for development)
 
 ### CI/CD Integration
diff --git a/test/test_data/_deny_patterns.json b/test/test_data/_deny_patterns.json
new file mode 100644
index 00000000..7aa6bd6b
--- /dev/null
+++ b/test/test_data/_deny_patterns.json
@@ -0,0 +1,23 @@
+{
+    "_comment": "PII deny-list for fixture and golden-result files. Patterns are Python regexes. Add new patterns here when real data is discovered.
Run scripts/dev/sanitize_fixtures.py to apply replacements after updating this file.", + "patterns": [ + { + "id": "public_ipv4", + "pattern": "\\b(?!(?:10|127)\\.)(?!192\\.168\\.)(?!172\\.(?:1[6-9]|2[0-9]|3[01])\\.)(?!0\\.0\\.0\\.0\\b)(?:[1-9]\\d?|1\\d{2}|2(?:[0-4]\\d|5[0-5]))\\.(?:\\d{1,3}\\.){2}\\d{1,3}\\b", + "description": "Public IPv4 address. Excludes RFC-1918 private ranges (10.x, 192.168.x, 172.16-31.x), loopback (127.x), and 0.0.0.0.", + "replacement": "10.0.0.1" + }, + { + "id": "real_tenant_id_prefix", + "pattern": "\\w{3}\\d{4}(\\w|\\d)", + "description": "Real Dynatrace environment/tenant ID โ€” example of a specific ID that must not appear in fixtures.", + "replacement": "test-tenant" + }, + { + "id": "dynatracelabs_domain", + "pattern": "dynatracelabs\\.com", + "description": "Internal Dynatrace Labs domain name.", + "replacement": "example.com" + } + ] +} \ No newline at end of file diff --git a/test/test_data/active_queries.ndjson b/test/test_data/active_queries.ndjson index 90a933bc..7ecbe574 100644 --- a/test/test_data/active_queries.ndjson +++ b/test/test_data/active_queries.ndjson @@ -1,2 +1,2 @@ -{"TIMESTAMP": 1760681128413537000, "QUERY_ID": "01bad30c-0413-9153-0040-e003054c7e46", "SESSION_ID": 18260702047616278, "NAME": "SQL query SUCCESS at DTAGENT_SKRUK_DB", "_MESSAGE": "SQL query SUCCESS at DTAGENT_SKRUK_DB", "START_TIME": 1760681128413537000, "END_TIME": 1760681128541537000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_SKRUK_DB\",\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_SKRUK_WH\"\n}", "ATTRIBUTES": "{\n \"db.operation.name\": \"SELECT\",\n \"db.query.text\": \"SELECT * FROM CONFIG.CONFIGURATIONS\",\n \"session.id\": 18260702047616278,\n \"snowflake.query.hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": 
\"01bad30c-0413-9153-0040-e003054c7e46\",\n \"snowflake.query.parametrized_hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.tag\": \"dt_snowagent:2025-03-06_10:20:48.705959\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.data.written_to_result\": 1781,\n \"snowflake.rows.written_to_result\": 60,\n \"snowflake.time.compilation\": 127,\n \"snowflake.time.execution\": 1,\n \"snowflake.time.total_elapsed\": 128\n}"} -{"TIMESTAMP": 1760681128438911000, "QUERY_ID": "01bad30c-0413-90fb-0040-e003054cc3da", "SESSION_ID": 18260702047672238, "NAME": "SQL query SUCCESS at DTAGENT_SKRUK_DB", "_MESSAGE": "SQL query SUCCESS at DTAGENT_SKRUK_DB", "START_TIME": 1760681128438911000, "END_TIME": 1760681128575911000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_SKRUK_DB\",\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_SKRUK_WH\"\n}", "ATTRIBUTES": "{\n \"db.operation.name\": \"SELECT\",\n \"db.query.text\": \"SELECT * FROM CONFIG.CONFIGURATIONS\",\n \"session.id\": 18260702047672238,\n \"snowflake.query.hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bad30c-0413-90fb-0040-e003054cc3da\",\n \"snowflake.query.parametrized_hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.tag\": \"dt_snowagent:2025-03-06_10:20:48.723769\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.data.written_to_result\": 1781,\n \"snowflake.rows.written_to_result\": 60,\n \"snowflake.time.compilation\": 135,\n \"snowflake.time.execution\": 2,\n \"snowflake.time.total_elapsed\": 137\n}"} +{"TIMESTAMP": 1760681128413537000, "QUERY_ID": 
"01bad30c-0413-9153-0040-e003054c7e46", "SESSION_ID": 18260702047616278, "NAME": "SQL query SUCCESS at DTAGENT_TEST_DB", "_MESSAGE": "SQL query SUCCESS at DTAGENT_TEST_DB", "START_TIME": 1760681128413537000, "END_TIME": 1760681128541537000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_TEST_DB\",\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_TEST_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_TEST_WH\"\n}", "ATTRIBUTES": "{\n \"db.operation.name\": \"SELECT\",\n \"db.query.text\": \"SELECT * FROM CONFIG.CONFIGURATIONS\",\n \"session.id\": 18260702047616278,\n \"snowflake.query.hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bad30c-0413-9153-0040-e003054c7e46\",\n \"snowflake.query.parametrized_hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.tag\": \"dt_snowagent:2025-03-06_10:20:48.705959\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.data.written_to_result\": 1781,\n \"snowflake.rows.written_to_result\": 60,\n \"snowflake.time.compilation\": 127,\n \"snowflake.time.execution\": 1,\n \"snowflake.time.total_elapsed\": 128\n}"} +{"TIMESTAMP": 1760681128438911000, "QUERY_ID": "01bad30c-0413-90fb-0040-e003054cc3da", "SESSION_ID": 18260702047672238, "NAME": "SQL query SUCCESS at DTAGENT_TEST_DB", "_MESSAGE": "SQL query SUCCESS at DTAGENT_TEST_DB", "START_TIME": 1760681128438911000, "END_TIME": 1760681128575911000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_TEST_DB\",\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_TEST_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_TEST_WH\"\n}", "ATTRIBUTES": "{\n \"db.operation.name\": \"SELECT\",\n \"db.query.text\": \"SELECT * FROM CONFIG.CONFIGURATIONS\",\n \"session.id\": 
18260702047672238,\n \"snowflake.query.hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bad30c-0413-90fb-0040-e003054cc3da\",\n \"snowflake.query.parametrized_hash\": \"84326d52fdc36120cb311c5f4d8d4d6b\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.tag\": \"dt_snowagent:2025-03-06_10:20:48.723769\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.data.written_to_result\": 1781,\n \"snowflake.rows.written_to_result\": 60,\n \"snowflake.time.compilation\": 135,\n \"snowflake.time.execution\": 2,\n \"snowflake.time.total_elapsed\": 137\n}"} diff --git a/test/test_data/active_queries.pkl b/test/test_data/active_queries.pkl deleted file mode 100644 index 7ca81c00..00000000 --- a/test/test_data/active_queries.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:6a9379126551c4c44adbad9566f9fea9bdb45e23156b0be2dd81ff7be35d90ae -size 322168 diff --git a/test/test_data/budget_spendings.pkl b/test/test_data/budget_spendings.pkl deleted file mode 100644 index 270e1e2e..00000000 --- a/test/test_data/budget_spendings.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:de0e3fc53c8c3d12597890efe65607ae0496959e22cf4dd0961c4bcf9d481b35 -size 877 diff --git a/test/test_data/budgets.pkl b/test/test_data/budgets.pkl deleted file mode 100644 index 2f8acecc..00000000 --- a/test/test_data/budgets.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:45cff839fd4fc6fb8ea2d019a479590af6f5a4cb6d30edcb4b068b4e7c6cacec -size 1649 diff --git a/test/test_data/sessions.ndjson b/test/test_data/budgets_spendings.ndjson similarity index 100% rename from test/test_data/sessions.ndjson rename to test/test_data/budgets_spendings.ndjson diff --git a/test/test_data/data_schemas.ndjson b/test/test_data/data_schemas.ndjson index 
439c8f4d..2a33dc98 100644 --- a/test/test_data/data_schemas.ndjson +++ b/test/test_data/data_schemas.ndjson @@ -1,2 +1,2 @@ -{"_MESSAGE": "Objects accessed by query 01b165f1-0604-1812-0040-e0030291d142 run by STEFAN.SCHWEIGER", "TIMESTAMP": 1760681129358951000, "ATTRIBUTES": {"snowflake.query.id": "01b165f1-0604-1812-0040-e0030291d142", "snowflake.query.object.modified_by_ddl.domain": "Schema", "snowflake.query.object.modified_by_ddl.id": 1154, "snowflake.query.object.modified_by_ddl.name": "CHARGEBACK_HRC_TEST_DB.LI_TEST", "snowflake.query.object.modified_by_ddl.operation_type": "CREATE", "snowflake.query.user": "STEFAN.SCHWEIGER"}, "EVENT_TIMESTAMPS": "{\n \"snowflake.query.start_time\": 1704191128131000000\n}"} -{"_MESSAGE": "Objects accessed by query 01b165fb-0604-1945-0040-e0030291c3be run by STEFAN.SCHWEIGER", "TIMESTAMP": 1760681129381471000, "ATTRIBUTES": {"snowflake.query.id": "01b165fb-0604-1945-0040-e0030291c3be", "snowflake.query.object.modified_by_ddl.domain": "Table", "snowflake.query.object.modified_by_ddl.id": 719874, "snowflake.query.object.modified_by_ddl.name": "CHARGEBACK_HRC_TEST_DB.LI_TEST.DAILY_COSTS", "snowflake.query.object.modified_by_ddl.operation_type": "CREATE", "snowflake.query.object.modified_by_ddl.properties": "{\"columns\": {\"BOOKING_DATE\": {\"objectId\": {\"value\": 1680388}, \"subOperationType\": \"ADD\"}, \"CAPABILITY_ID\": {\"objectId\": {\"value\": 1680391}, \"subOperationType\": \"ADD\"}, \"COSTS\": {\"objectId\": {\"value\": 1680392}, \"subOperationType\": \"ADD\"}, \"ENVIRONMENT_ID\": {\"objectId\": {\"value\": 1680390}, \"subOperationType\": \"ADD\"}, \"SUBSCRIPTION_UUID\": {\"objectId\": {\"value\": 1680386}, \"subOperationType\": \"ADD\"}, \"UPDATED_AT\": {\"objectId\": {\"value\": 1680389}, \"subOperationType\": \"ADD\"}, \"USAGE_DATE\": {\"objectId\": {\"value\": 1680387}, \"subOperationType\": \"ADD\"}}, \"creationMode\": {\"value\": \"CREATE\"}}", "snowflake.query.user": "STEFAN.SCHWEIGER"}, "EVENT_TIMESTAMPS": 
"{\n \"snowflake.query.start_time\": 1704191707101000000\n}"} +{"_MESSAGE": "Objects accessed by query 01b165f1-0604-1812-0040-e0030291d142 run by TEST.USER", "TIMESTAMP": 1760681129358951000, "ATTRIBUTES": {"snowflake.query.id": "01b165f1-0604-1812-0040-e0030291d142", "snowflake.query.object.modified_by_ddl.domain": "Schema", "snowflake.query.object.modified_by_ddl.id": 1154, "snowflake.query.object.modified_by_ddl.name": "CHARGEBACK_HRC_TEST_DB.LI_TEST", "snowflake.query.object.modified_by_ddl.operation_type": "CREATE", "snowflake.query.user": "TEST.USER"}, "EVENT_TIMESTAMPS": "{\n \"snowflake.query.start_time\": 1704191128131000000\n}"} +{"_MESSAGE": "Objects accessed by query 01b165fb-0604-1945-0040-e0030291c3be run by TEST.USER", "TIMESTAMP": 1760681129381471000, "ATTRIBUTES": {"snowflake.query.id": "01b165fb-0604-1945-0040-e0030291c3be", "snowflake.query.object.modified_by_ddl.domain": "Table", "snowflake.query.object.modified_by_ddl.id": 719874, "snowflake.query.object.modified_by_ddl.name": "CHARGEBACK_HRC_TEST_DB.LI_TEST.DAILY_COSTS", "snowflake.query.object.modified_by_ddl.operation_type": "CREATE", "snowflake.query.object.modified_by_ddl.properties": "{\"columns\": {\"BOOKING_DATE\": {\"objectId\": {\"value\": 1680388}, \"subOperationType\": \"ADD\"}, \"CAPABILITY_ID\": {\"objectId\": {\"value\": 1680391}, \"subOperationType\": \"ADD\"}, \"COSTS\": {\"objectId\": {\"value\": 1680392}, \"subOperationType\": \"ADD\"}, \"ENVIRONMENT_ID\": {\"objectId\": {\"value\": 1680390}, \"subOperationType\": \"ADD\"}, \"SUBSCRIPTION_UUID\": {\"objectId\": {\"value\": 1680386}, \"subOperationType\": \"ADD\"}, \"UPDATED_AT\": {\"objectId\": {\"value\": 1680389}, \"subOperationType\": \"ADD\"}, \"USAGE_DATE\": {\"objectId\": {\"value\": 1680387}, \"subOperationType\": \"ADD\"}}, \"creationMode\": {\"value\": \"CREATE\"}}", "snowflake.query.user": "TEST.USER"}, "EVENT_TIMESTAMPS": "{\n \"snowflake.query.start_time\": 1704191707101000000\n}"} diff --git 
a/test/test_data/data_schemas.pkl b/test/test_data/data_schemas.pkl deleted file mode 100644 index c64e4d26..00000000 --- a/test/test_data/data_schemas.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:13665462e27d436aef7e3e6ac624ce26aae968f35e7e8fe0108445dc2338dc74 -size 216220261 diff --git a/test/test_data/data_volume.pkl b/test/test_data/data_volume.pkl deleted file mode 100644 index b6fbbcde..00000000 --- a/test/test_data/data_volume.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:93525c00cc6b9f581b0c15f71be39d2a3f1aebbda83e471a147425cbc1c837e5 -size 5962 diff --git a/test/test_data/dynamic_table_graph_history.pkl b/test/test_data/dynamic_table_graph_history.pkl deleted file mode 100644 index 83ffd035..00000000 --- a/test/test_data/dynamic_table_graph_history.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7480dc231017e88e3547d189adaf0fd0890e817ba6de5b6d3bc81c53c693143d -size 2273 diff --git a/test/test_data/dynamic_table_refresh_history.pkl b/test/test_data/dynamic_table_refresh_history.pkl deleted file mode 100644 index d0d05eec..00000000 --- a/test/test_data/dynamic_table_refresh_history.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:aad6078788b42cbe68ede5e60e8bd8364bc979370c9b0edf966bd73e08490115 -size 109516 diff --git a/test/test_data/dynamic_tables.pkl b/test/test_data/dynamic_tables.pkl deleted file mode 100644 index bb785bc4..00000000 --- a/test/test_data/dynamic_tables.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0fab17fd541ac9d65295708fdda659242511e9d4943c03be4e749d2e7bcc9965 -size 2061 diff --git a/test/test_data/event_log.pkl b/test/test_data/event_log.pkl deleted file mode 100644 index 592753ea..00000000 --- a/test/test_data/event_log.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:b2a135cf57282c6214a25e90544a68f601d6704248cedef93ceee737af1595a9 -size 7718 diff --git a/test/test_data/event_log_metrics.ndjson b/test/test_data/event_log_metrics.ndjson index 1496d391..7e2724f4 100644 --- a/test/test_data/event_log_metrics.ndjson +++ b/test/test_data/event_log_metrics.ndjson @@ -1,2 +1,2 @@ -{"TIMESTAMP": 1760681130313840000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_SKRUK_DB\",\n \"db.user\": \"SYSTEM\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_SKRUK_DB\",\n \"snow.executable.id\": 51528,\n \"snow.executable.name\": \"DTAGENT(SOURCES ARRAY):OBJECT\",\n \"snow.executable.runtime.version\": \"3.11\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.query.id\": \"01ba3bba-0412-e356-0051-0c031e222a46\",\n \"snow.schema.id\": 6165,\n \"snow.schema.name\": \"APP\",\n \"snow.session.id\": 22812680207736954,\n \"snow.session.role.primary.id\": 567463,\n \"snow.session.role.primary.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.user.id\": 0,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_SKRUK_WH\",\n \"snowflake.query.id\": \"01ba3bba-0412-e356-0051-0c031e222a46\",\n \"snowflake.role.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.name\": \"DTAGENT_SKRUK_WH\"\n}", "METRICS": "{\n \"process.cpu.utilization\": {\n \"gauge\": 0\n },\n \"process.memory.usage\": {\n \"sum\": 0\n }\n}", "_INSTRUMENTS_DEF": "{\n \"process.cpu.utilization\": {\n \"displayName\": \"Snowflake metric: process.cpu.utilization\",\n \"unit\": \"1\"\n },\n \"process.memory.usage\": {\n \"displayName\": \"Snowflake metric: process.memory.usage\",\n \"unit\": \"bytes\"\n }\n}"} -{"TIMESTAMP": 1760681130336171000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_SKRUK_DB\",\n \"db.user\": \"SYSTEM\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_SKRUK_DB\",\n \"snow.executable.id\": 51528,\n 
\"snow.executable.name\": \"DTAGENT(SOURCES ARRAY):OBJECT\",\n \"snow.executable.runtime.version\": \"3.11\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.query.id\": \"01ba3bba-0412-e3aa-0051-0c031e22516a\",\n \"snow.schema.id\": 6165,\n \"snow.schema.name\": \"APP\",\n \"snow.session.id\": 22812680207694442,\n \"snow.session.role.primary.id\": 567463,\n \"snow.session.role.primary.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.user.id\": 0,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_SKRUK_WH\",\n \"snowflake.query.id\": \"01ba3bba-0412-e3aa-0051-0c031e22516a\",\n \"snowflake.role.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.name\": \"DTAGENT_SKRUK_WH\"\n}", "METRICS": "{\n \"process.cpu.utilization\": {\n \"gauge\": 0\n },\n \"process.memory.usage\": {\n \"sum\": 0\n }\n}", "_INSTRUMENTS_DEF": "{\n \"process.cpu.utilization\": {\n \"displayName\": \"Snowflake metric: process.cpu.utilization\",\n \"unit\": \"1\"\n },\n \"process.memory.usage\": {\n \"displayName\": \"Snowflake metric: process.memory.usage\",\n \"unit\": \"bytes\"\n }\n}"} +{"TIMESTAMP": 1760681130313840000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_TEST_DB\",\n \"db.user\": \"SYSTEM\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_TEST_DB\",\n \"snow.executable.id\": 51528,\n \"snow.executable.name\": \"DTAGENT(SOURCES ARRAY):OBJECT\",\n \"snow.executable.runtime.version\": \"3.11\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.query.id\": \"01ba3bba-0412-e356-0051-0c031e222a46\",\n \"snow.schema.id\": 6165,\n \"snow.schema.name\": \"APP\",\n \"snow.session.id\": 22812680207736954,\n \"snow.session.role.primary.id\": 567463,\n \"snow.session.role.primary.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.user.id\": 0,\n \"snow.warehouse.id\": 4649,\n 
\"snow.warehouse.name\": \"DTAGENT_TEST_WH\",\n \"snowflake.query.id\": \"01ba3bba-0412-e356-0051-0c031e222a46\",\n \"snowflake.role.name\": \"DTAGENT_TEST_ADMIN\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.name\": \"DTAGENT_TEST_WH\"\n}", "METRICS": "{\n \"process.cpu.utilization\": {\n \"gauge\": 0\n },\n \"process.memory.usage\": {\n \"sum\": 0\n }\n}", "_INSTRUMENTS_DEF": "{\n \"process.cpu.utilization\": {\n \"displayName\": \"Snowflake metric: process.cpu.utilization\",\n \"unit\": \"1\"\n },\n \"process.memory.usage\": {\n \"displayName\": \"Snowflake metric: process.memory.usage\",\n \"unit\": \"bytes\"\n }\n}"} +{"TIMESTAMP": 1760681130336171000, "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_TEST_DB\",\n \"db.user\": \"SYSTEM\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_TEST_DB\",\n \"snow.executable.id\": 51528,\n \"snow.executable.name\": \"DTAGENT(SOURCES ARRAY):OBJECT\",\n \"snow.executable.runtime.version\": \"3.11\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.query.id\": \"01ba3bba-0412-e3aa-0051-0c031e22516a\",\n \"snow.schema.id\": 6165,\n \"snow.schema.name\": \"APP\",\n \"snow.session.id\": 22812680207694442,\n \"snow.session.role.primary.id\": 567463,\n \"snow.session.role.primary.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.user.id\": 0,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_TEST_WH\",\n \"snowflake.query.id\": \"01ba3bba-0412-e3aa-0051-0c031e22516a\",\n \"snowflake.role.name\": \"DTAGENT_TEST_ADMIN\",\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.name\": \"DTAGENT_TEST_WH\"\n}", "METRICS": "{\n \"process.cpu.utilization\": {\n \"gauge\": 0\n },\n \"process.memory.usage\": {\n \"sum\": 0\n }\n}", "_INSTRUMENTS_DEF": "{\n \"process.cpu.utilization\": {\n \"displayName\": \"Snowflake metric: process.cpu.utilization\",\n \"unit\": \"1\"\n },\n \"process.memory.usage\": {\n 
\"displayName\": \"Snowflake metric: process.memory.usage\",\n \"unit\": \"bytes\"\n }\n}"} diff --git a/test/test_data/event_log_metrics.pkl b/test/test_data/event_log_metrics.pkl deleted file mode 100644 index d86a14e7..00000000 --- a/test/test_data/event_log_metrics.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:402b7ce6833479526680670e233d5e24acf539c512b02ed12e3b3577825206bb -size 11569 diff --git a/test/test_data/event_log_spans.ndjson b/test/test_data/event_log_spans.ndjson index dbfbb433..d9507d89 100644 --- a/test/test_data/event_log_spans.ndjson +++ b/test/test_data/event_log_spans.ndjson @@ -1,2 +1,2 @@ -{"QUERY_ID": "01ba3bc5-0412-e34b-0051-0c031e22184a", "SESSION_ID": "22812680207733670", "NAME": "LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)", "START_TIME": 1760681130311440000, "END_TIME": 1760681130867380918, "STATUS_CODE": "OK", "TIMESTAMP": 1760681130311440000, "_SPAN_ID": "1fc735d0031735ea", "_TRACE_ID": "01ba3bc525c20677b026cf555d0c5a82", "_PARENT_SPAN_ID": null, "_SPAN_KIND": "SPAN_KIND_INTERNAL", "_RECORD": "{\n \"kind\": \"SPAN_KIND_INTERNAL\",\n \"name\": \"snow.auto_instrumented\",\n \"status\": {\n \"code\": \"STATUS_CODE_UNSET\"\n }\n}", "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_SKRUK_DB\",\n \"db.user\": \"SEBASTIAN.KRUK\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_SKRUK_DB\",\n \"snow.executable.id\": 51719,\n \"snow.executable.name\": \"LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.query.id\": \"01ba3bc5-0412-e34b-0051-0c031e22184a\",\n \"snow.schema.id\": 6167,\n \"snow.schema.name\": \"STATUS\",\n \"snow.session.id\": 22812680207733670,\n 
\"snow.session.role.primary.id\": 567483,\n \"snow.session.role.primary.name\": \"DTAGENT_SKRUK_VIEWER\",\n \"snow.user.id\": 361,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_SKRUK_WH\",\n \"snowflake.query.id\": \"01ba3bc5-0412-e34b-0051-0c031e22184a\",\n \"snowflake.role.name\": \"DTAGENT_SKRUK_VIEWER\",\n \"snowflake.schema.name\": \"STATUS\",\n \"snowflake.warehouse.name\": \"DTAGENT_SKRUK_WH\",\n \"telemetry.sdk.language\": \"sql\"\n}"} -{"QUERY_ID": "01ba3bcc-0412-e352-0051-0c031e22c1a6", "SESSION_ID": "22812680207726014", "NAME": "LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)", "START_TIME": 1760681130312327000, "END_TIME": 1760681130947252537, "STATUS_CODE": "OK", "TIMESTAMP": 1760681130312327000, "_SPAN_ID": "0235c0abc1e8a9ce", "_TRACE_ID": "01ba3bcc4289735fc8d0fe322b035cd1", "_PARENT_SPAN_ID": null, "_SPAN_KIND": "SPAN_KIND_INTERNAL", "_RECORD": "{\n \"kind\": \"SPAN_KIND_INTERNAL\",\n \"name\": \"snow.auto_instrumented\",\n \"status\": {\n \"code\": \"STATUS_CODE_UNSET\"\n }\n}", "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_SKRUK_DB\",\n \"db.user\": \"SYSTEM\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_SKRUK_DB\",\n \"snow.executable.id\": 51719,\n \"snow.executable.name\": \"LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.query.id\": \"01ba3bcc-0412-e352-0051-0c031e22c1a6\",\n \"snow.schema.id\": 6167,\n \"snow.schema.name\": \"STATUS\",\n \"snow.session.id\": 22812680207726014,\n \"snow.session.role.primary.id\": 567463,\n \"snow.session.role.primary.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snow.user.id\": 0,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_SKRUK_WH\",\n 
\"snowflake.query.id\": \"01ba3bcc-0412-e352-0051-0c031e22c1a6\",\n \"snowflake.role.name\": \"DTAGENT_SKRUK_ADMIN\",\n \"snowflake.schema.name\": \"STATUS\",\n \"snowflake.warehouse.name\": \"DTAGENT_SKRUK_WH\",\n \"telemetry.sdk.language\": \"sql\"\n}"} +{"QUERY_ID": "01ba3bc5-0412-e34b-0051-0c031e22184a", "SESSION_ID": "22812680207733670", "NAME": "LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)", "START_TIME": 1760681130311440000, "END_TIME": 1760681130867380918, "STATUS_CODE": "OK", "TIMESTAMP": 1760681130311440000, "_SPAN_ID": "1fc735d0031735ea", "_TRACE_ID": "01ba3bc525c20677b026cf555d0c5a82", "_PARENT_SPAN_ID": null, "_SPAN_KIND": "SPAN_KIND_INTERNAL", "_RECORD": "{\n \"kind\": \"SPAN_KIND_INTERNAL\",\n \"name\": \"snow.auto_instrumented\",\n \"status\": {\n \"code\": \"STATUS_CODE_UNSET\"\n }\n}", "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_TEST_DB\",\n \"db.user\": \"TEST.USER\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_TEST_DB\",\n \"snow.executable.id\": 51719,\n \"snow.executable.name\": \"LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.query.id\": \"01ba3bc5-0412-e34b-0051-0c031e22184a\",\n \"snow.schema.id\": 6167,\n \"snow.schema.name\": \"STATUS\",\n \"snow.session.id\": 22812680207733670,\n \"snow.session.role.primary.id\": 567483,\n \"snow.session.role.primary.name\": \"DTAGENT_TEST_VIEWER\",\n \"snow.user.id\": 361,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_TEST_WH\",\n \"snowflake.query.id\": \"01ba3bc5-0412-e34b-0051-0c031e22184a\",\n \"snowflake.role.name\": \"DTAGENT_TEST_VIEWER\",\n \"snowflake.schema.name\": \"STATUS\",\n \"snowflake.warehouse.name\": \"DTAGENT_TEST_WH\",\n 
\"telemetry.sdk.language\": \"sql\"\n}"} +{"QUERY_ID": "01ba3bcc-0412-e352-0051-0c031e22c1a6", "SESSION_ID": "22812680207726014", "NAME": "LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)", "START_TIME": 1760681130312327000, "END_TIME": 1760681130947252537, "STATUS_CODE": "OK", "TIMESTAMP": 1760681130312327000, "_SPAN_ID": "0235c0abc1e8a9ce", "_TRACE_ID": "01ba3bcc4289735fc8d0fe322b035cd1", "_PARENT_SPAN_ID": null, "_SPAN_KIND": "SPAN_KIND_INTERNAL", "_RECORD": "{\n \"kind\": \"SPAN_KIND_INTERNAL\",\n \"name\": \"snow.auto_instrumented\",\n \"status\": {\n \"code\": \"STATUS_CODE_UNSET\"\n }\n}", "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_TEST_DB\",\n \"db.user\": \"SYSTEM\",\n \"snow.database.id\": 632,\n \"snow.database.name\": \"DTAGENT_TEST_DB\",\n \"snow.executable.id\": 51719,\n \"snow.executable.name\": \"LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)\",\n \"snow.executable.type\": \"PROCEDURE\",\n \"snow.owner.id\": 567463,\n \"snow.owner.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.query.id\": \"01ba3bcc-0412-e352-0051-0c031e22c1a6\",\n \"snow.schema.id\": 6167,\n \"snow.schema.name\": \"STATUS\",\n \"snow.session.id\": 22812680207726014,\n \"snow.session.role.primary.id\": 567463,\n \"snow.session.role.primary.name\": \"DTAGENT_TEST_ADMIN\",\n \"snow.user.id\": 0,\n \"snow.warehouse.id\": 4649,\n \"snow.warehouse.name\": \"DTAGENT_TEST_WH\",\n \"snowflake.query.id\": \"01ba3bcc-0412-e352-0051-0c031e22c1a6\",\n \"snowflake.role.name\": \"DTAGENT_TEST_ADMIN\",\n \"snowflake.schema.name\": \"STATUS\",\n \"snowflake.warehouse.name\": \"DTAGENT_TEST_WH\",\n \"telemetry.sdk.language\": \"sql\"\n}"} diff --git a/test/test_data/event_log_spans.pkl b/test/test_data/event_log_spans.pkl deleted file mode 100644 index cc2a2a78..00000000 --- a/test/test_data/event_log_spans.pkl +++ /dev/null 
@@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bb0c3a7e38d822121cb08144db3d20f39636b9a170322e85d20d7cb00237b8c2 -size 7564 diff --git a/test/test_data/event_usage.pkl b/test/test_data/event_usage.pkl deleted file mode 100644 index c697caa6..00000000 --- a/test/test_data/event_usage.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:70d6ccd13c92fefb08dab57ed21f5b91f9751eeb982a0cc889352b8f78d81b4a -size 4439 diff --git a/test/test_data/inbound_shares.ndjson b/test/test_data/inbound_shares.ndjson deleted file mode 100644 index 7258a616..00000000 --- a/test/test_data/inbound_shares.ndjson +++ /dev/null @@ -1,2 +0,0 @@ -{"_MESSAGE": "Inbound share details for BIET_MONITORING_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"CI360_SHARE_MONITORING_DB\",\n \"snowflake.share.name\": \"BIET_MONITORING_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.has_details_reported\": true,\n \"snowflake.share.is_secure_objects_only\": \"\",\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"WMBJBCQ.CI360TESTACCOUNT\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{}"} -{"_MESSAGE": "Inbound share details for BIET_MONITORING_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"DT_SHARE_MONITORING_DB\",\n \"snowflake.share.name\": \"BIET_MONITORING_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.has_details_reported\": true,\n \"snowflake.share.is_secure_objects_only\": \"\",\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"WMBJBCQ.DYNATRACEDIGITALBUSINESSDW\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{}"} diff --git a/test/test_data/inbound_shares.pkl b/test/test_data/inbound_shares.pkl deleted file mode 100644 index b4c298d2..00000000 --- 
a/test/test_data/inbound_shares.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2553b24ac84bcb068140512d11af7845a29c366686ada9cd835ac737c06dc0c6 -size 89005 diff --git a/test/test_data/login_history.ndjson b/test/test_data/login_history.ndjson index e1281c80..e19520a0 100644 --- a/test/test_data/login_history.ndjson +++ b/test/test_data/login_history.ndjson @@ -1,2 +1,2 @@ -{"TIMESTAMP": 1760681131197570000, "_message": "LOGIN: SEBASTIAN.KRUK", "DIMENSIONS": "{\n \"client.ip\": \"82.177.196.146\",\n \"client.type\": \"OTHER\",\n \"db.user\": \"SEBASTIAN.KRUK\",\n \"event.name\": \"LOGIN\"\n}", "ATTRIBUTES": "{\n \"authentiacation.factor.first\": \"SAML2_ASSERTION\",\n \"client.version\": \"1.22.1\",\n \"event.id\": 18260702004593906,\n \"event.related_id\": 0,\n \"status.code\": \"OK\"\n}"} -{"TIMESTAMP": 1760681131222874000, "_message": "LOGIN: SNOWAGENT_ADMIN", "DIMENSIONS": "{\n \"client.ip\": \"52.29.224.53\",\n \"client.type\": \"OTHER\",\n \"db.user\": \"SNOWAGENT_ADMIN\",\n \"event.name\": \"LOGIN\"\n}", "ATTRIBUTES": "{\n \"authentiacation.factor.first\": \"PASSWORD\",\n \"client.version\": \"1.21.0\",\n \"event.id\": 18260702004595858,\n \"event.related_id\": 0,\n \"status.code\": \"OK\"\n}"} +{"TIMESTAMP": 1760681131197570000, "_message": "LOGIN: TEST.USER", "DIMENSIONS": "{\n \"client.ip\": \"10.0.0.1\",\n \"client.type\": \"OTHER\",\n \"db.user\": \"TEST.USER\",\n \"event.name\": \"LOGIN\"\n}", "ATTRIBUTES": "{\n \"authentiacation.factor.first\": \"SAML2_ASSERTION\",\n \"client.version\": \"1.22.1\",\n \"event.id\": 18260702004593906,\n \"event.related_id\": 0,\n \"status.code\": \"OK\"\n}"} +{"TIMESTAMP": 1760681131222874000, "_message": "LOGIN: SNOWAGENT_ADMIN", "DIMENSIONS": "{\n \"client.ip\": \"10.0.0.1\",\n \"client.type\": \"OTHER\",\n \"db.user\": \"SNOWAGENT_ADMIN\",\n \"event.name\": \"LOGIN\"\n}", "ATTRIBUTES": "{\n \"authentiacation.factor.first\": \"PASSWORD\",\n \"client.version\": \"1.21.0\",\n 
\"event.id\": 18260702004595858,\n \"event.related_id\": 0,\n \"status.code\": \"OK\"\n}"} diff --git a/test/test_data/login_history.pkl b/test/test_data/login_history.pkl deleted file mode 100644 index 4958a946..00000000 --- a/test/test_data/login_history.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9353902c3c9d7c285531e25099634399bcac09d444f8c10c0790a3b1f01f7de7 -size 2429 diff --git a/test/test_data/wh_usage_events.ndjson b/test/test_data/login_history_sessions.ndjson similarity index 100% rename from test/test_data/wh_usage_events.ndjson rename to test/test_data/login_history_sessions.ndjson diff --git a/test/test_data/outbound_shares.ndjson b/test/test_data/outbound_shares.ndjson deleted file mode 100644 index d31e0f35..00000000 --- a/test/test_data/outbound_shares.ndjson +++ /dev/null @@ -1,2 +0,0 @@ -{"_MESSAGE": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"DATA_SCIENTIST_DEV_DB\",\n \"snowflake.grant.name\": \"DATA_SCIENTIST_DEV_DB\",\n \"snowflake.share.name\": \"DATA_SCIENTIST_DEVEL_DS_CI360_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.grant.by\": \"DEMIGOD\",\n \"snowflake.grant.grantee\": \"DEVDYNATRACEDIGITALBUSINESSDW.DATA_SCIENTIST_DEVEL_DS_CI360_SHARE\",\n \"snowflake.grant.on\": \"DATABASE\",\n \"snowflake.grant.option\": \"false\",\n \"snowflake.grant.privilege\": \"USAGE\",\n \"snowflake.grant.to\": \"SHARE\",\n \"snowflake.share.is_secure_objects_only\": \"true\",\n \"snowflake.share.kind\": \"OUTBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"DEMIGOD\",\n \"snowflake.share.shared_from\": \"WMBJBCQ.DEVDYNATRACEDIGITALBUSINESSDW\",\n \"snowflake.share.shared_to\": \"WMBJBCQ.CI360TESTACCOUNT\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.grant.created_on\": 1687246726499000000\n}"} -{"_MESSAGE": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", "DIMENSIONS": "{\n \"db.namespace\": 
\"DATA_SCIENTIST_DEV_DB\",\n \"snowflake.grant.name\": \"DATA_SCIENTIST_DEV_DB.ACCOUNT_EXPERIENCE\",\n \"snowflake.share.name\": \"DATA_SCIENTIST_DEVEL_DS_CI360_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.grant.by\": \"INTEGRATION_CONSUMPTION_FORECASTING_ROLE\",\n \"snowflake.grant.grantee\": \"DEVDYNATRACEDIGITALBUSINESSDW.DATA_SCIENTIST_DEVEL_DS_CI360_SHARE\",\n \"snowflake.grant.on\": \"SCHEMA\",\n \"snowflake.grant.option\": \"false\",\n \"snowflake.grant.privilege\": \"USAGE\",\n \"snowflake.grant.to\": \"SHARE\",\n \"snowflake.share.is_secure_objects_only\": \"true\",\n \"snowflake.share.kind\": \"OUTBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"DEMIGOD\",\n \"snowflake.share.shared_from\": \"WMBJBCQ.DEVDYNATRACEDIGITALBUSINESSDW\",\n \"snowflake.share.shared_to\": \"WMBJBCQ.CI360TESTACCOUNT\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.grant.created_on\": 1668416468928000000\n}"} diff --git a/test/test_data/outbound_shares.pkl b/test/test_data/outbound_shares.pkl deleted file mode 100644 index 224ffd12..00000000 --- a/test/test_data/outbound_shares.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:4b2cd0ec025c184efcbf8ce4a58b20eae57ca76fb2fe2497276e2be5e3654ff4 -size 183247 diff --git a/test/test_data/recent_queries2.ndjson b/test/test_data/query_history.ndjson similarity index 98% rename from test/test_data/recent_queries2.ndjson rename to test/test_data/query_history.ndjson index b2f285b3..3a6feac4 100644 --- a/test/test_data/recent_queries2.ndjson +++ b/test/test_data/query_history.ndjson @@ -1,3 +1,3 @@ -{"QUERY_ID": "01bcb05d-0415-b618-0047-e383330c174a", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172e", "SESSION_ID": 20234875336090534, "NAME": "select DTAGENT_DB", "START_TIME": 1760681131568692000, "END_TIME": 1760681133488692000, "_SPAN_ID": null, "_TRACE_ID": null, "STATUS_CODE": "OK", "DIMENSIONS": "{\n \"db.collection.name\": 
\"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"db.namespace\": \"DTAGENT_DB\",\n \"db.operation.name\": \"SELECT\",\n \"db.snowflake.dbs\": [\n \"DTAGENT_DB\"\n ],\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_WH\"\n}", "ATTRIBUTES": "{\n \"db.query.text\": \"with cte_all_queries as (\\n (\\n select * from TABLE(result_scan(:QID_0))\\n )\\n union all\\n (\\n select * from TABLE(result_scan(:QID_1))\\n where END_TIME > DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('active_queries')\\n )\\n )\\n , cte_active_queries as (\\n select \\n START_TIME,\\n END_TIME,\\n\\n QUERY_ID,\\n SESSION_ID,\\n DATABASE_NAME,\\n SCHEMA_NAME,\\n\\n QUERY_TEXT,\\n QUERY_TYPE,\\n QUERY_TAG,\\n QUERY_HASH,\\n QUERY_HASH_VERSION,\\n QUERY_PARAMETERIZED_HASH,\\n QUERY_PARAMETERIZED_HASH_VERSION,\\n\\n USER_NAME,\\n ROLE_NAME,\\n\\n WAREHOUSE_NAME,\\n WAREHOUSE_TYPE,\\n\\n EXECUTION_STATUS,\\n ERROR_CODE,\\n ERROR_MESSAGE,\\n\\n // metrics\\n RUNNING_TIME,\\n EXECUTION_TIME,\\n COMPILATION_TIME,\\n TOTAL_ELAPSED_TIME,\\n BYTES_WRITTEN_TO_RESULT,\\n ROWS_WRITTEN_TO_RESULT,\\n from cte_all_queries aq\\n where\\n ( array_size(DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])) = 0\\n or array_contains(EXECUTION_STATUS::variant, DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])::array) )\\n )\\n select \\n qh.END_TIME::timestamp_ltz as TIMESTAMP,\\n\\n qh.query_id as QUERY_ID,\\n qh.session_id as SESSION_ID,\\n\\n CONCAT(\\n 'SQL query ',\\n qh.execution_status,\\n ' at ',\\n COALESCE(qh.database_name, '')\\n ) as NAME,\\n NAME as _MESSAGE,\\n\\n \\n extract(epoch_nanosecond from qh.start_time) as START_TIME,\\n extract(epoch_nanosecond from qh.end_time) as END_TIME,\\n\\n \\n OBJECT_CONSTRUCT(\\n 'db.namespace', qh.database_name,\\n 'snowflake.warehouse.name', qh.warehouse_name,\\n 'db.user', 
qh.user_name,\\n 'snowflake.role.name', qh.role_name,\\n 'snowflake.query.execution_status', qh.execution_status\\n ) as DIMENSIONS,\\n \\n OBJECT_CONSTRUCT(\\n 'db.query.text', qh.query_text,\\n 'db.operation.name', qh.query_type,\\n 'session.id', qh.session_id,\\n 'snowflake.query.id', qh.query_id,\\n 'snowflake.query.tag', qh.query_tag,\\n 'snowflake.query.hash', qh.query_hash,\\n 'snowflake.query.hash_version', qh.query_hash_version,\\n 'snowflake.query.parametrized_hash', qh.query_parameterized_hash,\\n 'snowflake.query.parametrized_hash_version', qh.query_parameterized_hash_version,\\n 'snowflake.error.code', qh.error_code,\\n 'snowflake.error.message', qh.error_message,\\n 'snowflake.warehouse.type', qh.warehouse_type,\\n 'snowflake.schema.name', qh.schema_name\\n ) as ATTRIBUTES,\\n \\n OBJECT_CONSTRUCT(\\n 'snowflake.time.running', qh.running_time,\\n 'snowflake.time.execution', qh.execution_time,\\n 'snowflake.time.compilation', qh.compilation_time,\\n 'snowflake.time.total_elapsed', qh.total_elapsed_time,\\n 'snowflake.data.written_to_result', qh.bytes_written_to_result,\\n 'snowflake.rows.written_to_result', qh.rows_written_to_result\\n ) as METRICS\\n from \\n cte_active_queries qh\\n order by \\n TIMESTAMP asc\",\n \"db.snowflake.tables\": [\n \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"DTAGENT_DB.CONFIG.CONFIGURATIONS\"\n ],\n \"db.snowflake.views\": [],\n \"session.id\": 20234875336090534,\n \"snowflake.cluster_number\": 1,\n \"snowflake.database.id\": 1868,\n \"snowflake.query.hash\": \"dbf5cf366b64603513aa8097d5d3174c\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bcb05d-0415-b618-0047-e383330c174a\",\n \"snowflake.query.is_client_generated\": false,\n \"snowflake.query.parametrized_hash\": \"dbf5cf366b64603513aa8097d5d3174c\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.parent_id\": \"01bcb05d-0415-b618-0047-e383330c172e\",\n \"snowflake.query.tag\": 
\"dt_snowagent.version:0.8.1.plugin:ActiveQueriesPlugin.2025-05-30T04:53:15.857Z\",\n \"snowflake.query.transaction_id\": 0,\n \"snowflake.release_version\": \"9.14.2\",\n \"snowflake.role.type\": \"ROLE\",\n \"snowflake.schema.id\": 40234,\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.cluster.number\": 1,\n \"snowflake.warehouse.id\": 3726,\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.acceleration.data.scanned\": 0,\n \"snowflake.acceleration.partitions.scanned\": 0,\n \"snowflake.acceleration.scale_factor.max\": 0,\n \"snowflake.credits.cloud_services\": 9.200000000000000e-05,\n \"snowflake.data.deleted\": 0,\n \"snowflake.data.read.from_result\": 0,\n \"snowflake.data.scanned\": 5640704,\n \"snowflake.data.scanned_from_cache\": 0.000000000000000e+00,\n \"snowflake.data.sent_over_the_network\": 0,\n \"snowflake.data.spilled.local\": 0,\n \"snowflake.data.spilled.remote\": 0,\n \"snowflake.data.transferred.inbound\": 0,\n \"snowflake.data.transferred.outbound\": 0,\n \"snowflake.data.written\": 0,\n \"snowflake.data.written_to_result\": 117512,\n \"snowflake.external_functions.data.received\": 0,\n \"snowflake.external_functions.data.sent\": 0,\n \"snowflake.external_functions.invocations\": 0,\n \"snowflake.external_functions.rows.received\": 0,\n \"snowflake.external_functions.rows.sent\": 0,\n \"snowflake.load.used\": 100,\n \"snowflake.partitions.scanned\": 47,\n \"snowflake.partitions.total\": 43,\n \"snowflake.rows.deleted\": 0,\n \"snowflake.rows.inserted\": 0,\n \"snowflake.rows.unloaded\": 0,\n \"snowflake.rows.updated\": 0,\n \"snowflake.rows.written_to_result\": 798,\n \"snowflake.time.child_queries_wait\": 0,\n \"snowflake.time.compilation\": 520,\n \"snowflake.time.execution\": 1381,\n \"snowflake.time.list_external_files\": 19,\n \"snowflake.time.queued.overload\": 0,\n \"snowflake.time.queued.provisioning\": 0,\n \"snowflake.time.repair\": 0,\n 
\"snowflake.time.total_elapsed\": 1920,\n \"snowflake.time.transaction_blocked\": 0\n}", "IS_PARENT": false, "IS_ROOT": false} -{"QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172e", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172a", "SESSION_ID": 20234875336090534, "NAME": "call DTAGENT_DB", "START_TIME": 1760681131568872000, "END_TIME": 1760681138304872000, "_SPAN_ID": "4328cfa95b61ec83", "_TRACE_ID": "01bcb05d68938b78428142dea30a6160", "STATUS_CODE": "OK", "DIMENSIONS": "{\n \"db.collection.name\": \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"db.namespace\": \"DTAGENT_DB\",\n \"db.operation.name\": \"CALL\",\n \"db.snowflake.dbs\": [\n \"DTAGENT_DB\"\n ],\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_WH\"\n}", "ATTRIBUTES": "{\n \"db.query.text\": \"with cte_all_queries as (\\n (\\n select * from TABLE(DTAGENT_DB.APP.F_GET_RUNNING_QUERIES())\\n )\\n union all\\n (\\n select * from TABLE(DTAGENT_DB.APP.F_GET_FINISHED_QUERIES())\\n where END_TIME > DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('active_queries')\\n )\\n )\\n , cte_active_queries as (\\n select \\n START_TIME,\\n END_TIME,\\n\\n QUERY_ID,\\n SESSION_ID,\\n DATABASE_NAME,\\n SCHEMA_NAME,\\n\\n QUERY_TEXT,\\n QUERY_TYPE,\\n QUERY_TAG,\\n QUERY_HASH,\\n QUERY_HASH_VERSION,\\n QUERY_PARAMETERIZED_HASH,\\n QUERY_PARAMETERIZED_HASH_VERSION,\\n\\n USER_NAME,\\n ROLE_NAME,\\n\\n WAREHOUSE_NAME,\\n WAREHOUSE_TYPE,\\n\\n EXECUTION_STATUS,\\n ERROR_CODE,\\n ERROR_MESSAGE,\\n\\n // metrics\\n RUNNING_TIME,\\n EXECUTION_TIME,\\n COMPILATION_TIME,\\n TOTAL_ELAPSED_TIME,\\n BYTES_WRITTEN_TO_RESULT,\\n ROWS_WRITTEN_TO_RESULT,\\n from cte_all_queries aq\\n where\\n ( array_size(DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])) = 0\\n or array_contains(EXECUTION_STATUS::variant, 
DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])::array) )\\n )\\n select \\n qh.END_TIME::timestamp_ltz as TIMESTAMP,\\n\\n qh.query_id as QUERY_ID,\\n qh.session_id as SESSION_ID,\\n\\n CONCAT(\\n 'SQL query ',\\n qh.execution_status,\\n ' at ',\\n COALESCE(qh.database_name, '')\\n ) as NAME,\\n NAME as _MESSAGE,\\n\\n \\n extract(epoch_nanosecond from qh.start_time) as START_TIME,\\n extract(epoch_nanosecond from qh.end_time) as END_TIME,\\n\\n \\n OBJECT_CONSTRUCT(\\n 'db.namespace', qh.database_name,\\n 'snowflake.warehouse.name', qh.warehouse_name,\\n 'db.user', qh.user_name,\\n 'snowflake.role.name', qh.role_name,\\n 'snowflake.query.execution_status', qh.execution_status\\n ) as DIMENSIONS,\\n \\n OBJECT_CONSTRUCT(\\n 'db.query.text', qh.query_text,\\n 'db.operation.name', qh.query_type,\\n 'session.id', qh.session_id,\\n 'snowflake.query.id', qh.query_id,\\n 'snowflake.query.tag', qh.query_tag,\\n 'snowflake.query.hash', qh.query_hash,\\n 'snowflake.query.hash_version', qh.query_hash_version,\\n 'snowflake.query.parametrized_hash', qh.query_parameterized_hash,\\n 'snowflake.query.parametrized_hash_version', qh.query_parameterized_hash_version,\\n 'snowflake.error.code', qh.error_code,\\n 'snowflake.error.message', qh.error_message,\\n 'snowflake.warehouse.type', qh.warehouse_type,\\n 'snowflake.schema.name', qh.schema_name\\n ) as ATTRIBUTES,\\n \\n OBJECT_CONSTRUCT(\\n 'snowflake.time.running', qh.running_time,\\n 'snowflake.time.execution', qh.execution_time,\\n 'snowflake.time.compilation', qh.compilation_time,\\n 'snowflake.time.total_elapsed', qh.total_elapsed_time,\\n 'snowflake.data.written_to_result', qh.bytes_written_to_result,\\n 'snowflake.rows.written_to_result', qh.rows_written_to_result\\n ) as METRICS\\n from \\n cte_active_queries qh\\n order by \\n TIMESTAMP asc\",\n \"db.snowflake.tables\": [\n \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"DTAGENT_DB.CONFIG.CONFIGURATIONS\"\n ],\n 
\"db.snowflake.views\": [],\n \"session.id\": 20234875336090534,\n \"snowflake.cluster_number\": 1,\n \"snowflake.database.id\": 1868,\n \"snowflake.query.hash\": \"0d8c82d99d518393458b4a68bc8c2930\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bcb05d-0415-b618-0047-e383330c172e\",\n \"snowflake.query.is_client_generated\": false,\n \"snowflake.query.parametrized_hash\": \"0d8c82d99d518393458b4a68bc8c2930\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.parent_id\": \"01bcb05d-0415-b618-0047-e383330c172a\",\n \"snowflake.query.tag\": \"dt_snowagent.version:0.8.1.plugin:ActiveQueriesPlugin.2025-05-30T04:53:15.857Z\",\n \"snowflake.query.transaction_id\": 0,\n \"snowflake.release_version\": \"9.14.2\",\n \"snowflake.role.type\": \"ROLE\",\n \"snowflake.schema.id\": 40234,\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.cluster.number\": 1,\n \"snowflake.warehouse.id\": 3726,\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.acceleration.data.scanned\": 0,\n \"snowflake.acceleration.partitions.scanned\": 0,\n \"snowflake.acceleration.scale_factor.max\": 0,\n \"snowflake.credits.cloud_services\": 2.700000000000000e-05,\n \"snowflake.data.deleted\": 0,\n \"snowflake.data.read.from_result\": 0,\n \"snowflake.data.scanned\": 0,\n \"snowflake.data.scanned_from_cache\": 0.000000000000000e+00,\n \"snowflake.data.sent_over_the_network\": 0,\n \"snowflake.data.spilled.local\": 0,\n \"snowflake.data.spilled.remote\": 0,\n \"snowflake.data.transferred.inbound\": 0,\n \"snowflake.data.transferred.outbound\": 0,\n \"snowflake.data.written\": 0,\n \"snowflake.data.written_to_result\": 640,\n \"snowflake.external_functions.data.received\": 0,\n \"snowflake.external_functions.data.sent\": 0,\n \"snowflake.external_functions.invocations\": 0,\n \"snowflake.external_functions.rows.received\": 0,\n \"snowflake.external_functions.rows.sent\": 0,\n 
\"snowflake.load.used\": 100,\n \"snowflake.partitions.scanned\": 0,\n \"snowflake.partitions.total\": 0,\n \"snowflake.rows.deleted\": 0,\n \"snowflake.rows.inserted\": 0,\n \"snowflake.rows.unloaded\": 0,\n \"snowflake.rows.updated\": 0,\n \"snowflake.rows.written_to_result\": 798,\n \"snowflake.time.child_queries_wait\": 0,\n \"snowflake.time.compilation\": 158,\n \"snowflake.time.execution\": 6578,\n \"snowflake.time.list_external_files\": 0,\n \"snowflake.time.queued.overload\": 0,\n \"snowflake.time.queued.provisioning\": 0,\n \"snowflake.time.repair\": 0,\n \"snowflake.time.total_elapsed\": 6736,\n \"snowflake.time.transaction_blocked\": 0\n}", "IS_PARENT": true, "IS_ROOT": false} -{"QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172a", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": null, "SESSION_ID": 20234875336090534, "NAME": "call DTAGENT_DB", "START_TIME": 1760681131569144000, "END_TIME": 1760681138581144000, "_SPAN_ID": "4a57ce274789b2ec", "_TRACE_ID": "01bcb05d68938b78428142dea30a6160", "STATUS_CODE": "OK", "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_DB\",\n \"db.operation.name\": \"CALL\",\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_WH\"\n}", "ATTRIBUTES": "{\n \"db.query.text\": \"CALL DTAGENT_DB.APP.F_ACTIVE_QUERIES_INSTRUMENTED();\\n\",\n \"session.id\": 20234875336090534,\n \"snowflake.cluster_number\": 1,\n \"snowflake.database.id\": 1868,\n \"snowflake.query.hash\": \"e86eb140052655351696b76a32722b70\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bcb05d-0415-b618-0047-e383330c172a\",\n \"snowflake.query.is_client_generated\": false,\n \"snowflake.query.parametrized_hash\": \"e86eb140052655351696b76a32722b70\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.tag\": \"dt_snowagent.version:0.8.1.plugin:ActiveQueriesPlugin.2025-05-30T04:53:15.857Z\",\n 
\"snowflake.query.transaction_id\": 0,\n \"snowflake.release_version\": \"9.14.2\",\n \"snowflake.role.type\": \"ROLE\",\n \"snowflake.schema.id\": 40234,\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.cluster.number\": 1,\n \"snowflake.warehouse.id\": 3726,\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.acceleration.data.scanned\": 0,\n \"snowflake.acceleration.partitions.scanned\": 0,\n \"snowflake.acceleration.scale_factor.max\": 0,\n \"snowflake.credits.cloud_services\": 1.000000000000000e-05,\n \"snowflake.data.deleted\": 0,\n \"snowflake.data.read.from_result\": 0,\n \"snowflake.data.scanned\": 0,\n \"snowflake.data.scanned_from_cache\": 0.000000000000000e+00,\n \"snowflake.data.sent_over_the_network\": 0,\n \"snowflake.data.spilled.local\": 0,\n \"snowflake.data.spilled.remote\": 0,\n \"snowflake.data.transferred.inbound\": 0,\n \"snowflake.data.transferred.outbound\": 0,\n \"snowflake.data.written\": 0,\n \"snowflake.data.written_to_result\": 640,\n \"snowflake.external_functions.data.received\": 0,\n \"snowflake.external_functions.data.sent\": 0,\n \"snowflake.external_functions.invocations\": 0,\n \"snowflake.external_functions.rows.received\": 0,\n \"snowflake.external_functions.rows.sent\": 0,\n \"snowflake.load.used\": 100,\n \"snowflake.partitions.scanned\": 0,\n \"snowflake.partitions.total\": 0,\n \"snowflake.rows.deleted\": 0,\n \"snowflake.rows.inserted\": 0,\n \"snowflake.rows.unloaded\": 0,\n \"snowflake.rows.updated\": 0,\n \"snowflake.rows.written_to_result\": 798,\n \"snowflake.time.child_queries_wait\": 0,\n \"snowflake.time.compilation\": 43,\n \"snowflake.time.execution\": 6969,\n \"snowflake.time.list_external_files\": 0,\n \"snowflake.time.queued.overload\": 0,\n \"snowflake.time.queued.provisioning\": 0,\n \"snowflake.time.repair\": 0,\n \"snowflake.time.total_elapsed\": 7012,\n \"snowflake.time.transaction_blocked\": 0\n}", "IS_PARENT": true, 
"IS_ROOT": true} +{"QUERY_ID": "01bcb05d-0415-b618-0047-e383330c174a", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172e", "SESSION_ID": 20234875336090534, "NAME": "select DTAGENT_DB", "START_TIME": 1748588001169000000, "END_TIME": 1748588003089000000, "_SPAN_ID": null, "_TRACE_ID": null, "STATUS_CODE": "OK", "DIMENSIONS": "{\n \"db.collection.name\": \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"db.namespace\": \"DTAGENT_DB\",\n \"db.operation.name\": \"SELECT\",\n \"db.snowflake.dbs\": [\n \"DTAGENT_DB\"\n ],\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_WH\"\n}", "ATTRIBUTES": "{\n \"db.query.text\": \"with cte_all_queries as (\\n (\\n select * from TABLE(result_scan(:QID_0))\\n )\\n union all\\n (\\n select * from TABLE(result_scan(:QID_1))\\n where END_TIME > DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('active_queries')\\n )\\n )\\n , cte_active_queries as (\\n select \\n START_TIME,\\n END_TIME,\\n\\n QUERY_ID,\\n SESSION_ID,\\n DATABASE_NAME,\\n SCHEMA_NAME,\\n\\n QUERY_TEXT,\\n QUERY_TYPE,\\n QUERY_TAG,\\n QUERY_HASH,\\n QUERY_HASH_VERSION,\\n QUERY_PARAMETERIZED_HASH,\\n QUERY_PARAMETERIZED_HASH_VERSION,\\n\\n USER_NAME,\\n ROLE_NAME,\\n\\n WAREHOUSE_NAME,\\n WAREHOUSE_TYPE,\\n\\n EXECUTION_STATUS,\\n ERROR_CODE,\\n ERROR_MESSAGE,\\n\\n // metrics\\n RUNNING_TIME,\\n EXECUTION_TIME,\\n COMPILATION_TIME,\\n TOTAL_ELAPSED_TIME,\\n BYTES_WRITTEN_TO_RESULT,\\n ROWS_WRITTEN_TO_RESULT,\\n from cte_all_queries aq\\n where\\n ( array_size(DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])) = 0\\n or array_contains(EXECUTION_STATUS::variant, DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])::array) )\\n )\\n select \\n qh.END_TIME::timestamp_ltz as TIMESTAMP,\\n\\n qh.query_id as QUERY_ID,\\n qh.session_id as SESSION_ID,\\n\\n 
CONCAT(\\n 'SQL query ',\\n qh.execution_status,\\n ' at ',\\n COALESCE(qh.database_name, '')\\n ) as NAME,\\n NAME as _MESSAGE,\\n\\n \\n extract(epoch_nanosecond from qh.start_time) as START_TIME,\\n extract(epoch_nanosecond from qh.end_time) as END_TIME,\\n\\n \\n OBJECT_CONSTRUCT(\\n 'db.namespace', qh.database_name,\\n 'snowflake.warehouse.name', qh.warehouse_name,\\n 'db.user', qh.user_name,\\n 'snowflake.role.name', qh.role_name,\\n 'snowflake.query.execution_status', qh.execution_status\\n ) as DIMENSIONS,\\n \\n OBJECT_CONSTRUCT(\\n 'db.query.text', qh.query_text,\\n 'db.operation.name', qh.query_type,\\n 'session.id', qh.session_id,\\n 'snowflake.query.id', qh.query_id,\\n 'snowflake.query.tag', qh.query_tag,\\n 'snowflake.query.hash', qh.query_hash,\\n 'snowflake.query.hash_version', qh.query_hash_version,\\n 'snowflake.query.parametrized_hash', qh.query_parameterized_hash,\\n 'snowflake.query.parametrized_hash_version', qh.query_parameterized_hash_version,\\n 'snowflake.error.code', qh.error_code,\\n 'snowflake.error.message', qh.error_message,\\n 'snowflake.warehouse.type', qh.warehouse_type,\\n 'snowflake.schema.name', qh.schema_name\\n ) as ATTRIBUTES,\\n \\n OBJECT_CONSTRUCT(\\n 'snowflake.time.running', qh.running_time,\\n 'snowflake.time.execution', qh.execution_time,\\n 'snowflake.time.compilation', qh.compilation_time,\\n 'snowflake.time.total_elapsed', qh.total_elapsed_time,\\n 'snowflake.data.written_to_result', qh.bytes_written_to_result,\\n 'snowflake.rows.written_to_result', qh.rows_written_to_result\\n ) as METRICS\\n from \\n cte_active_queries qh\\n order by \\n TIMESTAMP asc\",\n \"db.snowflake.tables\": [\n \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"DTAGENT_DB.CONFIG.CONFIGURATIONS\"\n ],\n \"db.snowflake.views\": [],\n \"session.id\": 20234875336090534,\n \"snowflake.cluster_number\": 1,\n \"snowflake.database.id\": 1868,\n \"snowflake.query.hash\": \"dbf5cf366b64603513aa8097d5d3174c\",\n \"snowflake.query.hash_version\": 
2,\n \"snowflake.query.id\": \"01bcb05d-0415-b618-0047-e383330c174a\",\n \"snowflake.query.is_client_generated\": false,\n \"snowflake.query.parametrized_hash\": \"dbf5cf366b64603513aa8097d5d3174c\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.parent_id\": \"01bcb05d-0415-b618-0047-e383330c172e\",\n \"snowflake.query.tag\": \"dt_snowagent.version:0.8.1.plugin:ActiveQueriesPlugin.2025-05-30T04:53:15.857Z\",\n \"snowflake.query.transaction_id\": 0,\n \"snowflake.release_version\": \"9.14.2\",\n \"snowflake.role.type\": \"ROLE\",\n \"snowflake.schema.id\": 40234,\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.cluster.number\": 1,\n \"snowflake.warehouse.id\": 3726,\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.acceleration.data.scanned\": 0,\n \"snowflake.acceleration.partitions.scanned\": 0,\n \"snowflake.acceleration.scale_factor.max\": 0,\n \"snowflake.credits.cloud_services\": 9.200000000000000e-05,\n \"snowflake.data.deleted\": 0,\n \"snowflake.data.read.from_result\": 0,\n \"snowflake.data.scanned\": 5640704,\n \"snowflake.data.scanned_from_cache\": 0.000000000000000e+00,\n \"snowflake.data.sent_over_the_network\": 0,\n \"snowflake.data.spilled.local\": 0,\n \"snowflake.data.spilled.remote\": 0,\n \"snowflake.data.transferred.inbound\": 0,\n \"snowflake.data.transferred.outbound\": 0,\n \"snowflake.data.written\": 0,\n \"snowflake.data.written_to_result\": 117512,\n \"snowflake.external_functions.data.received\": 0,\n \"snowflake.external_functions.data.sent\": 0,\n \"snowflake.external_functions.invocations\": 0,\n \"snowflake.external_functions.rows.received\": 0,\n \"snowflake.external_functions.rows.sent\": 0,\n \"snowflake.load.used\": 100,\n \"snowflake.partitions.scanned\": 47,\n \"snowflake.partitions.total\": 43,\n \"snowflake.rows.deleted\": 0,\n \"snowflake.rows.inserted\": 0,\n \"snowflake.rows.unloaded\": 0,\n 
\"snowflake.rows.updated\": 0,\n \"snowflake.rows.written_to_result\": 798,\n \"snowflake.time.child_queries_wait\": 0,\n \"snowflake.time.compilation\": 520,\n \"snowflake.time.execution\": 1381,\n \"snowflake.time.list_external_files\": 19,\n \"snowflake.time.queued.overload\": 0,\n \"snowflake.time.queued.provisioning\": 0,\n \"snowflake.time.repair\": 0,\n \"snowflake.time.total_elapsed\": 1920,\n \"snowflake.time.transaction_blocked\": 0\n}", "IS_PARENT": false, "IS_ROOT": false} +{"QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172e", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172a", "SESSION_ID": 20234875336090534, "NAME": "call DTAGENT_DB", "START_TIME": 1748587996524000000, "END_TIME": 1748588003260000000, "_SPAN_ID": "4328cfa95b61ec83", "_TRACE_ID": "01bcb05d68938b78428142dea30a6160", "STATUS_CODE": "OK", "DIMENSIONS": "{\n \"db.collection.name\": \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"db.namespace\": \"DTAGENT_DB\",\n \"db.operation.name\": \"CALL\",\n \"db.snowflake.dbs\": [\n \"DTAGENT_DB\"\n ],\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_WH\"\n}", "ATTRIBUTES": "{\n \"db.query.text\": \"with cte_all_queries as (\\n (\\n select * from TABLE(DTAGENT_DB.APP.F_GET_RUNNING_QUERIES())\\n )\\n union all\\n (\\n select * from TABLE(DTAGENT_DB.APP.F_GET_FINISHED_QUERIES())\\n where END_TIME > DTAGENT_DB.STATUS.F_LAST_PROCESSED_TS('active_queries')\\n )\\n )\\n , cte_active_queries as (\\n select \\n START_TIME,\\n END_TIME,\\n\\n QUERY_ID,\\n SESSION_ID,\\n DATABASE_NAME,\\n SCHEMA_NAME,\\n\\n QUERY_TEXT,\\n QUERY_TYPE,\\n QUERY_TAG,\\n QUERY_HASH,\\n QUERY_HASH_VERSION,\\n QUERY_PARAMETERIZED_HASH,\\n QUERY_PARAMETERIZED_HASH_VERSION,\\n\\n USER_NAME,\\n ROLE_NAME,\\n\\n WAREHOUSE_NAME,\\n WAREHOUSE_TYPE,\\n\\n EXECUTION_STATUS,\\n ERROR_CODE,\\n ERROR_MESSAGE,\\n\\n // metrics\\n 
RUNNING_TIME,\\n EXECUTION_TIME,\\n COMPILATION_TIME,\\n TOTAL_ELAPSED_TIME,\\n BYTES_WRITTEN_TO_RESULT,\\n ROWS_WRITTEN_TO_RESULT,\\n from cte_all_queries aq\\n where\\n ( array_size(DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])) = 0\\n or array_contains(EXECUTION_STATUS::variant, DTAGENT_DB.CONFIG.F_GET_CONFIG_VALUE('plugins.active_queries.report_execution_status', [])::array) )\\n )\\n select \\n qh.END_TIME::timestamp_ltz as TIMESTAMP,\\n\\n qh.query_id as QUERY_ID,\\n qh.session_id as SESSION_ID,\\n\\n CONCAT(\\n 'SQL query ',\\n qh.execution_status,\\n ' at ',\\n COALESCE(qh.database_name, '')\\n ) as NAME,\\n NAME as _MESSAGE,\\n\\n \\n extract(epoch_nanosecond from qh.start_time) as START_TIME,\\n extract(epoch_nanosecond from qh.end_time) as END_TIME,\\n\\n \\n OBJECT_CONSTRUCT(\\n 'db.namespace', qh.database_name,\\n 'snowflake.warehouse.name', qh.warehouse_name,\\n 'db.user', qh.user_name,\\n 'snowflake.role.name', qh.role_name,\\n 'snowflake.query.execution_status', qh.execution_status\\n ) as DIMENSIONS,\\n \\n OBJECT_CONSTRUCT(\\n 'db.query.text', qh.query_text,\\n 'db.operation.name', qh.query_type,\\n 'session.id', qh.session_id,\\n 'snowflake.query.id', qh.query_id,\\n 'snowflake.query.tag', qh.query_tag,\\n 'snowflake.query.hash', qh.query_hash,\\n 'snowflake.query.hash_version', qh.query_hash_version,\\n 'snowflake.query.parametrized_hash', qh.query_parameterized_hash,\\n 'snowflake.query.parametrized_hash_version', qh.query_parameterized_hash_version,\\n 'snowflake.error.code', qh.error_code,\\n 'snowflake.error.message', qh.error_message,\\n 'snowflake.warehouse.type', qh.warehouse_type,\\n 'snowflake.schema.name', qh.schema_name\\n ) as ATTRIBUTES,\\n \\n OBJECT_CONSTRUCT(\\n 'snowflake.time.running', qh.running_time,\\n 'snowflake.time.execution', qh.execution_time,\\n 'snowflake.time.compilation', qh.compilation_time,\\n 'snowflake.time.total_elapsed', qh.total_elapsed_time,\\n 
'snowflake.data.written_to_result', qh.bytes_written_to_result,\\n 'snowflake.rows.written_to_result', qh.rows_written_to_result\\n ) as METRICS\\n from \\n cte_active_queries qh\\n order by \\n TIMESTAMP asc\",\n \"db.snowflake.tables\": [\n \"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG\",\n \"DTAGENT_DB.CONFIG.CONFIGURATIONS\"\n ],\n \"db.snowflake.views\": [],\n \"session.id\": 20234875336090534,\n \"snowflake.cluster_number\": 1,\n \"snowflake.database.id\": 1868,\n \"snowflake.query.hash\": \"0d8c82d99d518393458b4a68bc8c2930\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": \"01bcb05d-0415-b618-0047-e383330c172e\",\n \"snowflake.query.is_client_generated\": false,\n \"snowflake.query.parametrized_hash\": \"0d8c82d99d518393458b4a68bc8c2930\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.parent_id\": \"01bcb05d-0415-b618-0047-e383330c172a\",\n \"snowflake.query.tag\": \"dt_snowagent.version:0.8.1.plugin:ActiveQueriesPlugin.2025-05-30T04:53:15.857Z\",\n \"snowflake.query.transaction_id\": 0,\n \"snowflake.release_version\": \"9.14.2\",\n \"snowflake.role.type\": \"ROLE\",\n \"snowflake.schema.id\": 40234,\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.cluster.number\": 1,\n \"snowflake.warehouse.id\": 3726,\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.acceleration.data.scanned\": 0,\n \"snowflake.acceleration.partitions.scanned\": 0,\n \"snowflake.acceleration.scale_factor.max\": 0,\n \"snowflake.credits.cloud_services\": 2.700000000000000e-05,\n \"snowflake.data.deleted\": 0,\n \"snowflake.data.read.from_result\": 0,\n \"snowflake.data.scanned\": 0,\n \"snowflake.data.scanned_from_cache\": 0.000000000000000e+00,\n \"snowflake.data.sent_over_the_network\": 0,\n \"snowflake.data.spilled.local\": 0,\n \"snowflake.data.spilled.remote\": 0,\n \"snowflake.data.transferred.inbound\": 0,\n \"snowflake.data.transferred.outbound\": 
0,\n \"snowflake.data.written\": 0,\n \"snowflake.data.written_to_result\": 640,\n \"snowflake.external_functions.data.received\": 0,\n \"snowflake.external_functions.data.sent\": 0,\n \"snowflake.external_functions.invocations\": 0,\n \"snowflake.external_functions.rows.received\": 0,\n \"snowflake.external_functions.rows.sent\": 0,\n \"snowflake.load.used\": 100,\n \"snowflake.partitions.scanned\": 0,\n \"snowflake.partitions.total\": 0,\n \"snowflake.rows.deleted\": 0,\n \"snowflake.rows.inserted\": 0,\n \"snowflake.rows.unloaded\": 0,\n \"snowflake.rows.updated\": 0,\n \"snowflake.rows.written_to_result\": 798,\n \"snowflake.time.child_queries_wait\": 0,\n \"snowflake.time.compilation\": 158,\n \"snowflake.time.execution\": 6578,\n \"snowflake.time.list_external_files\": 0,\n \"snowflake.time.queued.overload\": 0,\n \"snowflake.time.queued.provisioning\": 0,\n \"snowflake.time.repair\": 0,\n \"snowflake.time.total_elapsed\": 6736,\n \"snowflake.time.transaction_blocked\": 0\n}", "IS_PARENT": true, "IS_ROOT": false} +{"QUERY_ID": "01bcb05d-0415-b618-0047-e383330c172a", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": null, "SESSION_ID": 20234875336090534, "NAME": "call DTAGENT_DB", "START_TIME": 1748587996395000000, "END_TIME": 1748588003407000000, "_SPAN_ID": "4a57ce274789b2ec", "_TRACE_ID": "01bcb05d68938b78428142dea30a6160", "STATUS_CODE": "OK", "DIMENSIONS": "{\n \"db.namespace\": \"DTAGENT_DB\",\n \"db.operation.name\": \"CALL\",\n \"db.user\": \"SYSTEM\",\n \"snowflake.query.execution_status\": \"SUCCESS\",\n \"snowflake.role.name\": \"DTAGENT_ADMIN\",\n \"snowflake.warehouse.name\": \"DTAGENT_WH\"\n}", "ATTRIBUTES": "{\n \"db.query.text\": \"CALL DTAGENT_DB.APP.F_ACTIVE_QUERIES_INSTRUMENTED();\\n\",\n \"session.id\": 20234875336090534,\n \"snowflake.cluster_number\": 1,\n \"snowflake.database.id\": 1868,\n \"snowflake.query.hash\": \"e86eb140052655351696b76a32722b70\",\n \"snowflake.query.hash_version\": 2,\n \"snowflake.query.id\": 
\"01bcb05d-0415-b618-0047-e383330c172a\",\n \"snowflake.query.is_client_generated\": false,\n \"snowflake.query.parametrized_hash\": \"e86eb140052655351696b76a32722b70\",\n \"snowflake.query.parametrized_hash_version\": 1,\n \"snowflake.query.tag\": \"dt_snowagent.version:0.8.1.plugin:ActiveQueriesPlugin.2025-05-30T04:53:15.857Z\",\n \"snowflake.query.transaction_id\": 0,\n \"snowflake.release_version\": \"9.14.2\",\n \"snowflake.role.type\": \"ROLE\",\n \"snowflake.schema.id\": 40234,\n \"snowflake.schema.name\": \"APP\",\n \"snowflake.warehouse.cluster.number\": 1,\n \"snowflake.warehouse.id\": 3726,\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "METRICS": "{\n \"snowflake.acceleration.data.scanned\": 0,\n \"snowflake.acceleration.partitions.scanned\": 0,\n \"snowflake.acceleration.scale_factor.max\": 0,\n \"snowflake.credits.cloud_services\": 1.000000000000000e-05,\n \"snowflake.data.deleted\": 0,\n \"snowflake.data.read.from_result\": 0,\n \"snowflake.data.scanned\": 0,\n \"snowflake.data.scanned_from_cache\": 0.000000000000000e+00,\n \"snowflake.data.sent_over_the_network\": 0,\n \"snowflake.data.spilled.local\": 0,\n \"snowflake.data.spilled.remote\": 0,\n \"snowflake.data.transferred.inbound\": 0,\n \"snowflake.data.transferred.outbound\": 0,\n \"snowflake.data.written\": 0,\n \"snowflake.data.written_to_result\": 640,\n \"snowflake.external_functions.data.received\": 0,\n \"snowflake.external_functions.data.sent\": 0,\n \"snowflake.external_functions.invocations\": 0,\n \"snowflake.external_functions.rows.received\": 0,\n \"snowflake.external_functions.rows.sent\": 0,\n \"snowflake.load.used\": 100,\n \"snowflake.partitions.scanned\": 0,\n \"snowflake.partitions.total\": 0,\n \"snowflake.rows.deleted\": 0,\n \"snowflake.rows.inserted\": 0,\n \"snowflake.rows.unloaded\": 0,\n \"snowflake.rows.updated\": 0,\n \"snowflake.rows.written_to_result\": 798,\n \"snowflake.time.child_queries_wait\": 0,\n 
\"snowflake.time.compilation\": 43,\n \"snowflake.time.execution\": 6969,\n \"snowflake.time.list_external_files\": 0,\n \"snowflake.time.queued.overload\": 0,\n \"snowflake.time.queued.provisioning\": 0,\n \"snowflake.time.repair\": 0,\n \"snowflake.time.total_elapsed\": 7012,\n \"snowflake.time.transaction_blocked\": 0\n}", "IS_PARENT": true, "IS_ROOT": true} diff --git a/test/test_data/query_history_nested_sp.ndjson b/test/test_data/query_history_nested_sp.ndjson new file mode 100644 index 00000000..b9ce5f82 --- /dev/null +++ b/test/test_data/query_history_nested_sp.ndjson @@ -0,0 +1,3 @@ +{"QUERY_ID": "sp-root-0001-0000-0000-000000000001", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": null, "SESSION_ID": 11111111111111111, "NAME": "call MY_DB", "START_TIME": 1748588000000000000, "END_TIME": 1748588010000000000, "_SPAN_ID": "aabbccdd11223344", "_TRACE_ID": "aabbccdd112233440000000000000001", "STATUS_CODE": "OK", "DIMENSIONS": "{\"db.namespace\": \"MY_DB\", \"db.operation.name\": \"CALL\", \"db.user\": \"TEST_USER\", \"snowflake.query.execution_status\": \"SUCCESS\", \"snowflake.role.name\": \"TEST_ROLE\", \"snowflake.warehouse.name\": \"TEST_WH\"}", "ATTRIBUTES": "{\"db.query.text\": \"CALL MY_DB.PUBLIC.P_OUTER_SP();\", \"snowflake.query.id\": \"sp-root-0001-0000-0000-000000000001\", \"snowflake.query.parent_id\": null}", "METRICS": "{\"snowflake.time.execution\": 9000, \"snowflake.time.total_elapsed\": 10000, \"snowflake.time.compilation\": 500, \"snowflake.time.queued.overload\": 0, \"snowflake.time.queued.provisioning\": 0, \"snowflake.data.spilled.local\": 0, \"snowflake.data.spilled.remote\": 0, \"snowflake.partitions.scanned\": 0, \"snowflake.partitions.total\": 0}", "IS_PARENT": true, "IS_ROOT": true} +{"QUERY_ID": "sp-mid1-0001-0000-0000-000000000002", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": "sp-root-0001-0000-0000-000000000001", "SESSION_ID": 11111111111111111, "NAME": "call MY_DB", "START_TIME": 1748588001000000000, "END_TIME": 
1748588007000000000, "_SPAN_ID": "aabbccdd11223345", "_TRACE_ID": "aabbccdd112233440000000000000001", "STATUS_CODE": "OK", "DIMENSIONS": "{\"db.namespace\": \"MY_DB\", \"db.operation.name\": \"CALL\", \"db.user\": \"TEST_USER\", \"snowflake.query.execution_status\": \"SUCCESS\", \"snowflake.role.name\": \"TEST_ROLE\", \"snowflake.warehouse.name\": \"TEST_WH\"}", "ATTRIBUTES": "{\"db.query.text\": \"CALL MY_DB.PUBLIC.P_INNER_SP();\", \"snowflake.query.id\": \"sp-mid1-0001-0000-0000-000000000002\", \"snowflake.query.parent_id\": \"sp-root-0001-0000-0000-000000000001\"}", "METRICS": "{\"snowflake.time.execution\": 5000, \"snowflake.time.total_elapsed\": 6000, \"snowflake.time.compilation\": 300, \"snowflake.time.queued.overload\": 0, \"snowflake.time.queued.provisioning\": 0, \"snowflake.data.spilled.local\": 0, \"snowflake.data.spilled.remote\": 0, \"snowflake.partitions.scanned\": 0, \"snowflake.partitions.total\": 0}", "IS_PARENT": true, "IS_ROOT": false} +{"QUERY_ID": "sp-leaf-0001-0000-0000-000000000003", "QUERY_OPERATOR_STATS": null, "PARENT_QUERY_ID": "sp-mid1-0001-0000-0000-000000000002", "SESSION_ID": 11111111111111111, "NAME": "select MY_DB", "START_TIME": 1748588002000000000, "END_TIME": 1748588005000000000, "_SPAN_ID": null, "_TRACE_ID": null, "STATUS_CODE": "OK", "DIMENSIONS": "{\"db.namespace\": \"MY_DB\", \"db.operation.name\": \"SELECT\", \"db.user\": \"TEST_USER\", \"snowflake.query.execution_status\": \"SUCCESS\", \"snowflake.role.name\": \"TEST_ROLE\", \"snowflake.warehouse.name\": \"TEST_WH\"}", "ATTRIBUTES": "{\"db.query.text\": \"SELECT * FROM MY_DB.PUBLIC.MY_TABLE;\", \"snowflake.query.id\": \"sp-leaf-0001-0000-0000-000000000003\", \"snowflake.query.parent_id\": \"sp-mid1-0001-0000-0000-000000000002\"}", "METRICS": "{\"snowflake.time.execution\": 2000, \"snowflake.time.total_elapsed\": 3000, \"snowflake.time.compilation\": 200, \"snowflake.time.queued.overload\": 0, \"snowflake.time.queued.provisioning\": 0, \"snowflake.data.spilled.local\": 0, 
\"snowflake.data.spilled.remote\": 0, \"snowflake.partitions.scanned\": 5, \"snowflake.partitions.total\": 10}", "IS_PARENT": false, "IS_ROOT": false} diff --git a/test/test_data/recent_queries2.pkl b/test/test_data/recent_queries2.pkl deleted file mode 100644 index da5bb017..00000000 --- a/test/test_data/recent_queries2.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2030680993b4eec3132d0c4f501006b55ea0d71d6ad03dabd9aeab54d5acb991 -size 22601 diff --git a/test/test_data/resource_monitors.pkl b/test/test_data/resource_monitors.pkl deleted file mode 100644 index b2c418c7..00000000 --- a/test/test_data/resource_monitors.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:83b110d992462d7c3f878bab1ecb4050f71699353197aa7b8935b1837a4c1bfa -size 37198 diff --git a/test/test_data/warehouses.ndjson b/test/test_data/resource_monitors_warehouses.ndjson similarity index 96% rename from test/test_data/warehouses.ndjson rename to test/test_data/resource_monitors_warehouses.ndjson index fcf1e9d6..645db014 100644 --- a/test/test_data/warehouses.ndjson +++ b/test/test_data/resource_monitors_warehouses.ndjson @@ -1,2 +1,2 @@ -{"START_TIME": 1760681131999454000, "IS_UNMONITORED": false, "_MESSAGE": "Warehouse details for COMPUTE_WH", "DIMENSIONS": "{\n \"snowflake.resource_monitor.name\": \"COMPUTE_RS\",\n \"snowflake.warehouse.name\": \"COMPUTE_WH\"\n}", "ATTRIBUTES": "{\n \"snowflake.credits.quota\": \"1.00\",\n \"snowflake.credits.quota.remaining\": \"0.94\",\n \"snowflake.credits.quota.used\": \"0.06\",\n \"snowflake.resource_monitor.frequency\": \"DAILY\",\n \"snowflake.resource_monitor.level\": \"WAREHOUSE\",\n \"snowflake.warehouse.execution_state\": \"SUSPENDED\",\n \"snowflake.warehouse.has_query_acceleration_enabled\": \"false\",\n \"snowflake.warehouse.is_auto_resume\": \"true\",\n \"snowflake.warehouse.is_auto_suspend\": 10,\n \"snowflake.warehouse.is_current\": \"N\",\n 
\"snowflake.warehouse.is_default\": \"N\",\n \"snowflake.warehouse.owner\": \"SYSADMIN\",\n \"snowflake.warehouse.owner.role_type\": \"ROLE\",\n \"snowflake.warehouse.scaling_policy\": \"STANDARD\",\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.warehouse.created_on\": 1614961356578000000,\n \"snowflake.warehouse.resumed_on\": 1741738203231000000,\n \"snowflake.warehouse.updated_on\": 1741738203231000000\n}", "METRICS": "{\n \"snowflake.acceleration.scale_factor.max\": \"8\",\n \"snowflake.compute.available\": \"\",\n \"snowflake.compute.other\": \"\",\n \"snowflake.compute.provisioning\": \"\",\n \"snowflake.compute.quiescing\": \"\",\n \"snowflake.queries.queued\": 0,\n \"snowflake.queries.running\": 0,\n \"snowflake.warehouse.clusters.max\": 1,\n \"snowflake.warehouse.clusters.min\": 1,\n \"snowflake.warehouse.clusters.started\": 0\n}"} -{"START_TIME": 1760681132022405000, "IS_UNMONITORED": false, "_MESSAGE": "Warehouse details for DEVELCLONE_WH", "DIMENSIONS": "{\n \"snowflake.resource_monitor.name\": \"DEVELCLONE_RS\",\n \"snowflake.warehouse.name\": \"DEVELCLONE_WH\"\n}", "ATTRIBUTES": "{\n \"snowflake.credits.quota\": \"1.00\",\n \"snowflake.credits.quota.remaining\": \"1.00\",\n \"snowflake.credits.quota.used\": \"0.00\",\n \"snowflake.resource_monitor.frequency\": \"DAILY\",\n \"snowflake.resource_monitor.level\": \"WAREHOUSE\",\n \"snowflake.warehouse.execution_state\": \"SUSPENDED\",\n \"snowflake.warehouse.has_query_acceleration_enabled\": \"false\",\n \"snowflake.warehouse.is_auto_resume\": \"true\",\n \"snowflake.warehouse.is_auto_suspend\": 600,\n \"snowflake.warehouse.is_current\": \"N\",\n \"snowflake.warehouse.is_default\": \"N\",\n \"snowflake.warehouse.owner\": \"SYSADMIN\",\n \"snowflake.warehouse.owner.role_type\": \"ROLE\",\n \"snowflake.warehouse.scaling_policy\": \"ECONOMY\",\n \"snowflake.warehouse.size\": \"Small\",\n \"snowflake.warehouse.type\": 
\"STANDARD\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.warehouse.created_on\": 1623296570505000000,\n \"snowflake.warehouse.resumed_on\": 1723467521001000000,\n \"snowflake.warehouse.updated_on\": 1728174334355000000\n}", "METRICS": "{\n \"snowflake.acceleration.scale_factor.max\": \"8\",\n \"snowflake.compute.available\": \"\",\n \"snowflake.compute.other\": \"\",\n \"snowflake.compute.provisioning\": \"\",\n \"snowflake.compute.quiescing\": \"\",\n \"snowflake.queries.queued\": 0,\n \"snowflake.queries.running\": 0,\n \"snowflake.warehouse.clusters.max\": 1,\n \"snowflake.warehouse.clusters.min\": 1,\n \"snowflake.warehouse.clusters.started\": 0\n}"} +{"START_TIME": 1741769510651000000, "IS_UNMONITORED": false, "_MESSAGE": "Warehouse details for COMPUTE_WH", "DIMENSIONS": "{\n \"snowflake.resource_monitor.name\": \"COMPUTE_RS\",\n \"snowflake.warehouse.name\": \"COMPUTE_WH\"\n}", "ATTRIBUTES": "{\n \"snowflake.credits.quota\": \"1.00\",\n \"snowflake.credits.quota.remaining\": \"0.94\",\n \"snowflake.credits.quota.used\": \"0.06\",\n \"snowflake.resource_monitor.frequency\": \"DAILY\",\n \"snowflake.resource_monitor.level\": \"WAREHOUSE\",\n \"snowflake.warehouse.execution_state\": \"SUSPENDED\",\n \"snowflake.warehouse.has_query_acceleration_enabled\": \"false\",\n \"snowflake.warehouse.is_auto_resume\": \"true\",\n \"snowflake.warehouse.is_auto_suspend\": 10,\n \"snowflake.warehouse.is_current\": \"N\",\n \"snowflake.warehouse.is_default\": \"N\",\n \"snowflake.warehouse.owner\": \"SYSADMIN\",\n \"snowflake.warehouse.owner.role_type\": \"ROLE\",\n \"snowflake.warehouse.scaling_policy\": \"STANDARD\",\n \"snowflake.warehouse.size\": \"X-Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.warehouse.created_on\": 1614961356578000000,\n \"snowflake.warehouse.resumed_on\": 1741738203231000000,\n \"snowflake.warehouse.updated_on\": 1741738203231000000\n}", "METRICS": "{\n \"snowflake.acceleration.scale_factor.max\": 
\"8\",\n \"snowflake.compute.available\": \"\",\n \"snowflake.compute.other\": \"\",\n \"snowflake.compute.provisioning\": \"\",\n \"snowflake.compute.quiescing\": \"\",\n \"snowflake.queries.queued\": 0,\n \"snowflake.queries.running\": 0,\n \"snowflake.warehouse.clusters.max\": 1,\n \"snowflake.warehouse.clusters.min\": 1,\n \"snowflake.warehouse.clusters.started\": 0\n}"} +{"START_TIME": 1741769510651000000, "IS_UNMONITORED": false, "_MESSAGE": "Warehouse details for DEVELCLONE_WH", "DIMENSIONS": "{\n \"snowflake.resource_monitor.name\": \"DEVELCLONE_RS\",\n \"snowflake.warehouse.name\": \"DEVELCLONE_WH\"\n}", "ATTRIBUTES": "{\n \"snowflake.credits.quota\": \"1.00\",\n \"snowflake.credits.quota.remaining\": \"1.00\",\n \"snowflake.credits.quota.used\": \"0.00\",\n \"snowflake.resource_monitor.frequency\": \"DAILY\",\n \"snowflake.resource_monitor.level\": \"WAREHOUSE\",\n \"snowflake.warehouse.execution_state\": \"SUSPENDED\",\n \"snowflake.warehouse.has_query_acceleration_enabled\": \"false\",\n \"snowflake.warehouse.is_auto_resume\": \"true\",\n \"snowflake.warehouse.is_auto_suspend\": 600,\n \"snowflake.warehouse.is_current\": \"N\",\n \"snowflake.warehouse.is_default\": \"N\",\n \"snowflake.warehouse.owner\": \"SYSADMIN\",\n \"snowflake.warehouse.owner.role_type\": \"ROLE\",\n \"snowflake.warehouse.scaling_policy\": \"ECONOMY\",\n \"snowflake.warehouse.size\": \"Small\",\n \"snowflake.warehouse.type\": \"STANDARD\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.warehouse.created_on\": 1623296570505000000,\n \"snowflake.warehouse.resumed_on\": 1723467521001000000,\n \"snowflake.warehouse.updated_on\": 1728174334355000000\n}", "METRICS": "{\n \"snowflake.acceleration.scale_factor.max\": \"8\",\n \"snowflake.compute.available\": \"\",\n \"snowflake.compute.other\": \"\",\n \"snowflake.compute.provisioning\": \"\",\n \"snowflake.compute.quiescing\": \"\",\n \"snowflake.queries.queued\": 0,\n \"snowflake.queries.running\": 0,\n \"snowflake.warehouse.clusters.max\": 
1,\n \"snowflake.warehouse.clusters.min\": 1,\n \"snowflake.warehouse.clusters.started\": 0\n}"} diff --git a/test/test_data/sessions.pkl b/test/test_data/sessions.pkl deleted file mode 100644 index d733e16a..00000000 --- a/test/test_data/sessions.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2fca76b25f36ca75b594f3808e919940d87ce2f278be6098048bb96bff9f9ce3 -size 1017 diff --git a/test/test_data/shares.ndjson b/test/test_data/shares.ndjson deleted file mode 100644 index 2ffc67b6..00000000 --- a/test/test_data/shares.ndjson +++ /dev/null @@ -1,2 +0,0 @@ -{"_MESSAGE": "Share details for Monte Carlo", "DIMENSIONS": "{\n \"db.namespace\": \"\",\n \"snowflake.share.name\": \"Monte Carlo\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.is_secure_objects_only\": \"\",\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"JKMKTPS.DKA87615\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.share.created_on\": 1633629486209000000\n}"} -{"_MESSAGE": "Share details for ACCOUNT_USAGE", "DIMENSIONS": "{\n \"db.namespace\": \"SNOWFLAKE\",\n \"snowflake.share.name\": \"ACCOUNT_USAGE\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.is_secure_objects_only\": \"\",\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"SNOWFLAKE\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.share.created_on\": 1568850151405000000\n}"} diff --git a/test/test_data/shares.pkl b/test/test_data/shares.pkl deleted file mode 100644 index d2e17dd4..00000000 --- a/test/test_data/shares.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ae72ab7746b23c2e734ea42ddb5c75a7534edad99c701254b47cda4073b644d3 -size 11842 diff --git a/test/test_data/shares_events.ndjson 
b/test/test_data/shares_events.ndjson new file mode 100644 index 00000000..03a68898 --- /dev/null +++ b/test/test_data/shares_events.ndjson @@ -0,0 +1,2 @@ +{"_MESSAGE": "Share details for Monte Carlo", "DIMENSIONS": "{\n \"db.namespace\": \"\",\n \"snowflake.share.name\": \"Monte Carlo\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"TEST123.TESTACCOUNT\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.share.created_on\": 1633629486209000000\n}"} +{"_MESSAGE": "Share details for ACCOUNT_USAGE", "DIMENSIONS": "{\n \"db.namespace\": \"SNOWFLAKE\",\n \"snowflake.share.name\": \"ACCOUNT_USAGE\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"SNOWFLAKE\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.share.created_on\": 1568850151405000000\n}"} diff --git a/test/test_data/shares_inbound.ndjson b/test/test_data/shares_inbound.ndjson new file mode 100644 index 00000000..8a662029 --- /dev/null +++ b/test/test_data/shares_inbound.ndjson @@ -0,0 +1,3 @@ +{"_MESSAGE": "Inbound share details for BIET_MONITORING_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"BI_SHARE_MONITORING_DB\",\n \"snowflake.share.name\": \"BIET_MONITORING_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.has_details_reported\": true,\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"TEST123.TESTACCOUNT\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{}"} +{"_MESSAGE": "Inbound share details for BIET_MONITORING_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"DT_SHARE_MONITORING_DB\",\n \"snowflake.share.name\": \"BIET_MONITORING_SHARE\"\n}", 
"ATTRIBUTES": "{\n \"snowflake.share.has_details_reported\": true,\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"\",\n \"snowflake.share.shared_from\": \"TEST123.TESTACCOUNT_WA\",\n \"snowflake.share.shared_to\": \"\"\n}", "EVENT_TIMESTAMPS": "{}"} +{"_MESSAGE": "Inbound share \"DELETED_DB_SHARE\" has a deleted database - data is no longer accessible", "DIMENSIONS": "{\n \"db.namespace\": \"DELETED_SHARED_DB\",\n \"snowflake.share.name\": \"DELETED_DB_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.share.has_db_deleted\": true,\n \"snowflake.share.has_details_reported\": true,\n \"snowflake.share.kind\": \"INBOUND\",\n \"snowflake.share.shared_from\": \"ABC123.SOURCE_ACCOUNT\"\n}", "EVENT_TIMESTAMPS": "{}"} diff --git a/test/test_data/shares_outbound.ndjson b/test/test_data/shares_outbound.ndjson new file mode 100644 index 00000000..4e14af24 --- /dev/null +++ b/test/test_data/shares_outbound.ndjson @@ -0,0 +1,2 @@ +{"_MESSAGE": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"DATA_SCIENTIST_DEV_DB\",\n \"snowflake.grant.name\": \"DATA_SCIENTIST_DEV_DB\",\n \"snowflake.share.name\": \"DATA_SCIENTIST_DEVEL_DS_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.grant.by\": \"DEMIGOD\",\n \"snowflake.grant.grantee\": \"TESTACCOUNT.DATA_SCIENTIST_DEVEL_DS_SHARE\",\n \"snowflake.grant.on\": \"DATABASE\",\n \"snowflake.grant.option\": \"false\",\n \"snowflake.grant.privilege\": \"USAGE\",\n \"snowflake.grant.to\": \"SHARE\",\n \"snowflake.share.is_secure_objects_only\": true,\n \"snowflake.share.kind\": \"OUTBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"DEMIGOD\",\n \"snowflake.share.shared_from\": \"TEST123.TESTACCOUNT_WD\",\n \"snowflake.share.shared_to\": \"TEST123.TESTACCOUNT\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.grant.created_on\": 1687246726499000000\n}"} +{"_MESSAGE": "Outbound share details for 
DATA_SCIENTIST_DEVEL_DS_SHARE", "DIMENSIONS": "{\n \"db.namespace\": \"DATA_SCIENTIST_DEV_DB\",\n \"snowflake.grant.name\": \"DATA_SCIENTIST_DEV_DB.ACCOUNT_EXPERIENCE\",\n \"snowflake.share.name\": \"DATA_SCIENTIST_DEVEL_DS_SHARE\"\n}", "ATTRIBUTES": "{\n \"snowflake.grant.by\": \"INTEGRATION_CONSUMPTION_FORECASTING_ROLE\",\n \"snowflake.grant.grantee\": \"TESTACCOUNT.DATA_SCIENTIST_DEVEL_DS_SHARE\",\n \"snowflake.grant.on\": \"SCHEMA\",\n \"snowflake.grant.option\": \"false\",\n \"snowflake.grant.privilege\": \"USAGE\",\n \"snowflake.grant.to\": \"SHARE\",\n \"snowflake.share.is_secure_objects_only\": true,\n \"snowflake.share.kind\": \"OUTBOUND\",\n \"snowflake.share.listing_global_name\": \"\",\n \"snowflake.share.owner\": \"DEMIGOD\",\n \"snowflake.share.shared_from\": \"TEST123.TESTACCOUNT_WD\",\n \"snowflake.share.shared_to\": \"TEST123.TESTACCOUNT\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.grant.created_on\": 1668416468928000000\n}"} diff --git a/test/test_data/tasks_history.pkl b/test/test_data/tasks_history.pkl deleted file mode 100644 index 056466cc..00000000 --- a/test/test_data/tasks_history.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:38d3e60ecd6ec5e02297cff0dc99ce9fe22b7c9470444e60b9bc9ea12d5b37e3 -size 11359 diff --git a/test/test_data/tasks_serverless.pkl b/test/test_data/tasks_serverless.pkl deleted file mode 100644 index f7c11437..00000000 --- a/test/test_data/tasks_serverless.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:7d30462a4bff848c2ca9b8feb077e871d8f3f744916a44f3df971e83fa5e8cee -size 941 diff --git a/test/test_data/tasks_versions.pkl b/test/test_data/tasks_versions.pkl deleted file mode 100644 index 448fd7e1..00000000 --- a/test/test_data/tasks_versions.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e862fae0a3aebbcb604eb0ff8102b9125d0b5fbd1a95c7e68307665d93c4a445 -size 950 diff --git 
a/test/test_data/trust_center_instr.pkl b/test/test_data/trust_center_instr.pkl deleted file mode 100644 index cd5f4458..00000000 --- a/test/test_data/trust_center_instr.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2baa2ba82dd9e92cfc53c9db231d1f0c7e16c561268c0561b2c67bbdd484283a -size 43816 diff --git a/test/test_data/trust_center_instr.ndjson b/test/test_data/trust_center_instrumented.ndjson similarity index 83% rename from test/test_data/trust_center_instr.ndjson rename to test/test_data/trust_center_instrumented.ndjson index 889e3d21..5ea1732f 100644 --- a/test/test_data/trust_center_instr.ndjson +++ b/test/test_data/trust_center_instrumented.ndjson @@ -1,2 +1,2 @@ -{"_MESSAGE": "[MEDIUM] Native Apps Event Sharing Check Ensure Event Table is configured for Native Application event sharing", "_SEVERITY": "MEDIUM", "STATUS_CODE": "OK", "START_TIME": 1760681133160916000, "EVENT_START": 1758043263598000000, "EVENT_END": 1758043267087000000, "TIMESTAMP": 1760681133160916000, "DIMENSIONS": "{\n \"event.category\": \"Vulnerability management\",\n \"snowflake.trust_center.scanner.id\": \"SECURITY_ESSENTIALS_NA_CONSUMER_ES_CHECK\",\n \"snowflake.trust_center.scanner.package.id\": \"SECURITY_ESSENTIALS\",\n \"snowflake.trust_center.scanner.type\": \"Vulnerability\",\n \"vulnerability.risk.level\": \"MEDIUM\"\n}", "ATTRIBUTES": "{\n \"error.code\": \"SECURITY_ESSENTIALS_NA_CONSUMER_ES_CHECK\",\n \"event.id\": 252009,\n \"event.kind\": \"SECURITY_EVENT\",\n \"snowflake.trust_center.scanner.description\": \"Ensure Event Table is configured for Native Application event sharing\",\n \"snowflake.trust_center.scanner.name\": \"Native Apps Event Sharing Check\",\n \"snowflake.trust_center.scanner.package.name\": \"Security Essentials\",\n \"status.message\": \"[MEDIUM] Native Apps Event Sharing Check Ensure Event Table is configured for Native Application event sharing\"\n}"} -{"_MESSAGE": "[CRITICAL] 3.1 Ensure that an account-level 
network policy has been configured to only allow access from trusted IP addresses", "_SEVERITY": "CRITICAL", "STATUS_CODE": "OK", "START_TIME": 1760681133185199000, "EVENT_START": 1758043265132000000, "EVENT_END": 1758043272074000000, "TIMESTAMP": 1760681133185199000, "DIMENSIONS": "{\n \"event.category\": \"Vulnerability management\",\n \"snowflake.trust_center.scanner.id\": \"SECURITY_ESSENTIALS_CIS3_1\",\n \"snowflake.trust_center.scanner.package.id\": \"SECURITY_ESSENTIALS\",\n \"snowflake.trust_center.scanner.type\": \"Vulnerability\",\n \"vulnerability.risk.level\": \"CRITICAL\"\n}", "ATTRIBUTES": "{\n \"error.code\": \"SECURITY_ESSENTIALS_CIS3_1\",\n \"event.id\": 252010,\n \"event.kind\": \"SECURITY_EVENT\",\n \"snowflake.trust_center.scanner.description\": \"Ensure that an account-level network policy has been configured to only allow access from trusted IP addresses\",\n \"snowflake.trust_center.scanner.name\": \"3.1\",\n \"snowflake.trust_center.scanner.package.name\": \"Security Essentials\",\n \"status.message\": \"[CRITICAL] 3.1 Ensure that an account-level network policy has been configured to only allow access from trusted IP addresses\"\n}"} +{"_MESSAGE": "[MEDIUM] Native Apps Event Sharing Check Ensure Event Table is configured for Native Application event sharing", "_SEVERITY": "MEDIUM", "STATUS_CODE": "OK", "START_TIME": 1758043263598000000, "EVENT_START": 1758043263598000000, "EVENT_END": 1758043267087000000, "TIMESTAMP": 1758043268431000000, "DIMENSIONS": "{\n \"event.category\": \"Vulnerability management\",\n \"snowflake.trust_center.scanner.id\": \"SECURITY_ESSENTIALS_NA_CONSUMER_ES_CHECK\",\n \"snowflake.trust_center.scanner.package.id\": \"SECURITY_ESSENTIALS\",\n \"snowflake.trust_center.scanner.type\": \"Vulnerability\",\n \"vulnerability.risk.level\": \"MEDIUM\"\n}", "ATTRIBUTES": "{\n \"error.code\": \"SECURITY_ESSENTIALS_NA_CONSUMER_ES_CHECK\",\n \"event.id\": 252009,\n \"event.kind\": \"SECURITY_EVENT\",\n 
\"snowflake.trust_center.scanner.description\": \"Ensure Event Table is configured for Native Application event sharing\",\n \"snowflake.trust_center.scanner.name\": \"Native Apps Event Sharing Check\",\n \"snowflake.trust_center.scanner.package.name\": \"Security Essentials\",\n \"status.message\": \"[MEDIUM] Native Apps Event Sharing Check Ensure Event Table is configured for Native Application event sharing\"\n}"} +{"_MESSAGE": "[CRITICAL] 3.1 Ensure that an account-level network policy has been configured to only allow access from trusted IP addresses", "_SEVERITY": "CRITICAL", "STATUS_CODE": "OK", "START_TIME": 1758043265132000000, "EVENT_START": 1758043265132000000, "EVENT_END": 1758043272074000000, "TIMESTAMP": 1758043273604000000, "DIMENSIONS": "{\n \"event.category\": \"Vulnerability management\",\n \"snowflake.trust_center.scanner.id\": \"SECURITY_ESSENTIALS_CIS3_1\",\n \"snowflake.trust_center.scanner.package.id\": \"SECURITY_ESSENTIALS\",\n \"snowflake.trust_center.scanner.type\": \"Vulnerability\",\n \"vulnerability.risk.level\": \"CRITICAL\"\n}", "ATTRIBUTES": "{\n \"error.code\": \"SECURITY_ESSENTIALS_CIS3_1\",\n \"event.id\": 252010,\n \"event.kind\": \"SECURITY_EVENT\",\n \"snowflake.trust_center.scanner.description\": \"Ensure that an account-level network policy has been configured to only allow access from trusted IP addresses\",\n \"snowflake.trust_center.scanner.name\": \"3.1\",\n \"snowflake.trust_center.scanner.package.name\": \"Security Essentials\",\n \"status.message\": \"[CRITICAL] 3.1 Ensure that an account-level network policy has been configured to only allow access from trusted IP addresses\"\n}"} \ No newline at end of file diff --git a/test/test_data/trust_center_metrics.pkl b/test/test_data/trust_center_metrics.pkl deleted file mode 100644 index 47bc299b..00000000 --- a/test/test_data/trust_center_metrics.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:81251b6c52cc92b08c8cdf519b74bb8b3e3effaf30d3269e263a19d6e4e7eff9 -size 14103 diff --git a/test/test_data/users_all_privileges.ndjson b/test/test_data/users_all_privileges.ndjson index b4584bf2..16e2910e 100644 --- a/test/test_data/users_all_privileges.ndjson +++ b/test/test_data/users_all_privileges.ndjson @@ -1,2 +1,2 @@ -{"TIMESTAMP": 1762437938569999872, "DIMENSIONS": "{\n \"db.user\": \"TEST_PIPELINE\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.privilege\": \"APPLYBUDGET:TABLE\",\n \"snowflake.user.privilege.granted_by\": [\n \"SECURITYADMIN\"\n ],\n \"snowflake.user.privilege.grants_on\": \"CDH_DASHBOARD_CONFIG_FILTER_USAGE_V2_HISTORY,ADA_ACCOUNT,CDH_PLUGIN_STATE_HISTORY,CDH_COMPLETENESS_BY_CLOUD_HISTORY,CDH_ELASTICSEARCH_METRIC_DIMENSIONS_AFFILIATION_HISTORY,CDH_PREFERENCES_SETTINGS_HISTORY,TENANT_USAGE_SUMMARY,CDH_MONITORED_VIRTUALIZATION_SERVICE_TYPES,INTERCOM_CONTACTS,EXTENSION_REPOSITORY_INFO,CDH_SOFTWARE_COMPONENT_DETAILS_HISTORY,CDH_RUM_USER_SESSIONS_MOBILE_BOUNCES_HISTORY,EXTERNAL_DQ_CHECKS_RESULTS,CDH_PROBLEM_EVENT_METADATA_HISTORY,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_V2_HISTORY,CDH_APPSEC_INTEGRATION_TYPES_HISTORY,DEV_JIRA_COMMENTS,CDH_RUM_USER_SESSIONS_WEB_SESSIONS_HISTORY,CDH_DASHBOARD_CONFIG_HISTORY,DIM_COLUMN,CDH_APPSEC_MONITORED_HOSTS_BY_FUNCTIONALITY_HISTORY,CDH_CF_FOUNDATION_HOST_HISTORY,CDH_HOST_BILLING_INFRASTRUCTURE_MONITORING_HISTORY,UPGRADE_EXECUTION,CDH_FDI_EVENT_ENTITY_TYPE_AGGREGATIONS_HISTORY,DPS_SUBSCRIPTION_SKU,CDH_METRIC_DATA_TYPE_HISTORY,CDH_METRIC_EVENT_CONFIG_THRESHOLD_BASED_MODEL_HISTORY,TIME_ZONE,CDH_CLUSTER_NETWORK_ZONE_STATS_HISTORY,CDH_UEM_CONFIG_HISTORY,PROMO_CODE,EXTERNAL_DQ_CHECKS_DEFINITIONS,CDH_SERVICE_HISTORY,ENVIRONMENT_SERVICE_SUMMARY,CDH_RUM_BILLING_PERIODS_V1_HISTORY,CDH_METRIC_EVENT_V2_NAME_FILTER_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_SESSIONS_HISTORY,RUM_BEHAVIORAL_EVENTS_V3,CDH_KUBERNETES_CLUSTER_HISTORY,REGION,CDH_SECURITY_PROBLEM_ASSESSMENT_VULNERABLE_FUNCTIONS_HISTORY,BI_STATUS,CDH_CLOUD_APPLICATION_HISTORY
,CDH_BILLING_APP_SESSIONS_V3_HISTORY,CDH_DATABASE_INSIGHTS_ENDPOINT_DETAILS_HISTORY,DIM_DATA_SOURCE,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V1,SFDC_POC,ZENDESK_USERS,CDH_CREDENTIALS_VAULT_ENTRIES_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_BOUNCES_HISTORY,AZURE_METADATA,ZENDESK_TICKET_METRICS_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_V2_HISTORY,CDH_INTERNAL_ENTITY_MODEL_CAPPING_INFORMATION_HISTORY,CDH_UEM_CONFIG_METADATA_CAPTURING_SETTINGS_HISTORY,CDH_UEM_CONFIG_TENANT_HISTORY,CONTRACT_PRICING,GRAIL_QUERY_LOG_V2,SQL_SHARE_LOG,DEV_JIRA_CHANGE_LOG,CDH_DATABASE_INSIGHTS_HISTORY,EMPLOYEE_COUNT,LIMA_SUBSCRIPTION_BUDGET_HISTORY,CDH_PROBLEM_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_HISTORY,SNOWFLAKE_CONNECTOR_SETTINGS_HISTORY,CDH_DDU_METRICS_TOTAL_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY_ARCHIVE,CDH_APPSEC_RUNTIME_APPLICATION_PROTECTION_SETTINGS_HISTORY,CDH_VISIT_STORAGE_V2_HISTORY,ZENDESK_GROUP_MEMBERSHIP,CDH_INSTRUMENTATION_LIBRARY_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_V2_HISTORY,SFDC_CONSUMPTION_REVENUE_MONTHLY,CDH_PROCESS_HISTORY,CDH_TOKEN_STATS_HISTORY,AWS_MARKETPLACE_OFFER_PRODUCT,CDH_RUM_USER_SESSIONS_IF_ONLY_CRASH_ENABLED_HISTORY,BAS_AUDIT_ENTITY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_TYPE_HISTORY,DIM_DEPLOYMENT_STAGE,DTU_ACTIVITIES,CDH_CUSTOM_CHART_STATS_HISTORY,CDH_TOKEN_STATS_PERMISSION_HISTORY,CDH_COMPLETENESS_BY_CLUSTER_HISTORY,CDH_DDU_METRICS_CONSUMED_INCLUDED_HISTORY,DPS_CONSUMPTION_FORECAST,CDH_TILE_FILTER_CONFIG_HISTORY,LIMA_USAGE,RUM_SESSION,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_V2_HISTORY,CDH_ODIN_AGENT_HISTORY,CDH_AUTO_UPDATE_SUCCESS_STATISTICS_HISTORY,CDH_CONDITIONAL_PROCEDURES_RULES_HISTORY,CDH_METRIC_EVENT_CONFIG_HISTORY,FACT_COLUMN_USAGE,FACT_OVALEDGE_COLUMN_TERM,CDH_SYNTHETIC_MONITOR_HISTORY,CDH_SERVERLESS_HISTORY,MC_MANAGED_CLUSTER,INSTRUMENTED_FUNCTION_HASHES,DIM_PERMISSION_GROUP,CDH_CLOUD_NETWORK_POLICY_HISTORY,CDH_LOG_1CLICK_ACTIVATIONS_HISTORY,CDH_CLUSTER_CONTACTS_HISTORY,CDH_ENDED_SESSIONS_HISTORY,ENVIRONMENT
_USAGE_SUMMARY,CDH_APPSEC_MONITORING_RULES_SETTINGS_HISTORY,SFDC_TENANT,CDH_COMPETITOR_JS_FRAMEWORK_USAGE_HISTORY,CDH_APPSEC_CONSUMPTION_BY_ENTITY_HISTORY,DIM_SYNC_TYPE,CDH_SECURITY_PROBLEM_HISTORY,USER_ACCOUNT,SFDC_ACCOUNT,TENANT_LAST_ACCESS_DATE,CDH_CLOUD_AUTOMATION_INSTANCE_STATS_HISTORY,ZENDESK_GROUP_MEMBERSHIP_V2,CDH_EXTERNAL_DATA_POINTS_V2_HISTORY,DIM_DATAHUB_COLUMN,PACKAGE,AWS_MARKETPLACE_BILLING_EVENT,CDH_VERSIONED_MODULE_HISTORY,CDH_RELEASE_HISTORY,CDH_METRIC_EVENT_V2_HISTORY,CDH_PROCESS_VISIBILITY_HISTORY,CDH_LOG_INGEST_ADVANCED_SETTINGS_HISTORY,AWS_MARKETPLACE_OFFER,ROLE,CDH_DASHBOARD_CONFIG_V2_HISTORY,CDH_CLOUD_EVENT_V2_HISTORY,CUSTOMER_BASE_HISTORY_V2,CDH_METRIC_QUERY_STATS_HISTORY,FACT_TABLE,SFDC_OPPORTUNITY,CDH_LOG_MONITORING_STATS_HISTORY,BITBUCKET_PR_COMMITS,SQL_PERFORMANCE,ZENDESK_SIDE_CONVERSATIONS_V2,CDH_DDU_SERVERLESS_BY_ENTITY_V2_HISTORY,USAGE_CREDITS,SFDC_MANAGED_LICENSE,CONTRACT_BILLING_INFO,CDH_MOBILE_AGENT_VERSION_USAGE_HISTORY,DIM_TABLE,ZENDESK_ORGANIZATIONS_HISTORY,CDH_SERVICE_CALLED_SERVICES_HISTORY,JOBSTATUS,CDH_PROCESS_VISIBILITY_HISTORY_V2,INTERCOM_CONVERSATION_PARTS,SFDC_TASK,AWS_MARKETPLACE_LEGACY_ID_MAPPING,CDH_CLOUD_NETWORK_SERVICE_HISTORY,CDH_SOFTWARE_COMPONENT_PGI_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_V2_HISTORY,TENANT_SUB_ENVIRONMENT,QUERY_STATS,CDH_CONDITIONAL_PROCEDURES_HISTORY,TENANT_USAGE_DAILY_SUMMARY_VIEW,REPORT_STATUS,LIMA_SUBSCRIPTION,FACT_TABLE_USAGE,DIM_OVALEDGE_COLUMN,CDH_BULK_CONFIG_CHANGES_HISTORY,DIM_OVALEDGE_DOMAIN,CDH_REQUEST_ATTRIBUTE_STATS_HISTORY,CDH_FDI_EVENT_HISTORY,COMPANY,CDH_DEEP_MONITORING_SETTINGS_V2_HISTORY,DIM_TABLE_STATUS,AWS_MARKETPLACE_OFFER_TARGET,FACT_DATAHUB_COLUMN_CHANGE_LOG,CDH_TAG_COVERAGE_HISTORY,DIM_OVALEDGE_DOMAIN_DIRECTORY,SFDC_DYNATRACE_ACCOUNT,KEPTN,SQL_LOG,ZENDESK_SIDE_CONVERSATION_EVENTS_V2,CDH_CLOUD_NETWORK_INGRESS_HISTORY,SFDC_ASSIGNMENT,RUM_PAGE_REPOSITORY_INFO,CDH_DDU_METRICS_RAW_V2_HISTORY,ZENDESK_TICKET_METRICS_CURRENT_V2,CDH_HOST_HISTOR
Y,CDH_DASHBOARD_CONFIG_TILE_V2_HISTORY,REFERRAL_CODE,LIMA_SUBSCRIPTION_USAGE_HOURLY,CDH_EXTENSIONS_DISTINCT_DEVICES_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY,CDH_BILLING_SYNTHETIC_USAGE_V2_HISTORY,CDH_CONTAINER_GROUP_HISTORY,FACT_USER_GROUP_MAP,MANAGED_LICENSE_QUOTA,USERS_AND_QUERIES_COUNT_STATS,CDH_ACTIVE_GATE_MODULES_STATUSES_HISTORY,DEV_JIRA_WORKLOGS,ACCOUNT,CDH_CLUSTER_HISTORY,CDH_VULNERABILITY_MATCHING_METADATA_HISTORY,CDH_BILLING_APP_PROPERTIES_V2_HISTORY,CDH_EXTENSION_HISTORY,CDH_MAINFRAME_MSU_HISTORY,SFDC_ACCOUNT_ARR_BANDS_MONTHLY,CDH_BILLING_SYNTHETIC_USAGE_HISTORY,CDH_TAG_COVERAGE_ENTITIES_HISTORY,DIM_OVALEDGE_CATEGORY,SFDC_ACCOUNT_ARR_BANDS_DAILY,BAS_USER,SQL_LOG_PIPELINE,CDH_MOBILE_OS_VERSION_USAGE_HISTORY,FACT_DEPLOYMENT_DATES,AWS_METADATA,CDH_RUM_USER_SESSIONS_MOBILE_SESSIONS_HISTORY,TEAMS_EMPLOYEES,GRAIL_QUERY_LOG,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_BILLING_TYPE_HISTORY,CDH_SYNTHETIC_MONITOR_LOCATION_HISTORY,PBI_DATASET_PARAMETER,PBI_ENTITY_PERMISSIONS,CDH_CLOUD_APPLICATION_INSTANCE_HISTORY,CDH_INSTALLERS_DOWNLOAD_SERVLET_USAGES_HISTORY,LIMA_RATE_CARD,CDH_DASHBOARD_CONFIG_TILE_HISTORY,CDH_VISIT_STORE_USAGE_HISTORY,CDH_SETTING_HISTORY,AWS_MARKETPLACE_PRODUCT,CDH_VIRTUALIZATION_SUBSCRIPTION_HISTORY,DPS_SUBSCRIPTION,BILLING_SERVICE_TYPE,CDH_CLOUD_APPLICATION_NAMESPACE_HISTORY,CDH_BILLING_APP_PROPERTIES_HISTORY,CDH_WEB_APP_CALL_BY_BROWSER_HISTORY,CDH_RUM_BILLING_DEM_UNITS_V1_HISTORY,CDH_APPLICATION_HISTORY,MC_ENVIRONMENTS,MC_CLUSTER_CONSUMPTION,APPENGINE_INVOCATIONS_PER_APP,CDH_VISIT_STORE_NEW_BILLING_METRICS_HISTORY,FACT_COLUMN_HISTORY,FACT_DATAHUB_TABLE_CHANGE_LOG,CDH_PROBLEM_CAPPING_INFORMATION_HISTORY,BITBUCKET_PR,ZENDESK_USERS_HISTORY,CDH_PGI_PROCESS_COUNT_HISTORY,CDH_AGENT_HEALTH_METRICS_HISTORY,AWS_ACCOUNT_MAPPING,CDH_MAINTENANCE_WINDOW_FILTER_HISTORY,DEV_JIRA_ISSUES,CDH_LOG_MONITORING_ES_STATS_HISTORY,CDH_CLUSTERS,CDH_SYNTHETIC_API_CALLS_HISTORY,CDH_APPSEC_CODE_LEVEL_VULNERABILITY_DETECTION_SETTINGS_HISTORY,CDH_SECURITY_PROBLEM_SC_HISTORY,ZEN
DESK_TICKETS,CDH_DDU_SERVERLESS_BY_DESCRIPTION_HISTORY,CDH_ALERTING_PROFILE_SEVERITY_RULE_HISTORY,TABLE_LOAD_INFO,ZENDESK_GROUPS_V2,INTERCOM_COMPANIES,INTERCOM_CONVERSATIONS,SFDC_OPPORTUNITY_PRODUCT,AWS_CONSUMPTION_HISTORY,ZENDESK_ORGANIZATIONS,SERVICE_USAGE_SUMMARY,BITBUCKET_COMMITS,GCP_METADATA,SFDC_PROJECT,CDH_FDI_EVENT_INSTANCE_CLASSES_HISTORY,CDH_FDI_EVENT_METADATA_HISTORY,CDH_LOG_MONITORING_STATS_V2_HISTORY,CDH_JS_FRAMEWORK_USAGE_HISTORY,CDH_MAINFRAME_MSU_V2_HISTORY,CDH_BILLING_APP_SESSIONS_HISTORY,CDH_SERVERLESS_COMPLETENESS_HISTORY,MC_MANAGED_LICENSE,BILLING_PROVIDER,CDH_ATTACK_CANDIDATES_V2_HISTORY,CDH_API_USAGE_HISTORY2,PBI_WORKSPACE_ENTITY_NAMES,ENVIRONMENT_SERVICE_DAILY_SUMMARY,MC_ENVIRONMENT_CONSUMPTION,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V2,CDH_MDA_CONFIGS_HISTORY,SIGNUP_AWS_MARKETPLACE,CDH_CF_FOUNDATION_HISTORY,CDH_ENVIRONMENTS,CDH_SESSION_STORAGE_USAGE_HISTORY,INTERCOM_ADMINS,CDH_SOFTWARE_COMPONENT_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY_VIEW,CDH_CLUSTER_EMERGENCY_EMAILS_HISTORY,CDH_API_USER_AGENT_USAGE_HISTORY,HOST_USAGE_DAILY_SUMMARY,CDH_DEEP_MONITORING_SETTINGS_HISTORY,PARTNER_REFERRAL,SFDC_ACCOUNT_TEAMMEMBER,CDH_WORKFLOWS_HISTORY,NEW_EMPLOYEES,CDH_JS_AGENT_VERSIONS,MONTHLY_USAGE,LIMITS,AWS_MARKETPLACE_TAX_ITEM,SFDC_VW_SALES_USERACCESS,CDH_SECURITY_PROBLEM_MUTE_STATE_HISTORY,GRAIL_APP_INSTALLATIONS,CDH_HOST_BILLING_FOUNDATION_AND_DISCOVERY_HISTORY,CDH_APPSEC_ALERTING_PROFILES_HISTORY,MANAGED_ACCOUNT,CDH_HOST_MEMORY_LIMIT_HOURLY_RESOLUTION_HISTORY,CDH_DDU_METRICS_TOTAL_V2_HISTORY,ZENDESK_TICKET_METRICS_CURRENT,CDH_SECURITY_PROBLEM_TRACKING_LINKS_HISTORY,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_SIZE_HISTORY,CDH_KUBERNETES_NODE_HISTORY,TENANT,CDH_METRIC_EVENT_CONFIG_ID_FILTER_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURES_HISTORY,BILLING_ACCOUNT,BAS_AUDIT_ENTRY,CDH_CONTAINER_GROUP_INSTANCE_HISTORY,DIM_OVALEDGE_TABLE,RUM_BEHAVIORAL_EVENTS,INTERCOM_USERS,AWS_MARKETPLACE_ACCOUNT,CDH_CLUSTER_TAGS_HISTORY,FACT_DATA_QUALITY_ISSUES,CDH_COMP
LETENESS_BY_ENVIRONMENT_HISTORY,DIM_OVALEDGE_TERM,CDH_CLOUD_AUTOMATION_UNITS_HISTORY,CDH_METRIC_EVENT_V2_COUNT_HISTORY,BITBUCKET_REPOSITORY_STATUS,CDH_HOST_MEMORY_LIMIT_HISTORY,DIM_PII_STATE,DIM_DATAHUB_EXISTING_COLUMN,CDH_MOBILE_SESSION_COUNT_BY_AGENT_TECHNOLOGY_HISTORY,CDH_LOG_MONITORING_CONFIGURATION_STATS_HISTORY,CDH_WORKFLOWS_V3_HISTORY,SYSTEM_STATUS_DAILY_STATISTICS,CDH_DDU_METRICS_RAW_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_V2_HISTORY,DEV_JIRA_CUSTOM_FIELD,CDH_MAINTENANCE_WINDOW_HISTORY,MC_ACCOUNT,CDH_MOBILE_CRASHES_BY_RETRIEVAL_DELAY_HISTORY,CDH_TIMESERIES_ARRIVAL_LATENCY_HISTORY,DPS_SUBSCRIPTION_CONSUMPTION,PROMO_USAGE,CDH_TOTAL_FDI_EVENT_COUNT_HISTORY,LIMA_RATE_CARD_V2,DIM_DEPLOYMENT_STATUS,CDH_PROBLEM_RANKED_ENTITY_HISTORY,CDH_LOG_MONITORING_CUSTOM_ATTRIBUTE_HISTORY,CDH_SDK_LANGUAGE_HISTORY,ZENDESK_GROUPS,DIM_LIFECYCLE_STAGE,CDH_EXTENDED_TENANT_CONFIG_HISTORY,DIM_DATAHUB_TABLE,ZENDESK_SIDE_CONVERSATIONS,CDH_TIMESERIES_MAINTENANCE_LAG_HISTORY,DEV_JIRA_PROJECT,CDH_DDU_METRICS_BY_METRIC_HISTORY,DATASOURCES,CDH_WORKFLOWS_V2_HISTORY,CDH_ACTIVE_GATE_API_USAGE_HISTORY,PBI_ACTIVITY_LOG,AWS_MARKETPLACE_ADDRESS,CDH_METRIC_EVENT_CONFIG_COUNT_HISTORY,SQL_PII_SNOWFLAKE_LOG,CDH_DISCOVERED_VIRTUALIZATION_SERVICE_TYPES,DATA_VOLUME,AUTOPROV_EVENTS,CDH_SERVICE_CALLING_APPLICATIONS_HISTORY,CDH_RELEASE_V3_HISTORY,LIMA_SUBSCRIPTION_CONSUMPTION_RATED,LIMA_CONSUMPTION,ZENDESK_ORGANIZATIONS_V2,CDH_SESSION_STORAGE_USAGE_V2_HISTORY,CDH_ENVIRONMENT_METRICS_METADATA_HISTORY,CDH_ACTIVE_GATE_UPDATE_STATUS_HISTORY,SFDC_TRIAL,CDH_DDU_METRICS_BY_METRIC_V2_HISTORY,CDH_BILLING_APP_SESSIONS_V2_HISTORY,CDH_SETTING_V2_HISTORY,CDH_HOST_MEMORY_USAGE_HISTORY,LIMA_USAGE_HOURLY,CDH_ISSUE_TRACKER_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURE_V2_HISTORY,DIM_JSON_VALIDATION,CDH_SECURITY_PROBLEM_ASSESSMENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_HISTORY,ACCOUNT_STATUS,DIM_OVALEDGE_SCHEMA,AUTOPROV_EVENTS_FEATURES,CDH_DDU_TRACES_OTEL_BY_D
ESCRIPTION_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_V2_HISTORY,TENANT_USAGE_DAILY_SUMMARY,SQL_PII_LOG,CDH_VERSIONED_MODULE_V2_HISTORY,CDH_NOTIFICATION_SETTINGS_HISTORY,CDH_METRIC_EVENT_V2_VALIDATION_RESULT_HISTORY,CDH_METRIC_EVENT_V2_ID_FILTER_HISTORY,DIM_OBJECT,CDH_RUM_BILLING_PERIODS_V2_HISTORY,FACT_TABLE_OWNERS,FACT_UNIQUE_COLUMNS_HISTORY,PROCESS_STATUS,CDH_CLOUD_AUTOMATION_INSTANCE_HISTORY,CDH_EXTRACT_STATISTICS,REPORTS_EXECUTION_LOG,LIMA_SUBSCRIPTION_CONSUMPTION,FACT_OBJECT_LINEAGE,CDH_OWNERSHIP_COVERAGE_HISTORY,TENANT_LICENSE,ZENDESK_SIDE_CONVERSATION_RECIPIENTS_V2,DPS_CONSUMPTION,SERVICE_USAGE_DAILY_SUMMARY,SYSTEM_PROPERTIES,TENANT_STATUS,CDH_AGENT_HISTORY,CDH_MOBILE_SESSION_REPLAY_HISTORY,CDH_UEM_CONFIG_PROPERTY_TAG_HISTORY,FACT_OVALEDGE_TABLE_TERM,CDH_ATTACK_CANDIDATES_HISTORY,CDH_EXTERNAL_DATA_POINTS_HISTORY,DIM_OVALEDGE_CONNECTION,MANAGED_CLUSTER,LIMA_SUBSCRIPTION_HISTORY,DIM_PRIORITY,CDH_KEY_REQUEST_STATS_HISTORY,ZENDESK_TICKETS_HISTORY_V2,CDH_SETTING_V3_HISTORY,CDH_ACTIVE_GATE_HISTORY,DIM_USER,CDH_CLASSIC_BILLING_METRICS_HISTORY,CDH_INTEGRATION_HISTORY,DIM_DATA_CRITICALITY_LEVEL,CUSTOMER_BASE_HISTORY,CDH_CTC_LOAD_HISTORY,INTERCOM_CONVERSATION_TAGS,FACT_COLUMN,CDH_PLUGIN_METRIC_STATS_HISTORY,CDH_API_USAGE_HISTORY,CONTRACT,DIM_QUALITY_TYPE,CDH_SLO_HISTORY,CDH_METRIC_EVENT_CONFIG_NAME_FILTER_HISTORY,ZENDESK_TICKETS_V2,FACT_COLUMN_PROTECTION,CDH_FDI_EVENT_TYPE_AGGREGATIONS_HISTORY,BAS_AUDIT_FIELD,CDH_PROBLEM_ROOT_CAUSE_GROUP_HISTORY,FACT_COLUMN_LINEAGE,PBI_ENTITY_REFRESH_HISTORY,CDH_APPSEC_RUNTIME_VULNERABILITY_DETECTION_SETTINGS_HISTORY,ZENDESK_TICKETS_HISTORY,SOFTWARE_COMPONENT_PACKAGE_NAME_HASHES,RUM_PAGEVIEW,COMMUNITY_PRODUCT_IDEAS,CDH_MOBILE_REPLAY_FULL_SESSION_METRICS_HISTORY,TABLE_STORAGE_METRICS_HISTORY,CDH_PROBLEM_EVENT_INSTANCE_CLASSES_HISTORY,CDH_MAINFRAME_MSU_V3_HISTORY,RUM_BEHAVIORAL_EVENT_PROPERTIES,CDH_EXTERNAL_DATA_POINTS_V3_HISTORY,TEAMS_CAPABILITIES,SYNTHETIC_LOCATIONS,CDH_RUM_BILLING_DEM_UNITS_V2_HISTORY,CDH_ODIN_AGENT_ME_IDENTIFIER_HISTORY
,CDH_APPSEC_NOTIFICATION_SETTINGS_HISTORY,CDH_FEATURE_FLAG_HISTORY,CDH_VIRTUALIZATION_HISTORY,ZENDESK_USERS_V2,LIMA_CAPABILITIES,CDH_PROBLEM_EVIDENCE_HISTORY,CDH_K8S_DATA_VOLUME_HISTORY,CDH_PROBLEM_NATURAL_EVENT_HISTORY,VALIDATION_PROBLEMS_HISTORY,JIRA_ISSUES,CDH_PROBLEM_IMPACTED_ENTITIES_HISTORY,BITBUCKET_PR_ACTIVITIES,CDH_WORKFLOWS_TASK_EXECUTION_HISTORY,MANAGED_LICENSE,SERVICE,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_COUNT_HISTORY,CDH_HOST_MEMORY_USAGE_HOURLY_RESOLUTION_HISTORY,DATA_ANALYTICS_CLA_CONTRACTS,LIMA_UNASSIGNED_CONSUMPTION_HOURLY,CDH_LOG_MONITORING_METRIC_STATS_HISTORY,CDH_HOST_TECH_HISTORY,CDH_PLUGIN_HOST_DETAILS_HISTORY,DPS_RATED_CONSUMPTION,LIMA_ACCOUNT_GROUP_MEMBERSHIP,CDH_HOST_BILLING_FULL_STACK_MONITORING_HISTORY,CDH_TENANT_NETWORK_ZONE_STATS_HISTORY,DIM_DATA_QUALITY_CHECK,CDH_RUM_USER_SESSIONS_WEB_BOUNCES_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_HISTORY,AWS_MARKETPLACE_AGREEMENT,CDH_ALERTING_PROFILE_HISTORY\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.privilege.last_altered\": 1720426973204000000\n}"} -{"TIMESTAMP": 1762437938569999872, "DIMENSIONS": "{\n \"db.user\": \"JAKUBBARTOSZEWICZ\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.privilege\": \"UPDATE:VIEW\",\n \"snowflake.user.privilege.granted_by\": [\n \"SECURITYADMIN\"\n ],\n \"snowflake.user.privilege.grants_on\": 
\"BAS_BILLING_ACCOUNT,BAS_HOST_USAGE_DAILY_SUMMARY,FACT_OPPORTUNITIES,BAS_REGION,SFDC_DYNATRACE_ACCOUNT,CUSTOMER_ACCOUNT_MAPPING,PARTNER_REFERRAL,DIM_DT_ACCOUNT,DIM_ENVIRONMENT_ATTRIBUTES,CDH_CLOUD_APPLICATION_HISTORY,DIM_SESSION_REGION,DIM_DATE,V_ABUSIVE_TRIALS,CDH_ENVIRONMENT,CLOUD_ADOPTION,DIM_DEVICE_TYPE,CDH_DDU_TRACES_BY_DESCRIPTION_HISTORY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_BILLING_TYPE_HISTORY,CDH_SOFTWARE_COMPONENT_PGI_HISTORY,CDH_METRIC_EVENT_CONFIG_THRESHOLD_BASED_MODEL_HISTORY,CDH_METRIC_EVENT_CONFIG_COUNT_HISTORY,BAS_ENVIRONMENT_USAGE_SUMMARY,CDH_SLO_HISTORY,BAS_MANAGED_ACCOUNT,V_ACCOUNT_STAR_RATING,DIM_PROBLEM,DIM_HOST_ATTRIBUTES_HISTORY,BAS_BILLING_PROVIDER,CDH_FDI_EVENT_METADATA_HISTORY,V_UNIQUE_HOST_TECHNOLOGIES,CDH_APPLICATION_HISTORY,BAS_MANAGED_LICENSE,CDH_DDU_METRICS_BY_METRIC_HISTORY,FACT_PRODUCT_BEHAVIORAL_EVENTS,DIM_PRODUCT_USER,DIM_APPLICATION_TYPE,CDH_UEM_CONFIG_TENANT_HISTORY,DIM_APPLICATION,DIM_CURRENCY,BAS_TENANT,FACT_VERSIONED_MODULES,CDH_SECURITY_PROBLEM_HISTORY,CDH_CF_FOUNDATION_HOST_HISTORY,RUM_SESSION,DIM_SESSION_OS,DIM_METRIC,VW_REP_SNOWFLAKE_STORAGE_USAGE_MONTHLY_SUMMARY,CDH_JS_FRAMEWORK_USAGE_HISTORY,BAS_COMPANY,CDH_PROBLEM_EVIDENCE_HISTORY_ARCHIVE,WOOPRA_SESSION,VW_REP_SNOWFLAKE_PIPE_USAGE_HISTORY,CDH_INTERNAL_ENTITY_MODEL_CAPPING_INFORMATION_HISTORY,DIM_USAGE_TYPE,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_HISTORY,BAS_CONTRACT,DIM_ENVIRONMENT,DAVIS_ACCOUNT_LINK,DIM_CONTAINER_GROUP,DIM_VERTICAL,CDH_PROCESS_HISTORY,SFDC_MANAGED_LICENSE,MONTHLY_CONSUMPTION_STAGING,SESSION_REPLAYS_REPORT,CDH_PROBLEM_HISTORY,DAVIS_CHANNEL_CONFIGURATION,V_DT_VALUE_METRICS,FACT_ENVIRONMENT_USAGE,CDH_METRIC_EVENT_CONFIG_NAME_FILTER_HISTORY,DIM_SCREEN_RESOLUTION,CDH_MAINFRAME_MSU_HISTORY,DIM_CONTAINER_PROVIDER_TYPE,RUM_BEHAVIORAL_EVENT_PROPERTIES,FACT_TAG_COVERAGE,CDH_CLUSTER_EMERGENCY_EMAILS_HISTORY,DIM_CLUSTER,BAS_BILLING_REQUEST,DTU_ACTIVITIES,BAS_SERVICE,DIM_CLOUD_APPLICATION,FACT_PROBLEM_NATURAL_EVENTS,FACT_CONTRACT_USAGE,DIM_CUSTOMER,CDH_DDU_T
RACES_BY_ENTITY_HISTORY,V_APPS_AND_MICROSERVICES_SERVERLESS_USAGE,V_RUM_PROPERTIES,X,CDH_VERSIONED_MODULE_V2_HISTORY,FACT_CF_FOUNDATIONS,DAVIS_TENANT,BAS_LIMITS,DIM_POC_ACCOUNT,CDH_SESSION_STORAGE_USAGE_HISTORY,DIM_CONTAINER_GROUP_INSTANCE,EXTENSION_REPOSITORY_INFO,CDH_TOKEN_STATS_HISTORY,FACT_ACCOUNT_VERTICALS,CDH_ENDED_SESSIONS_HISTORY,INTERCOM_COMPANIES,CDH_CLOUD_APPLICATION_INSTANCE_HISTORY,CDH_LOG_MONITORING_METRIC_STATS_HISTORY,FACT_ENVIRONMENT_UNITS_CONSUMPTION,DEHASH_TENANT,CDH_SERVICE_HISTORY,CDH_TAG_COVERAGE_ENTITIES_HISTORY,CDH_VISITSTORE2_METRICS_HISTORY,DIM_CF_FOUNDATION,SFDC_OPPORTUNITY_PRODUCT,INTERCOM_ADMINS,DIM_PRODUCT_BEHAVIORAL_EVENT,CDH_SYNTHETIC_MONITOR_LOCATION_HISTORY,FACT_ACTIVE_GATE_UPDATE_STATUS,BAS_ENVIRONMENT_SERVICE_DAILY_SUMMARY,BAS_ENVIRONMENT_USAGE_DAILY_SUMMARY_VIEW,CDH_SECURITY_PROBLEM_MUTE_STATE_HISTORY,FACT_TRIALS,FACT_API_USAGE,FACT_DDU_METRICS,FACT_PRODUCT_PAGE_VIEWS,FACT_HOSTS_AGG_MONTH,DIM_ACTIVE_GATE_OS,FACT_PROBLEMS,CDH_VIRTUALIZATION_SUBSCRIPTION_HISTORY,BAS_EMPLOYEE_COUNT,DIM_CONTRACT,MC_ACCOUNT,FACT_CLOUD_APPLICATION_INSTANCES,DIM_API_ENDPOINT,CDH_MONITORED_VIRTUALIZATION_SERVICE_TYPES,FACT_CUSTOMER_TASKS,DIM_PRODUCT,CDH_API_USAGE_HISTORY,DIM_AGENT_ATTRIBUTES,V_CONVERSION_GOALS,SFDC_ACCOUNT,BAS_ENVIRONMENT_SERVICE_SUMMARY,DIM_VERSIONED_MODULE,DIM_CLOUD_APPLICATION_INSTANCE_TYPE,CDH_CLOUD_NETWORK_INGRESS_HISTORY,BAS_ENVIRONMENT_USAGE_DAILY_SUMMARY,MONTHLY_CONSUMPTION_REPORT,BAS_ACCOUNT,BAS_MONTHLY_USAGE,DIM_PRODUCT_TYPE,CDH_EXTENSION_HISTORY,CDH_DEEP_MONITORING_SETTINGS_HISTORY,CDH_SECURITY_PROBLEM_SC_HISTORY,CDH_BILLING_APP_SESSIONS_HISTORY,FACT_CUSTOMER_ACCOUNTS,FACT_ACTIVE_GATE_MODULES_STATUSES,FACT_CLOUD_APPLICATION_NAMESPACES,BAS_BILLING_USAGE,CDH_PROBLEM_CAPPING_INFORMATION_HISTORY,DIM_SESSION_CONTINENT,CDH_TILE_FILTER_CONFIG_HISTORY,CDH_DISCOVERED_VIRTUALIZATION_SERVICE_TYPES,CDH_DATABASE_INSIGHTS_ENDPOINT_DETAILS_HISTORY,BAS_TENANT_LICENSE,CDH_BILLING_SYNTHETIC_USAGE_HISTORY,FACT_ACTIVE_GATES,FACT_PROBLEM_EVIDENCE,
FACT_PROBLEM_RANKED_ENTITIES,INFRASTRUCTURE_SOLUTION_ENVIRONMENTS_NEW,DAILY_USAGE,DIM_CONTAINERIZATION_TYPE,CDH_VISIT_STORE_USAGE_HISTORY,V_KEY_USER_ACTION,CDH_PROBLEM_IMPACTED_ENTITIES_HISTORY,CDH_SYNTHETIC_MONITOR_HISTORY,DIM_DEM_GOALS,CDH_PLUGIN_STATE_HISTORY,CDH_SETTING_HISTORY,FACT_ENTITY_TAG_COVERAGE,CDH_LOG_MONITORING_STATS_HISTORY,CDH_CLUSTER_TAGS_HISTORY,CDH_PROBLEM_NATURAL_EVENT_HISTORY,DIM_AGENT_TECHNOLOGY_TYPE,DIM_OPPORTUNITY_TYPE,FACT_APPLICATIONS,UNIQUE_HOSTS_TECHNOLOGY_HISTORY,CDH_SOFTWARE_COMPONENT_HISTORY,SFDC_TENANT,FACT_KUBERNETES_NODES,MC_MANAGED_LICENSE_QUOTA,V_INFRASTRUCTURE_SOLUTION,BAS_SERVICE_USAGE_SUMMARY,CDH_FDI_EVENT_TYPE_AGGREGATIONS_HISTORY,DIM_PROBLEM_EVENT_TYPE,CDH_CONTAINER_GROUP_INSTANCE_HISTORY,V_DEM_GOALS,BAS_SERVICE_USAGE_DAILY_SUMMARY,CDH_VERSIONED_MODULE_HISTORY,V_DEM_CUSTOM_APPLICATION_SESSION,JIRA_ISSUES,DIM_CUSTOMER_TASK,A,DIM_PROBLEM_EVIDENCE_TYPE,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_HISTORY,INTERCOM_CONVERSATION_PARTS,FACT_OPPORTUNITY_PRODUCTS,CDH_ALERTING_PROFILE_SEVERITY_RULE_HISTORY,FACT_HOST_TECHNOLOGIES,CDH_KUBERNETES_NODE_HISTORY,CDH_ODIN_AGENT_HISTORY,CDH_ENVIRONMENT_RAW,FACT_CONTAINER_GROUP_INSTANCES,CDH_DEEP_MONITORING_SETTINGS_FEATURES_HISTORY,CDH_CLUSTERS,UNIQUE_HOSTS_TECHNOLOGY_HISTORY_NEW,BAS_BILLING_SERVICE_TYPE,DIM_OPPORTUNITY,CDH_FDI_EVENT_HISTORY,PAYING_ACCOUNTS_MONTHLY,DIM_HOST,CDH_CF_FOUNDATION_HISTORY,BAS_TENANT_USAGE_DAILY_SUMMARY,FACT_HOSTS_AGG,CDH_CLOUD_NETWORK_SERVICE_HISTORY,DIM_VERBATIM_TYPE,CDH_TAG_COVERAGE_HISTORY,SFDC_TASK,DIM_POC,WOOPRA_PAGEVIEW_BAS,DIM_ACTIVE_GATE_TYPE,CDH_COMPETITOR_JS_FRAMEWORK_USAGE_HISTORY,CONTRACT_CONSUMPTION,CDH_RELEASE_HISTORY,BAS_TIME_ZONE,CDH_CONDITIONAL_PROCEDURES_HISTORY,CDH_INTEGRATION_HISTORY,DIM_AREA,BAS_MANAGED_CLUSTER,CDH_PROBLEM_EVIDENCE_HISTORY,DIM_SESSION_CITY,CDH_METRIC_EVENT_CONFIG_HISTORY,BAS_TENANT_USAGE_SUMMARY,DIM_DT_ACCOUNT_CURRENT_ATTRIBUTES,DIM_ACTIVE_GATE,DIM_SALES_REGION,CDH_TOTAL_FDI_EVENT_COUNT_HISTORY,DIM_SERVICE,BAS_TENANT_SUB_ENVIRONMENT,F
ACT_SERVICE_TECHNOLOGIES,CDH_CUSTOM_CHART_STATS_HISTORY,CDH_VIRTUALIZATION_HISTORY,SYNTHETIC_LOCATIONS,FACT_CUSTOMERS,DIM_PROCESS_GROUP,CDH_VULNERABILITY_MATCHING_METADATA_HISTORY,DIM_DT_ACCOUNT_ATTRIBUTES,CDH_UEM_CONFIG_PROPERTY_TAG_HISTORY,DIM_AGENT,INTERCOM_CONTACTS,DIM_KUBERNETES_NODE,CDH_API_USER_AGENT_USAGE_HISTORY,CDH_DASHBOARD_CONFIG_TILE_HISTORY,V_LOG_EVENTS,CDH_CLUSTER_NETWORK_ZONE_STATS_HISTORY,DIM_COUNTRY,BAS_USER,FACT_CUSTOMER_SUPPORT,CDH_CONTAINER_MEMORY_USAGE_HISTORY,DIM_SERVICE_TYPE,B,FACT_CONTAINER_GROUPS,CDH_DASHBOARD_CONFIG_HISTORY,CDH_UEM_CONFIG_HISTORY,CDH_TOKEN_STATS_PERMISSION_HISTORY,FACT_SERVICES,INTERCOM_CONVERSATIONS,REFERRAL_CODE,FACT_CONTRACT_LATEST_USAGE,FACT_PROCESSES,SFDC_PROJECT,DIM_COMPETITOR,FACT_PROBLEM_IMPACTED_ENTITIES,CDH_CLOUD_NETWORK_POLICY_HISTORY,BAS_TENANT_USAGE,DIM_PRODUCT_PAGE,CONTRACT_CONSUMPTION_LATEST,CDH_ACTIVE_GATE_UPDATE_STATUS_HISTORY,BAS_TENANT_STATUS,CDH_DEEP_MONITORING_SETTINGS_FEATURE_V2_HISTORY,CDH_AGENT_HISTORY,CDH_DATABASE_INSIGHTS_HISTORY,DIM_PROBLEM_EVENT_STATUS,CDH_EXTENDED_TENANT_CONFIG_HISTORY,FACT_AGENTS,V_APPS_AND_MICROSERVICES_OPENT,V_CLOUD_INTEGRATION_USAGE,DIM_CONDITIONAL_PROCEDURE,DAVIS_USAGE,DIM_CONTRACT_LIMIT_ATTRIBUTES,FACT_PROCESS_ATTRIBUTES,V_COVID_19_CONSUMPTION,DIM_HOST_TECHNOLOGY,FACT_CONDITIONAL_PROCEDURE_ENTITY_RULES,BAS_MANAGED_LICENSE_QUOTA,CDH_ACTIVE_GATE_HISTORY,CDH_CONTAINER_GROUP_HISTORY,DIM_GEOGRAPHY,WOOPRA_ACTION,DIM_API_USER_AGENT,V_DEM_STAR_RATING,CDH_PLUGIN_METRIC_STATS_HISTORY,INTERCOM_USERS,CDH_CONTAINER_MEMORY_LIMIT_HISTORY,CDH_CONDITIONAL_PROCEDURES_RULES_HISTORY,CDH_PROBLEM_EVENT_METADATA_HISTORY,BAS_PROMO_CODE,V_COLUMN_USAGE_LOG,CDH_UEM_CONFIG_METADATA_CAPTURING_SETTINGS_HISTORY,INFRASTRUCTURE_SOLUTION_ENVIRONMENTS,BAS_TENANT_USAGE_DAILY_SUMMARY_VIEW,BAS_USAGE_CREDITS,CDH_ODIN_AGENT_ME_IDENTIFIER_HISTORY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_TYPE_HISTORY,CDH_PROBLEM_EVENT_INSTANCE_CLASSES_HISTORY,RUM_BEHAVIORAL_EVENTS,DIM_HOST_ATTRIBUTES,DIM_TECHNOLOGY,DIM_UNIQUE_E
NVIRONMENTS,SFDC_POC,DIM_EXTENSION,SFDC_ASSIGNMENT,BAS_ROLE,DIM_PROCESS,WOOPRA_PAGES,DIM_TRIAL,DIM_CLOUD_APPLICATION_INSTANCE,CLOUD_USAGE_REPORT,DIM_OPPORTUNITY_STAGE,DIM_ACTIVE_GATE_MODULE,FACT_HOSTS,AUTOPROV_EVENTS,CDH_FEATURE_FLAG_HISTORY,FACT_AGENT_TECHNOLOGIES,BAS_AUDIT_FIELD,TYPES_V,BAS_PROMO_USAGE,CDH_TENANT_NETWORK_ZONE_STATS_HISTORY,CDH_REQUEST_ATTRIBUTE_STATS_HISTORY,BAS_CONTRACT_PRICING,WOOPRA_ACTION_PROPERTIES,CDH_PREFERENCES_SETTINGS_HISTORY,DEHASH_ACCOUNT,FACT_PROCESS_TECHNOLOGIES,CDH_SOFTWARE_COMPONENT_DETAILS_HISTORY,CDH_PROBLEM_ROOT_CAUSE_GROUP_HISTORY,CDH_HOST_TECH_HISTORY,BAS_PACKAGE,MONTHLY_CONSUMPTION_EXT,BAS_AUDIT_ENTRY,CDH_MOBILE_CRASHES_BY_RETRIEVAL_DELAY_HISTORY,CDH_MOBILE_AGENT_VERSION_USAGE_HISTORY,CDH_PLUGIN_HOST_DETAILS_HISTORY,CDH_CLUSTER_CONTACTS_HISTORY,DIM_CLOUD_APPLICATION_INSTANCE_PHASE,MONITORED_CLOUD_SERVICES_REPORT,CDH_MOBILE_OS_VERSION_USAGE_HISTORY,BAS_SFDC_TRIAL,DAVIS_USER,CDH_ENVIRONMENTS,DEHASH_CLUSTER,DIM_SESSION_BROWSER,WOOPRA_PAGEVIEW,DIM_CLOUD_APPLICATION_TYPE,CDH_FDI_EVENT_INSTANCE_CLASSES_HISTORY,DIM_KUBERNETES_CLUSTER,DIM_POC_PRODUCT,V_CLOUD_AUTOMATION_SOLUTION,SFDC_OPPORTUNITY,DIM_CLOUD_APPLICATION_NAMESPACE,INFRASTRUCTURE_SOLUTION_ACCOUNTS,FACT_SESSIONS,BAS_TENANT_LAST_ACCESS_DATE,VW_REP_SNOWFLAKE_WAREHOUSE_METERING_HISTORY,DIM_LICENSE,VW_REP_DATE_DIMENSION,BAS_CONTRACT_BILLING_INFO,DIM_EXTENSION_METRIC_GROUP,CDH_METRIC_EVENT_CONFIG_ID_FILTER_HISTORY,CDH_METRIC_QUERY_STATS_HISTORY,INTERCOM_CONVERSATION_TAGS,CDH_HOST_HISTORY,BAS_ACCOUNT_STATUS,CDH_FDI_EVENT_ENTITY_TYPE_AGGREGATIONS_HISTORY,BAS_AUDIT_ENTITY,DIM_REGION,FACT_POC,DIM_CLUSTER_ATTRIBUTES,DIM_VERSION,CDH_DEEP_MONITORING_SETTINGS_V2_HISTORY,AUTOPROV_EVENTS_FEATURES,CDH_PROBLEM_RANKED_ENTITY_HISTORY,CDH_SECURITY_PROBLEM_ASSESSMENT_HISTORY,FACT_API_USER_AGENT_USAGE,DISCOVERED_CLOUD_SERVICES_REPORT,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_HISTORY,FACT_CLOUD_APPLICATIONS,CDH_ALERTING_PROFILE_HISTORY,DIM_DT_ONE,CDH_CLUSTER_HISTORY,CDH_ACTIVE_GATE_MODULES_STATUSES_
HISTORY,CDH_CLOUD_APPLICATION_NAMESPACE_HISTORY,RUM_PAGEVIEW,CDH_LOG_MONITORING_ES_STATS_HISTORY,INFRASTRUCTURE_SOLUTION_ACCOUNTS_NEW,DIM_PROBLEM_SOURCE,V_CLOUD_USAGE_REPORT\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.privilege.last_altered\": 1615219847793000000\n}"} +{"TIMESTAMP": 1762437938569999872, "DIMENSIONS": "{\n \"db.user\": \"TEST_PIPELINE\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.privilege\": \"APPLYBUDGET:TABLE\",\n \"snowflake.user.privilege.granted_by\": [\n \"SECURITYADMIN\"\n ],\n \"snowflake.user.privilege.grants_on\": \"TESTTABLE1,TESTTABLE2\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.privilege.last_altered\": 1720426973204000000\n}"} +{"TIMESTAMP": 1762437938569999872, "DIMENSIONS": "{\n \"db.user\": \"TESTUSER3\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.privilege\": \"UPDATE:VIEW\",\n \"snowflake.user.privilege.granted_by\": [\n \"SECURITYADMIN\"\n ],\n \"snowflake.user.privilege.grants_on\": \"BILLING_PROVIDER,CDH_SLO_HISTORY\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.privilege.last_altered\": 1615219847793000000\n}"} diff --git a/test/test_data/users_all_privileges.pkl b/test/test_data/users_all_privileges.pkl deleted file mode 100644 index eb5be0a4..00000000 --- a/test/test_data/users_all_privileges.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:892ddc758fedc65220070ba499196aae95ebb41564130aa82a87029030f85ff8 -size 3200739 diff --git a/test/test_data/users_all_roles.ndjson b/test/test_data/users_all_roles.ndjson index b8cd9d4f..e4e942b4 100644 --- a/test/test_data/users_all_roles.ndjson +++ b/test/test_data/users_all_roles.ndjson @@ -1,2 +1,2 @@ -{"TIMESTAMP": 1762438009657999872, "DIMENSIONS": "{\n \"db.user\": \"TERRAFORM_USER\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.all\": 
\"DEVACT_FINANCIAL,POWERBILOG_FINANCIAL,JIRA_FULL,APPSEC_SENSITIVE,DATAMODEL_UPGRADER,DEVEL_SYSADMIN_ROLE,INTERCOM_BASIC,IEM_BASIC,METADATA_FINANCIAL,WOOPRA_FINANCIAL,UNIVERSITY_FULL,ZENDESK_SENSITIVE,RAW_FULL,BI_FINANCIAL,CONSUMPTION_FULL,DAVIS_BASIC,INTERNALCOSTS_SENSITIVE,WOOPRA_BASIC,LIMA_BASIC,AUTOPROV_BASIC,SFM_SENSITIVE,RUM_FULL,METADATAAUDIT_SENSITIVE,DEVELCLONE_PIPELINE,SOFTCOMP_FINANCIAL,CONSUMPTION_FINANCIAL,SNOWFLAKE_FINANCE,LIMA_SENSITIVE,REPORTS_CONSUMPTION,WOOPRA_SENSITIVE,CONSUMPTION_SENSITIVE,DEVEL_SECURITYADMIN_ROLE,CONSUMPTION_BASIC,REPORTS_SENSITIVE,SYSADMIN,IEM_FULL,SFM_FINANCIAL,EMPLOYEES_FINANCIAL,METADATA_SENSITIVE,RUM_BASIC,APPSEC_BASIC,ALL_BASIC,TEST_SYSADMIN,TEST_COLDSTORE_ROLE,SYNTHETIC_BASIC,SANDBOX_TEST_BI_PREDICTIONS_ROLE,CDH_SENSITIVE,SECURITYADMIN,TEAMS_FINANCIAL,TEST_BI_PREDICTIONS_ROLE,DATAQUALITY_BASIC,RUM_SENSITIVE,BAS_BASIC,TEAMS_BASIC,BI_SENSITIVE,EXTENSIONREPOSITORYINFO_SENSITIVE,RNDWORKLOGS_FULL,REVENUE_SENSITIVE,DEV_SF_DATAMODELUPGRADER_ROLE,BAS_SENSITIVE,TEAMS_FULL,SYNTHETIC_FINANCIAL,ALL_FINANCIAL,LIMA_FULL,WOOPRA_FULL,TEST_DATAMODEL_UPGRADER_ROLE,BI_REPORTING,CDH_FINANCIAL,DEVEL_PIPELINE_ROLE,TEST_POWERBI_ROLE,DEVEL_COLDSTORE_ROLE,EMPLOYEES_BASIC,SANDBOX_ANDRZEJ_BI_REPORTING,SFM_BASIC,SANDBOX_ANDRZEJ_DATAMODEL_UPGRADER,SALESFORCE_FULL,REVENUE_FINANCIAL,UNIVERSITY_BASIC,SANDBOX_ANDRZEJ_PIPELINE,DEVACT_SENSITIVE,METADATAAUDIT_BASIC,RNDWORKLOGS_BASIC,ANY_BASIC,DEVOPS_ROLE,BAS_FULL,DEVELCLONE_DATAMODEL_UPGRADER,TEST_BI_MODELER_ROLE,SALESFORCE_BASIC,ZENDESK_FULL,DATAQUALITY_FULL,REVENUE_FULL,APPSEC_FINANCIAL,SYNTHETIC_SENSITIVE,REVENUE_BASIC,RNDWORKLOGS_SENSITIVE,POWERBILOG_SENSITIVE,LIMA_FINANCIAL,SANDBOX_TEST_BI_MODELER_ROLE,AUTOPROV_SENSITIVE,CDH_FULL,DEVELCLONE_BI_MODELER,BI_BASIC,DEVACT_BASIC,SANDBOX_TEST_DB_OWNER_ROLE,INTERCOM_SENSITIVE,INTERCOM_FINANCIAL,DEVELCLONE_BI_REPORTING,REPORTS_BASIC,JIRA_SENSITIVE,EXTENSIONREPOSITORYINFO_BASIC,REPORTS_FULL,TEST_DB_OWNER_ROLE,POWERBILOG_BASIC,DAVIS_FULL,DEVEL_DATAMODEL_UPGRADER_
ROLE,SANDBOX_TEST_READONLY_USER_ROLE,POWERBILOG_FULL,REPORTS_FINANCIAL,INTERNALCOSTS_FINANCIAL,SANDBOX_TEST_DATAMODEL_UPGRADER_ROLE,BI_MODELER,SANDBOX_TEST_PIPELINE_ROLE,METADATA_FULL,COMMUNITY_SENSITIVE,EMPLOYEES_FULL,AUTOPROV_FULL,JIRA_BASIC,SOFTCOMP_SENSITIVE,METADATAAUDIT_FINANCIAL,TERRAFORM_USER_ROLE,RUM_FINANCIAL,METADATAAUDIT_FULL,TEST_BI_REPORTING_ROLE,TEST_PIPELINE_ROLE,SOFTCOMP_BASIC,ZENDESK_BASIC,EXTENSIONREPOSITORYINFO_FINANCIAL,COMMUNITY_FULL,SANDBOX_TEST_BI_REPORTING_ROLE,DAVIS_FINANCIAL,CDH_BASIC,SFM_FULL,COLDSTORE,COMMUNITY_BASIC,IEM_SENSITIVE,SYNTHETIC_FULL,UNIVERSITY_FINANCIAL,SANDBOX_TEST_POWERBI_ROLE,DEVACT_FULL,DAVIS_SENSITIVE,POWERBI_MODEL,AUTOPROV_FINANCIAL,TEST_ETL_DQ_CHECKS_ROLE,JIRA_FINANCIAL,IEM_FINANCIAL,SCRATCHPAD_ROLE,TEAMS_SENSITIVE,INTERNALCOSTS_BASIC,MONITORING,ZENDESK_FINANCIAL,RNDWORKLOGS_FINANCIAL,DATAQUALITY_SENSITIVE,ALL_SENSITIVE,DEVEL_BI_MODELER_ROLE,INTERCOM_FULL,SOFTCOMP_FULL,DATAQUALITY_FINANCIAL,BAS_FINANCIAL,APPSEC_FULL,EMPLOYEES_SENSITIVE,DEVEL_BI_REPORTING_ROLE,SANDBOX_ANDRZEJ_BI_MODELER,ALL_FULL,REPORTS_TECHNOLOGY,INTERNALCOSTS_FULL,SALESFORCE_FINANCIAL,UNIVERSITY_SENSITIVE,COMMUNITY_FINANCIAL,EXTENSIONREPOSITORYINFO_FULL,METADATA_BASIC,SANDBOX_TEST_COLDSTORE_ROLE,SALESFORCE_SENSITIVE,PIPELINE\",\n \"snowflake.user.roles.granted_by\": [\n \"DEMIGOD\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1649326499175000000\n}"} -{"TIMESTAMP": 1762438009657999872, "DIMENSIONS": "{\n \"db.user\": \"BEATASZWICHTENBERG\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.all\": \"BI_MODELER,DATAMODEL_UPGRADER,COLDSTORE,SCRATCHPAD_ROLE,BI_REPORTING,DEV_SF_DATAMODELUPGRADER_ROLE,PIPELINE,DEVOPS_ROLE\",\n \"snowflake.user.roles.granted_by\": [\n \"DEMIGOD\",\n \"SECURITYADMIN\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1624012210371000000\n}"} +{"TIMESTAMP": 1762438009657999872, "DIMENSIONS": "{\n \"db.user\": \"TERRAFORM_USER\"\n}", "ATTRIBUTES": "{\n 
\"snowflake.user.roles.all\": \"DEMIGOD,PIPELINE\",\n \"snowflake.user.roles.granted_by\": [\n \"DEMIGOD\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1649326499175000000\n}"} +{"TIMESTAMP": 1762438009657999872, "DIMENSIONS": "{\n \"db.user\": \"TESTUSER2\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.all\": \"BI_MODELER,DATAMODEL_UPGRADER,BI_REPORTING,COLDSTORE,SCRATCHPAD_ROLE,DEV_SF_DATAMODELUPGRADER_ROLE,PIPELINE,DEVOPS_ROLE\",\n \"snowflake.user.roles.granted_by\": [\n \"DEMIGOD\",\n \"SECURITYADMIN\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1624012210371000000\n}"} diff --git a/test/test_data/users_all_roles.pkl b/test/test_data/users_all_roles.pkl deleted file mode 100644 index a514cc86..00000000 --- a/test/test_data/users_all_roles.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:b312944e908e28e993da2fdfb5b04f1deb5cb0d934754081f85b3558fa5fc5c7 -size 70772 diff --git a/test/test_data/users_hist.ndjson b/test/test_data/users_hist.ndjson deleted file mode 100644 index 75ac31e9..00000000 --- a/test/test_data/users_hist.ndjson +++ /dev/null @@ -1,2 +0,0 @@ -{"LOGIN": "SEBASTIAN.KRUK", "_MESSAGE": "User details for SEBASTIAN.KRUK", "TIMESTAMP": 1762440508592000000, "DIMENSIONS": "{\n \"db.user\": \"SEBASTIAN.KRUK\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.default.namespace\": \"DEV_DB\",\n \"snowflake.user.default.role\": \"SEBASTIAN_KRUK_ROLE\",\n \"snowflake.user.default.secondary_role\": \"[]\",\n \"snowflake.user.default.warehouse\": \"COMPUTE_WH\",\n \"snowflake.user.display_name\": \"Sebastian Kruk\",\n \"snowflake.user.email\": \"95ab5ef6a07c48fe4e0d1049b5b16b07cb2334dead8801d4d6078dd283b338f6\",\n \"snowflake.user.ext_authn.duo\": false,\n \"snowflake.user.has_password\": false,\n \"snowflake.user.id\": 298,\n \"snowflake.user.is_disabled\": false,\n \"snowflake.user.is_from_organization\": false,\n \"snowflake.user.is_locked\": false,\n 
\"snowflake.user.must_change_password\": false,\n \"snowflake.user.name\": \"SEBASTIAN.KRUK\",\n \"snowflake.user.name.first\": \"Sebastian\",\n \"snowflake.user.name.last\": \"Kruk\",\n \"snowflake.user.owner\": \"AAD_PROVISIONER\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.created_on\": 1644434689039000000,\n \"snowflake.user.last_success_login\": 1762440232376000000\n}"} -{"LOGIN": "TEST_DATAMODEL_UPGRADER", "_MESSAGE": "User details for TEST_DATAMODEL_UPGRADER", "TIMESTAMP": 1762440508592000000, "DIMENSIONS": "{\n \"db.user\": \"TEST_DATAMODEL_UPGRADER\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.default.namespace\": \"TEST_DB.ETL\",\n \"snowflake.user.default.role\": \"TEST_DATAMODEL_UPGRADER_ROLE\",\n \"snowflake.user.default.secondary_role\": \"[]\",\n \"snowflake.user.default.warehouse\": \"TEST_ETL_UPGRADE_WH\",\n \"snowflake.user.display_name\": \"TEST_DATAMODEL_UPGRADER\",\n \"snowflake.user.ext_authn.duo\": false,\n \"snowflake.user.has_password\": true,\n \"snowflake.user.id\": 618,\n \"snowflake.user.is_disabled\": false,\n \"snowflake.user.is_from_organization\": false,\n \"snowflake.user.is_locked\": false,\n \"snowflake.user.must_change_password\": false,\n \"snowflake.user.name\": \"TEST_DATAMODEL_UPGRADER\",\n \"snowflake.user.owner\": \"SECURITYADMIN\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.created_on\": 1679517315842000000,\n \"snowflake.user.last_success_login\": 1762440027946000000,\n \"snowflake.user.password_last_set_time\": 1679517315842000000\n}"} diff --git a/test/test_data/users_hist.pkl b/test/test_data/users_hist.pkl deleted file mode 100644 index 70714c4d..00000000 --- a/test/test_data/users_hist.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:cf75ce18ab867b41a4f5f74f13456ee99f72bee834474ff207d174af4e8e48cd -size 3021 diff --git a/test/test_data/users_history.ndjson b/test/test_data/users_history.ndjson new file mode 100644 index 00000000..63f0a6c0 --- /dev/null +++ 
b/test/test_data/users_history.ndjson @@ -0,0 +1,2 @@ +{"LOGIN": "TEST_USER_1", "_MESSAGE": "User details for TEST_USER_1", "TIMESTAMP": 1762440508592000000, "DIMENSIONS": "{\n \"db.user\": \"TEST_USER_1\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.default.namespace\": \"DEV_DB\",\n \"snowflake.user.default.role\": \"TEST_USER_ROLE\",\n \"snowflake.user.default.secondary_role\": \"[]\",\n \"snowflake.user.default.warehouse\": \"COMPUTE_WH\",\n \"snowflake.user.display_name\": \"Test User\",\n \"snowflake.user.email\": \"95ab5ef6a07c48fe4e0d1049b5b16b07cb2334dead8801d4d6078dd283b338f6\",\n \"snowflake.user.ext_authn.duo\": false,\n \"snowflake.user.has_mfa\": false,\n \"snowflake.user.has_password\": false,\n \"snowflake.user.has_pat\": false,\n \"snowflake.user.has_rsa\": false,\n \"snowflake.user.has_workload_identity\": false,\n \"snowflake.user.id\": 298,\n \"snowflake.user.is_disabled\": false,\n \"snowflake.user.is_from_organization\": false,\n \"snowflake.user.is_locked\": false,\n \"snowflake.user.must_change_password\": false,\n \"snowflake.user.name\": \"TEST_USER_1\",\n \"snowflake.user.name.first\": \"Test\",\n \"snowflake.user.name.last\": \"User\",\n \"snowflake.user.owner\": \"AAD_PROVISIONER\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.created_on\": 1644434689039000000,\n \"snowflake.user.last_success_login\": 1762440232376000000\n}"} +{"LOGIN": "TEST_DATAMODEL_UPGRADER", "_MESSAGE": "User details for TEST_DATAMODEL_UPGRADER", "TIMESTAMP": 1762440508592000000, "DIMENSIONS": "{\n \"db.user\": \"TEST_DATAMODEL_UPGRADER\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.default.namespace\": \"TEST_DB.ETL\",\n \"snowflake.user.default.role\": \"TEST_DATAMODEL_UPGRADER_ROLE\",\n \"snowflake.user.default.secondary_role\": \"[]\",\n \"snowflake.user.default.warehouse\": \"TEST_ETL_UPGRADE_WH\",\n \"snowflake.user.display_name\": \"TEST_DATAMODEL_UPGRADER\",\n \"snowflake.user.ext_authn.duo\": false,\n \"snowflake.user.has_mfa\": false,\n 
\"snowflake.user.has_password\": true,\n \"snowflake.user.has_pat\": false,\n \"snowflake.user.has_rsa\": false,\n \"snowflake.user.has_workload_identity\": false,\n \"snowflake.user.id\": 618,\n \"snowflake.user.is_disabled\": false,\n \"snowflake.user.is_from_organization\": false,\n \"snowflake.user.is_locked\": false,\n \"snowflake.user.must_change_password\": false,\n \"snowflake.user.name\": \"TEST_DATAMODEL_UPGRADER\",\n \"snowflake.user.owner\": \"SECURITYADMIN\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.created_on\": 1679517315842000000,\n \"snowflake.user.last_success_login\": 1762440027946000000,\n \"snowflake.user.password_last_set_time\": 1679517315842000000\n}"} diff --git a/test/test_data/users_roles_direct.ndjson b/test/test_data/users_roles_direct.ndjson index cab9b788..080ff9ed 100644 --- a/test/test_data/users_roles_direct.ndjson +++ b/test/test_data/users_roles_direct.ndjson @@ -1,2 +1,2 @@ -{"TIMESTAMP": 1762438066208000000, "DIMENSIONS": "{\n \"db.user\": \"ALEKSANDRA_RUMINSKA\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct\": [\n \"ALEKSANDRA_RUMINSKA_ROLE\"\n ],\n \"snowflake.user.roles.granted_by\": [\n \"USERADMIN\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1661498740003000000\n}"} -{"TIMESTAMP": 1762438066208000000, "DIMENSIONS": "{\n \"db.user\": \"MICHALLITKA\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct\": [\n \"SCRATCHPAD_ROLE\"\n ],\n \"snowflake.user.roles.granted_by\": [\n \"SECURITYADMIN\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1615219848339000000\n}"} +{"TIMESTAMP": 1762438066208000000, "DIMENSIONS": "{\n \"db.user\": \"TEST_USER\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct\": [\n \"TEST_USER_ROLE\"\n ],\n \"snowflake.user.roles.granted_by\": [\n \"USERADMIN\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1661498740003000000\n}"} +{"TIMESTAMP": 1762438066208000000, "DIMENSIONS": "{\n \"db.user\": 
\"TESTUSER\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct\": [\n \"SCRATCHPAD_ROLE\"\n ],\n \"snowflake.user.roles.granted_by\": [\n \"SECURITYADMIN\"\n ]\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.last_altered\": 1615219848339000000\n}"} diff --git a/test/test_data/users_roles_direct.pkl b/test/test_data/users_roles_direct.pkl deleted file mode 100644 index 02c21741..00000000 --- a/test/test_data/users_roles_direct.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:bd36d621e5542419819380b3e1e8f4405cfd4aef46ce0b85860c445ba148f556 -size 18170 diff --git a/test/test_data/users_roles_direct_removed.ndjson b/test/test_data/users_roles_direct_removed.ndjson index 13881915..d3390fd7 100644 --- a/test/test_data/users_roles_direct_removed.ndjson +++ b/test/test_data/users_roles_direct_removed.ndjson @@ -1,2 +1,2 @@ {"TIMESTAMP": 1762438068083000064, "_MESSAGE": "User direct roles removed since 1970-01-01 00:00:00.000 Z", "DIMENSIONS": "{\n \"db.user\": \"DEVEL_PIPELINE\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct.removed\": \"DEVEL_PIPELINE_ROLE\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.direct.removed_on\": 1622116800000000000\n}"} -{"TIMESTAMP": 1762438068083000064, "_MESSAGE": "User direct roles removed since 1970-01-01 00:00:00.000 Z", "DIMENSIONS": "{\n \"db.user\": \"SEBASTIAN.KRUK\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct.removed\": \"DTAGENT_SA082_VIEWER\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.direct.removed_on\": 1747375200000000000\n}"} +{"TIMESTAMP": 1762438068083000064, "_MESSAGE": "User direct roles removed since 1970-01-01 00:00:00.000 Z", "DIMENSIONS": "{\n \"db.user\": \"TEST.USER\"\n}", "ATTRIBUTES": "{\n \"snowflake.user.roles.direct.removed\": \"DTAGENT_SA082_VIEWER\"\n}", "EVENT_TIMESTAMPS": "{\n \"snowflake.user.roles.direct.removed_on\": 1747375200000000000\n}"} diff --git a/test/test_data/users_roles_direct_removed.pkl 
b/test/test_data/users_roles_direct_removed.pkl deleted file mode 100644 index ef673809..00000000 --- a/test/test_data/users_roles_direct_removed.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:ebcab84972a47ccc488e4836acd13ad2b659b5a6721dd8593df697a8946e2c44 -size 49294 diff --git a/test/test_data/wh_usage_loads.ndjson b/test/test_data/warehouse_usage_events.ndjson similarity index 100% rename from test/test_data/wh_usage_loads.ndjson rename to test/test_data/warehouse_usage_events.ndjson diff --git a/test/test_data/wh_usage_metering.ndjson b/test/test_data/warehouse_usage_loads.ndjson similarity index 100% rename from test/test_data/wh_usage_metering.ndjson rename to test/test_data/warehouse_usage_loads.ndjson diff --git a/test/test_data/warehouse_usage_metering.ndjson b/test/test_data/warehouse_usage_metering.ndjson new file mode 100644 index 00000000..e69de29b diff --git a/test/test_data/warehouses.pkl b/test/test_data/warehouses.pkl deleted file mode 100644 index 5d565ea6..00000000 --- a/test/test_data/warehouses.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2a5a9167c6ceb85922d7b36a58e0d59e1787e27e04faa961ccf58ebae7124eac -size 71999 diff --git a/test/test_data/wh_usage_events.pkl b/test/test_data/wh_usage_events.pkl deleted file mode 100644 index d733e16a..00000000 --- a/test/test_data/wh_usage_events.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2fca76b25f36ca75b594f3808e919940d87ce2f278be6098048bb96bff9f9ce3 -size 1017 diff --git a/test/test_data/wh_usage_loads.pkl b/test/test_data/wh_usage_loads.pkl deleted file mode 100644 index 80dcac10..00000000 --- a/test/test_data/wh_usage_loads.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2024418d7cf4d70c464766afd69ff8087b94f9242f799842c428ff184f764abd -size 1255 diff --git a/test/test_data/wh_usage_metering.pkl 
b/test/test_data/wh_usage_metering.pkl deleted file mode 100644 index 80dcac10..00000000 --- a/test/test_data/wh_usage_metering.pkl +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2024418d7cf4d70c464766afd69ff8087b94f9242f799842c428ff184f764abd -size 1255 diff --git a/test/test_results/test_active_queries/logs.json b/test/test_results/test_active_queries/logs.json index 2821de32..63706833 100644 --- a/test/test_results/test_active_queries/logs.json +++ b/test/test_results/test_active_queries/logs.json @@ -1,6 +1,6 @@ [ { - "content": "SQL query SUCCESS at DTAGENT_SKRUK_DB", + "content": "SQL query SUCCESS at DTAGENT_TEST_DB", "snowflake.data.written_to_result": 1781, "snowflake.rows.written_to_result": 60, "snowflake.time.compilation": 127, @@ -17,16 +17,16 @@ "snowflake.query.tag": "dt_snowagent:2025-03-06_10:20:48.705959", "snowflake.schema.name": "APP", "snowflake.warehouse.type": "STANDARD", - "db.namespace": "DTAGENT_SKRUK_DB", + "db.namespace": "DTAGENT_TEST_DB", "db.user": "SYSTEM", "snowflake.query.execution_status": "SUCCESS", - "snowflake.role.name": "DTAGENT_SKRUK_ADMIN", - "snowflake.warehouse.name": "DTAGENT_SKRUK_WH", + "snowflake.role.name": "DTAGENT_TEST_ADMIN", + "snowflake.warehouse.name": "DTAGENT_TEST_WH", "dsoa.run.context": "active_queries", "dsoa.run.plugin": "test_active_queries" }, { - "content": "SQL query SUCCESS at DTAGENT_SKRUK_DB", + "content": "SQL query SUCCESS at DTAGENT_TEST_DB", "snowflake.data.written_to_result": 1781, "snowflake.rows.written_to_result": 60, "snowflake.time.compilation": 135, @@ -43,12 +43,12 @@ "snowflake.query.tag": "dt_snowagent:2025-03-06_10:20:48.723769", "snowflake.schema.name": "APP", "snowflake.warehouse.type": "STANDARD", - "db.namespace": "DTAGENT_SKRUK_DB", + "db.namespace": "DTAGENT_TEST_DB", "db.user": "SYSTEM", "snowflake.query.execution_status": "SUCCESS", - "snowflake.role.name": "DTAGENT_SKRUK_ADMIN", - "snowflake.warehouse.name": "DTAGENT_SKRUK_WH", + 
"snowflake.role.name": "DTAGENT_TEST_ADMIN", + "snowflake.warehouse.name": "DTAGENT_TEST_WH", "dsoa.run.context": "active_queries", "dsoa.run.plugin": "test_active_queries" } -] +] \ No newline at end of file diff --git a/test/test_results/test_active_queries/metrics.txt b/test/test_results/test_active_queries/metrics.txt index f9ae04a3..850e6491 100644 --- a/test/test_results/test_active_queries/metrics.txt +++ b/test/test_results/test_active_queries/metrics.txt @@ -1,13 +1,13 @@ -snowflake.data.written_to_result,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 1781 +snowflake.data.written_to_result,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 1781 #snowflake.data.written_to_result gauge dt.meta.displayName="Bytes Written to Result",dt.meta.unit="bytes" -snowflake.rows.written_to_result,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 60 
+snowflake.rows.written_to_result,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 60 #snowflake.rows.written_to_result gauge dt.meta.displayName="Rows Written to Result",dt.meta.unit="rows" -snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 127 +snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 127 #snowflake.time.compilation gauge dt.meta.displayName="Query Compilation Time",dt.meta.unit="ms" -snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 1 
+snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 1 #snowflake.time.execution gauge dt.meta.displayName="Execution Time",dt.meta.unit="ms" -snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 128 +snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 128 #snowflake.time.total_elapsed gauge dt.meta.displayName="Total Elapsed Time",dt.meta.unit="ms" -snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 135 
-snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 2 -snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.warehouse.name="DTAGENT_SKRUK_WH" 137 \ No newline at end of file +snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 135 +snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 2 +snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="active_queries",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snowflake.query.execution_status="SUCCESS",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.warehouse.name="DTAGENT_TEST_WH" 137 \ No newline at end of file diff --git a/test/test_results/test_active_queries_results.txt 
b/test/test_results/test_active_queries_results.txt deleted file mode 100644 index 2cdc0c9f..00000000 --- a/test/test_results/test_active_queries_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:064f014bfd5576ed075c58ee86d1017398bc3e06d48fe20e039f9f8ad3b85825 -size 70459 diff --git a/test/test_results/test_automode/000/logs.json b/test/test_results/test_automode/000/logs.json index 4d47d892..aa71ee4f 100644 --- a/test/test_results/test_automode/000/logs.json +++ b/test/test_results/test_automode/000/logs.json @@ -13,6 +13,8 @@ }, { "content": "test_automode/000", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "snowflake.table.ddl": 1733122324468000000, "snowflake.table.update": 1741707972204000000, "snowflake.table.time_since.last_ddl": 144135, @@ -36,4 +38,4 @@ "dsoa.run.plugin": "telemetry_sender", "dsoa.run.context": "self_monitoring" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/001/logs.json b/test/test_results/test_automode/001/logs.json index ee603a4d..a8aa761c 100644 --- a/test/test_results/test_automode/001/logs.json +++ b/test/test_results/test_automode/001/logs.json @@ -13,6 +13,8 @@ }, { "content": "test_automode/001", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "snowflake.table.ddl": 1733122324468000000, "snowflake.table.update": 1741707972204000000, "snowflake.table.time_since.last_ddl": 144135, @@ -36,4 +38,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/002/logs.json b/test/test_results/test_automode/002/logs.json index d9753c8f..fa51e2f1 100644 --- a/test/test_results/test_automode/002/logs.json +++ b/test/test_results/test_automode/002/logs.json @@ -13,6 +13,8 @@ }, { "content": "test_automode/002", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "snowflake.table.ddl": 1733122324468000000, "snowflake.table.update": 
1741707972204000000, "snowflake.table.time_since.last_ddl": 144135, @@ -36,4 +38,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/004/logs.json b/test/test_results/test_automode/004/logs.json index f67a29df..c8055a57 100644 --- a/test/test_results/test_automode/004/logs.json +++ b/test/test_results/test_automode/004/logs.json @@ -1,6 +1,8 @@ [ { "content": "test_automode/004", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "data_update": 1731675389099000000, "ddl": 1731436379508000000, "snowflake.table.time_since.last_ddl": 4030, @@ -24,4 +26,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/005/logs.json b/test/test_results/test_automode/005/logs.json index a140bcc0..cbfa3683 100644 --- a/test/test_results/test_automode/005/logs.json +++ b/test/test_results/test_automode/005/logs.json @@ -1,6 +1,8 @@ [ { "content": "test_automode/005", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "data_update": 1731675389099000000, "ddl": 1731436379508000000, "snowflake.table.time_since.last_ddl": 4030, @@ -36,4 +38,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/006/logs.json b/test/test_results/test_automode/006/logs.json index 788d8a2a..4a69576e 100644 --- a/test/test_results/test_automode/006/logs.json +++ b/test/test_results/test_automode/006/logs.json @@ -1,6 +1,8 @@ [ { "content": "test_automode/006", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "data_update": 1731675389099000000, "ddl": 1731436379508000000, "snowflake.table.time_since.last_ddl": 4030, @@ -36,4 +38,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git 
a/test/test_results/test_automode/007/logs.json b/test/test_results/test_automode/007/logs.json index 0d7352f4..09ff403f 100644 --- a/test/test_results/test_automode/007/logs.json +++ b/test/test_results/test_automode/007/logs.json @@ -1,6 +1,8 @@ [ { "content": "test_automode/007", + "snowflake.data.rows": 0, + "snowflake.data.size": 0, "data_update": 1731675389099000000, "ddl": 1731436379508000000, "snowflake.table.time_since.last_ddl": 4030, @@ -36,4 +38,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/009/logs.json b/test/test_results/test_automode/009/logs.json index 6df8870d..641a326c 100644 --- a/test/test_results/test_automode/009/logs.json +++ b/test/test_results/test_automode/009/logs.json @@ -20,6 +20,7 @@ { "content": "This is a test object 2", "status.code": "OK", + "value.bool": false, "event.type": "PERFORMANCE_EVENT", "value.int": 10000000, "value.str": "test 2", @@ -65,4 +66,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_automode/012/logs.json b/test/test_results/test_automode/012/logs.json index 919d42f0..2d6f821c 100644 --- a/test/test_results/test_automode/012/logs.json +++ b/test/test_results/test_automode/012/logs.json @@ -20,6 +20,7 @@ { "content": "This is a test object 2", "status.code": "OK", + "value.bool": false, "event.type": "PERFORMANCE_EVENT", "value.int": 10000000, "value.str": "test 2", @@ -65,4 +66,4 @@ "dsoa.run.context": "self_monitoring", "dsoa.run.plugin": "telemetry_sender" } -] +] \ No newline at end of file diff --git a/test/test_results/test_budgets_results.txt b/test/test_results/test_budgets_results.txt deleted file mode 100644 index 3c0b5cf0..00000000 --- a/test/test_results/test_budgets_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:4a2d0fb2db396962fea91ddfff1a3c00d7edd6e93a32d0889583ed6354c684d6 -size 4540 diff --git a/test/test_results/test_data_schemas/events.json b/test/test_results/test_data_schemas/events.json index 41ef2e5e..7a86802a 100644 --- a/test/test_results/test_data_schemas/events.json +++ b/test/test_results/test_data_schemas/events.json @@ -7,7 +7,7 @@ "snowflake.query.object.modified_by_ddl.id": 1154, "snowflake.query.object.modified_by_ddl.name": "CHARGEBACK_HRC_TEST_DB.LI_TEST", "snowflake.query.object.modified_by_ddl.operation_type": "CREATE", - "snowflake.query.user": "STEFAN.SCHWEIGER", + "snowflake.query.user": "TEST.USER", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -16,7 +16,7 @@ "dsoa.run.context": "data_schemas", "dsoa.run.plugin": "test_data_schemas", "eventType": "CUSTOM_INFO", - "title": "Objects accessed by query 01b165f1-0604-1812-0040-e0030291d142 run by STEFAN.SCHWEIGER" + "title": "Objects accessed by query 01b165f1-0604-1812-0040-e0030291d142 run by TEST.USER" }, { "snowflake.object.event": "snowflake.object.ddl", @@ -27,7 +27,7 @@ "snowflake.query.object.modified_by_ddl.name": "CHARGEBACK_HRC_TEST_DB.LI_TEST.DAILY_COSTS", "snowflake.query.object.modified_by_ddl.operation_type": "CREATE", "snowflake.query.object.modified_by_ddl.properties": "{\"columns\": {\"BOOKING_DATE\": {\"objectId\": {\"value\": 1680388}, \"subOperationType\": \"ADD\"}, \"CAPABILITY_ID\": {\"objectId\": {\"value\": 1680391}, \"subOperationType\": \"ADD\"}, \"COSTS\": {\"objectId\": {\"value\": 1680392}, \"subOperationType\": \"ADD\"}, \"ENVIRONMENT_ID\": {\"objectId\": {\"value\": 1680390}, \"subOperationType\": \"ADD\"}, \"SUBSCRIPTION_UUID\": {\"objectId\": {\"value\": 1680386}, \"subOperationType\": \"ADD\"}, \"UPDATED_AT\": {\"objectId\": {\"value\": 1680389}, \"subOperationType\": \"ADD\"}, \"USAGE_DATE\": {\"objectId\": {\"value\": 1680387}, \"subOperationType\": \"ADD\"}}, \"creationMode\": {\"value\": \"CREATE\"}}", - 
"snowflake.query.user": "STEFAN.SCHWEIGER", + "snowflake.query.user": "TEST.USER", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -36,6 +36,6 @@ "dsoa.run.context": "data_schemas", "dsoa.run.plugin": "test_data_schemas", "eventType": "CUSTOM_INFO", - "title": "Objects accessed by query 01b165fb-0604-1945-0040-e0030291c3be run by STEFAN.SCHWEIGER" + "title": "Objects accessed by query 01b165fb-0604-1945-0040-e0030291c3be run by TEST.USER" } ] \ No newline at end of file diff --git a/test/test_results/test_data_schemas_results.txt b/test/test_results/test_data_schemas_results.txt deleted file mode 100644 index e3886d57..00000000 --- a/test/test_results/test_data_schemas_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:e5cd32ce1ec4f45dcfbaf46e51bcf5c0da9c9ced4eb28df200299690a3f805b3 -size 2778 diff --git a/test/test_results/test_data_volume_results.txt b/test/test_results/test_data_volume_results.txt deleted file mode 100644 index af3dc38d..00000000 --- a/test/test_results/test_data_volume_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:815ee9a282af22849ec4431882c0e378185f545a361e7079061cd0335525b746 -size 4329 diff --git a/test/test_results/test_dynamic_tables/logs.json b/test/test_results/test_dynamic_tables/logs.json index e08949c3..c6eabd41 100644 --- a/test/test_results/test_dynamic_tables/logs.json +++ b/test/test_results/test_dynamic_tables/logs.json @@ -30,6 +30,11 @@ "snowflake.table.dynamic.refresh.start": "2025-03-12 06:59:30.668 Z", "snowflake.table.dynamic.refresh.state": "SUCCEEDED", "snowflake.table.dynamic.refresh.trigger": "SCHEDULED", + "snowflake.partitions.added": 0, + "snowflake.partitions.removed": 0, + "snowflake.rows.copied": 0, + "snowflake.rows.deleted": 0, + "snowflake.rows.inserted": 0, "db.collection.name": "EMPLOYEE_DET", "db.namespace": "DYNAMIC_TABLE_DB", "snowflake.schema.name": 
"DYNAMIC_TABLE_SCH", @@ -49,6 +54,11 @@ "snowflake.table.dynamic.refresh.start": "2025-03-12 07:00:19.058 Z", "snowflake.table.dynamic.refresh.state": "SUCCEEDED", "snowflake.table.dynamic.refresh.trigger": "SCHEDULED", + "snowflake.partitions.added": 0, + "snowflake.partitions.removed": 0, + "snowflake.rows.copied": 0, + "snowflake.rows.deleted": 0, + "snowflake.rows.inserted": 0, "db.collection.name": "EMPLOYEE_DET", "db.namespace": "DYNAMIC_TABLE_DB", "snowflake.schema.name": "DYNAMIC_TABLE_SCH", diff --git a/test/test_results/test_dynamic_tables_results.txt b/test/test_results/test_dynamic_tables_results.txt deleted file mode 100644 index 1657f66e..00000000 --- a/test/test_results/test_dynamic_tables_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:fcc861ba42f03301ae4a4a0d8a6317b6e7b6feae37cce3ea8421d03f89334106 -size 73957 diff --git a/test/test_results/test_event_log/logs.json b/test/test_results/test_event_log/logs.json index 561c7b0b..6d5c7cfc 100644 --- a/test/test_results/test_event_log/logs.json +++ b/test/test_results/test_event_log/logs.json @@ -7,28 +7,29 @@ "process.memory.usage": { "sum": 0 }, - "db.namespace": "DTAGENT_SKRUK_DB", + "db.namespace": "DTAGENT_TEST_DB", "db.user": "SYSTEM", "snow.database.id": 632, - "snow.database.name": "DTAGENT_SKRUK_DB", + "snow.database.name": "DTAGENT_TEST_DB", "snow.executable.id": 51528, "snow.executable.name": "DTAGENT(SOURCES ARRAY):OBJECT", "snow.executable.runtime.version": 3.11, "snow.executable.type": "PROCEDURE", "snow.owner.id": 567463, - "snow.owner.name": "DTAGENT_SKRUK_ADMIN", + "snow.owner.name": "DTAGENT_TEST_ADMIN", "snow.query.id": "01ba3bba-0412-e356-0051-0c031e222a46", "snow.schema.id": 6165, "snow.schema.name": "APP", "snow.session.id": 22812680207736954, "snow.session.role.primary.id": 567463, - "snow.session.role.primary.name": "DTAGENT_SKRUK_ADMIN", + "snow.session.role.primary.name": "DTAGENT_TEST_ADMIN", + "snow.user.id": 0, 
"snow.warehouse.id": 4649, - "snow.warehouse.name": "DTAGENT_SKRUK_WH", + "snow.warehouse.name": "DTAGENT_TEST_WH", "snowflake.query.id": "01ba3bba-0412-e356-0051-0c031e222a46", - "snowflake.role.name": "DTAGENT_SKRUK_ADMIN", + "snowflake.role.name": "DTAGENT_TEST_ADMIN", "snowflake.schema.name": "APP", - "snowflake.warehouse.name": "DTAGENT_SKRUK_WH", + "snowflake.warehouse.name": "DTAGENT_TEST_WH", "dsoa.run.context": "event_log_metrics", "dsoa.run.plugin": "test_event_log" }, @@ -40,28 +41,29 @@ "process.memory.usage": { "sum": 0 }, - "db.namespace": "DTAGENT_SKRUK_DB", + "db.namespace": "DTAGENT_TEST_DB", "db.user": "SYSTEM", "snow.database.id": 632, - "snow.database.name": "DTAGENT_SKRUK_DB", + "snow.database.name": "DTAGENT_TEST_DB", "snow.executable.id": 51528, "snow.executable.name": "DTAGENT(SOURCES ARRAY):OBJECT", "snow.executable.runtime.version": 3.11, "snow.executable.type": "PROCEDURE", "snow.owner.id": 567463, - "snow.owner.name": "DTAGENT_SKRUK_ADMIN", + "snow.owner.name": "DTAGENT_TEST_ADMIN", "snow.query.id": "01ba3bba-0412-e3aa-0051-0c031e22516a", "snow.schema.id": 6165, "snow.schema.name": "APP", "snow.session.id": 22812680207694442, "snow.session.role.primary.id": 567463, - "snow.session.role.primary.name": "DTAGENT_SKRUK_ADMIN", + "snow.session.role.primary.name": "DTAGENT_TEST_ADMIN", + "snow.user.id": 0, "snow.warehouse.id": 4649, - "snow.warehouse.name": "DTAGENT_SKRUK_WH", + "snow.warehouse.name": "DTAGENT_TEST_WH", "snowflake.query.id": "01ba3bba-0412-e3aa-0051-0c031e22516a", - "snowflake.role.name": "DTAGENT_SKRUK_ADMIN", + "snowflake.role.name": "DTAGENT_TEST_ADMIN", "snowflake.schema.name": "APP", - "snowflake.warehouse.name": "DTAGENT_SKRUK_WH", + "snowflake.warehouse.name": "DTAGENT_TEST_WH", "dsoa.run.context": "event_log_metrics", "dsoa.run.plugin": "test_event_log" }, @@ -82,6 +84,7 @@ "snow.session.id": 22812680207694358, "snow.session.role.primary.id": 566820, "snow.session.role.primary.name": "DTAGENT_ADMIN", + "snow.user.id": 
0, "snow.warehouse.id": 4637, "snow.warehouse.name": "DTAGENT_WH", "code.filepath": "/usr/lib/python_udf/5dc6be55a3cd750f7a845533bacb523ed5bf419a303f987f5c6daae4105f654e/lib/python3.10/site-packages/opentelemetry/attributes/__init__.py", @@ -160,6 +163,7 @@ "snow.session.id": 22812680207694358, "snow.session.role.primary.id": 566820, "snow.session.role.primary.name": "DTAGENT_ADMIN", + "snow.user.id": 0, "snow.warehouse.id": 4637, "snow.warehouse.name": "DTAGENT_WH", "code.filepath": "_udf_code.py", diff --git a/test/test_results/test_event_log/metrics.txt b/test/test_results/test_event_log/metrics.txt index 41ec096c..74bf26a6 100644 --- a/test/test_results/test_event_log/metrics.txt +++ b/test/test_results/test_event_log/metrics.txt @@ -1,6 +1,6 @@ -process.cpu.utilization,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_SKRUK_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_SKRUK_ADMIN",snow.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207736954",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_SKRUK_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_SKRUK_WH",snowflake.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_SKRUK_WH",telemetry.sdk.language="python" 0 
+process.cpu.utilization,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_TEST_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_TEST_ADMIN",snow.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207736954",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_TEST_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_TEST_WH",snowflake.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_TEST_WH" 0 #process.cpu.utilization gauge dt.meta.displayName="Process CPU Utilization",dt.meta.unit="1" -process.memory.usage,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_SKRUK_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES 
ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_SKRUK_ADMIN",snow.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207736954",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_SKRUK_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_SKRUK_WH",snowflake.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_SKRUK_WH",telemetry.sdk.language="python" 0 +process.memory.usage,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_TEST_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_TEST_ADMIN",snow.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207736954",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_TEST_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_TEST_WH",snowflake.query.id="01ba3bba-0412-e356-0051-0c031e222a46",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_TEST_WH" 0 #process.memory.usage gauge dt.meta.displayName="Process Memory Usage",dt.meta.unit="bytes" 
-process.cpu.utilization,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_SKRUK_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_SKRUK_ADMIN",snow.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207694442",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_SKRUK_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_SKRUK_WH",snowflake.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_SKRUK_WH",telemetry.sdk.language="python" 0 -process.memory.usage,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_SKRUK_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_SKRUK_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES 
ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_SKRUK_ADMIN",snow.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207694442",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_SKRUK_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_SKRUK_WH",snowflake.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snowflake.role.name="DTAGENT_SKRUK_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_SKRUK_WH",telemetry.sdk.language="python" 0 \ No newline at end of file +process.cpu.utilization,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_TEST_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_TEST_ADMIN",snow.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207694442",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_TEST_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_TEST_WH",snowflake.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_TEST_WH" 0 
+process.memory.usage,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="event_log_metrics",db.namespace="DTAGENT_TEST_DB",db.user="SYSTEM",snow.database.id="632",snow.database.name="DTAGENT_TEST_DB",snow.executable.id="51528",snow.executable.name="DTAGENT(SOURCES ARRAY):OBJECT",snow.executable.runtime.version="3.11",snow.executable.type="PROCEDURE",snow.owner.id="567463",snow.owner.name="DTAGENT_TEST_ADMIN",snow.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snow.schema.id="6165",snow.schema.name="APP",snow.session.id="22812680207694442",snow.session.role.primary.id="567463",snow.session.role.primary.name="DTAGENT_TEST_ADMIN",snow.user.id="0",snow.warehouse.id="4649",snow.warehouse.name="DTAGENT_TEST_WH",snowflake.query.id="01ba3bba-0412-e3aa-0051-0c031e22516a",snowflake.role.name="DTAGENT_TEST_ADMIN",snowflake.schema.name="APP",snowflake.warehouse.name="DTAGENT_TEST_WH" 0 \ No newline at end of file diff --git a/test/test_results/test_event_log/spans.json b/test/test_results/test_event_log/spans.json index 29f0ea8d..178f8b21 100644 --- a/test/test_results/test_event_log/spans.json +++ b/test/test_results/test_event_log/spans.json @@ -1,27 +1,27 @@ [ { - "db.namespace": "DTAGENT_SKRUK_DB", - "db.user": "SEBASTIAN.KRUK", + "db.namespace": "DTAGENT_TEST_DB", + "db.user": "TEST.USER", "snow.database.id": 632, - "snow.database.name": "DTAGENT_SKRUK_DB", + "snow.database.name": "DTAGENT_TEST_DB", "snow.executable.id": 51719, "snow.executable.name": "LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)", "snow.executable.type": "PROCEDURE", "snow.owner.id": 567463, - "snow.owner.name": "DTAGENT_SKRUK_ADMIN", + "snow.owner.name": "DTAGENT_TEST_ADMIN", "snow.query.id": "01ba3bc5-0412-e34b-0051-0c031e22184a", "snow.schema.id": 6167, "snow.schema.name": "STATUS", "snow.session.id": 
22812680207733670, "snow.session.role.primary.id": 567483, - "snow.session.role.primary.name": "DTAGENT_SKRUK_VIEWER", + "snow.session.role.primary.name": "DTAGENT_TEST_VIEWER", "snow.user.id": 361, "snow.warehouse.id": 4649, - "snow.warehouse.name": "DTAGENT_SKRUK_WH", + "snow.warehouse.name": "DTAGENT_TEST_WH", "snowflake.query.id": "01ba3bc5-0412-e34b-0051-0c031e22184a", - "snowflake.role.name": "DTAGENT_SKRUK_VIEWER", + "snowflake.role.name": "DTAGENT_TEST_VIEWER", "snowflake.schema.name": "STATUS", - "snowflake.warehouse.name": "DTAGENT_SKRUK_WH", + "snowflake.warehouse.name": "DTAGENT_TEST_WH", "telemetry.sdk.language": "sql", "dsoa.run.context": "event_log_spans", "dsoa.run.plugin": "test_event_log", @@ -29,28 +29,28 @@ "dsoa.debug.span.events.failed": 0 }, { - "db.namespace": "DTAGENT_SKRUK_DB", + "db.namespace": "DTAGENT_TEST_DB", "db.user": "SYSTEM", "snow.database.id": 632, - "snow.database.name": "DTAGENT_SKRUK_DB", + "snow.database.name": "DTAGENT_TEST_DB", "snow.executable.id": 51719, "snow.executable.name": "LOG_PROCESSED_MEASUREMENTS(MEASUREMENTS_SOURCE VARCHAR, LAST_TIMESTAMP VARCHAR, LAST_ID VARCHAR, ENTRIES_COUNT VARCHAR):VARCHAR(16777216)", "snow.executable.type": "PROCEDURE", "snow.owner.id": 567463, - "snow.owner.name": "DTAGENT_SKRUK_ADMIN", + "snow.owner.name": "DTAGENT_TEST_ADMIN", "snow.query.id": "01ba3bcc-0412-e352-0051-0c031e22c1a6", "snow.schema.id": 6167, "snow.schema.name": "STATUS", "snow.session.id": 22812680207726014, "snow.session.role.primary.id": 567463, - "snow.session.role.primary.name": "DTAGENT_SKRUK_ADMIN", + "snow.session.role.primary.name": "DTAGENT_TEST_ADMIN", "snow.user.id": 0, "snow.warehouse.id": 4649, - "snow.warehouse.name": "DTAGENT_SKRUK_WH", + "snow.warehouse.name": "DTAGENT_TEST_WH", "snowflake.query.id": "01ba3bcc-0412-e352-0051-0c031e22c1a6", - "snowflake.role.name": "DTAGENT_SKRUK_ADMIN", + "snowflake.role.name": "DTAGENT_TEST_ADMIN", "snowflake.schema.name": "STATUS", - "snowflake.warehouse.name": 
"DTAGENT_SKRUK_WH", + "snowflake.warehouse.name": "DTAGENT_TEST_WH", "telemetry.sdk.language": "sql", "dsoa.run.context": "event_log_spans", "dsoa.run.plugin": "test_event_log", diff --git a/test/test_results/test_event_log_results.txt b/test/test_results/test_event_log_results.txt deleted file mode 100644 index f690c434..00000000 --- a/test/test_results/test_event_log_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3b8bb7e363c92098053b712be49b5c2f3ed9fab72dcfa12f8c262cf35316f80e -size 16605 diff --git a/test/test_results/test_event_usage/logs.json b/test/test_results/test_event_usage/logs.json index 8f0be0cd..b7a34cae 100644 --- a/test/test_results/test_event_usage/logs.json +++ b/test/test_results/test_event_usage/logs.json @@ -9,7 +9,8 @@ { "content": "Event Usage", "snowflake.credits.used": 0.000215595, + "snowflake.data.ingested": 0, "dsoa.run.context": "event_usage", "dsoa.run.plugin": "test_event_usage" } -] +] \ No newline at end of file diff --git a/test/test_results/test_event_usage_results.txt b/test/test_results/test_event_usage_results.txt deleted file mode 100644 index ab3f1761..00000000 --- a/test/test_results/test_event_usage_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d89971392a3338a37e68f27f7d959fc57d19db33825961b7bb6b1f15577fe16f -size 5479 diff --git a/test/test_results/test_login_history/logs.json b/test/test_results/test_login_history/logs.json index f389db06..a1e992e8 100644 --- a/test/test_results/test_login_history/logs.json +++ b/test/test_results/test_login_history/logs.json @@ -5,10 +5,11 @@ "client.version": "1.22.1", "event.id": 18260702004593906, "status.code": "OK", - "client.ip": "82.177.196.146", + "client.ip": "10.0.0.1", "client.type": "OTHER", - "db.user": "SEBASTIAN.KRUK", + "db.user": "TEST.USER", "event.name": "LOGIN", + "event.related_id": 0, "dsoa.run.context": "login_history", "dsoa.run.plugin": "test_login_history" }, 
@@ -18,11 +19,12 @@ "client.version": "1.21.0", "event.id": 18260702004595858, "status.code": "OK", - "client.ip": "52.29.224.53", + "client.ip": "10.0.0.1", "client.type": "OTHER", "db.user": "SNOWAGENT_ADMIN", "event.name": "LOGIN", + "event.related_id": 0, "dsoa.run.context": "login_history", "dsoa.run.plugin": "test_login_history" } -] +] \ No newline at end of file diff --git a/test/test_results/test_login_history_results.txt b/test/test_results/test_login_history_results.txt deleted file mode 100644 index ff48cf82..00000000 --- a/test/test_results/test_login_history_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:eaa9be91858129912b4424106027285a251107a6c5d920c81c9dbd6bbee92d39 -size 5042 diff --git a/test/test_results/test_query_history/logs.json b/test/test_results/test_query_history/logs.json index f6e902b4..c1bd93f8 100644 --- a/test/test_results/test_query_history/logs.json +++ b/test/test_results/test_query_history/logs.json @@ -26,6 +26,38 @@ "snowflake.warehouse.id": 3726, "snowflake.warehouse.size": "X-Small", "snowflake.warehouse.type": "STANDARD", + "snowflake.acceleration.data.scanned": 0, + "snowflake.acceleration.partitions.scanned": 0, + "snowflake.acceleration.scale_factor.max": 0, + "snowflake.data.deleted": 0, + "snowflake.data.read.from_result": 0, + "snowflake.data.scanned": 0, + "snowflake.data.scanned_from_cache": 0.0, + "snowflake.data.sent_over_the_network": 0, + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.data.transferred.inbound": 0, + "snowflake.data.transferred.outbound": 0, + "snowflake.data.written": 0, + "snowflake.external_functions.data.received": 0, + "snowflake.external_functions.data.sent": 0, + "snowflake.external_functions.invocations": 0, + "snowflake.external_functions.rows.received": 0, + "snowflake.external_functions.rows.sent": 0, + "snowflake.partitions.scanned": 0, + "snowflake.partitions.total": 0, + 
"snowflake.query.is_client_generated": false, + "snowflake.query.transaction_id": 0, + "snowflake.rows.deleted": 0, + "snowflake.rows.inserted": 0, + "snowflake.rows.unloaded": 0, + "snowflake.rows.updated": 0, + "snowflake.time.child_queries_wait": 0, + "snowflake.time.list_external_files": 0, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "snowflake.time.repair": 0, + "snowflake.time.transaction_blocked": 0, "db.namespace": "DTAGENT_DB", "db.operation.name": "CALL", "db.user": "SYSTEM", @@ -67,6 +99,38 @@ "snowflake.warehouse.id": 3726, "snowflake.warehouse.size": "X-Small", "snowflake.warehouse.type": "STANDARD", + "snowflake.acceleration.data.scanned": 0, + "snowflake.acceleration.partitions.scanned": 0, + "snowflake.acceleration.scale_factor.max": 0, + "snowflake.data.deleted": 0, + "snowflake.data.read.from_result": 0, + "snowflake.data.scanned": 0, + "snowflake.data.scanned_from_cache": 0.0, + "snowflake.data.sent_over_the_network": 0, + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.data.transferred.inbound": 0, + "snowflake.data.transferred.outbound": 0, + "snowflake.data.written": 0, + "snowflake.external_functions.data.received": 0, + "snowflake.external_functions.data.sent": 0, + "snowflake.external_functions.invocations": 0, + "snowflake.external_functions.rows.received": 0, + "snowflake.external_functions.rows.sent": 0, + "snowflake.partitions.scanned": 0, + "snowflake.partitions.total": 0, + "snowflake.query.is_client_generated": false, + "snowflake.query.transaction_id": 0, + "snowflake.rows.deleted": 0, + "snowflake.rows.inserted": 0, + "snowflake.rows.unloaded": 0, + "snowflake.rows.updated": 0, + "snowflake.time.child_queries_wait": 0, + "snowflake.time.list_external_files": 0, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "snowflake.time.repair": 0, + "snowflake.time.transaction_blocked": 0, "db.collection.name": 
"DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG", "db.namespace": "DTAGENT_DB", "db.operation.name": "CALL", @@ -116,6 +180,34 @@ "snowflake.warehouse.id": 3726, "snowflake.warehouse.size": "X-Small", "snowflake.warehouse.type": "STANDARD", + "snowflake.acceleration.data.scanned": 0, + "snowflake.acceleration.partitions.scanned": 0, + "snowflake.acceleration.scale_factor.max": 0, + "snowflake.data.deleted": 0, + "snowflake.data.read.from_result": 0, + "snowflake.data.scanned_from_cache": 0.0, + "snowflake.data.sent_over_the_network": 0, + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.data.transferred.inbound": 0, + "snowflake.data.transferred.outbound": 0, + "snowflake.data.written": 0, + "snowflake.external_functions.data.received": 0, + "snowflake.external_functions.data.sent": 0, + "snowflake.external_functions.invocations": 0, + "snowflake.external_functions.rows.received": 0, + "snowflake.external_functions.rows.sent": 0, + "snowflake.query.is_client_generated": false, + "snowflake.query.transaction_id": 0, + "snowflake.rows.deleted": 0, + "snowflake.rows.inserted": 0, + "snowflake.rows.unloaded": 0, + "snowflake.rows.updated": 0, + "snowflake.time.child_queries_wait": 0, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "snowflake.time.repair": 0, + "snowflake.time.transaction_blocked": 0, "db.collection.name": "DTAGENT_DB.STATUS.PROCESSED_MEASUREMENTS_LOG", "db.namespace": "DTAGENT_DB", "db.operation.name": "SELECT", diff --git a/test/test_results/test_query_history_results.txt b/test/test_results/test_query_history_results.txt deleted file mode 100644 index 92e17e78..00000000 --- a/test/test_results/test_query_history_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a3597dc92e3e06edf50f03e221de5de60f1040cbddee7f49c099807bb35e04a6 -size 33869 diff --git a/test/test_results/test_query_history_span_hierarchy/biz_events.json 
b/test/test_results/test_query_history_span_hierarchy/biz_events.json new file mode 100644 index 00000000..f21d3c78 --- /dev/null +++ b/test/test_results/test_query_history_span_hierarchy/biz_events.json @@ -0,0 +1,40 @@ +[ + { + "specversion": "1.0", + "source": "test.dsoa2025.snowflakecomputing.com", + "type": "dsoa.task", + "data": { + "event.provider": "test.dsoa2025.snowflakecomputing.com", + "dsoa.task.name": "test_query_history_span_hierarchy", + "dsoa.task.exec.status": "STARTED", + "app.bundle": "self_monitoring", + "app.id": "dynatrace.snowagent", + "db.system": "snowflake", + "service.name": "test.dsoa2025", + "deployment.environment": "TEST", + "host.name": "test.dsoa2025.snowflakecomputing.com", + "telemetry.exporter.name": "dynatrace.snowagent", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "dsoa.run.context": "self_monitoring" + } + }, + { + "specversion": "1.0", + "source": "test.dsoa2025.snowflakecomputing.com", + "type": "dsoa.task", + "data": { + "event.provider": "test.dsoa2025.snowflakecomputing.com", + "dsoa.task.name": "test_query_history_span_hierarchy", + "dsoa.task.exec.status": "FINISHED", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "app.bundle": "self_monitoring", + "app.id": "dynatrace.snowagent", + "db.system": "snowflake", + "service.name": "test.dsoa2025", + "deployment.environment": "TEST", + "host.name": "test.dsoa2025.snowflakecomputing.com", + "telemetry.exporter.name": "dynatrace.snowagent", + "dsoa.run.context": "self_monitoring" + } + } +] \ No newline at end of file diff --git a/test/test_results/test_query_history_span_hierarchy/logs.json b/test/test_results/test_query_history_span_hierarchy/logs.json new file mode 100644 index 00000000..3c018742 --- /dev/null +++ b/test/test_results/test_query_history_span_hierarchy/logs.json @@ -0,0 +1,70 @@ +[ + { + "content": "CALL MY_DB.PUBLIC.P_OUTER_SP();", + "snowflake.time.execution": 9000, + "snowflake.time.total_elapsed": 10000, + 
"snowflake.time.compilation": 500, + "db.query.text": "CALL MY_DB.PUBLIC.P_OUTER_SP();", + "snowflake.query.id": "sp-root-0001-0000-0000-000000000001", + "db.namespace": "MY_DB", + "db.operation.name": "CALL", + "db.user": "TEST_USER", + "snowflake.query.execution_status": "SUCCESS", + "snowflake.role.name": "TEST_ROLE", + "snowflake.warehouse.name": "TEST_WH", + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.partitions.scanned": 0, + "snowflake.partitions.total": 0, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "dsoa.run.context": "query_history" + }, + { + "content": "CALL MY_DB.PUBLIC.P_INNER_SP();", + "snowflake.time.execution": 5000, + "snowflake.time.total_elapsed": 6000, + "snowflake.time.compilation": 300, + "db.query.text": "CALL MY_DB.PUBLIC.P_INNER_SP();", + "snowflake.query.id": "sp-mid1-0001-0000-0000-000000000002", + "snowflake.query.parent_id": "sp-root-0001-0000-0000-000000000001", + "db.namespace": "MY_DB", + "db.operation.name": "CALL", + "db.user": "TEST_USER", + "snowflake.query.execution_status": "SUCCESS", + "snowflake.role.name": "TEST_ROLE", + "snowflake.warehouse.name": "TEST_WH", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.partitions.scanned": 0, + "snowflake.partitions.total": 0, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "dsoa.run.context": "query_history" + }, + { + "content": "SELECT * FROM MY_DB.PUBLIC.MY_TABLE;", + "snowflake.time.execution": 2000, + "snowflake.time.total_elapsed": 3000, + "snowflake.time.compilation": 200, + "snowflake.partitions.scanned": 5, + "snowflake.partitions.total": 10, + "db.query.text": "SELECT * FROM MY_DB.PUBLIC.MY_TABLE;", + "snowflake.query.id": "sp-leaf-0001-0000-0000-000000000003", + "snowflake.query.parent_id": 
"sp-mid1-0001-0000-0000-000000000002", + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "db.namespace": "MY_DB", + "db.operation.name": "SELECT", + "db.user": "TEST_USER", + "snowflake.query.execution_status": "SUCCESS", + "snowflake.role.name": "TEST_ROLE", + "snowflake.warehouse.name": "TEST_WH", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "dsoa.run.context": "query_history" + } +] \ No newline at end of file diff --git a/test/test_results/test_query_history_span_hierarchy/metrics.txt b/test/test_results/test_query_history_span_hierarchy/metrics.txt new file mode 100644 index 00000000..2c0a7111 --- /dev/null +++ b/test/test_results/test_query_history_span_hierarchy/metrics.txt @@ -0,0 +1,30 @@ +snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 9000 +#snowflake.time.execution gauge dt.meta.displayName="Execution Time",dt.meta.unit="ms" +snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 10000 +#snowflake.time.total_elapsed gauge dt.meta.displayName="Total Elapsed Time",dt.meta.unit="ms" 
+snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 500 +#snowflake.time.compilation gauge dt.meta.displayName="Query Compilation Time",dt.meta.unit="ms" +snowflake.time.queued.overload,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +#snowflake.time.queued.overload gauge dt.meta.displayName="Queued Overload Time",dt.meta.unit="ms" +snowflake.time.queued.provisioning,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +#snowflake.time.queued.provisioning gauge dt.meta.displayName="Queued Provisioning Time",dt.meta.unit="ms" +snowflake.data.spilled.local,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +#snowflake.data.spilled.local gauge dt.meta.displayName="Bytes Spilled to Local Storage",dt.meta.unit="bytes" 
+snowflake.data.spilled.remote,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +#snowflake.data.spilled.remote gauge dt.meta.displayName="Bytes Spilled to Remote Storage",dt.meta.unit="bytes" +snowflake.partitions.scanned,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +#snowflake.partitions.scanned gauge dt.meta.displayName="Partitions Scanned",dt.meta.unit="partitions" +snowflake.partitions.total,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +#snowflake.partitions.total gauge dt.meta.displayName="Partitions Total",dt.meta.unit="partitions" +snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 5000 
+snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 6000 +snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="CALL",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 300 +snowflake.time.execution,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 2000 +snowflake.time.total_elapsed,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 3000 +snowflake.time.compilation,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 200 
+snowflake.time.queued.overload,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +snowflake.time.queued.provisioning,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +snowflake.data.spilled.local,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +snowflake.data.spilled.remote,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 0 +snowflake.partitions.scanned,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 5 
+snowflake.partitions.total,db.system="snowflake",service.name="test.dsoa2025",deployment.environment="TEST",host.name="test.dsoa2025.snowflakecomputing.com",dsoa.run.context="query_history",db.namespace="MY_DB",db.operation.name="SELECT",db.user="TEST_USER",snowflake.query.execution_status="SUCCESS",snowflake.role.name="TEST_ROLE",snowflake.warehouse.name="TEST_WH" 10 \ No newline at end of file diff --git a/test/test_results/test_query_history_span_hierarchy/spans.json b/test/test_results/test_query_history_span_hierarchy/spans.json new file mode 100644 index 00000000..80d3d21c --- /dev/null +++ b/test/test_results/test_query_history_span_hierarchy/spans.json @@ -0,0 +1,73 @@ +[ + { + "snowflake.time.execution": 2000, + "snowflake.time.total_elapsed": 3000, + "snowflake.time.compilation": 200, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.partitions.scanned": 5, + "snowflake.partitions.total": 10, + "db.query.text": "SELECT * FROM MY_DB.PUBLIC.MY_TABLE;", + "snowflake.query.id": "sp-leaf-0001-0000-0000-000000000003", + "snowflake.query.parent_id": "sp-mid1-0001-0000-0000-000000000002", + "db.namespace": "MY_DB", + "db.operation.name": "SELECT", + "db.user": "TEST_USER", + "snowflake.query.execution_status": "SUCCESS", + "snowflake.role.name": "TEST_ROLE", + "snowflake.warehouse.name": "TEST_WH", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "dsoa.run.context": "query_history", + "dsoa.debug.span.events.added": 0, + "dsoa.debug.span.events.failed": 0 + }, + { + "snowflake.time.execution": 5000, + "snowflake.time.total_elapsed": 6000, + "snowflake.time.compilation": 300, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.partitions.scanned": 0, + "snowflake.partitions.total": 0, + "db.query.text": "CALL 
MY_DB.PUBLIC.P_INNER_SP();", + "snowflake.query.id": "sp-mid1-0001-0000-0000-000000000002", + "snowflake.query.parent_id": "sp-root-0001-0000-0000-000000000001", + "db.namespace": "MY_DB", + "db.operation.name": "CALL", + "db.user": "TEST_USER", + "snowflake.query.execution_status": "SUCCESS", + "snowflake.role.name": "TEST_ROLE", + "snowflake.warehouse.name": "TEST_WH", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "dsoa.run.context": "query_history", + "dsoa.debug.span.events.added": 0, + "dsoa.debug.span.events.failed": 0 + }, + { + "snowflake.time.execution": 9000, + "snowflake.time.total_elapsed": 10000, + "snowflake.time.compilation": 500, + "snowflake.time.queued.overload": 0, + "snowflake.time.queued.provisioning": 0, + "snowflake.data.spilled.local": 0, + "snowflake.data.spilled.remote": 0, + "snowflake.partitions.scanned": 0, + "snowflake.partitions.total": 0, + "db.query.text": "CALL MY_DB.PUBLIC.P_OUTER_SP();", + "snowflake.query.id": "sp-root-0001-0000-0000-000000000001", + "db.namespace": "MY_DB", + "db.operation.name": "CALL", + "db.user": "TEST_USER", + "snowflake.query.execution_status": "SUCCESS", + "snowflake.role.name": "TEST_ROLE", + "snowflake.warehouse.name": "TEST_WH", + "dsoa.run.plugin": "test_query_history_span_hierarchy", + "dsoa.run.context": "query_history", + "dsoa.debug.span.events.added": 0, + "dsoa.debug.span.events.failed": 0 + } +] \ No newline at end of file diff --git a/test/test_results/test_resource_monitors_results.txt b/test/test_results/test_resource_monitors_results.txt deleted file mode 100644 index 89cceede..00000000 --- a/test/test_results/test_resource_monitors_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:64ac70c21299e066c29595c444f76b634dceb67c87ca5476aabd58e9c024a8ea -size 4785 diff --git a/test/test_results/test_shares/events.json b/test/test_results/test_shares/events.json index 44dd9104..d5d5ced7 100644 --- 
a/test/test_results/test_shares/events.json +++ b/test/test_results/test_shares/events.json @@ -5,19 +5,19 @@ "snowflake.event.trigger": "snowflake.grant.created_on", "snowflake.grant.created_on": 1687246726499000000, "snowflake.grant.by": "DEMIGOD", - "snowflake.grant.grantee": "DEVDYNATRACEDIGITALBUSINESSDW.DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.grant.grantee": "TESTACCOUNT.DATA_SCIENTIST_DEVEL_DS_SHARE", "snowflake.grant.on": "DATABASE", "snowflake.grant.option": "false", "snowflake.grant.privilege": "USAGE", "snowflake.grant.to": "SHARE", - "snowflake.share.is_secure_objects_only": "true", + "snowflake.share.is_secure_objects_only": true, "snowflake.share.kind": "OUTBOUND", "snowflake.share.owner": "DEMIGOD", - "snowflake.share.shared_from": "WMBJBCQ.DEVDYNATRACEDIGITALBUSINESSDW", - "snowflake.share.shared_to": "WMBJBCQ.CI360TESTACCOUNT", + "snowflake.share.shared_from": "TEST123.TESTACCOUNT_WD", + "snowflake.share.shared_to": "TEST123.TESTACCOUNT", "db.namespace": "DATA_SCIENTIST_DEV_DB", "snowflake.grant.name": "DATA_SCIENTIST_DEV_DB", - "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_SHARE", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -32,19 +32,19 @@ "snowflake.event.trigger": "snowflake.grant.created_on", "snowflake.grant.created_on": 1668416468928000000, "snowflake.grant.by": "INTEGRATION_CONSUMPTION_FORECASTING_ROLE", - "snowflake.grant.grantee": "DEVDYNATRACEDIGITALBUSINESSDW.DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.grant.grantee": "TESTACCOUNT.DATA_SCIENTIST_DEVEL_DS_SHARE", "snowflake.grant.on": "SCHEMA", "snowflake.grant.option": "false", "snowflake.grant.privilege": "USAGE", "snowflake.grant.to": "SHARE", - "snowflake.share.is_secure_objects_only": "true", + "snowflake.share.is_secure_objects_only": true, "snowflake.share.kind": "OUTBOUND", "snowflake.share.owner": "DEMIGOD", - "snowflake.share.shared_from": 
"WMBJBCQ.DEVDYNATRACEDIGITALBUSINESSDW", - "snowflake.share.shared_to": "WMBJBCQ.CI360TESTACCOUNT", + "snowflake.share.shared_from": "TEST123.TESTACCOUNT_WD", + "snowflake.share.shared_to": "TEST123.TESTACCOUNT", "db.namespace": "DATA_SCIENTIST_DEV_DB", "snowflake.grant.name": "DATA_SCIENTIST_DEV_DB.ACCOUNT_EXPERIENCE", - "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_SHARE", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -59,7 +59,7 @@ "snowflake.event.trigger": "snowflake.share.created_on", "snowflake.share.created_on": 1633629486209000000, "snowflake.share.kind": "INBOUND", - "snowflake.share.shared_from": "JKMKTPS.DKA87615", + "snowflake.share.shared_from": "TEST123.TESTACCOUNT", "snowflake.share.name": "Monte Carlo", "db.system": "snowflake", "service.name": "test.dsoa2025", diff --git a/test/test_results/test_shares/logs.json b/test/test_results/test_shares/logs.json index b516824b..78e6f3fd 100644 --- a/test/test_results/test_shares/logs.json +++ b/test/test_results/test_shares/logs.json @@ -1,39 +1,43 @@ [ { - "content": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "content": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_SHARE", "snowflake.grant.created_on": 1687246726499000000, "snowflake.grant.by": "DEMIGOD", - "snowflake.grant.grantee": "DEVDYNATRACEDIGITALBUSINESSDW.DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.grant.grantee": "TESTACCOUNT.DATA_SCIENTIST_DEVEL_DS_SHARE", "snowflake.grant.on": "DATABASE", "snowflake.grant.privilege": "USAGE", "snowflake.grant.to": "SHARE", "snowflake.share.is_secure_objects_only": true, "snowflake.share.kind": "OUTBOUND", "snowflake.share.owner": "DEMIGOD", - "snowflake.share.shared_from": "WMBJBCQ.DEVDYNATRACEDIGITALBUSINESSDW", - "snowflake.share.shared_to": "WMBJBCQ.CI360TESTACCOUNT", + "snowflake.grant.option": false, + "snowflake.share.listing_global_name": "", + 
"snowflake.share.shared_from": "TEST123.TESTACCOUNT_WD", + "snowflake.share.shared_to": "TEST123.TESTACCOUNT", "db.namespace": "DATA_SCIENTIST_DEV_DB", "snowflake.grant.name": "DATA_SCIENTIST_DEV_DB", - "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_SHARE", "dsoa.run.context": "outbound_shares", "dsoa.run.plugin": "test_shares" }, { - "content": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "content": "Outbound share details for DATA_SCIENTIST_DEVEL_DS_SHARE", "snowflake.grant.created_on": 1668416468928000000, "snowflake.grant.by": "INTEGRATION_CONSUMPTION_FORECASTING_ROLE", - "snowflake.grant.grantee": "DEVDYNATRACEDIGITALBUSINESSDW.DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.grant.grantee": "TESTACCOUNT.DATA_SCIENTIST_DEVEL_DS_SHARE", "snowflake.grant.on": "SCHEMA", "snowflake.grant.privilege": "USAGE", "snowflake.grant.to": "SHARE", "snowflake.share.is_secure_objects_only": true, "snowflake.share.kind": "OUTBOUND", "snowflake.share.owner": "DEMIGOD", - "snowflake.share.shared_from": "WMBJBCQ.DEVDYNATRACEDIGITALBUSINESSDW", - "snowflake.share.shared_to": "WMBJBCQ.CI360TESTACCOUNT", + "snowflake.share.shared_from": "TEST123.TESTACCOUNT_WD", + "snowflake.share.shared_to": "TEST123.TESTACCOUNT", + "snowflake.grant.option": false, + "snowflake.share.listing_global_name": "", "db.namespace": "DATA_SCIENTIST_DEV_DB", "snowflake.grant.name": "DATA_SCIENTIST_DEV_DB.ACCOUNT_EXPERIENCE", - "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_CI360_SHARE", + "snowflake.share.name": "DATA_SCIENTIST_DEVEL_DS_SHARE", "dsoa.run.context": "outbound_shares", "dsoa.run.plugin": "test_shares" }, @@ -41,9 +45,12 @@ "content": "Inbound share details for BIET_MONITORING_SHARE", "snowflake.share.has_details_reported": true, "snowflake.share.kind": "INBOUND", - "snowflake.share.shared_from": "WMBJBCQ.CI360TESTACCOUNT", - "db.namespace": "CI360_SHARE_MONITORING_DB", + 
"snowflake.share.shared_from": "TEST123.TESTACCOUNT", + "db.namespace": "BI_SHARE_MONITORING_DB", "snowflake.share.name": "BIET_MONITORING_SHARE", + "snowflake.share.listing_global_name": "", + "snowflake.share.owner": "", + "snowflake.share.shared_to": "", "dsoa.run.context": "inbound_shares", "dsoa.run.plugin": "test_shares" }, @@ -51,10 +58,24 @@ "content": "Inbound share details for BIET_MONITORING_SHARE", "snowflake.share.has_details_reported": true, "snowflake.share.kind": "INBOUND", - "snowflake.share.shared_from": "WMBJBCQ.DYNATRACEDIGITALBUSINESSDW", + "snowflake.share.shared_from": "TEST123.TESTACCOUNT_WA", "db.namespace": "DT_SHARE_MONITORING_DB", "snowflake.share.name": "BIET_MONITORING_SHARE", + "snowflake.share.listing_global_name": "", + "snowflake.share.owner": "", + "snowflake.share.shared_to": "", + "dsoa.run.context": "inbound_shares", + "dsoa.run.plugin": "test_shares" + }, + { + "content": "Inbound share \"DELETED_DB_SHARE\" has a deleted database - data is no longer accessible", + "snowflake.share.has_db_deleted": true, + "snowflake.share.has_details_reported": true, + "snowflake.share.kind": "INBOUND", + "snowflake.share.shared_from": "ABC123.SOURCE_ACCOUNT", + "db.namespace": "DELETED_SHARED_DB", + "snowflake.share.name": "DELETED_DB_SHARE", "dsoa.run.context": "inbound_shares", "dsoa.run.plugin": "test_shares" } -] +] \ No newline at end of file diff --git a/test/test_results/test_shares_results.txt b/test/test_results/test_shares_results.txt deleted file mode 100644 index e815df63..00000000 --- a/test/test_results/test_shares_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f36dcaef73ca6edc0d26b8bcfb786439d68b85615a418a5ac2bec48505329229 -size 4857 diff --git a/test/test_results/test_tasks_results.txt b/test/test_results/test_tasks_results.txt deleted file mode 100644 index 1f8c0c50..00000000 --- a/test/test_results/test_tasks_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version 
https://git-lfs.github.com/spec/v1 -oid sha256:53f111e32eba88d09de5d0eca79735b408f465597025a446a4f4c00bb82d0272 -size 8822 diff --git a/test/test_results/test_trust_center_results.txt b/test/test_results/test_trust_center_results.txt deleted file mode 100644 index 3cac0b93..00000000 --- a/test/test_results/test_trust_center_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:8ff6ddcec8cadc7982f9d05821cb0c6dc387a2a205b15c45fa9318307e9c17e5 -size 6143 diff --git a/test/test_results/test_users/events.json b/test/test_results/test_users/events.json index e7f28c0b..e6cd299f 100644 --- a/test/test_results/test_users/events.json +++ b/test/test_results/test_users/events.json @@ -6,22 +6,26 @@ "snowflake.user.created_on": 1644434689039000000, "snowflake.user.last_success_login": 1762440232376000000, "snowflake.user.default.namespace": "DEV_DB", - "snowflake.user.default.role": "SEBASTIAN_KRUK_ROLE", + "snowflake.user.default.role": "TEST_USER_ROLE", "snowflake.user.default.warehouse": "COMPUTE_WH", - "snowflake.user.display_name": "Sebastian Kruk", + "snowflake.user.display_name": "Test User", "snowflake.user.email": "95ab5ef6a07c48fe4e0d1049b5b16b07cb2334dead8801d4d6078dd283b338f6", "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_mfa": false, "snowflake.user.has_password": false, + "snowflake.user.has_pat": false, + "snowflake.user.has_rsa": false, + "snowflake.user.has_workload_identity": false, "snowflake.user.id": 298, "snowflake.user.is_disabled": false, "snowflake.user.is_from_organization": false, "snowflake.user.is_locked": false, "snowflake.user.must_change_password": false, - "snowflake.user.name": "SEBASTIAN.KRUK", - "snowflake.user.name.first": "Sebastian", - "snowflake.user.name.last": "Kruk", + "snowflake.user.name": "TEST_USER_1", + "snowflake.user.name.first": "Test", + "snowflake.user.name.last": "User", "snowflake.user.owner": "AAD_PROVISIONER", - "db.user": "SEBASTIAN.KRUK", + "db.user": 
"TEST_USER_1", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -37,22 +41,26 @@ "snowflake.user.created_on": 1644434689039000000, "snowflake.user.last_success_login": 1762440232376000000, "snowflake.user.default.namespace": "DEV_DB", - "snowflake.user.default.role": "SEBASTIAN_KRUK_ROLE", + "snowflake.user.default.role": "TEST_USER_ROLE", "snowflake.user.default.warehouse": "COMPUTE_WH", - "snowflake.user.display_name": "Sebastian Kruk", + "snowflake.user.display_name": "Test User", "snowflake.user.email": "95ab5ef6a07c48fe4e0d1049b5b16b07cb2334dead8801d4d6078dd283b338f6", "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_mfa": false, "snowflake.user.has_password": false, + "snowflake.user.has_pat": false, + "snowflake.user.has_rsa": false, + "snowflake.user.has_workload_identity": false, "snowflake.user.id": 298, "snowflake.user.is_disabled": false, "snowflake.user.is_from_organization": false, "snowflake.user.is_locked": false, "snowflake.user.must_change_password": false, - "snowflake.user.name": "SEBASTIAN.KRUK", - "snowflake.user.name.first": "Sebastian", - "snowflake.user.name.last": "Kruk", + "snowflake.user.name": "TEST_USER_1", + "snowflake.user.name.first": "Test", + "snowflake.user.name.last": "User", "snowflake.user.owner": "AAD_PROVISIONER", - "db.user": "SEBASTIAN.KRUK", + "db.user": "TEST_USER_1", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -73,7 +81,11 @@ "snowflake.user.default.warehouse": "TEST_ETL_UPGRADE_WH", "snowflake.user.display_name": "TEST_DATAMODEL_UPGRADER", "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_mfa": false, "snowflake.user.has_password": true, + "snowflake.user.has_pat": false, + "snowflake.user.has_rsa": false, + "snowflake.user.has_workload_identity": false, "snowflake.user.id": 618, "snowflake.user.is_disabled": false, "snowflake.user.is_from_organization": false, @@ -102,7 +114,11 @@ 
"snowflake.user.default.warehouse": "TEST_ETL_UPGRADE_WH", "snowflake.user.display_name": "TEST_DATAMODEL_UPGRADER", "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_mfa": false, "snowflake.user.has_password": true, + "snowflake.user.has_pat": false, + "snowflake.user.has_rsa": false, + "snowflake.user.has_workload_identity": false, "snowflake.user.id": 618, "snowflake.user.is_disabled": false, "snowflake.user.is_from_organization": false, @@ -131,7 +147,11 @@ "snowflake.user.default.warehouse": "TEST_ETL_UPGRADE_WH", "snowflake.user.display_name": "TEST_DATAMODEL_UPGRADER", "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_mfa": false, "snowflake.user.has_password": true, + "snowflake.user.has_pat": false, + "snowflake.user.has_rsa": false, + "snowflake.user.has_workload_identity": false, "snowflake.user.id": 618, "snowflake.user.is_disabled": false, "snowflake.user.is_from_organization": false, @@ -153,9 +173,9 @@ "title": "Table event snowflake.user.roles.last_altered.", "snowflake.event.trigger": "snowflake.user.roles.last_altered", "snowflake.user.roles.last_altered": 1661498740003000000, - "snowflake.user.roles.direct": "[\"ALEKSANDRA_RUMINSKA_ROLE\"]", + "snowflake.user.roles.direct": "[\"TEST_USER_ROLE\"]", "snowflake.user.roles.granted_by": "[\"USERADMIN\"]", - "db.user": "ALEKSANDRA_RUMINSKA", + "db.user": "TEST_USER", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -171,7 +191,7 @@ "snowflake.user.roles.last_altered": 1615219848339000000, "snowflake.user.roles.direct": "[\"SCRATCHPAD_ROLE\"]", "snowflake.user.roles.granted_by": "[\"SECURITYADMIN\"]", - "db.user": "MICHALLITKA", + "db.user": "TESTUSER", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -184,9 +204,9 @@ "eventType": "CUSTOM_INFO", "title": "Table event snowflake.user.roles.direct.removed_on.", "snowflake.event.trigger": "snowflake.user.roles.direct.removed_on", - 
"snowflake.user.roles.direct.removed_on": 1646226000000000000, - "snowflake.user.roles.direct.removed": "SANDBOX_KSTEST_COLDSTORE_ROLE", - "db.user": "SANDBOX_KSTEST_COLDSTORE", + "snowflake.user.roles.direct.removed_on": 1747375200000000000, + "snowflake.user.roles.direct.removed": "DTAGENT_SA082_VIEWER", + "db.user": "TEST.USER", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -215,7 +235,7 @@ "title": "Table event snowflake.user.roles.last_altered.", "snowflake.event.trigger": "snowflake.user.roles.last_altered", "snowflake.user.roles.last_altered": 1649326499175000000, - "snowflake.user.roles.all": "DEVACT_FINANCIAL,POWERBILOG_FINANCIAL,JIRA_FULL,APPSEC_SENSITIVE,DATAMODEL_UPGRADER,DEVEL_SYSADMIN_ROLE,INTERCOM_BASIC,IEM_BASIC,METADATA_FINANCIAL,WOOPRA_FINANCIAL,UNIVERSITY_FULL,ZENDESK_SENSITIVE,RAW_FULL,BI_FINANCIAL,CONSUMPTION_FULL,DAVIS_BASIC,INTERNALCOSTS_SENSITIVE,EMPLOYEES_FINANCIAL,SFM_FINANCIAL,METADATA_SENSITIVE,RUM_BASIC,APPSEC_BASIC,ALL_BASIC,TEST_SYSADMIN,TEST_COLDSTORE_ROLE,SYNTHETIC_BASIC,SANDBOX_TEST_BI_PREDICTIONS_ROLE,CDH_SENSITIVE,SECURITYADMIN,TEAMS_FINANCIAL,TEST_BI_PREDICTIONS_ROLE,DATAQUALITY_BASIC,RUM_SENSITIVE,BAS_BASIC,EXTENSIONREPOSITORYINFO_SENSITIVE,TEAMS_BASIC,BI_SENSITIVE,RNDWORKLOGS_FULL,BI_BASIC,DEVACT_BASIC,SANDBOX_TEST_DB_OWNER_ROLE,INTERCOM_SENSITIVE,INTERCOM_FINANCIAL,DEVELCLONE_BI_REPORTING,REPORTS_BASIC,JIRA_SENSITIVE,EXTENSIONREPOSITORYINFO_BASIC,REPORTS_FULL,TEST_DB_OWNER_ROLE,POWERBILOG_BASIC,DAVIS_FULL,DEVEL_DATAMODEL_UPGRADER_ROLE,SANDBOX_TEST_READONLY_USER_ROLE,POWERBILOG_FULL,REPORTS_FINANCIAL,INTERNALCOSTS_FINANCIAL,SANDBOX_TEST_DATAMODEL_UPGRADER_ROLE,BI_MODELER,SANDBOX_TEST_PIPELINE_ROLE,LIMA_BASIC,WOOPRA_BASIC,SFM_SENSITIVE,AUTOPROV_BASIC,METADATAAUDIT_SENSITIVE,RUM_FULL,DEVELCLONE_PIPELINE,SOFTCOMP_FINANCIAL,CONSUMPTION_FINANCIAL,SNOWFLAKE_FINANCE,LIMA_SENSITIVE,REPORTS_CONSUMPTION,WOOPRA_SENSITIVE,CONSUMPTION_SENSITIVE,DEVEL_SECURITYADMIN_ROLE,CONSUMPTION_BASIC,REPO
RTS_SENSITIVE,SYSADMIN,IEM_FULL,REVENUE_SENSITIVE,DEV_SF_DATAMODELUPGRADER_ROLE,BAS_SENSITIVE,SYNTHETIC_FINANCIAL,ALL_FINANCIAL,TEAMS_FULL,LIMA_FULL,WOOPRA_FULL,TEST_DATAMODEL_UPGRADER_ROLE,BI_REPORTING,CDH_FINANCIAL,DEVEL_PIPELINE_ROLE,TEST_POWERBI_ROLE,DEVEL_COLDSTORE_ROLE,METADATA_FULL,COMMUNITY_SENSITIVE,EMPLOYEES_FULL,AUTOPROV_FULL,JIRA_BASIC,SOFTCOMP_SENSITIVE,METADATAAUDIT_FINANCIAL,METADATAAUDIT_FULL,TERRAFORM_USER_ROLE,RUM_FINANCIAL,SOFTCOMP_BASIC,ZENDESK_BASIC,TEST_BI_REPORTING_ROLE,TEST_PIPELINE_ROLE,DAVIS_FINANCIAL,COMMUNITY_FULL,SANDBOX_TEST_BI_REPORTING_ROLE,EXTENSIONREPOSITORYINFO_FINANCIAL,SFM_FULL,CDH_BASIC,IEM_SENSITIVE,COLDSTORE,COMMUNITY_BASIC,SYNTHETIC_FULL,SANDBOX_TEST_POWERBI_ROLE,UNIVERSITY_FINANCIAL,DAVIS_SENSITIVE,DEVACT_FULL,POWERBI_MODEL,AUTOPROV_FINANCIAL,TEST_ETL_DQ_CHECKS_ROLE,JIRA_FINANCIAL,IEM_FINANCIAL,SCRATCHPAD_ROLE,INTERNALCOSTS_BASIC,MONITORING,TEAMS_SENSITIVE,ALL_SENSITIVE,DEVEL_BI_MODELER_ROLE,RNDWORKLOGS_FINANCIAL,ZENDESK_FINANCIAL,INTERCOM_FULL,DATAQUALITY_SENSITIVE,SOFTCOMP_FULL,DATAQUALITY_FINANCIAL,BAS_FINANCIAL,ALL_FULL,REPORTS_TECHNOLOGY,APPSEC_FULL,DEVEL_BI_REPORTING_ROLE,SANDBOX_ANDRZEJ_BI_MODELER,EMPLOYEES_SENSITIVE,SALESFORCE_FINANCIAL,INTERNALCOSTS_FULL,UNIVERSITY_SENSITIVE,COMMUNITY_FINANCIAL,EXTENSIONREPOSITORYINFO_FULL,METADATA_BASIC,SANDBOX_TEST_COLDSTORE_ROLE,PIPELINE,SALESFORCE_SENSITIVE,EMPLOYEES_BASIC,SANDBOX_ANDRZEJ_DATAMODEL_UPGRADER,SANDBOX_ANDRZEJ_BI_REPORTING,SFM_BASIC,SALESFORCE_FULL,REVENUE_FINANCIAL,UNIVERSITY_BASIC,SANDBOX_ANDRZEJ_PIPELINE,DEVACT_SENSITIVE,METADATAAUDIT_BASIC,RNDWORKLOGS_BASIC,ANY_BASIC,DEVOPS_ROLE,BAS_FULL,TEST_BI_MODELER_ROLE,ZENDESK_FULL,SALESFORCE_BASIC,DEVELCLONE_DATAMODEL_UPGRADER,DATAQUALITY_FULL,REVENUE_FULL,APPSEC_FINANCIAL,SYNTHETIC_SENSITIVE,REVENUE_BASIC,RNDWORKLOGS_SENSITIVE,LIMA_FINANCIAL,POWERBILOG_SENSITIVE,SANDBOX_TEST_BI_MODELER_ROLE,AUTOPROV_SENSITIVE,CDH_FULL,DEVELCLONE_BI_MODELER", + "snowflake.user.roles.all": "DEMIGOD,PIPELINE", 
"snowflake.user.roles.granted_by": "[\"DEMIGOD\"]", "db.user": "TERRAFORM_USER", "db.system": "snowflake", @@ -233,7 +253,7 @@ "snowflake.user.roles.last_altered": 1624012210371000000, "snowflake.user.roles.all": "BI_MODELER,DATAMODEL_UPGRADER,BI_REPORTING,COLDSTORE,SCRATCHPAD_ROLE,DEV_SF_DATAMODELUPGRADER_ROLE,PIPELINE,DEVOPS_ROLE", "snowflake.user.roles.granted_by": "[\"DEMIGOD\", \"SECURITYADMIN\"]", - "db.user": "BEATASZWICHTENBERG", + "db.user": "TESTUSER2", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -246,11 +266,11 @@ "eventType": "CUSTOM_INFO", "title": "Table event snowflake.user.privilege.last_altered.", "snowflake.event.trigger": "snowflake.user.privilege.last_altered", - "snowflake.user.privilege.last_altered": 1720426972449000000, - "snowflake.user.privilege": "UPDATE:TABLE", + "snowflake.user.privilege.last_altered": 1615219847793000000, + "snowflake.user.privilege": "UPDATE:VIEW", "snowflake.user.privilege.granted_by": "[\"SECURITYADMIN\"]", - "snowflake.user.privilege.grants_on": 
"BILLING_PROVIDER,CDH_SLO_HISTORY,FACT_DATAHUB_COLUMN_CHANGE_LOG,CDH_PROBLEM_IMPACTED_ENTITIES_HISTORY,FACT_TABLE_USAGE,CDH_ACTIVE_GATE_UPDATE_STATUS_HISTORY,DIM_DEPLOYMENT_STAGE,CDH_DASHBOARD_CONFIG_FILTER_USAGE_V2_HISTORY,INTERCOM_CONVERSATIONS,CDH_ODIN_AGENT_HISTORY,LIMA_SUBSCRIPTION_BUDGET_HISTORY,BITBUCKET_PR_COMMITS,CDH_PROCESS_HISTORY,CDH_ATTACK_CANDIDATES_V2_HISTORY,ZENDESK_TICKETS_V2,CDH_RUM_BILLING_DEM_UNITS_V1_HISTORY,FACT_COLUMN_PROTECTION,PARTNER_REFERRAL,CDH_APPLICATION_HISTORY,CDH_AGENT_HEALTH_METRICS_HISTORY,DPS_SUBSCRIPTION_CONSUMPTION,SNOWFLAKE_CONNECTOR_SETTINGS_HISTORY,DIM_DATAHUB_EXISTING_COLUMN,AWS_ACCOUNT_MAPPING,CDH_COMPLETENESS_BY_CLUSTER_HISTORY,RUM_BEHAVIORAL_EVENT_PROPERTIES,LIMA_USAGE,SQL_PII_SNOWFLAKE_LOG,CDH_LOG_MONITORING_STATS_HISTORY,MC_ENVIRONMENTS,AWS_MARKETPLACE_LEGACY_ID_MAPPING,CDH_ATTACK_CANDIDATES_HISTORY,DIM_COLUMN,PROMO_CODE,CDH_RUM_USER_SESSIONS_IF_ONLY_CRASH_ENABLED_HISTORY,CDH_LOG_MONITORING_STATS_V2_HISTORY,SFDC_OPPORTUNITY_PRODUCT,DEV_JIRA_PROJECT,CDH_MONITORED_VIRTUALIZATION_SERVICE_TYPES,DATA_ANALYTICS_CLA_CONTRACTS,MANAGED_ACCOUNT,CDH_API_USAGE_HISTORY,CDH_HOST_BILLING_FOUNDATION_AND_DISCOVERY_HISTORY,AWS_MARKETPLACE_OFFER_PRODUCT,CDH_LOG_MONITORING_CONFIGURATION_STATS_HISTORY,CDH_INSTRUMENTATION_LIBRARY_HISTORY,JIRA_ISSUES,CDH_SECURITY_PROBLEM_TRACKING_LINKS_HISTORY,MC_MANAGED_CLUSTER,ACCOUNT_STATUS,DIM_OVALEDGE_TERM,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_SIZE_HISTORY,SFDC_POC,SFDC_VW_SALES_USERACCESS,EXTERNAL_DQ_CHECKS_RESULTS,FACT_COLUMN_USAGE,AWS_MARKETPLACE_BILLING_EVENT,KEPTN,CDH_BILLING_APP_SESSIONS_HISTORY,CDH_CLOUD_EVENT_V2_HISTORY,CDH_CTC_LOAD_HISTORY,ENVIRONMENT_SERVICE_DAILY_SUMMARY,CDH_BILLING_APP_PROPERTIES_V2_HISTORY,SERVICE_USAGE_SUMMARY,AUTOPROV_EVENTS,DIM_DEPLOYMENT_STATUS,CDH_CLUSTER_CONTACTS_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY,RUM_BEHAVIORAL_EVENTS_V3,LIMA_SUBSCRIPTION,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_V2_HISTORY,CDH_PROBLEM_CAPPING_INFORMATION_HISTORY,FACT_DATAHUB_TABLE_CHANGE_LOG,CUST
OMER_BASE_HISTORY_V2,ZENDESK_ORGANIZATIONS_HISTORY,CDH_ODIN_AGENT_ME_IDENTIFIER_HISTORY,CDH_TOKEN_STATS_HISTORY,CDH_SERVICE_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_HISTORY,DTU_ACTIVITIES,CDH_RUM_BILLING_DEM_UNITS_V2_HISTORY,CDH_DDU_METRICS_TOTAL_V2_HISTORY,CDH_PROBLEM_HISTORY,DIM_OVALEDGE_COLUMN,CDH_UEM_CONFIG_HISTORY,AWS_METADATA,CDH_PROBLEM_EVENT_INSTANCE_CLASSES_HISTORY,CDH_CLOUD_NETWORK_POLICY_HISTORY,CDH_TIMESERIES_ARRIVAL_LATENCY_HISTORY,CDH_LOG_MONITORING_ES_STATS_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURE_V2_HISTORY,CUSTOMER_BASE_HISTORY,CDH_METRIC_EVENT_V2_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_HISTORY,LIMITS,CDH_EXTERNAL_DATA_POINTS_V2_HISTORY,LIMA_SUBSCRIPTION_HISTORY,FACT_USER_GROUP_MAP,DEV_JIRA_CUSTOM_FIELD,DIM_PII_STATE,CDH_JS_AGENT_VERSIONS,CDH_ENVIRONMENT_METRICS_METADATA_HISTORY,REFERRAL_CODE,CDH_SECURITY_PROBLEM_ASSESSMENT_VULNERABLE_FUNCTIONS_HISTORY,LIMA_USAGE_HOURLY,CDH_SOFTWARE_COMPONENT_DETAILS_V2_HISTORY,ZENDESK_USERS_V2,DEV_JIRA_WORKLOGS,SYNTHETIC_LOCATIONS,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_HISTORY,CDH_BILLING_APP_SESSIONS_V2_HISTORY,CDH_KUBERNETES_NODE_HISTORY,CDH_LOG_INGEST_ADVANCED_SETTINGS_HISTORY,TENANT_USAGE_SUMMARY,CDH_SYNTHETIC_MONITOR_HISTORY,CDH_PROBLEM_ROOT_CAUSE_GROUP_HISTORY,CDH_CLOUD_AUTOMATION_UNITS_HISTORY,SFDC_TASK,CDH_PLUGIN_METRIC_STATS_HISTORY,CDH_RUM_BILLING_PERIODS_V2_HISTORY,CDH_TOTAL_FDI_EVENT_COUNT_HISTORY,CDH_FDI_EVENT_INSTANCE_CLASSES_HISTORY,CDH_CLOUD_APPLICATION_HISTORY,CDH_TILE_FILTER_CONFIG_HISTORY,ZENDESK_GROUPS_V2,DEV_JIRA_CHANGE_LOG,NEW_EMPLOYEES,SFDC_MANAGED_LICENSE,CDH_JS_FRAMEWORK_USAGE_HISTORY,CDH_PROCESS_VISIBILITY_HISTORY_V2,ZENDESK_SIDE_CONVERSATION_EVENTS_V2,LIMA_SUBSCRIPTION_CONSUMPTION,LIMA_SUBSCRIPTION_USAGE_HOURLY,CONTRACT_PRICING,CDH_TIMESERIES_MAINTENANCE_LAG_HISTORY,CDH_NOTIFICATION_SETTINGS_HISTORY,CDH_DEEP_MONITORING_SETTINGS_V2_HISTORY,AZURE_METADATA,CDH_METRIC_EVENT_CONFIG_ID_FILTER_HISTORY,CDH_DISCOVERED_VIRTUALIZATION_SERVICE_TYPES,CDH_SDK_LANGUAGE_HISTORY,DIM_SYNC_TYPE,CDH_CODE_LEVE
L_VULNERABILITY_FINDING_EVENTS_V2_HISTORY,CDH_SETTING_V3_HISTORY,PBI_ENTITY_REFRESH_HISTORY,CDH_METRIC_EVENT_V2_VALIDATION_RESULT_HISTORY,CDH_BULK_CONFIG_CHANGES_HISTORY,CDH_TAG_COVERAGE_HISTORY,CDH_INTERNAL_ENTITY_MODEL_CAPPING_INFORMATION_HISTORY,CDH_FDI_EVENT_METADATA_HISTORY,CDH_VIRTUALIZATION_SUBSCRIPTION_HISTORY,CDH_BILLING_APP_SESSIONS_V3_HISTORY,CDH_CONTAINER_GROUP_INSTANCE_HISTORY,ZENDESK_USERS,CDH_LOG_MONITORING_METRIC_STATS_HISTORY,CDH_RELEASE_V3_HISTORY,SFDC_ASSIGNMENT,FACT_COLUMN_LINEAGE,ZENDESK_TICKETS_HISTORY_V2,CDH_DDU_METRICS_RAW_V2_HISTORY,DIM_QUALITY_TYPE,AWS_MARKETPLACE_OFFER_TARGET,CDH_APPSEC_NOTIFICATION_SETTINGS_HISTORY,TENANT_SUB_ENVIRONMENT,CDH_ALERTING_PROFILE_SEVERITY_RULE_HISTORY,CDH_VERSIONED_MODULE_V2_HISTORY,CDH_APPSEC_RUNTIME_VULNERABILITY_DETECTION_SETTINGS_HISTORY,CDH_HOST_TECH_HISTORY,DPS_SUBSCRIPTION,CDH_METRIC_EVENT_V2_NAME_FILTER_HISTORY,CDH_EXTENSION_HISTORY,INSTRUMENTED_FUNCTION_HASHES,DIM_JSON_VALIDATION,SERVICE,TENANT,BILLING_SERVICE_TYPE,CDH_DATABASE_INSIGHTS_ENDPOINT_DETAILS_HISTORY,LIMA_ACCOUNT_GROUP_MEMBERSHIP,AWS_MARKETPLACE_AGREEMENT,CDH_MAINFRAME_MSU_V2_HISTORY,BITBUCKET_PR,DIM_DATAHUB_TABLE,GRAIL_QUERY_LOG_V2,CDH_COMPLETENESS_BY_ENVIRONMENT_HISTORY,CDH_MAINFRAME_MSU_V3_HISTORY,LIMA_RATE_CARD,USER_ACCOUNT,CDH_RUM_USER_SESSIONS_WEB_BOUNCES_HISTORY,CDH_CONDITIONAL_PROCEDURES_RULES_HISTORY,TIME_ZONE,CDH_TENANT_NETWORK_ZONE_STATS_HISTORY,CDH_CLOUD_APPLICATION_NAMESPACE_HISTORY,CDH_METRIC_EVENT_CONFIG_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_BOUNCES_HISTORY,CDH_FEATURE_FLAG_HISTORY,BILLING_ACCOUNT,MC_ACCOUNT,DIM_OVALEDGE_CATEGORY,PACKAGE,CDH_WEB_APP_CALL_BY_BROWSER_HISTORY,CDH_EXTENSIONS_DISTINCT_DEVICES_HISTORY,MANAGED_CLUSTER,BITBUCKET_PR_ACTIVITIES,FACT_COLUMN,CDH_HOST_MEMORY_USAGE_HOURLY_RESOLUTION_HISTORY,CDH_CLUSTER_TAGS_HISTORY,ZENDESK_SIDE_CONVERSATION_RECIPIENTS_V2,CDH_SESSION_STORAGE_USAGE_V2_HISTORY,REGION,CDH_EXTERNAL_DATA_POINTS_HISTORY,CDH_SETTING_HISTORY,INTERCOM_CONVERSATION_TAGS,FACT_OVALEDGE_TABLE_TERM,SFDC_O
PPORTUNITY,BITBUCKET_COMMITS,TENANT_STATUS,CDH_APPSEC_MONITORING_RULES_SETTINGS_HISTORY,CDH_PROBLEM_RANKED_ENTITY_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_V2_HISTORY,SQL_LOG_PIPELINE,ZENDESK_GROUPS,CDH_CLOUD_APPLICATION_INSTANCE_HISTORY,CDH_SYNTHETIC_API_CALLS_HISTORY,CDH_CLOUD_NETWORK_INGRESS_HISTORY,CDH_DASHBOARD_CONFIG_V2_HISTORY,CDH_DDU_METRICS_CONSUMED_INCLUDED_HISTORY,CDH_SERVICE_CALLING_APPLICATIONS_HISTORY,CDH_DASHBOARD_CONFIG_HISTORY,CDH_METRIC_DATA_TYPE_HISTORY,CDH_CLUSTERS,DPS_CONSUMPTION,CDH_REQUEST_ATTRIBUTE_STATS_HISTORY,LIMA_UNASSIGNED_CONSUMPTION_HOURLY,CDH_VISIT_STORE_USAGE_HISTORY,CDH_HOST_MEMORY_USAGE_HISTORY,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V1,PBI_WORKSPACE_ENTITY_NAMES,SOFTWARE_COMPONENT_PACKAGE_NAME_HASHES,MONTHLY_USAGE,SFDC_CONSUMPTION_REVENUE_MONTHLY,FACT_TABLE,DIM_TABLE,CDH_CONTAINER_GROUP_HISTORY,CDH_API_USER_AGENT_USAGE_HISTORY,SFDC_TRIAL,ZENDESK_TICKET_METRICS_CURRENT_V2,CDH_SECURITY_PROBLEM_ASSESSMENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_V2_HISTORY,CDH_APPSEC_INTEGRATION_TYPES_HISTORY,CDH_KUBERNETES_CLUSTER_HISTORY,CDH_SERVERLESS_HISTORY,SERVICE_USAGE_DAILY_SUMMARY,CDH_FDI_EVENT_TYPE_AGGREGATIONS_HISTORY,CDH_WORKFLOWS_V2_HISTORY,CDH_MOBILE_SESSION_COUNT_BY_AGENT_TECHNOLOGY_HISTORY,CDH_EXTRACT_STATISTICS,GRAIL_APP_INSTALLATIONS,MC_ENVIRONMENT_CONSUMPTION,CDH_CLUSTER_HISTORY,CDH_SECURITY_PROBLEM_HISTORY,SIGNUP_AWS_MARKETPLACE,CDH_MOBILE_CRASHES_BY_RETRIEVAL_DELAY_HISTORY,ZENDESK_TICKET_METRICS_CURRENT,FACT_UNIQUE_COLUMNS_HISTORY,TENANT_USAGE_DAILY_SUMMARY_VIEW,BAS_AUDIT_ENTITY,ZENDESK_TICKETS,AWS_MARKETPLACE_ACCOUNT,CDH_RUM_USER_SESSIONS_MOBILE_BOUNCES_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_HISTORY,FACT_DATA_QUALITY_ISSUES,CDH_SYNTHETIC_MONITOR_LOCATION_HISTORY,CDH_HOST_MEMORY_LIMIT_HOURLY_RESOLUTION_HISTORY,ZENDESK_USERS_HISTORY,CDH_APPSEC_CONSUMPTION_BY_ENTITY_HISTORY,CDH_WORKFLOWS_TASK_EXECUTION_HISTORY,CDH_CREDENTIALS_VAULT_ENTRIES_HISTORY,SFDC_ACCOUNT,CONTRACT,CDH_PGI_PROCESS_COUNT_HISTORY
,CDH_APPSEC_RUNTIME_APPLICATION_PROTECTION_SETTINGS_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_HISTORY,CDH_MOBILE_SESSION_REPLAY_HISTORY,GRAIL_QUERY_LOG,DIM_PRIORITY,INTERCOM_COMPANIES,CDH_WORKFLOWS_V3_HISTORY,CDH_CLOUD_AUTOMATION_INSTANCE_HISTORY,CDH_PLUGIN_HOST_DETAILS_HISTORY,CDH_APPSEC_CODE_LEVEL_VULNERABILITY_DETECTION_SETTINGS_HISTORY,MC_MANAGED_LICENSE,DPS_CONSUMPTION_FORECAST,AWS_CONSUMPTION_HISTORY,CDH_ACTIVE_GATE_HISTORY,CDH_MAINTENANCE_WINDOW_FILTER_HISTORY,CDH_MOBILE_AGENT_VERSION_USAGE_HISTORY,CDH_METRIC_EVENT_V2_ID_FILTER_HISTORY,DIM_OVALEDGE_SCHEMA,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_V2_HISTORY,CDH_INTEGRATION_HISTORY,COMMUNITY_PRODUCT_IDEAS,CDH_SETTING_V2_HISTORY,CDH_CONDITIONAL_PROCEDURES_HISTORY,CDH_COMPLETENESS_BY_CLOUD_HISTORY,DIM_DATA_QUALITY_CHECK,CDH_EXTENDED_TENANT_CONFIG_HISTORY,DIM_LIFECYCLE_STAGE,PROMO_USAGE,CDH_INSTALLERS_DOWNLOAD_SERVLET_USAGES_HISTORY,INTERCOM_ADMINS,CDH_PROCESS_VISIBILITY_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY_ARCHIVE,CDH_HOST_HISTORY,CDH_MOBILE_REPLAY_FULL_SESSION_METRICS_HISTORY,CDH_METRIC_EVENT_CONFIG_COUNT_HISTORY,RUM_BEHAVIORAL_EVENTS,GCP_METADATA,USAGE_CREDITS,CDH_SERVERLESS_COMPLETENESS_HISTORY,CDH_ISSUE_TRACKER_HISTORY,DIM_OBJECT,DPS_RATED_CONSUMPTION,CDH_KEY_REQUEST_STATS_HISTORY,REPORTS_EXECUTION_LOG,PBI_ACTIVITY_LOG,LIMA_CAPABILITIES,CDH_PLUGIN_STATE_HISTORY,RUM_PAGEVIEW,HOST_USAGE_DAILY_SUMMARY,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_V2_HISTORY,CDH_DDU_METRICS_BY_METRIC_HISTORY,CDH_CLUSTER_EMERGENCY_EMAILS_HISTORY,CDH_DATABASE_INSIGHTS_HISTORY,SFDC_ACCOUNT_TEAMMEMBER,CDH_VULNERABILITY_MATCHING_METADATA_HISTORY,CDH_SECURITY_PROBLEM_MUTE_STATE_HISTORY,CDH_HOST_BILLING_FULL_STACK_MONITORING_HISTORY,DEV_JIRA_ISSUES,CDH_AUTO_UPDATE_SUCCESS_STATISTICS_HISTORY,FACT_DEPLOYMENT_DATES,CDH_RUM_BILLING_PERIODS_V1_HISTORY,CDH_ALERTING_PROFILE_HISTORY,CDH_LOG_1CLICK_ACTIVATIONS_HISTORY,CDH_ELASTICSEARCH_METRIC_DIMENSIONS_AFFILIATION_HISTORY,CONTRACT_BILLING_INFO,TEAMS_CAPABILITIES,SFDC_ACCOUNT_ARR_BANDS_MONTHLY,CDH_SECURITY_
PROBLEM_SC_HISTORY,EXTERNAL_DQ_CHECKS_DEFINITIONS,CDH_VERSIONED_MODULE_HISTORY,RUM_PAGE_REPOSITORY_INFO,CDH_METRIC_EVENT_CONFIG_NAME_FILTER_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_SESSIONS_HISTORY,CDH_CF_FOUNDATION_HISTORY,ZENDESK_SIDE_CONVERSATIONS_V2,CDH_LOG_MONITORING_CUSTOM_ATTRIBUTE_HISTORY,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V2,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_HISTORY,RUM_SESSION,CDH_BILLING_SYNTHETIC_USAGE_HISTORY,CDH_AGENT_HISTORY,CDH_SESSION_STORAGE_USAGE_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_HISTORY,CDH_DDU_METRICS_RAW_HISTORY,DIM_PERMISSION_GROUP,CDH_CLASSIC_BILLING_METRICS_HISTORY,CDH_COMPETITOR_JS_FRAMEWORK_USAGE_HISTORY,TEAMS_EMPLOYEES,TENANT_LAST_ACCESS_DATE,CDH_METRIC_EVENT_V2_COUNT_HISTORY,FACT_OBJECT_LINEAGE,ZENDESK_TICKETS_HISTORY,SQL_LOG,BAS_AUDIT_FIELD,CDH_DEEP_MONITORING_SETTINGS_HISTORY,CDH_MDA_CONFIGS_HISTORY,CDH_FDI_EVENT_ENTITY_TYPE_AGGREGATIONS_HISTORY,CDH_ACTIVE_GATE_API_USAGE_HISTORY,BAS_AUDIT_ENTRY,LIMA_RATE_CARD_V2,PBI_ENTITY_PERMISSIONS,ACCOUNT,CDH_RUM_USER_SESSIONS_WEB_SESSIONS_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY_VIEW,CDH_HOST_BILLING_INFRASTRUCTURE_MONITORING_HISTORY,TENANT_LICENSE,SQL_PII_LOG,CDH_METRIC_EVENT_CONFIG_THRESHOLD_BASED_MODEL_HISTORY,AUTOPROV_EVENTS_FEATURES,DIM_OVALEDGE_DOMAIN_DIRECTORY,SFDC_ACCOUNT_ARR_BANDS_DAILY,VALIDATION_PROBLEMS_HISTORY,CDH_PROBLEM_EVENT_METADATA_HISTORY,CDH_MOBILE_OS_VERSION_USAGE_HISTORY,AWS_MARKETPLACE_TAX_ITEM,DIM_OVALEDGE_CONNECTION,CDH_CLUSTER_NETWORK_ZONE_STATS_HISTORY,FACT_COLUMN_HISTORY,CDH_PROBLEM_NATURAL_EVENT_HISTORY,ZENDESK_ORGANIZATIONS,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_COUNT_HISTORY,CDH_DASHBOARD_CONFIG_TILE_HISTORY,ZENDESK_SIDE_CONVERSATIONS,CDH_DASHBOARD_CONFIG_TILE_V2_HISTORY,CDH_METRIC_QUERY_STATS_HISTORY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_TYPE_HISTORY,CDH_UEM_CONFIG_PROPERTY_TAG_HISTORY,ZENDESK_ORGANIZATIONS_V2,ZENDESK_GROUP_MEMBERSHIP,DIM_DATA_SOURCE,MANAGED_LICENSE,DIM_USER,MANAGED_LICENSE_QUOTA,CDH_CLOUD_AUTOMATION_INSTANCE_STAT
S_HISTORY,ZENDESK_GROUP_MEMBERSHIP_V2,FACT_OVALEDGE_COLUMN_TERM,CDH_HOST_MEMORY_LIMIT_HISTORY,CDH_ACTIVE_GATE_MODULES_STATUSES_HISTORY,CDH_BILLING_SYNTHETIC_USAGE_V2_HISTORY,LIMA_CONSUMPTION,TENANT_USAGE_DAILY_SUMMARY,SFDC_DYNATRACE_ACCOUNT,COMPANY,CDH_DDU_METRICS_BY_METRIC_V2_HISTORY,CDH_TAG_COVERAGE_ENTITIES_HISTORY,CDH_API_USAGE_HISTORY2,CDH_SOFTWARE_COMPONENT_PGI_HISTORY,CDH_ENDED_SESSIONS_HISTORY,CDH_MAINTENANCE_WINDOW_HISTORY,DIM_OVALEDGE_DOMAIN,CDH_APPSEC_MONITORED_HOSTS_BY_FUNCTIONALITY_HISTORY,CDH_BILLING_APP_PROPERTIES_HISTORY,CDH_VISIT_STORE_NEW_BILLING_METRICS_HISTORY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_BILLING_TYPE_HISTORY,SFDC_PROJECT,DIM_DATAHUB_COLUMN,CDH_EXTERNAL_DATA_POINTS_V3_HISTORY,CDH_K8S_DATA_VOLUME_HISTORY,AWS_MARKETPLACE_OFFER,CDH_CF_FOUNDATION_HOST_HISTORY,ROLE,TABLE_LOAD_INFO,AWS_MARKETPLACE_ADDRESS,CDH_RUM_USER_SESSIONS_MOBILE_SESSIONS_HISTORY,ENVIRONMENT_SERVICE_SUMMARY,DPS_SUBSCRIPTION_SKU,CDH_UEM_CONFIG_TENANT_HISTORY,EMPLOYEE_COUNT,CDH_UEM_CONFIG_METADATA_CAPTURING_SETTINGS_HISTORY,CDH_PREFERENCES_SETTINGS_HISTORY,CDH_CUSTOM_CHART_STATS_HISTORY,CDH_SERVICE_CALLED_SERVICES_HISTORY,APPENGINE_INVOCATIONS_PER_APP,DIM_DATA_CRITICALITY_LEVEL,CDH_SOFTWARE_COMPONENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_HISTORY,DIM_TABLE_STATUS,DEV_JIRA_COMMENTS,ADA_ACCOUNT,LIMA_SUBSCRIPTION_CONSUMPTION_RATED,SFDC_TENANT,INTERCOM_CONTACTS,CDH_MAINFRAME_MSU_HISTORY,CDH_DDU_METRICS_TOTAL_HISTORY,CDH_APPSEC_ALERTING_PROFILES_HISTORY,FACT_TABLE_OWNERS,PROCESS_STATUS,CDH_FDI_EVENT_HISTORY,PBI_DATASET_PARAMETER,INTERCOM_CONVERSATION_PARTS,CDH_VIRTUALIZATION_HISTORY,MC_CLUSTER_CONSUMPTION,CDH_DEEP_MONITORING_SETTINGS_FEATURES_HISTORY,ENVIRONMENT_USAGE_SUMMARY,SQL_SHARE_LOG,SQL_PERFORMANCE,INTERCOM_USERS,DIM_OVALEDGE_TABLE,ZENDESK_TICKET_METRICS_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY,CDH_TOKEN_STATS_PERMISSION_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_V2_HISTORY,CDH_OWNERSHIP_COVERAGE_HISTORY,EXTENSION_REPOSITORY_INFO,CDH_WORKFLOWS_HISTORY,JOBSTATUS,AWS_MARK
ETPLACE_PRODUCT,CDH_RELEASE_HISTORY,CDH_ENVIRONMENTS,CDH_VISIT_STORAGE_V2_HISTORY,CDH_CLOUD_NETWORK_SERVICE_HISTORY,BAS_USER", - "db.user": "TEST_COLDSTORE", + "snowflake.user.privilege.grants_on": "BILLING_PROVIDER,CDH_SLO_HISTORY", + "db.user": "TESTUSER3", "db.system": "snowflake", "service.name": "test.dsoa2025", "deployment.environment": "TEST", @@ -266,7 +286,7 @@ "snowflake.user.privilege.last_altered": 1720426973204000000, "snowflake.user.privilege": "APPLYBUDGET:TABLE", "snowflake.user.privilege.granted_by": "[\"SECURITYADMIN\"]", - "snowflake.user.privilege.grants_on": "CDH_UEM_CONFIG_TENANT_HISTORY,CONTRACT_PRICING,SQL_SHARE_LOG,CDH_UEM_CONFIG_METADATA_CAPTURING_SETTINGS_HISTORY,GRAIL_QUERY_LOG_V2,DEV_JIRA_CHANGE_LOG,CDH_DATABASE_INSIGHTS_HISTORY,EMPLOYEE_COUNT,CDH_PROBLEM_HISTORY,LIMA_SUBSCRIPTION_BUDGET_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_HISTORY,CDH_DDU_METRICS_TOTAL_HISTORY,SNOWFLAKE_CONNECTOR_SETTINGS_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY_ARCHIVE,CDH_VISIT_STORAGE_V2_HISTORY,ZENDESK_GROUP_MEMBERSHIP,CDH_PROCESS_HISTORY,CDH_APPSEC_RUNTIME_APPLICATION_PROTECTION_SETTINGS_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_V2_HISTORY,CDH_INSTRUMENTATION_LIBRARY_HISTORY,SFDC_CONSUMPTION_REVENUE_MONTHLY,CDH_TOKEN_STATS_HISTORY,AWS_MARKETPLACE_OFFER_PRODUCT,CDH_RUM_USER_SESSIONS_IF_ONLY_CRASH_ENABLED_HISTORY,BAS_AUDIT_ENTITY,CDH_TOKEN_STATS_PERMISSION_HISTORY,DIM_DEPLOYMENT_STAGE,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_TYPE_HISTORY,DTU_ACTIVITIES,CDH_COMPLETENESS_BY_CLUSTER_HISTORY,CDH_CUSTOM_CHART_STATS_HISTORY,DPS_CONSUMPTION_FORECAST,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_V2_HISTORY,CDH_ODIN_AGENT_HISTORY,CDH_AUTO_UPDATE_SUCCESS_STATISTICS_HISTORY,CDH_DDU_METRICS_CONSUMED_INCLUDED_HISTORY,CDH_TILE_FILTER_CONFIG_HISTORY,LIMA_USAGE,RUM_SESSION,CDH_CONDITIONAL_PROCEDURES_RULES_HISTORY,CDH_METRIC_EVENT_CONFIG_HISTORY,FACT_OVALEDGE_COLUMN_TERM,CDH_SYNTHETIC_MONITOR_HISTORY,CDH_SERVERLESS_HISTORY,FACT_COLUMN_USAGE,MC_MANAGED_CLUSTER,CDH_CLUSTER_
CONTACTS_HISTORY,CDH_LOG_1CLICK_ACTIVATIONS_HISTORY,INSTRUMENTED_FUNCTION_HASHES,DIM_PERMISSION_GROUP,CDH_CLOUD_NETWORK_POLICY_HISTORY,ENVIRONMENT_USAGE_SUMMARY,CDH_ENDED_SESSIONS_HISTORY,CDH_APPSEC_MONITORING_RULES_SETTINGS_HISTORY,DIM_TABLE_STATUS,AWS_MARKETPLACE_OFFER_TARGET,ZENDESK_SIDE_CONVERSATION_EVENTS_V2,CDH_TAG_COVERAGE_HISTORY,DIM_OVALEDGE_DOMAIN_DIRECTORY,FACT_DATAHUB_COLUMN_CHANGE_LOG,SFDC_DYNATRACE_ACCOUNT,SQL_LOG,KEPTN,CDH_CLOUD_NETWORK_INGRESS_HISTORY,CDH_DDU_METRICS_RAW_V2_HISTORY,SFDC_ASSIGNMENT,ZENDESK_TICKET_METRICS_CURRENT_V2,RUM_PAGE_REPOSITORY_INFO,CDH_HOST_HISTORY,CDH_BILLING_SYNTHETIC_USAGE_V2_HISTORY,CDH_DASHBOARD_CONFIG_TILE_V2_HISTORY,REFERRAL_CODE,ENVIRONMENT_USAGE_DAILY_SUMMARY,LIMA_SUBSCRIPTION_USAGE_HOURLY,CDH_EXTENSIONS_DISTINCT_DEVICES_HISTORY,CDH_CONTAINER_GROUP_HISTORY,CDH_BILLING_APP_PROPERTIES_V2_HISTORY,FACT_USER_GROUP_MAP,MANAGED_LICENSE_QUOTA,CDH_BILLING_SYNTHETIC_USAGE_HISTORY,USERS_AND_QUERIES_COUNT_STATS,ACCOUNT,CDH_TAG_COVERAGE_ENTITIES_HISTORY,DEV_JIRA_WORKLOGS,CDH_ACTIVE_GATE_MODULES_STATUSES_HISTORY,CDH_CLUSTER_HISTORY,CDH_MAINFRAME_MSU_HISTORY,CDH_VULNERABILITY_MATCHING_METADATA_HISTORY,CDH_EXTENSION_HISTORY,SFDC_ACCOUNT_ARR_BANDS_MONTHLY,DIM_OVALEDGE_CATEGORY,CDH_MOBILE_OS_VERSION_USAGE_HISTORY,FACT_DEPLOYMENT_DATES,AWS_METADATA,BAS_USER,SQL_LOG_PIPELINE,SFDC_ACCOUNT_ARR_BANDS_DAILY,CDH_RUM_USER_SESSIONS_MOBILE_SESSIONS_HISTORY,TEAMS_EMPLOYEES,CDH_CLOUD_APPLICATION_INSTANCE_HISTORY,GRAIL_QUERY_LOG,PBI_ENTITY_PERMISSIONS,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_BILLING_TYPE_HISTORY,CDH_INSTALLERS_DOWNLOAD_SERVLET_USAGES_HISTORY,CDH_SYNTHETIC_MONITOR_LOCATION_HISTORY,PBI_DATASET_PARAMETER,LIMA_RATE_CARD,CDH_DASHBOARD_CONFIG_TILE_HISTORY,CDH_VISIT_STORE_USAGE_HISTORY,CDH_SETTING_HISTORY,AWS_MARKETPLACE_PRODUCT,CDH_VIRTUALIZATION_SUBSCRIPTION_HISTORY,CDH_WEB_APP_CALL_BY_BROWSER_HISTORY,CDH_RUM_BILLING_DEM_UNITS_V1_HISTORY,DPS_SUBSCRIPTION,BILLING_SERVICE_TYPE,CDH_CLOUD_APPLICATION_NAMESPACE_HISTORY,CDH_BILLING_APP_PROP
ERTIES_HISTORY,CDH_SESSION_STORAGE_USAGE_V2_HISTORY,CDH_ENVIRONMENT_METRICS_METADATA_HISTORY,CDH_ACTIVE_GATE_UPDATE_STATUS_HISTORY,SFDC_TRIAL,CDH_DDU_METRICS_BY_METRIC_V2_HISTORY,CDH_BILLING_APP_SESSIONS_V2_HISTORY,CDH_SETTING_V2_HISTORY,CDH_HOST_MEMORY_USAGE_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURE_V2_HISTORY,LIMA_USAGE_HOURLY,CDH_ISSUE_TRACKER_HISTORY,DIM_JSON_VALIDATION,ACCOUNT_STATUS,DIM_OVALEDGE_SCHEMA,CDH_SECURITY_PROBLEM_ASSESSMENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_HISTORY,TENANT_USAGE_DAILY_SUMMARY,CDH_SOFTWARE_COMPONENT_DETAILS_V2_HISTORY,AUTOPROV_EVENTS_FEATURES,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_HISTORY,SQL_PII_LOG,CDH_VERSIONED_MODULE_V2_HISTORY,DIM_OBJECT,CDH_METRIC_EVENT_V2_VALIDATION_RESULT_HISTORY,CDH_NOTIFICATION_SETTINGS_HISTORY,CDH_RUM_BILLING_PERIODS_V2_HISTORY,CDH_METRIC_EVENT_V2_ID_FILTER_HISTORY,FACT_TABLE_OWNERS,CDH_CLOUD_AUTOMATION_INSTANCE_HISTORY,PROCESS_STATUS,FACT_UNIQUE_COLUMNS_HISTORY,REPORTS_EXECUTION_LOG,FACT_OBJECT_LINEAGE,CDH_EXTRACT_STATISTICS,LIMA_SUBSCRIPTION_CONSUMPTION,TENANT_LICENSE,SYSTEM_PROPERTIES,SERVICE_USAGE_DAILY_SUMMARY,CDH_OWNERSHIP_COVERAGE_HISTORY,ZENDESK_SIDE_CONVERSATION_RECIPIENTS_V2,DPS_CONSUMPTION,TENANT_STATUS,CDH_AGENT_HISTORY,CDH_EXTERNAL_DATA_POINTS_HISTORY,CDH_MOBILE_SESSION_REPLAY_HISTORY,CDH_UEM_CONFIG_PROPERTY_TAG_HISTORY,FACT_OVALEDGE_TABLE_TERM,CDH_ATTACK_CANDIDATES_HISTORY,DIM_OVALEDGE_CONNECTION,MANAGED_CLUSTER,ZENDESK_TICKETS_HISTORY_V2,DIM_PRIORITY,CDH_KEY_REQUEST_STATS_HISTORY,CDH_SETTING_V3_HISTORY,LIMA_SUBSCRIPTION_HISTORY,GRAIL_APP_INSTALLATIONS,CDH_HOST_BILLING_FOUNDATION_AND_DISCOVERY_HISTORY,MANAGED_ACCOUNT,CDH_APPSEC_ALERTING_PROFILES_HISTORY,CDH_HOST_MEMORY_LIMIT_HOURLY_RESOLUTION_HISTORY,CDH_DDU_METRICS_TOTAL_V2_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURES_HISTORY,ZENDESK_TICKET_METRICS_CURRENT,BILLING_ACCOUNT,CDH_SECURITY_PROBLEM_TRACKING_LINKS_HISTORY,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_SIZE_HISTORY,CDH_KUBERNETES_NODE_HISTORY,TENANT,CDH_METRIC_EVENT_CONFIG_ID_F
ILTER_HISTORY,DIM_OVALEDGE_TABLE,CDH_CONTAINER_GROUP_INSTANCE_HISTORY,BAS_AUDIT_ENTRY,RUM_BEHAVIORAL_EVENTS,AWS_MARKETPLACE_ACCOUNT,FACT_DATA_QUALITY_ISSUES,INTERCOM_USERS,CDH_CLUSTER_TAGS_HISTORY,CDH_COMPLETENESS_BY_ENVIRONMENT_HISTORY,CDH_HOST_MEMORY_LIMIT_HISTORY,CDH_METRIC_EVENT_V2_COUNT_HISTORY,DIM_OVALEDGE_TERM,DIM_PII_STATE,CDH_CLOUD_AUTOMATION_UNITS_HISTORY,BITBUCKET_REPOSITORY_STATUS,CDH_MOBILE_SESSION_COUNT_BY_AGENT_TECHNOLOGY_HISTORY,DIM_DATAHUB_EXISTING_COLUMN,CDH_LOG_MONITORING_CONFIGURATION_STATS_HISTORY,CDH_WORKFLOWS_V3_HISTORY,SYSTEM_STATUS_DAILY_STATISTICS,CDH_DDU_METRICS_RAW_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_HISTORY,DEV_JIRA_CUSTOM_FIELD,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_V2_HISTORY,CDH_MAINTENANCE_WINDOW_HISTORY,MC_ACCOUNT,CDH_MOBILE_CRASHES_BY_RETRIEVAL_DELAY_HISTORY,DIM_DEPLOYMENT_STATUS,CDH_PROBLEM_RANKED_ENTITY_HISTORY,CDH_LOG_MONITORING_CUSTOM_ATTRIBUTE_HISTORY,CDH_SDK_LANGUAGE_HISTORY,ZENDESK_GROUPS,LIMA_RATE_CARD_V2,DPS_SUBSCRIPTION_CONSUMPTION,CDH_TIMESERIES_ARRIVAL_LATENCY_HISTORY,PROMO_USAGE,CDH_TOTAL_FDI_EVENT_COUNT_HISTORY,DIM_LIFECYCLE_STAGE,CDH_EXTENDED_TENANT_CONFIG_HISTORY,DEV_JIRA_PROJECT,CDH_DDU_METRICS_BY_METRIC_HISTORY,CDH_WORKFLOWS_V2_HISTORY,DIM_DATAHUB_TABLE,DATASOURCES,ZENDESK_SIDE_CONVERSATIONS,CDH_TIMESERIES_MAINTENANCE_LAG_HISTORY,CDH_ACTIVE_GATE_API_USAGE_HISTORY,PBI_ACTIVITY_LOG,SQL_PII_SNOWFLAKE_LOG,CDH_METRIC_EVENT_CONFIG_COUNT_HISTORY,AWS_MARKETPLACE_ADDRESS,AUTOPROV_EVENTS,LIMA_SUBSCRIPTION_CONSUMPTION_RATED,CDH_DISCOVERED_VIRTUALIZATION_SERVICE_TYPES,ZENDESK_ORGANIZATIONS_V2,DATA_VOLUME,LIMA_CONSUMPTION,CDH_RELEASE_V3_HISTORY,CDH_SERVICE_CALLING_APPLICATIONS_HISTORY,SFDC_TENANT,CDH_APPSEC_CONSUMPTION_BY_ENTITY_HISTORY,SFDC_ACCOUNT,CDH_COMPETITOR_JS_FRAMEWORK_USAGE_HISTORY,DIM_SYNC_TYPE,CDH_SECURITY_PROBLEM_HISTORY,USER_ACCOUNT,TENANT_LAST_ACCESS_DATE,CDH_VERSIONED_MODULE_HISTORY,CDH_CLOUD_AUTOMATION_INSTANCE_STATS_HISTORY,ZENDESK_GROUP_MEMBERSHIP_V2,PACKAGE,CDH_EXTERNAL_DATA_POINTS_V2_HISTORY
,DIM_DATAHUB_COLUMN,AWS_MARKETPLACE_BILLING_EVENT,CDH_RELEASE_HISTORY,CDH_METRIC_EVENT_V2_HISTORY,CDH_LOG_INGEST_ADVANCED_SETTINGS_HISTORY,AWS_MARKETPLACE_OFFER,CDH_PROCESS_VISIBILITY_HISTORY,ROLE,CDH_DASHBOARD_CONFIG_V2_HISTORY,CDH_METRIC_QUERY_STATS_HISTORY,CUSTOMER_BASE_HISTORY_V2,CDH_CLOUD_EVENT_V2_HISTORY,FACT_TABLE,BITBUCKET_PR_COMMITS,SQL_PERFORMANCE,USAGE_CREDITS,SFDC_OPPORTUNITY,CDH_LOG_MONITORING_STATS_HISTORY,ZENDESK_SIDE_CONVERSATIONS_V2,CDH_DDU_SERVERLESS_BY_ENTITY_V2_HISTORY,SFDC_MANAGED_LICENSE,CONTRACT_BILLING_INFO,ZENDESK_ORGANIZATIONS_HISTORY,CDH_MOBILE_AGENT_VERSION_USAGE_HISTORY,DIM_TABLE,CDH_SERVICE_CALLED_SERVICES_HISTORY,JOBSTATUS,CDH_PROCESS_VISIBILITY_HISTORY_V2,INTERCOM_CONVERSATION_PARTS,CDH_SOFTWARE_COMPONENT_PGI_HISTORY,SFDC_TASK,AWS_MARKETPLACE_LEGACY_ID_MAPPING,CDH_CLOUD_NETWORK_SERVICE_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_V2_HISTORY,TENANT_SUB_ENVIRONMENT,TENANT_USAGE_DAILY_SUMMARY_VIEW,QUERY_STATS,CDH_CONDITIONAL_PROCEDURES_HISTORY,REPORT_STATUS,LIMA_SUBSCRIPTION,DIM_OVALEDGE_COLUMN,DIM_OVALEDGE_DOMAIN,COMPANY,CDH_BULK_CONFIG_CHANGES_HISTORY,FACT_TABLE_USAGE,CDH_REQUEST_ATTRIBUTE_STATS_HISTORY,CDH_FDI_EVENT_HISTORY,CDH_DEEP_MONITORING_SETTINGS_V2_HISTORY,CDH_INTEGRATION_HISTORY,CDH_ACTIVE_GATE_HISTORY,DIM_USER,CDH_CLASSIC_BILLING_METRICS_HISTORY,DIM_DATA_CRITICALITY_LEVEL,CDH_CTC_LOAD_HISTORY,CUSTOMER_BASE_HISTORY,CDH_PLUGIN_METRIC_STATS_HISTORY,CDH_API_USAGE_HISTORY,INTERCOM_CONVERSATION_TAGS,FACT_COLUMN,CONTRACT,DIM_QUALITY_TYPE,ZENDESK_TICKETS_V2,CDH_SLO_HISTORY,CDH_METRIC_EVENT_CONFIG_NAME_FILTER_HISTORY,CDH_FDI_EVENT_TYPE_AGGREGATIONS_HISTORY,FACT_COLUMN_PROTECTION,BAS_AUDIT_FIELD,FACT_COLUMN_LINEAGE,CDH_PROBLEM_ROOT_CAUSE_GROUP_HISTORY,CDH_MOBILE_REPLAY_FULL_SESSION_METRICS_HISTORY,COMMUNITY_PRODUCT_IDEAS,PBI_ENTITY_REFRESH_HISTORY,CDH_APPSEC_RUNTIME_VULNERABILITY_DETECTION_SETTINGS_HISTORY,SOFTWARE_COMPONENT_PACKAGE_NAME_HASHES,ZENDESK_TICKETS_HISTORY,RUM_PAGEVIEW,TABLE_STORAGE_METRICS
_HISTORY,CDH_PROBLEM_EVENT_INSTANCE_CLASSES_HISTORY,CDH_MAINFRAME_MSU_V3_HISTORY,CDH_EXTERNAL_DATA_POINTS_V3_HISTORY,RUM_BEHAVIORAL_EVENT_PROPERTIES,TEAMS_CAPABILITIES,SYNTHETIC_LOCATIONS,CDH_RUM_BILLING_DEM_UNITS_V2_HISTORY,CDH_ODIN_AGENT_ME_IDENTIFIER_HISTORY,CDH_FEATURE_FLAG_HISTORY,ZENDESK_USERS_V2,CDH_APPSEC_NOTIFICATION_SETTINGS_HISTORY,CDH_VIRTUALIZATION_HISTORY,LIMA_CAPABILITIES,CDH_PROBLEM_EVIDENCE_HISTORY,CDH_K8S_DATA_VOLUME_HISTORY,CDH_PROBLEM_NATURAL_EVENT_HISTORY,VALIDATION_PROBLEMS_HISTORY,JIRA_ISSUES,CDH_HOST_MEMORY_USAGE_HOURLY_RESOLUTION_HISTORY,CDH_PROBLEM_IMPACTED_ENTITIES_HISTORY,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_COUNT_HISTORY,BITBUCKET_PR_ACTIVITIES,CDH_WORKFLOWS_TASK_EXECUTION_HISTORY,MANAGED_LICENSE,SERVICE,DATA_ANALYTICS_CLA_CONTRACTS,LIMA_UNASSIGNED_CONSUMPTION_HOURLY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_HISTORY,CDH_PLUGIN_HOST_DETAILS_HISTORY,LIMA_ACCOUNT_GROUP_MEMBERSHIP,DPS_RATED_CONSUMPTION,CDH_LOG_MONITORING_METRIC_STATS_HISTORY,CDH_HOST_TECH_HISTORY,CDH_ALERTING_PROFILE_HISTORY,DIM_DATA_QUALITY_CHECK,CDH_TENANT_NETWORK_ZONE_STATS_HISTORY,CDH_HOST_BILLING_FULL_STACK_MONITORING_HISTORY,AWS_MARKETPLACE_AGREEMENT,CDH_RUM_USER_SESSIONS_WEB_BOUNCES_HISTORY,TENANT_USAGE_SUMMARY,CDH_ELASTICSEARCH_METRIC_DIMENSIONS_AFFILIATION_HISTORY,CDH_DASHBOARD_CONFIG_FILTER_USAGE_V2_HISTORY,CDH_PREFERENCES_SETTINGS_HISTORY,CDH_PLUGIN_STATE_HISTORY,CDH_COMPLETENESS_BY_CLOUD_HISTORY,ADA_ACCOUNT,INTERCOM_CONTACTS,CDH_MONITORED_VIRTUALIZATION_SERVICE_TYPES,EXTENSION_REPOSITORY_INFO,CDH_SOFTWARE_COMPONENT_DETAILS_HISTORY,CDH_RUM_USER_SESSIONS_MOBILE_BOUNCES_HISTORY,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_V2_HISTORY,EXTERNAL_DQ_CHECKS_RESULTS,CDH_PROBLEM_EVENT_METADATA_HISTORY,CDH_APPSEC_INTEGRATION_TYPES_HISTORY,DEV_JIRA_COMMENTS,CDH_RUM_USER_SESSIONS_WEB_SESSIONS_HISTORY,CDH_DASHBOARD_CONFIG_HISTORY,DIM_COLUMN,CDH_APPSEC_MONITORED_HOSTS_BY_FUNCTIONALITY_HISTORY,CDH_METRIC_DATA_TYPE_HISTORY,CDH_CF_FOUNDATION_HOST_HISTORY,DPS_SUBSCRIPTION_SKU,UPGRADE_EXECUTIO
N,CDH_HOST_BILLING_INFRASTRUCTURE_MONITORING_HISTORY,CDH_FDI_EVENT_ENTITY_TYPE_AGGREGATIONS_HISTORY,TIME_ZONE,CDH_METRIC_EVENT_CONFIG_THRESHOLD_BASED_MODEL_HISTORY,CDH_RUM_BILLING_PERIODS_V1_HISTORY,CDH_CLUSTER_NETWORK_ZONE_STATS_HISTORY,ENVIRONMENT_SERVICE_SUMMARY,CDH_METRIC_EVENT_V2_NAME_FILTER_HISTORY,PROMO_CODE,CDH_UEM_CONFIG_HISTORY,EXTERNAL_DQ_CHECKS_DEFINITIONS,CDH_SERVICE_HISTORY,CDH_KUBERNETES_CLUSTER_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_SESSIONS_HISTORY,REGION,RUM_BEHAVIORAL_EVENTS_V3,CDH_SECURITY_PROBLEM_ASSESSMENT_VULNERABLE_FUNCTIONS_HISTORY,CDH_CLOUD_APPLICATION_HISTORY,BI_STATUS,CDH_DATABASE_INSIGHTS_ENDPOINT_DETAILS_HISTORY,SFDC_POC,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V1,DIM_DATA_SOURCE,CDH_BILLING_APP_SESSIONS_V3_HISTORY,ZENDESK_USERS,AZURE_METADATA,ZENDESK_TICKET_METRICS_HISTORY,CDH_CREDENTIALS_VAULT_ENTRIES_HISTORY,CDH_INTERNAL_ENTITY_MODEL_CAPPING_INFORMATION_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_BOUNCES_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_V2_HISTORY,CDH_APPLICATION_HISTORY,FACT_COLUMN_HISTORY,FACT_DATAHUB_TABLE_CHANGE_LOG,CDH_PROBLEM_CAPPING_INFORMATION_HISTORY,CDH_VISIT_STORE_NEW_BILLING_METRICS_HISTORY,MC_ENVIRONMENTS,MC_CLUSTER_CONSUMPTION,APPENGINE_INVOCATIONS_PER_APP,BITBUCKET_PR,ZENDESK_USERS_HISTORY,CDH_APPSEC_CODE_LEVEL_VULNERABILITY_DETECTION_SETTINGS_HISTORY,CDH_AGENT_HEALTH_METRICS_HISTORY,CDH_PGI_PROCESS_COUNT_HISTORY,AWS_ACCOUNT_MAPPING,CDH_MAINTENANCE_WINDOW_FILTER_HISTORY,CDH_LOG_MONITORING_ES_STATS_HISTORY,CDH_CLUSTERS,CDH_SYNTHETIC_API_CALLS_HISTORY,DEV_JIRA_ISSUES,INTERCOM_CONVERSATIONS,SFDC_OPPORTUNITY_PRODUCT,ZENDESK_TICKETS,CDH_SECURITY_PROBLEM_SC_HISTORY,ZENDESK_GROUPS_V2,TABLE_LOAD_INFO,AWS_CONSUMPTION_HISTORY,CDH_ALERTING_PROFILE_SEVERITY_RULE_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_HISTORY,INTERCOM_COMPANIES,ZENDESK_ORGANIZATIONS,CDH_FDI_EVENT_INSTANCE_CLASSES_HISTORY,CDH_FDI_EVENT_METADATA_HISTORY,BITBUCKET_COMMITS,SERVICE_USAGE_SUMMARY,GCP_METADATA,SFDC_PROJECT,CDH_LOG_MONITORING_STATS_V2
_HISTORY,CDH_JS_FRAMEWORK_USAGE_HISTORY,CDH_ATTACK_CANDIDATES_V2_HISTORY,CDH_API_USAGE_HISTORY2,CDH_BILLING_APP_SESSIONS_HISTORY,CDH_MAINFRAME_MSU_V2_HISTORY,MC_MANAGED_LICENSE,CDH_SERVERLESS_COMPLETENESS_HISTORY,BILLING_PROVIDER,PBI_WORKSPACE_ENTITY_NAMES,CDH_SOFTWARE_COMPONENT_HISTORY,MC_ENVIRONMENT_CONSUMPTION,ENVIRONMENT_SERVICE_DAILY_SUMMARY,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V2,CDH_MDA_CONFIGS_HISTORY,SIGNUP_AWS_MARKETPLACE,INTERCOM_ADMINS,CDH_ENVIRONMENTS,CDH_SESSION_STORAGE_USAGE_HISTORY,CDH_CF_FOUNDATION_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY_VIEW,CDH_API_USER_AGENT_USAGE_HISTORY,HOST_USAGE_DAILY_SUMMARY,CDH_CLUSTER_EMERGENCY_EMAILS_HISTORY,PARTNER_REFERRAL,SFDC_ACCOUNT_TEAMMEMBER,CDH_DEEP_MONITORING_SETTINGS_HISTORY,NEW_EMPLOYEES,CDH_WORKFLOWS_HISTORY,CDH_JS_AGENT_VERSIONS,MONTHLY_USAGE,LIMITS,AWS_MARKETPLACE_TAX_ITEM,SFDC_VW_SALES_USERACCESS,CDH_SECURITY_PROBLEM_MUTE_STATE_HISTORY", + "snowflake.user.privilege.grants_on": "TESTTABLE1,TESTTABLE2", "db.user": "TEST_PIPELINE", "db.system": "snowflake", "service.name": "test.dsoa2025", diff --git a/test/test_results/test_users/logs.json b/test/test_results/test_users/logs.json index 7e552313..aec8ff93 100644 --- a/test/test_results/test_users/logs.json +++ b/test/test_results/test_users/logs.json @@ -1,19 +1,29 @@ [ { - "content": "User details for SEBASTIAN.KRUK", + "content": "User details for TEST_USER_1", "snowflake.user.created_on": 1644434689039000000, "snowflake.user.last_success_login": 1762440232376000000, "snowflake.user.default.namespace": "DEV_DB", - "snowflake.user.default.role": "SEBASTIAN_KRUK_ROLE", + "snowflake.user.default.role": "TEST_USER_ROLE", "snowflake.user.default.warehouse": "COMPUTE_WH", - "snowflake.user.display_name": "Sebastian Kruk", + "snowflake.user.display_name": "Test User", "snowflake.user.email": "95ab5ef6a07c48fe4e0d1049b5b16b07cb2334dead8801d4d6078dd283b338f6", + "snowflake.user.has_mfa": false, + "snowflake.user.has_pat": false, + 
"snowflake.user.has_rsa": false, + "snowflake.user.has_workload_identity": false, + "snowflake.user.is_disabled": false, + "snowflake.user.is_from_organization": false, + "snowflake.user.is_locked": false, + "snowflake.user.must_change_password": false, + "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_password": false, "snowflake.user.id": 298, - "snowflake.user.name": "SEBASTIAN.KRUK", - "snowflake.user.name.first": "Sebastian", - "snowflake.user.name.last": "Kruk", + "snowflake.user.name": "TEST_USER_1", + "snowflake.user.name.first": "Test", + "snowflake.user.name.last": "User", "snowflake.user.owner": "AAD_PROVISIONER", - "db.user": "SEBASTIAN.KRUK", + "db.user": "TEST_USER_1", "dsoa.run.plugin": "test_users", "dsoa.run.context": "users" }, @@ -26,7 +36,16 @@ "snowflake.user.default.role": "TEST_DATAMODEL_UPGRADER_ROLE", "snowflake.user.default.warehouse": "TEST_ETL_UPGRADE_WH", "snowflake.user.display_name": "TEST_DATAMODEL_UPGRADER", + "snowflake.user.has_mfa": false, "snowflake.user.has_password": true, + "snowflake.user.has_pat": false, + "snowflake.user.has_rsa": false, + "snowflake.user.is_disabled": false, + "snowflake.user.is_from_organization": false, + "snowflake.user.is_locked": false, + "snowflake.user.must_change_password": false, + "snowflake.user.ext_authn.duo": false, + "snowflake.user.has_workload_identity": false, "snowflake.user.id": 618, "snowflake.user.name": "TEST_DATAMODEL_UPGRADER", "snowflake.user.owner": "SECURITYADMIN", @@ -38,12 +57,12 @@ "content": "users", "snowflake.user.roles.last_altered": 1661498740003000000, "snowflake.user.roles.direct": [ - "ALEKSANDRA_RUMINSKA_ROLE" + "TEST_USER_ROLE" ], "snowflake.user.roles.granted_by": [ "USERADMIN" ], - "db.user": "ALEKSANDRA_RUMINSKA", + "db.user": "TEST_USER", "dsoa.run.plugin": "test_users", "dsoa.run.context": "users" }, @@ -56,15 +75,15 @@ "snowflake.user.roles.granted_by": [ "SECURITYADMIN" ], - "db.user": "MICHALLITKA", + "db.user": "TESTUSER", "dsoa.run.plugin": 
"test_users", "dsoa.run.context": "users" }, { "content": "User direct roles removed since 1970-01-01 00:00:00.000 Z", - "snowflake.user.roles.direct.removed_on": 1646226000000000000, - "snowflake.user.roles.direct.removed": "SANDBOX_KSTEST_COLDSTORE_ROLE", - "db.user": "SANDBOX_KSTEST_COLDSTORE", + "snowflake.user.roles.direct.removed_on": 1747375200000000000, + "snowflake.user.roles.direct.removed": "DTAGENT_SA082_VIEWER", + "db.user": "TEST.USER", "dsoa.run.plugin": "test_users", "dsoa.run.context": "users" }, @@ -79,7 +98,7 @@ { "content": "users", "snowflake.user.roles.last_altered": 1649326499175000000, - "snowflake.user.roles.all": "DEVACT_FINANCIAL,POWERBILOG_FINANCIAL,JIRA_FULL,APPSEC_SENSITIVE,DATAMODEL_UPGRADER,DEVEL_SYSADMIN_ROLE,INTERCOM_BASIC,IEM_BASIC,METADATA_FINANCIAL,WOOPRA_FINANCIAL,UNIVERSITY_FULL,ZENDESK_SENSITIVE,RAW_FULL,BI_FINANCIAL,CONSUMPTION_FULL,DAVIS_BASIC,INTERNALCOSTS_SENSITIVE,EMPLOYEES_FINANCIAL,SFM_FINANCIAL,METADATA_SENSITIVE,RUM_BASIC,APPSEC_BASIC,ALL_BASIC,TEST_SYSADMIN,TEST_COLDSTORE_ROLE,SYNTHETIC_BASIC,SANDBOX_TEST_BI_PREDICTIONS_ROLE,CDH_SENSITIVE,SECURITYADMIN,TEAMS_FINANCIAL,TEST_BI_PREDICTIONS_ROLE,DATAQUALITY_BASIC,RUM_SENSITIVE,BAS_BASIC,EXTENSIONREPOSITORYINFO_SENSITIVE,TEAMS_BASIC,BI_SENSITIVE,RNDWORKLOGS_FULL,BI_BASIC,DEVACT_BASIC,SANDBOX_TEST_DB_OWNER_ROLE,INTERCOM_SENSITIVE,INTERCOM_FINANCIAL,DEVELCLONE_BI_REPORTING,REPORTS_BASIC,JIRA_SENSITIVE,EXTENSIONREPOSITORYINFO_BASIC,REPORTS_FULL,TEST_DB_OWNER_ROLE,POWERBILOG_BASIC,DAVIS_FULL,DEVEL_DATAMODEL_UPGRADER_ROLE,SANDBOX_TEST_READONLY_USER_ROLE,POWERBILOG_FULL,REPORTS_FINANCIAL,INTERNALCOSTS_FINANCIAL,SANDBOX_TEST_DATAMODEL_UPGRADER_ROLE,BI_MODELER,SANDBOX_TEST_PIPELINE_ROLE,LIMA_BASIC,WOOPRA_BASIC,SFM_SENSITIVE,AUTOPROV_BASIC,METADATAAUDIT_SENSITIVE,RUM_FULL,DEVELCLONE_PIPELINE,SOFTCOMP_FINANCIAL,CONSUMPTION_FINANCIAL,SNOWFLAKE_FINANCE,LIMA_SENSITIVE,REPORTS_CONSUMPTION,WOOPRA_SENSITIVE,CONSUMPTION_SENSITIVE,DEVEL_SECURITYADMIN_ROLE,CONSUMPTION_BASIC,REPORTS_SENSITI
VE,SYSADMIN,IEM_FULL,REVENUE_SENSITIVE,DEV_SF_DATAMODELUPGRADER_ROLE,BAS_SENSITIVE,SYNTHETIC_FINANCIAL,ALL_FINANCIAL,TEAMS_FULL,LIMA_FULL,WOOPRA_FULL,TEST_DATAMODEL_UPGRADER_ROLE,BI_REPORTING,CDH_FINANCIAL,DEVEL_PIPELINE_ROLE,TEST_POWERBI_ROLE,DEVEL_COLDSTORE_ROLE,METADATA_FULL,COMMUNITY_SENSITIVE,EMPLOYEES_FULL,AUTOPROV_FULL,JIRA_BASIC,SOFTCOMP_SENSITIVE,METADATAAUDIT_FINANCIAL,METADATAAUDIT_FULL,TERRAFORM_USER_ROLE,RUM_FINANCIAL,SOFTCOMP_BASIC,ZENDESK_BASIC,TEST_BI_REPORTING_ROLE,TEST_PIPELINE_ROLE,DAVIS_FINANCIAL,COMMUNITY_FULL,SANDBOX_TEST_BI_REPORTING_ROLE,EXTENSIONREPOSITORYINFO_FINANCIAL,SFM_FULL,CDH_BASIC,IEM_SENSITIVE,COLDSTORE,COMMUNITY_BASIC,SYNTHETIC_FULL,SANDBOX_TEST_POWERBI_ROLE,UNIVERSITY_FINANCIAL,DAVIS_SENSITIVE,DEVACT_FULL,POWERBI_MODEL,AUTOPROV_FINANCIAL,TEST_ETL_DQ_CHECKS_ROLE,JIRA_FINANCIAL,IEM_FINANCIAL,SCRATCHPAD_ROLE,INTERNALCOSTS_BASIC,MONITORING,TEAMS_SENSITIVE,ALL_SENSITIVE,DEVEL_BI_MODELER_ROLE,RNDWORKLOGS_FINANCIAL,ZENDESK_FINANCIAL,INTERCOM_FULL,DATAQUALITY_SENSITIVE,SOFTCOMP_FULL,DATAQUALITY_FINANCIAL,BAS_FINANCIAL,ALL_FULL,REPORTS_TECHNOLOGY,APPSEC_FULL,DEVEL_BI_REPORTING_ROLE,SANDBOX_ANDRZEJ_BI_MODELER,EMPLOYEES_SENSITIVE,SALESFORCE_FINANCIAL,INTERNALCOSTS_FULL,UNIVERSITY_SENSITIVE,COMMUNITY_FINANCIAL,EXTENSIONREPOSITORYINFO_FULL,METADATA_BASIC,SANDBOX_TEST_COLDSTORE_ROLE,PIPELINE,SALESFORCE_SENSITIVE,EMPLOYEES_BASIC,SANDBOX_ANDRZEJ_DATAMODEL_UPGRADER,SANDBOX_ANDRZEJ_BI_REPORTING,SFM_BASIC,SALESFORCE_FULL,REVENUE_FINANCIAL,UNIVERSITY_BASIC,SANDBOX_ANDRZEJ_PIPELINE,DEVACT_SENSITIVE,METADATAAUDIT_BASIC,RNDWORKLOGS_BASIC,ANY_BASIC,DEVOPS_ROLE,BAS_FULL,TEST_BI_MODELER_ROLE,ZENDESK_FULL,SALESFORCE_BASIC,DEVELCLONE_DATAMODEL_UPGRADER,DATAQUALITY_FULL,REVENUE_FULL,APPSEC_FINANCIAL,SYNTHETIC_SENSITIVE,REVENUE_BASIC,RNDWORKLOGS_SENSITIVE,LIMA_FINANCIAL,POWERBILOG_SENSITIVE,SANDBOX_TEST_BI_MODELER_ROLE,AUTOPROV_SENSITIVE,CDH_FULL,DEVELCLONE_BI_MODELER", + "snowflake.user.roles.all": "DEMIGOD,PIPELINE", "snowflake.user.roles.granted_by": [ 
"DEMIGOD" ], @@ -95,19 +114,19 @@ "DEMIGOD", "SECURITYADMIN" ], - "db.user": "BEATASZWICHTENBERG", + "db.user": "TESTUSER2", "dsoa.run.plugin": "test_users", "dsoa.run.context": "users" }, { "content": "users", - "snowflake.user.privilege.last_altered": 1720426972449000000, - "snowflake.user.privilege": "UPDATE:TABLE", + "snowflake.user.privilege.last_altered": 1615219847793000000, + "snowflake.user.privilege": "UPDATE:VIEW", "snowflake.user.privilege.granted_by": [ "SECURITYADMIN" ], - "snowflake.user.privilege.grants_on": "BILLING_PROVIDER,CDH_SLO_HISTORY,FACT_DATAHUB_COLUMN_CHANGE_LOG,CDH_PROBLEM_IMPACTED_ENTITIES_HISTORY,FACT_TABLE_USAGE,CDH_ACTIVE_GATE_UPDATE_STATUS_HISTORY,DIM_DEPLOYMENT_STAGE,CDH_DASHBOARD_CONFIG_FILTER_USAGE_V2_HISTORY,INTERCOM_CONVERSATIONS,CDH_ODIN_AGENT_HISTORY,LIMA_SUBSCRIPTION_BUDGET_HISTORY,BITBUCKET_PR_COMMITS,CDH_PROCESS_HISTORY,CDH_ATTACK_CANDIDATES_V2_HISTORY,ZENDESK_TICKETS_V2,CDH_RUM_BILLING_DEM_UNITS_V1_HISTORY,FACT_COLUMN_PROTECTION,PARTNER_REFERRAL,CDH_APPLICATION_HISTORY,CDH_AGENT_HEALTH_METRICS_HISTORY,DPS_SUBSCRIPTION_CONSUMPTION,SNOWFLAKE_CONNECTOR_SETTINGS_HISTORY,DIM_DATAHUB_EXISTING_COLUMN,AWS_ACCOUNT_MAPPING,CDH_COMPLETENESS_BY_CLUSTER_HISTORY,RUM_BEHAVIORAL_EVENT_PROPERTIES,LIMA_USAGE,SQL_PII_SNOWFLAKE_LOG,CDH_LOG_MONITORING_STATS_HISTORY,MC_ENVIRONMENTS,AWS_MARKETPLACE_LEGACY_ID_MAPPING,CDH_ATTACK_CANDIDATES_HISTORY,DIM_COLUMN,PROMO_CODE,CDH_RUM_USER_SESSIONS_IF_ONLY_CRASH_ENABLED_HISTORY,CDH_LOG_MONITORING_STATS_V2_HISTORY,SFDC_OPPORTUNITY_PRODUCT,DEV_JIRA_PROJECT,CDH_MONITORED_VIRTUALIZATION_SERVICE_TYPES,DATA_ANALYTICS_CLA_CONTRACTS,MANAGED_ACCOUNT,CDH_API_USAGE_HISTORY,CDH_HOST_BILLING_FOUNDATION_AND_DISCOVERY_HISTORY,AWS_MARKETPLACE_OFFER_PRODUCT,CDH_LOG_MONITORING_CONFIGURATION_STATS_HISTORY,CDH_INSTRUMENTATION_LIBRARY_HISTORY,JIRA_ISSUES,CDH_SECURITY_PROBLEM_TRACKING_LINKS_HISTORY,MC_MANAGED_CLUSTER,ACCOUNT_STATUS,DIM_OVALEDGE_TERM,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_SIZE_HISTORY,SFDC_POC,SFDC_VW_SALES_USE
RACCESS,EXTERNAL_DQ_CHECKS_RESULTS,FACT_COLUMN_USAGE,AWS_MARKETPLACE_BILLING_EVENT,KEPTN,CDH_BILLING_APP_SESSIONS_HISTORY,CDH_CLOUD_EVENT_V2_HISTORY,CDH_CTC_LOAD_HISTORY,ENVIRONMENT_SERVICE_DAILY_SUMMARY,CDH_BILLING_APP_PROPERTIES_V2_HISTORY,SERVICE_USAGE_SUMMARY,AUTOPROV_EVENTS,DIM_DEPLOYMENT_STATUS,CDH_CLUSTER_CONTACTS_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY,RUM_BEHAVIORAL_EVENTS_V3,LIMA_SUBSCRIPTION,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_V2_HISTORY,CDH_PROBLEM_CAPPING_INFORMATION_HISTORY,FACT_DATAHUB_TABLE_CHANGE_LOG,CUSTOMER_BASE_HISTORY_V2,ZENDESK_ORGANIZATIONS_HISTORY,CDH_ODIN_AGENT_ME_IDENTIFIER_HISTORY,CDH_TOKEN_STATS_HISTORY,CDH_SERVICE_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_HISTORY,DTU_ACTIVITIES,CDH_RUM_BILLING_DEM_UNITS_V2_HISTORY,CDH_DDU_METRICS_TOTAL_V2_HISTORY,CDH_PROBLEM_HISTORY,DIM_OVALEDGE_COLUMN,CDH_UEM_CONFIG_HISTORY,AWS_METADATA,CDH_PROBLEM_EVENT_INSTANCE_CLASSES_HISTORY,CDH_CLOUD_NETWORK_POLICY_HISTORY,CDH_TIMESERIES_ARRIVAL_LATENCY_HISTORY,CDH_LOG_MONITORING_ES_STATS_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURE_V2_HISTORY,CUSTOMER_BASE_HISTORY,CDH_METRIC_EVENT_V2_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_HISTORY,LIMITS,CDH_EXTERNAL_DATA_POINTS_V2_HISTORY,LIMA_SUBSCRIPTION_HISTORY,FACT_USER_GROUP_MAP,DEV_JIRA_CUSTOM_FIELD,DIM_PII_STATE,CDH_JS_AGENT_VERSIONS,CDH_ENVIRONMENT_METRICS_METADATA_HISTORY,REFERRAL_CODE,CDH_SECURITY_PROBLEM_ASSESSMENT_VULNERABLE_FUNCTIONS_HISTORY,LIMA_USAGE_HOURLY,CDH_SOFTWARE_COMPONENT_DETAILS_V2_HISTORY,ZENDESK_USERS_V2,DEV_JIRA_WORKLOGS,SYNTHETIC_LOCATIONS,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_HISTORY,CDH_BILLING_APP_SESSIONS_V2_HISTORY,CDH_KUBERNETES_NODE_HISTORY,CDH_LOG_INGEST_ADVANCED_SETTINGS_HISTORY,TENANT_USAGE_SUMMARY,CDH_SYNTHETIC_MONITOR_HISTORY,CDH_PROBLEM_ROOT_CAUSE_GROUP_HISTORY,CDH_CLOUD_AUTOMATION_UNITS_HISTORY,SFDC_TASK,CDH_PLUGIN_METRIC_STATS_HISTORY,CDH_RUM_BILLING_PERIODS_V2_HISTORY,CDH_TOTAL_FDI_EVENT_COUNT_HISTORY,CDH_FDI_EVENT_INSTANCE_CLASSES_HISTORY,CDH_CLOUD_APPLICATION_HISTORY,CDH_TILE_FILTER_CON
FIG_HISTORY,ZENDESK_GROUPS_V2,DEV_JIRA_CHANGE_LOG,NEW_EMPLOYEES,SFDC_MANAGED_LICENSE,CDH_JS_FRAMEWORK_USAGE_HISTORY,CDH_PROCESS_VISIBILITY_HISTORY_V2,ZENDESK_SIDE_CONVERSATION_EVENTS_V2,LIMA_SUBSCRIPTION_CONSUMPTION,LIMA_SUBSCRIPTION_USAGE_HOURLY,CONTRACT_PRICING,CDH_TIMESERIES_MAINTENANCE_LAG_HISTORY,CDH_NOTIFICATION_SETTINGS_HISTORY,CDH_DEEP_MONITORING_SETTINGS_V2_HISTORY,AZURE_METADATA,CDH_METRIC_EVENT_CONFIG_ID_FILTER_HISTORY,CDH_DISCOVERED_VIRTUALIZATION_SERVICE_TYPES,CDH_SDK_LANGUAGE_HISTORY,DIM_SYNC_TYPE,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_V2_HISTORY,CDH_SETTING_V3_HISTORY,PBI_ENTITY_REFRESH_HISTORY,CDH_METRIC_EVENT_V2_VALIDATION_RESULT_HISTORY,CDH_BULK_CONFIG_CHANGES_HISTORY,CDH_TAG_COVERAGE_HISTORY,CDH_INTERNAL_ENTITY_MODEL_CAPPING_INFORMATION_HISTORY,CDH_FDI_EVENT_METADATA_HISTORY,CDH_VIRTUALIZATION_SUBSCRIPTION_HISTORY,CDH_BILLING_APP_SESSIONS_V3_HISTORY,CDH_CONTAINER_GROUP_INSTANCE_HISTORY,ZENDESK_USERS,CDH_LOG_MONITORING_METRIC_STATS_HISTORY,CDH_RELEASE_V3_HISTORY,SFDC_ASSIGNMENT,FACT_COLUMN_LINEAGE,ZENDESK_TICKETS_HISTORY_V2,CDH_DDU_METRICS_RAW_V2_HISTORY,DIM_QUALITY_TYPE,AWS_MARKETPLACE_OFFER_TARGET,CDH_APPSEC_NOTIFICATION_SETTINGS_HISTORY,TENANT_SUB_ENVIRONMENT,CDH_ALERTING_PROFILE_SEVERITY_RULE_HISTORY,CDH_VERSIONED_MODULE_V2_HISTORY,CDH_APPSEC_RUNTIME_VULNERABILITY_DETECTION_SETTINGS_HISTORY,CDH_HOST_TECH_HISTORY,DPS_SUBSCRIPTION,CDH_METRIC_EVENT_V2_NAME_FILTER_HISTORY,CDH_EXTENSION_HISTORY,INSTRUMENTED_FUNCTION_HASHES,DIM_JSON_VALIDATION,SERVICE,TENANT,BILLING_SERVICE_TYPE,CDH_DATABASE_INSIGHTS_ENDPOINT_DETAILS_HISTORY,LIMA_ACCOUNT_GROUP_MEMBERSHIP,AWS_MARKETPLACE_AGREEMENT,CDH_MAINFRAME_MSU_V2_HISTORY,BITBUCKET_PR,DIM_DATAHUB_TABLE,GRAIL_QUERY_LOG_V2,CDH_COMPLETENESS_BY_ENVIRONMENT_HISTORY,CDH_MAINFRAME_MSU_V3_HISTORY,LIMA_RATE_CARD,USER_ACCOUNT,CDH_RUM_USER_SESSIONS_WEB_BOUNCES_HISTORY,CDH_CONDITIONAL_PROCEDURES_RULES_HISTORY,TIME_ZONE,CDH_TENANT_NETWORK_ZONE_STATS_HISTORY,CDH_CLOUD_APPLICATION_NAMESPACE_HISTORY,CDH_METRIC_EVENT_CONFIG_H
ISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_BOUNCES_HISTORY,CDH_FEATURE_FLAG_HISTORY,BILLING_ACCOUNT,MC_ACCOUNT,DIM_OVALEDGE_CATEGORY,PACKAGE,CDH_WEB_APP_CALL_BY_BROWSER_HISTORY,CDH_EXTENSIONS_DISTINCT_DEVICES_HISTORY,MANAGED_CLUSTER,BITBUCKET_PR_ACTIVITIES,FACT_COLUMN,CDH_HOST_MEMORY_USAGE_HOURLY_RESOLUTION_HISTORY,CDH_CLUSTER_TAGS_HISTORY,ZENDESK_SIDE_CONVERSATION_RECIPIENTS_V2,CDH_SESSION_STORAGE_USAGE_V2_HISTORY,REGION,CDH_EXTERNAL_DATA_POINTS_HISTORY,CDH_SETTING_HISTORY,INTERCOM_CONVERSATION_TAGS,FACT_OVALEDGE_TABLE_TERM,SFDC_OPPORTUNITY,BITBUCKET_COMMITS,TENANT_STATUS,CDH_APPSEC_MONITORING_RULES_SETTINGS_HISTORY,CDH_PROBLEM_RANKED_ENTITY_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_V2_HISTORY,SQL_LOG_PIPELINE,ZENDESK_GROUPS,CDH_CLOUD_APPLICATION_INSTANCE_HISTORY,CDH_SYNTHETIC_API_CALLS_HISTORY,CDH_CLOUD_NETWORK_INGRESS_HISTORY,CDH_DASHBOARD_CONFIG_V2_HISTORY,CDH_DDU_METRICS_CONSUMED_INCLUDED_HISTORY,CDH_SERVICE_CALLING_APPLICATIONS_HISTORY,CDH_DASHBOARD_CONFIG_HISTORY,CDH_METRIC_DATA_TYPE_HISTORY,CDH_CLUSTERS,DPS_CONSUMPTION,CDH_REQUEST_ATTRIBUTE_STATS_HISTORY,LIMA_UNASSIGNED_CONSUMPTION_HOURLY,CDH_VISIT_STORE_USAGE_HISTORY,CDH_HOST_MEMORY_USAGE_HISTORY,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V1,PBI_WORKSPACE_ENTITY_NAMES,SOFTWARE_COMPONENT_PACKAGE_NAME_HASHES,MONTHLY_USAGE,SFDC_CONSUMPTION_REVENUE_MONTHLY,FACT_TABLE,DIM_TABLE,CDH_CONTAINER_GROUP_HISTORY,CDH_API_USER_AGENT_USAGE_HISTORY,SFDC_TRIAL,ZENDESK_TICKET_METRICS_CURRENT_V2,CDH_SECURITY_PROBLEM_ASSESSMENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_V2_HISTORY,CDH_APPSEC_INTEGRATION_TYPES_HISTORY,CDH_KUBERNETES_CLUSTER_HISTORY,CDH_SERVERLESS_HISTORY,SERVICE_USAGE_DAILY_SUMMARY,CDH_FDI_EVENT_TYPE_AGGREGATIONS_HISTORY,CDH_WORKFLOWS_V2_HISTORY,CDH_MOBILE_SESSION_COUNT_BY_AGENT_TECHNOLOGY_HISTORY,CDH_EXTRACT_STATISTICS,GRAIL_APP_INSTALLATIONS,MC_ENVIRONMENT_CONSUMPTION,CDH_CLUSTER_HISTORY,CDH_SECURITY_PROBLEM_HISTORY,SIGNUP_AWS_MARKETPLACE,CDH_MOBILE_CRASHES_BY_RETRIEVAL_DELAY_HISTORY,ZENDESK_TICKET_M
ETRICS_CURRENT,FACT_UNIQUE_COLUMNS_HISTORY,TENANT_USAGE_DAILY_SUMMARY_VIEW,BAS_AUDIT_ENTITY,ZENDESK_TICKETS,AWS_MARKETPLACE_ACCOUNT,CDH_RUM_USER_SESSIONS_MOBILE_BOUNCES_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_HISTORY,FACT_DATA_QUALITY_ISSUES,CDH_SYNTHETIC_MONITOR_LOCATION_HISTORY,CDH_HOST_MEMORY_LIMIT_HOURLY_RESOLUTION_HISTORY,ZENDESK_USERS_HISTORY,CDH_APPSEC_CONSUMPTION_BY_ENTITY_HISTORY,CDH_WORKFLOWS_TASK_EXECUTION_HISTORY,CDH_CREDENTIALS_VAULT_ENTRIES_HISTORY,SFDC_ACCOUNT,CONTRACT,CDH_PGI_PROCESS_COUNT_HISTORY,CDH_APPSEC_RUNTIME_APPLICATION_PROTECTION_SETTINGS_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_HISTORY,CDH_MOBILE_SESSION_REPLAY_HISTORY,GRAIL_QUERY_LOG,DIM_PRIORITY,INTERCOM_COMPANIES,CDH_WORKFLOWS_V3_HISTORY,CDH_CLOUD_AUTOMATION_INSTANCE_HISTORY,CDH_PLUGIN_HOST_DETAILS_HISTORY,CDH_APPSEC_CODE_LEVEL_VULNERABILITY_DETECTION_SETTINGS_HISTORY,MC_MANAGED_LICENSE,DPS_CONSUMPTION_FORECAST,AWS_CONSUMPTION_HISTORY,CDH_ACTIVE_GATE_HISTORY,CDH_MAINTENANCE_WINDOW_FILTER_HISTORY,CDH_MOBILE_AGENT_VERSION_USAGE_HISTORY,CDH_METRIC_EVENT_V2_ID_FILTER_HISTORY,DIM_OVALEDGE_SCHEMA,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_V2_HISTORY,CDH_INTEGRATION_HISTORY,COMMUNITY_PRODUCT_IDEAS,CDH_SETTING_V2_HISTORY,CDH_CONDITIONAL_PROCEDURES_HISTORY,CDH_COMPLETENESS_BY_CLOUD_HISTORY,DIM_DATA_QUALITY_CHECK,CDH_EXTENDED_TENANT_CONFIG_HISTORY,DIM_LIFECYCLE_STAGE,PROMO_USAGE,CDH_INSTALLERS_DOWNLOAD_SERVLET_USAGES_HISTORY,INTERCOM_ADMINS,CDH_PROCESS_VISIBILITY_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY_ARCHIVE,CDH_HOST_HISTORY,CDH_MOBILE_REPLAY_FULL_SESSION_METRICS_HISTORY,CDH_METRIC_EVENT_CONFIG_COUNT_HISTORY,RUM_BEHAVIORAL_EVENTS,GCP_METADATA,USAGE_CREDITS,CDH_SERVERLESS_COMPLETENESS_HISTORY,CDH_ISSUE_TRACKER_HISTORY,DIM_OBJECT,DPS_RATED_CONSUMPTION,CDH_KEY_REQUEST_STATS_HISTORY,REPORTS_EXECUTION_LOG,PBI_ACTIVITY_LOG,LIMA_CAPABILITIES,CDH_PLUGIN_STATE_HISTORY,RUM_PAGEVIEW,HOST_USAGE_DAILY_SUMMARY,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_V2_HISTORY,CDH_DDU_METRICS_BY_METRIC_HISTORY,CDH_CLUSTER_E
MERGENCY_EMAILS_HISTORY,CDH_DATABASE_INSIGHTS_HISTORY,SFDC_ACCOUNT_TEAMMEMBER,CDH_VULNERABILITY_MATCHING_METADATA_HISTORY,CDH_SECURITY_PROBLEM_MUTE_STATE_HISTORY,CDH_HOST_BILLING_FULL_STACK_MONITORING_HISTORY,DEV_JIRA_ISSUES,CDH_AUTO_UPDATE_SUCCESS_STATISTICS_HISTORY,FACT_DEPLOYMENT_DATES,CDH_RUM_BILLING_PERIODS_V1_HISTORY,CDH_ALERTING_PROFILE_HISTORY,CDH_LOG_1CLICK_ACTIVATIONS_HISTORY,CDH_ELASTICSEARCH_METRIC_DIMENSIONS_AFFILIATION_HISTORY,CONTRACT_BILLING_INFO,TEAMS_CAPABILITIES,SFDC_ACCOUNT_ARR_BANDS_MONTHLY,CDH_SECURITY_PROBLEM_SC_HISTORY,EXTERNAL_DQ_CHECKS_DEFINITIONS,CDH_VERSIONED_MODULE_HISTORY,RUM_PAGE_REPOSITORY_INFO,CDH_METRIC_EVENT_CONFIG_NAME_FILTER_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_SESSIONS_HISTORY,CDH_CF_FOUNDATION_HISTORY,ZENDESK_SIDE_CONVERSATIONS_V2,CDH_LOG_MONITORING_CUSTOM_ATTRIBUTE_HISTORY,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V2,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_HISTORY,RUM_SESSION,CDH_BILLING_SYNTHETIC_USAGE_HISTORY,CDH_AGENT_HISTORY,CDH_SESSION_STORAGE_USAGE_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_HISTORY,CDH_DDU_METRICS_RAW_HISTORY,DIM_PERMISSION_GROUP,CDH_CLASSIC_BILLING_METRICS_HISTORY,CDH_COMPETITOR_JS_FRAMEWORK_USAGE_HISTORY,TEAMS_EMPLOYEES,TENANT_LAST_ACCESS_DATE,CDH_METRIC_EVENT_V2_COUNT_HISTORY,FACT_OBJECT_LINEAGE,ZENDESK_TICKETS_HISTORY,SQL_LOG,BAS_AUDIT_FIELD,CDH_DEEP_MONITORING_SETTINGS_HISTORY,CDH_MDA_CONFIGS_HISTORY,CDH_FDI_EVENT_ENTITY_TYPE_AGGREGATIONS_HISTORY,CDH_ACTIVE_GATE_API_USAGE_HISTORY,BAS_AUDIT_ENTRY,LIMA_RATE_CARD_V2,PBI_ENTITY_PERMISSIONS,ACCOUNT,CDH_RUM_USER_SESSIONS_WEB_SESSIONS_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY_VIEW,CDH_HOST_BILLING_INFRASTRUCTURE_MONITORING_HISTORY,TENANT_LICENSE,SQL_PII_LOG,CDH_METRIC_EVENT_CONFIG_THRESHOLD_BASED_MODEL_HISTORY,AUTOPROV_EVENTS_FEATURES,DIM_OVALEDGE_DOMAIN_DIRECTORY,SFDC_ACCOUNT_ARR_BANDS_DAILY,VALIDATION_PROBLEMS_HISTORY,CDH_PROBLEM_EVENT_METADATA_HISTORY,CDH_MOBILE_OS_VERSION_USAGE_HISTORY,AWS_MARKETPLACE_TAX_ITEM,DIM_OVALEDGE_CONNECTION,CDH_CLU
STER_NETWORK_ZONE_STATS_HISTORY,FACT_COLUMN_HISTORY,CDH_PROBLEM_NATURAL_EVENT_HISTORY,ZENDESK_ORGANIZATIONS,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_COUNT_HISTORY,CDH_DASHBOARD_CONFIG_TILE_HISTORY,ZENDESK_SIDE_CONVERSATIONS,CDH_DASHBOARD_CONFIG_TILE_V2_HISTORY,CDH_METRIC_QUERY_STATS_HISTORY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_TYPE_HISTORY,CDH_UEM_CONFIG_PROPERTY_TAG_HISTORY,ZENDESK_ORGANIZATIONS_V2,ZENDESK_GROUP_MEMBERSHIP,DIM_DATA_SOURCE,MANAGED_LICENSE,DIM_USER,MANAGED_LICENSE_QUOTA,CDH_CLOUD_AUTOMATION_INSTANCE_STATS_HISTORY,ZENDESK_GROUP_MEMBERSHIP_V2,FACT_OVALEDGE_COLUMN_TERM,CDH_HOST_MEMORY_LIMIT_HISTORY,CDH_ACTIVE_GATE_MODULES_STATUSES_HISTORY,CDH_BILLING_SYNTHETIC_USAGE_V2_HISTORY,LIMA_CONSUMPTION,TENANT_USAGE_DAILY_SUMMARY,SFDC_DYNATRACE_ACCOUNT,COMPANY,CDH_DDU_METRICS_BY_METRIC_V2_HISTORY,CDH_TAG_COVERAGE_ENTITIES_HISTORY,CDH_API_USAGE_HISTORY2,CDH_SOFTWARE_COMPONENT_PGI_HISTORY,CDH_ENDED_SESSIONS_HISTORY,CDH_MAINTENANCE_WINDOW_HISTORY,DIM_OVALEDGE_DOMAIN,CDH_APPSEC_MONITORED_HOSTS_BY_FUNCTIONALITY_HISTORY,CDH_BILLING_APP_PROPERTIES_HISTORY,CDH_VISIT_STORE_NEW_BILLING_METRICS_HISTORY,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_BILLING_TYPE_HISTORY,SFDC_PROJECT,DIM_DATAHUB_COLUMN,CDH_EXTERNAL_DATA_POINTS_V3_HISTORY,CDH_K8S_DATA_VOLUME_HISTORY,AWS_MARKETPLACE_OFFER,CDH_CF_FOUNDATION_HOST_HISTORY,ROLE,TABLE_LOAD_INFO,AWS_MARKETPLACE_ADDRESS,CDH_RUM_USER_SESSIONS_MOBILE_SESSIONS_HISTORY,ENVIRONMENT_SERVICE_SUMMARY,DPS_SUBSCRIPTION_SKU,CDH_UEM_CONFIG_TENANT_HISTORY,EMPLOYEE_COUNT,CDH_UEM_CONFIG_METADATA_CAPTURING_SETTINGS_HISTORY,CDH_PREFERENCES_SETTINGS_HISTORY,CDH_CUSTOM_CHART_STATS_HISTORY,CDH_SERVICE_CALLED_SERVICES_HISTORY,APPENGINE_INVOCATIONS_PER_APP,DIM_DATA_CRITICALITY_LEVEL,CDH_SOFTWARE_COMPONENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_HISTORY,DIM_TABLE_STATUS,DEV_JIRA_COMMENTS,ADA_ACCOUNT,LIMA_SUBSCRIPTION_CONSUMPTION_RATED,SFDC_TENANT,INTERCOM_CONTACTS,CDH_MAINFRAME_MSU_HISTORY,CDH_DDU_METRICS_TOTAL_HISTORY,CDH_APPSEC_ALERTING_PROFILES_H
ISTORY,FACT_TABLE_OWNERS,PROCESS_STATUS,CDH_FDI_EVENT_HISTORY,PBI_DATASET_PARAMETER,INTERCOM_CONVERSATION_PARTS,CDH_VIRTUALIZATION_HISTORY,MC_CLUSTER_CONSUMPTION,CDH_DEEP_MONITORING_SETTINGS_FEATURES_HISTORY,ENVIRONMENT_USAGE_SUMMARY,SQL_SHARE_LOG,SQL_PERFORMANCE,INTERCOM_USERS,DIM_OVALEDGE_TABLE,ZENDESK_TICKET_METRICS_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY,CDH_TOKEN_STATS_PERMISSION_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_V2_HISTORY,CDH_OWNERSHIP_COVERAGE_HISTORY,EXTENSION_REPOSITORY_INFO,CDH_WORKFLOWS_HISTORY,JOBSTATUS,AWS_MARKETPLACE_PRODUCT,CDH_RELEASE_HISTORY,CDH_ENVIRONMENTS,CDH_VISIT_STORAGE_V2_HISTORY,CDH_CLOUD_NETWORK_SERVICE_HISTORY,BAS_USER", - "db.user": "TEST_COLDSTORE", + "snowflake.user.privilege.grants_on": "BILLING_PROVIDER,CDH_SLO_HISTORY", + "db.user": "TESTUSER3", "dsoa.run.plugin": "test_users", "dsoa.run.context": "users" }, @@ -118,7 +137,7 @@ "snowflake.user.privilege.granted_by": [ "SECURITYADMIN" ], - "snowflake.user.privilege.grants_on": "CDH_UEM_CONFIG_TENANT_HISTORY,CONTRACT_PRICING,SQL_SHARE_LOG,CDH_UEM_CONFIG_METADATA_CAPTURING_SETTINGS_HISTORY,GRAIL_QUERY_LOG_V2,DEV_JIRA_CHANGE_LOG,CDH_DATABASE_INSIGHTS_HISTORY,EMPLOYEE_COUNT,CDH_PROBLEM_HISTORY,LIMA_SUBSCRIPTION_BUDGET_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_HISTORY,CDH_DDU_METRICS_TOTAL_HISTORY,SNOWFLAKE_CONNECTOR_SETTINGS_HISTORY,CDH_PROBLEM_EVIDENCE_HISTORY_ARCHIVE,CDH_VISIT_STORAGE_V2_HISTORY,ZENDESK_GROUP_MEMBERSHIP,CDH_PROCESS_HISTORY,CDH_APPSEC_RUNTIME_APPLICATION_PROTECTION_SETTINGS_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_V2_HISTORY,CDH_INSTRUMENTATION_LIBRARY_HISTORY,SFDC_CONSUMPTION_REVENUE_MONTHLY,CDH_TOKEN_STATS_HISTORY,AWS_MARKETPLACE_OFFER_PRODUCT,CDH_RUM_USER_SESSIONS_IF_ONLY_CRASH_ENABLED_HISTORY,BAS_AUDIT_ENTITY,CDH_TOKEN_STATS_PERMISSION_HISTORY,DIM_DEPLOYMENT_STAGE,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_TYPE_HISTORY,DTU_ACTIVITIES,CDH_COMPLETENESS_BY_CLUSTER_HISTORY,CDH_CUSTOM_CHART_STATS_HISTORY,DPS_CONSUMPTION_FORECAST,CDH_SOFTWARE_COMPONENT_DET
AILS_EVIDENCE_V2_HISTORY,CDH_ODIN_AGENT_HISTORY,CDH_AUTO_UPDATE_SUCCESS_STATISTICS_HISTORY,CDH_DDU_METRICS_CONSUMED_INCLUDED_HISTORY,CDH_TILE_FILTER_CONFIG_HISTORY,LIMA_USAGE,RUM_SESSION,CDH_CONDITIONAL_PROCEDURES_RULES_HISTORY,CDH_METRIC_EVENT_CONFIG_HISTORY,FACT_OVALEDGE_COLUMN_TERM,CDH_SYNTHETIC_MONITOR_HISTORY,CDH_SERVERLESS_HISTORY,FACT_COLUMN_USAGE,MC_MANAGED_CLUSTER,CDH_CLUSTER_CONTACTS_HISTORY,CDH_LOG_1CLICK_ACTIVATIONS_HISTORY,INSTRUMENTED_FUNCTION_HASHES,DIM_PERMISSION_GROUP,CDH_CLOUD_NETWORK_POLICY_HISTORY,ENVIRONMENT_USAGE_SUMMARY,CDH_ENDED_SESSIONS_HISTORY,CDH_APPSEC_MONITORING_RULES_SETTINGS_HISTORY,DIM_TABLE_STATUS,AWS_MARKETPLACE_OFFER_TARGET,ZENDESK_SIDE_CONVERSATION_EVENTS_V2,CDH_TAG_COVERAGE_HISTORY,DIM_OVALEDGE_DOMAIN_DIRECTORY,FACT_DATAHUB_COLUMN_CHANGE_LOG,SFDC_DYNATRACE_ACCOUNT,SQL_LOG,KEPTN,CDH_CLOUD_NETWORK_INGRESS_HISTORY,CDH_DDU_METRICS_RAW_V2_HISTORY,SFDC_ASSIGNMENT,ZENDESK_TICKET_METRICS_CURRENT_V2,RUM_PAGE_REPOSITORY_INFO,CDH_HOST_HISTORY,CDH_BILLING_SYNTHETIC_USAGE_V2_HISTORY,CDH_DASHBOARD_CONFIG_TILE_V2_HISTORY,REFERRAL_CODE,ENVIRONMENT_USAGE_DAILY_SUMMARY,LIMA_SUBSCRIPTION_USAGE_HOURLY,CDH_EXTENSIONS_DISTINCT_DEVICES_HISTORY,CDH_CONTAINER_GROUP_HISTORY,CDH_BILLING_APP_PROPERTIES_V2_HISTORY,FACT_USER_GROUP_MAP,MANAGED_LICENSE_QUOTA,CDH_BILLING_SYNTHETIC_USAGE_HISTORY,USERS_AND_QUERIES_COUNT_STATS,ACCOUNT,CDH_TAG_COVERAGE_ENTITIES_HISTORY,DEV_JIRA_WORKLOGS,CDH_ACTIVE_GATE_MODULES_STATUSES_HISTORY,CDH_CLUSTER_HISTORY,CDH_MAINFRAME_MSU_HISTORY,CDH_VULNERABILITY_MATCHING_METADATA_HISTORY,CDH_EXTENSION_HISTORY,SFDC_ACCOUNT_ARR_BANDS_MONTHLY,DIM_OVALEDGE_CATEGORY,CDH_MOBILE_OS_VERSION_USAGE_HISTORY,FACT_DEPLOYMENT_DATES,AWS_METADATA,BAS_USER,SQL_LOG_PIPELINE,SFDC_ACCOUNT_ARR_BANDS_DAILY,CDH_RUM_USER_SESSIONS_MOBILE_SESSIONS_HISTORY,TEAMS_EMPLOYEES,CDH_CLOUD_APPLICATION_INSTANCE_HISTORY,GRAIL_QUERY_LOG,PBI_ENTITY_PERMISSIONS,CDH_CUSTOM_SESSIONS_APPLICATION_TECHNOLOGY_BILLING_TYPE_HISTORY,CDH_INSTALLERS_DOWNLOAD_SERVLET_USAGES_HISTORY,CDH_SYN
THETIC_MONITOR_LOCATION_HISTORY,PBI_DATASET_PARAMETER,LIMA_RATE_CARD,CDH_DASHBOARD_CONFIG_TILE_HISTORY,CDH_VISIT_STORE_USAGE_HISTORY,CDH_SETTING_HISTORY,AWS_MARKETPLACE_PRODUCT,CDH_VIRTUALIZATION_SUBSCRIPTION_HISTORY,CDH_WEB_APP_CALL_BY_BROWSER_HISTORY,CDH_RUM_BILLING_DEM_UNITS_V1_HISTORY,DPS_SUBSCRIPTION,BILLING_SERVICE_TYPE,CDH_CLOUD_APPLICATION_NAMESPACE_HISTORY,CDH_BILLING_APP_PROPERTIES_HISTORY,CDH_SESSION_STORAGE_USAGE_V2_HISTORY,CDH_ENVIRONMENT_METRICS_METADATA_HISTORY,CDH_ACTIVE_GATE_UPDATE_STATUS_HISTORY,SFDC_TRIAL,CDH_DDU_METRICS_BY_METRIC_V2_HISTORY,CDH_BILLING_APP_SESSIONS_V2_HISTORY,CDH_SETTING_V2_HISTORY,CDH_HOST_MEMORY_USAGE_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURE_V2_HISTORY,LIMA_USAGE_HOURLY,CDH_ISSUE_TRACKER_HISTORY,DIM_JSON_VALIDATION,ACCOUNT_STATUS,DIM_OVALEDGE_SCHEMA,CDH_SECURITY_PROBLEM_ASSESSMENT_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_EVIDENCE_HISTORY,TENANT_USAGE_DAILY_SUMMARY,CDH_SOFTWARE_COMPONENT_DETAILS_V2_HISTORY,AUTOPROV_EVENTS_FEATURES,CDH_DDU_TRACES_OTEL_BY_DESCRIPTION_HISTORY,SQL_PII_LOG,CDH_VERSIONED_MODULE_V2_HISTORY,DIM_OBJECT,CDH_METRIC_EVENT_V2_VALIDATION_RESULT_HISTORY,CDH_NOTIFICATION_SETTINGS_HISTORY,CDH_RUM_BILLING_PERIODS_V2_HISTORY,CDH_METRIC_EVENT_V2_ID_FILTER_HISTORY,FACT_TABLE_OWNERS,CDH_CLOUD_AUTOMATION_INSTANCE_HISTORY,PROCESS_STATUS,FACT_UNIQUE_COLUMNS_HISTORY,REPORTS_EXECUTION_LOG,FACT_OBJECT_LINEAGE,CDH_EXTRACT_STATISTICS,LIMA_SUBSCRIPTION_CONSUMPTION,TENANT_LICENSE,SYSTEM_PROPERTIES,SERVICE_USAGE_DAILY_SUMMARY,CDH_OWNERSHIP_COVERAGE_HISTORY,ZENDESK_SIDE_CONVERSATION_RECIPIENTS_V2,DPS_CONSUMPTION,TENANT_STATUS,CDH_AGENT_HISTORY,CDH_EXTERNAL_DATA_POINTS_HISTORY,CDH_MOBILE_SESSION_REPLAY_HISTORY,CDH_UEM_CONFIG_PROPERTY_TAG_HISTORY,FACT_OVALEDGE_TABLE_TERM,CDH_ATTACK_CANDIDATES_HISTORY,DIM_OVALEDGE_CONNECTION,MANAGED_CLUSTER,ZENDESK_TICKETS_HISTORY_V2,DIM_PRIORITY,CDH_KEY_REQUEST_STATS_HISTORY,CDH_SETTING_V3_HISTORY,LIMA_SUBSCRIPTION_HISTORY,GRAIL_APP_INSTALLATIONS,CDH_HOST_BILLING_FOUNDATION_AND_DISCOVERY_HISTORY
,MANAGED_ACCOUNT,CDH_APPSEC_ALERTING_PROFILES_HISTORY,CDH_HOST_MEMORY_LIMIT_HOURLY_RESOLUTION_HISTORY,CDH_DDU_METRICS_TOTAL_V2_HISTORY,CDH_DEEP_MONITORING_SETTINGS_FEATURES_HISTORY,ZENDESK_TICKET_METRICS_CURRENT,BILLING_ACCOUNT,CDH_SECURITY_PROBLEM_TRACKING_LINKS_HISTORY,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_SIZE_HISTORY,CDH_KUBERNETES_NODE_HISTORY,TENANT,CDH_METRIC_EVENT_CONFIG_ID_FILTER_HISTORY,DIM_OVALEDGE_TABLE,CDH_CONTAINER_GROUP_INSTANCE_HISTORY,BAS_AUDIT_ENTRY,RUM_BEHAVIORAL_EVENTS,AWS_MARKETPLACE_ACCOUNT,FACT_DATA_QUALITY_ISSUES,INTERCOM_USERS,CDH_CLUSTER_TAGS_HISTORY,CDH_COMPLETENESS_BY_ENVIRONMENT_HISTORY,CDH_HOST_MEMORY_LIMIT_HISTORY,CDH_METRIC_EVENT_V2_COUNT_HISTORY,DIM_OVALEDGE_TERM,DIM_PII_STATE,CDH_CLOUD_AUTOMATION_UNITS_HISTORY,BITBUCKET_REPOSITORY_STATUS,CDH_MOBILE_SESSION_COUNT_BY_AGENT_TECHNOLOGY_HISTORY,DIM_DATAHUB_EXISTING_COLUMN,CDH_LOG_MONITORING_CONFIGURATION_STATS_HISTORY,CDH_WORKFLOWS_V3_HISTORY,SYSTEM_STATUS_DAILY_STATISTICS,CDH_DDU_METRICS_RAW_HISTORY,CDH_CODE_LEVEL_VULNERABILITY_FINDING_EVENTS_HISTORY,DEV_JIRA_CUSTOM_FIELD,CDH_SOFTWARE_COMPONENT_DETAILS_VERSION_V2_HISTORY,CDH_MAINTENANCE_WINDOW_HISTORY,MC_ACCOUNT,CDH_MOBILE_CRASHES_BY_RETRIEVAL_DELAY_HISTORY,DIM_DEPLOYMENT_STATUS,CDH_PROBLEM_RANKED_ENTITY_HISTORY,CDH_LOG_MONITORING_CUSTOM_ATTRIBUTE_HISTORY,CDH_SDK_LANGUAGE_HISTORY,ZENDESK_GROUPS,LIMA_RATE_CARD_V2,DPS_SUBSCRIPTION_CONSUMPTION,CDH_TIMESERIES_ARRIVAL_LATENCY_HISTORY,PROMO_USAGE,CDH_TOTAL_FDI_EVENT_COUNT_HISTORY,DIM_LIFECYCLE_STAGE,CDH_EXTENDED_TENANT_CONFIG_HISTORY,DEV_JIRA_PROJECT,CDH_DDU_METRICS_BY_METRIC_HISTORY,CDH_WORKFLOWS_V2_HISTORY,DIM_DATAHUB_TABLE,DATASOURCES,ZENDESK_SIDE_CONVERSATIONS,CDH_TIMESERIES_MAINTENANCE_LAG_HISTORY,CDH_ACTIVE_GATE_API_USAGE_HISTORY,PBI_ACTIVITY_LOG,SQL_PII_SNOWFLAKE_LOG,CDH_METRIC_EVENT_CONFIG_COUNT_HISTORY,AWS_MARKETPLACE_ADDRESS,AUTOPROV_EVENTS,LIMA_SUBSCRIPTION_CONSUMPTION_RATED,CDH_DISCOVERED_VIRTUALIZATION_SERVICE_TYPES,ZENDESK_ORGANIZATIONS_V2,DATA_VOLUME,LIMA_CONSUMPTION,CDH_RELE
ASE_V3_HISTORY,CDH_SERVICE_CALLING_APPLICATIONS_HISTORY,SFDC_TENANT,CDH_APPSEC_CONSUMPTION_BY_ENTITY_HISTORY,SFDC_ACCOUNT,CDH_COMPETITOR_JS_FRAMEWORK_USAGE_HISTORY,DIM_SYNC_TYPE,CDH_SECURITY_PROBLEM_HISTORY,USER_ACCOUNT,TENANT_LAST_ACCESS_DATE,CDH_VERSIONED_MODULE_HISTORY,CDH_CLOUD_AUTOMATION_INSTANCE_STATS_HISTORY,ZENDESK_GROUP_MEMBERSHIP_V2,PACKAGE,CDH_EXTERNAL_DATA_POINTS_V2_HISTORY,DIM_DATAHUB_COLUMN,AWS_MARKETPLACE_BILLING_EVENT,CDH_RELEASE_HISTORY,CDH_METRIC_EVENT_V2_HISTORY,CDH_LOG_INGEST_ADVANCED_SETTINGS_HISTORY,AWS_MARKETPLACE_OFFER,CDH_PROCESS_VISIBILITY_HISTORY,ROLE,CDH_DASHBOARD_CONFIG_V2_HISTORY,CDH_METRIC_QUERY_STATS_HISTORY,CUSTOMER_BASE_HISTORY_V2,CDH_CLOUD_EVENT_V2_HISTORY,FACT_TABLE,BITBUCKET_PR_COMMITS,SQL_PERFORMANCE,USAGE_CREDITS,SFDC_OPPORTUNITY,CDH_LOG_MONITORING_STATS_HISTORY,ZENDESK_SIDE_CONVERSATIONS_V2,CDH_DDU_SERVERLESS_BY_ENTITY_V2_HISTORY,SFDC_MANAGED_LICENSE,CONTRACT_BILLING_INFO,ZENDESK_ORGANIZATIONS_HISTORY,CDH_MOBILE_AGENT_VERSION_USAGE_HISTORY,DIM_TABLE,CDH_SERVICE_CALLED_SERVICES_HISTORY,JOBSTATUS,CDH_PROCESS_VISIBILITY_HISTORY_V2,INTERCOM_CONVERSATION_PARTS,CDH_SOFTWARE_COMPONENT_PGI_HISTORY,SFDC_TASK,AWS_MARKETPLACE_LEGACY_ID_MAPPING,CDH_CLOUD_NETWORK_SERVICE_HISTORY,CDH_DDU_SERVERLESS_BY_ENTITY_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_V2_HISTORY,TENANT_SUB_ENVIRONMENT,TENANT_USAGE_DAILY_SUMMARY_VIEW,QUERY_STATS,CDH_CONDITIONAL_PROCEDURES_HISTORY,REPORT_STATUS,LIMA_SUBSCRIPTION,DIM_OVALEDGE_COLUMN,DIM_OVALEDGE_DOMAIN,COMPANY,CDH_BULK_CONFIG_CHANGES_HISTORY,FACT_TABLE_USAGE,CDH_REQUEST_ATTRIBUTE_STATS_HISTORY,CDH_FDI_EVENT_HISTORY,CDH_DEEP_MONITORING_SETTINGS_V2_HISTORY,CDH_INTEGRATION_HISTORY,CDH_ACTIVE_GATE_HISTORY,DIM_USER,CDH_CLASSIC_BILLING_METRICS_HISTORY,DIM_DATA_CRITICALITY_LEVEL,CDH_CTC_LOAD_HISTORY,CUSTOMER_BASE_HISTORY,CDH_PLUGIN_METRIC_STATS_HISTORY,CDH_API_USAGE_HISTORY,INTERCOM_CONVERSATION_TAGS,FACT_COLUMN,CONTRACT,DIM_QUALITY_TYPE,ZENDESK_TICKETS_V2,CDH_SLO_HISTORY,CDH_METRIC_EVENT_CONFIG_NAME_FILTER_HISTORY,CDH
_FDI_EVENT_TYPE_AGGREGATIONS_HISTORY,FACT_COLUMN_PROTECTION,BAS_AUDIT_FIELD,FACT_COLUMN_LINEAGE,CDH_PROBLEM_ROOT_CAUSE_GROUP_HISTORY,CDH_MOBILE_REPLAY_FULL_SESSION_METRICS_HISTORY,COMMUNITY_PRODUCT_IDEAS,PBI_ENTITY_REFRESH_HISTORY,CDH_APPSEC_RUNTIME_VULNERABILITY_DETECTION_SETTINGS_HISTORY,SOFTWARE_COMPONENT_PACKAGE_NAME_HASHES,ZENDESK_TICKETS_HISTORY,RUM_PAGEVIEW,TABLE_STORAGE_METRICS_HISTORY,CDH_PROBLEM_EVENT_INSTANCE_CLASSES_HISTORY,CDH_MAINFRAME_MSU_V3_HISTORY,CDH_EXTERNAL_DATA_POINTS_V3_HISTORY,RUM_BEHAVIORAL_EVENT_PROPERTIES,TEAMS_CAPABILITIES,SYNTHETIC_LOCATIONS,CDH_RUM_BILLING_DEM_UNITS_V2_HISTORY,CDH_ODIN_AGENT_ME_IDENTIFIER_HISTORY,CDH_FEATURE_FLAG_HISTORY,ZENDESK_USERS_V2,CDH_APPSEC_NOTIFICATION_SETTINGS_HISTORY,CDH_VIRTUALIZATION_HISTORY,LIMA_CAPABILITIES,CDH_PROBLEM_EVIDENCE_HISTORY,CDH_K8S_DATA_VOLUME_HISTORY,CDH_PROBLEM_NATURAL_EVENT_HISTORY,VALIDATION_PROBLEMS_HISTORY,JIRA_ISSUES,CDH_HOST_MEMORY_USAGE_HOURLY_RESOLUTION_HISTORY,CDH_PROBLEM_IMPACTED_ENTITIES_HISTORY,CDH_LOG_MODULE_INGEST_ADOPTION_INCOMING_COUNT_HISTORY,BITBUCKET_PR_ACTIVITIES,CDH_WORKFLOWS_TASK_EXECUTION_HISTORY,MANAGED_LICENSE,SERVICE,DATA_ANALYTICS_CLA_CONTRACTS,LIMA_UNASSIGNED_CONSUMPTION_HOURLY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_HISTORY,CDH_PLUGIN_HOST_DETAILS_HISTORY,LIMA_ACCOUNT_GROUP_MEMBERSHIP,DPS_RATED_CONSUMPTION,CDH_LOG_MONITORING_METRIC_STATS_HISTORY,CDH_HOST_TECH_HISTORY,CDH_ALERTING_PROFILE_HISTORY,DIM_DATA_QUALITY_CHECK,CDH_TENANT_NETWORK_ZONE_STATS_HISTORY,CDH_HOST_BILLING_FULL_STACK_MONITORING_HISTORY,AWS_MARKETPLACE_AGREEMENT,CDH_RUM_USER_SESSIONS_WEB_BOUNCES_HISTORY,TENANT_USAGE_SUMMARY,CDH_ELASTICSEARCH_METRIC_DIMENSIONS_AFFILIATION_HISTORY,CDH_DASHBOARD_CONFIG_FILTER_USAGE_V2_HISTORY,CDH_PREFERENCES_SETTINGS_HISTORY,CDH_PLUGIN_STATE_HISTORY,CDH_COMPLETENESS_BY_CLOUD_HISTORY,ADA_ACCOUNT,INTERCOM_CONTACTS,CDH_MONITORED_VIRTUALIZATION_SERVICE_TYPES,EXTENSION_REPOSITORY_INFO,CDH_SOFTWARE_COMPONENT_DETAILS_HISTORY,CDH_RUM_USER_SESSIONS_MOBILE_BOUNCES_HISTORY,CDH_DDU
_TRACES_OTEL_BY_DESCRIPTION_V2_HISTORY,EXTERNAL_DQ_CHECKS_RESULTS,CDH_PROBLEM_EVENT_METADATA_HISTORY,CDH_APPSEC_INTEGRATION_TYPES_HISTORY,DEV_JIRA_COMMENTS,CDH_RUM_USER_SESSIONS_WEB_SESSIONS_HISTORY,CDH_DASHBOARD_CONFIG_HISTORY,DIM_COLUMN,CDH_APPSEC_MONITORED_HOSTS_BY_FUNCTIONALITY_HISTORY,CDH_METRIC_DATA_TYPE_HISTORY,CDH_CF_FOUNDATION_HOST_HISTORY,DPS_SUBSCRIPTION_SKU,UPGRADE_EXECUTION,CDH_HOST_BILLING_INFRASTRUCTURE_MONITORING_HISTORY,CDH_FDI_EVENT_ENTITY_TYPE_AGGREGATIONS_HISTORY,TIME_ZONE,CDH_METRIC_EVENT_CONFIG_THRESHOLD_BASED_MODEL_HISTORY,CDH_RUM_BILLING_PERIODS_V1_HISTORY,CDH_CLUSTER_NETWORK_ZONE_STATS_HISTORY,ENVIRONMENT_SERVICE_SUMMARY,CDH_METRIC_EVENT_V2_NAME_FILTER_HISTORY,PROMO_CODE,CDH_UEM_CONFIG_HISTORY,EXTERNAL_DQ_CHECKS_DEFINITIONS,CDH_SERVICE_HISTORY,CDH_KUBERNETES_CLUSTER_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_SESSIONS_HISTORY,REGION,RUM_BEHAVIORAL_EVENTS_V3,CDH_SECURITY_PROBLEM_ASSESSMENT_VULNERABLE_FUNCTIONS_HISTORY,CDH_CLOUD_APPLICATION_HISTORY,BI_STATUS,CDH_DATABASE_INSIGHTS_ENDPOINT_DETAILS_HISTORY,SFDC_POC,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V1,DIM_DATA_SOURCE,CDH_BILLING_APP_SESSIONS_V3_HISTORY,ZENDESK_USERS,AZURE_METADATA,ZENDESK_TICKET_METRICS_HISTORY,CDH_CREDENTIALS_VAULT_ENTRIES_HISTORY,CDH_INTERNAL_ENTITY_MODEL_CAPPING_INFORMATION_HISTORY,CDH_RUM_USER_SESSIONS_CUSTOM_BOUNCES_HISTORY,CDH_SOFTWARE_COMPONENT_DETAILS_PACKAGE_V2_HISTORY,CDH_APPLICATION_HISTORY,FACT_COLUMN_HISTORY,FACT_DATAHUB_TABLE_CHANGE_LOG,CDH_PROBLEM_CAPPING_INFORMATION_HISTORY,CDH_VISIT_STORE_NEW_BILLING_METRICS_HISTORY,MC_ENVIRONMENTS,MC_CLUSTER_CONSUMPTION,APPENGINE_INVOCATIONS_PER_APP,BITBUCKET_PR,ZENDESK_USERS_HISTORY,CDH_APPSEC_CODE_LEVEL_VULNERABILITY_DETECTION_SETTINGS_HISTORY,CDH_AGENT_HEALTH_METRICS_HISTORY,CDH_PGI_PROCESS_COUNT_HISTORY,AWS_ACCOUNT_MAPPING,CDH_MAINTENANCE_WINDOW_FILTER_HISTORY,CDH_LOG_MONITORING_ES_STATS_HISTORY,CDH_CLUSTERS,CDH_SYNTHETIC_API_CALLS_HISTORY,DEV_JIRA_ISSUES,INTERCOM_CONVERSATIONS,SFDC_OPPORTUNITY_PRODUCT,ZENDESK
_TICKETS,CDH_SECURITY_PROBLEM_SC_HISTORY,ZENDESK_GROUPS_V2,TABLE_LOAD_INFO,AWS_CONSUMPTION_HISTORY,CDH_ALERTING_PROFILE_SEVERITY_RULE_HISTORY,CDH_DDU_SERVERLESS_BY_DESCRIPTION_HISTORY,INTERCOM_COMPANIES,ZENDESK_ORGANIZATIONS,CDH_FDI_EVENT_INSTANCE_CLASSES_HISTORY,CDH_FDI_EVENT_METADATA_HISTORY,BITBUCKET_COMMITS,SERVICE_USAGE_SUMMARY,GCP_METADATA,SFDC_PROJECT,CDH_LOG_MONITORING_STATS_V2_HISTORY,CDH_JS_FRAMEWORK_USAGE_HISTORY,CDH_ATTACK_CANDIDATES_V2_HISTORY,CDH_API_USAGE_HISTORY2,CDH_BILLING_APP_SESSIONS_HISTORY,CDH_MAINFRAME_MSU_V2_HISTORY,MC_MANAGED_LICENSE,CDH_SERVERLESS_COMPLETENESS_HISTORY,BILLING_PROVIDER,PBI_WORKSPACE_ENTITY_NAMES,CDH_SOFTWARE_COMPONENT_HISTORY,MC_ENVIRONMENT_CONSUMPTION,ENVIRONMENT_SERVICE_DAILY_SUMMARY,CDH_RUM_BILLING_PERIODS_WEB_APPLICATIONS_HYBRID_VISITS_V2,CDH_MDA_CONFIGS_HISTORY,SIGNUP_AWS_MARKETPLACE,INTERCOM_ADMINS,CDH_ENVIRONMENTS,CDH_SESSION_STORAGE_USAGE_HISTORY,CDH_CF_FOUNDATION_HISTORY,ENVIRONMENT_USAGE_DAILY_SUMMARY_VIEW,CDH_API_USER_AGENT_USAGE_HISTORY,HOST_USAGE_DAILY_SUMMARY,CDH_CLUSTER_EMERGENCY_EMAILS_HISTORY,PARTNER_REFERRAL,SFDC_ACCOUNT_TEAMMEMBER,CDH_DEEP_MONITORING_SETTINGS_HISTORY,NEW_EMPLOYEES,CDH_WORKFLOWS_HISTORY,CDH_JS_AGENT_VERSIONS,MONTHLY_USAGE,LIMITS,AWS_MARKETPLACE_TAX_ITEM,SFDC_VW_SALES_USERACCESS,CDH_SECURITY_PROBLEM_MUTE_STATE_HISTORY", + "snowflake.user.privilege.grants_on": "TESTTABLE1,TESTTABLE2", "db.user": "TEST_PIPELINE", "dsoa.run.plugin": "test_users", "dsoa.run.context": "users" diff --git a/test/test_results/test_users_results.txt b/test/test_results/test_users_results.txt deleted file mode 100644 index a1007602..00000000 --- a/test/test_results/test_users_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:adfc5533e18b589e2d9c151ec95f6ad2b3b64a40640779cb7d87a564cc0f1165 -size 44063 diff --git a/test/test_results/test_warehouse_usage_results.txt b/test/test_results/test_warehouse_usage_results.txt deleted file mode 100644 index 4fee2373..00000000 --- 
a/test/test_results/test_warehouse_usage_results.txt +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:0fccb93e86128f838fc9aa6ed9f86532477c74c6f4157a3586e35d00d6d5ed33 -size 3918