Conversation

@Stranger6667 Stranger6667 (Owner) commented Nov 15, 2025

Resolves #314
Resolves #279
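
For context, a minimal sketch of driving the crate from Rust through its long-standing entry points (`validator_for`, `is_valid`, `iter_errors`), assuming a recent crate version where those are available. The `evaluate`-based API and the output styles this PR introduces are deliberately not shown; consult the crate docs/README for their exact surface. The schema and instance below are illustrative only.

```rust
// Assumed dependencies: jsonschema and serde_json.
use serde_json::json;

fn main() {
    let schema = json!({
        "type": "object",
        "properties": { "name": { "type": "string" } },
        "required": ["name"]
    });
    let instance = json!({ "name": 42 });

    // Compile the schema once and reuse the validator.
    let validator = jsonschema::validator_for(&schema).expect("a valid JSON Schema");

    // Quick boolean check.
    assert!(!validator.is_valid(&instance));

    // Iterate over all validation errors when a yes/no answer is not enough.
    for error in validator.iter_errors(&instance) {
        eprintln!("{error}");
    }
}
```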

codecov bot commented Nov 15, 2025

Codecov Report

❌ Patch coverage is 98.51852% with 22 lines in your changes missing coverage. Please review.
✅ Project coverage is 93.87%. Comparing base (cb29647) to head (98190df).
⚠️ Report is 1 commit behind head on master.

Files with missing lines                                 Patch %   Missing lines
crates/jsonschema/src/error.rs                            85.71%   5 ⚠️
crates/jsonschema/src/lib.rs                              85.71%   5 ⚠️
...rates/jsonschema/src/keywords/unevaluated_items.rs     91.30%   4 ⚠️
.../jsonschema/src/keywords/unevaluated_properties.rs     91.30%   4 ⚠️
crates/jsonschema-cli/src/main.rs                         96.00%   2 ⚠️
crates/jsonschema/src/evaluation.rs                       99.87%   1 ⚠️
...ates/jsonschema/src/keywords/pattern_properties.rs     91.66%   1 ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #859      +/-   ##
==========================================
+ Coverage   93.20%   93.87%   +0.67%     
==========================================
  Files          87       78       -9     
  Lines       15371    15980     +609     
==========================================
+ Hits        14327    15002     +675     
+ Misses       1044      978      -66     


codspeed-hq bot commented Nov 15, 2025

CodSpeed Performance Report

Merging #859 will not alter performance

Comparing dd/evaluate-2 (98190df) with master (cb29647)

Summary

✅ 52 untouched
🆕 9 new
⏩ 9 skipped [1]

Benchmarks breakdown

Benchmark                             BASE   HEAD       Change
🆕 evaluate[CITM/Catalog]             N/A    96.5 ms    N/A
🆕 evaluate[FHIR/Fhir]                N/A    32.6 ms    N/A
🆕 evaluate[Fast/Invalid]             N/A    47.1 µs    N/A
🆕 evaluate[Fast/Valid]               N/A    44 µs      N/A
🆕 evaluate[GeoJSON/Canada]           N/A    3.2 s      N/A
🆕 evaluate[Open API/Zuora]           N/A    698.6 ms   N/A
🆕 evaluate[Swagger/Kubernetes]       N/A    1.1 s      N/A
🆕 evaluate[unevaluated_items]        N/A    110.3 µs   N/A
🆕 evaluate[unevaluated_properties]   N/A    162.3 µs   N/A

Footnotes

  1. 9 benchmarks were skipped, so the baseline results were used instead.

@Stranger6667 Stranger6667 force-pushed the dd/evaluate-2 branch 14 times, most recently from a778dc9 to 3eda8b5 on November 16, 2025 at 21:42
@Stranger6667 Stranger6667 (Owner, Author) left a comment:

Also add it to the README, and check whether the README still mentions the output styles from drafts 2019-09 & 2020-12 - it should probably mention the v1 styles instead.

@Stranger6667 Stranger6667 force-pushed the dd/evaluate-2 branch 9 times, most recently from 620c094 to 78a2fb1 on November 17, 2025 at 22:51
@Stranger6667 Stranger6667 marked this pull request as ready for review on November 17, 2025 at 22:55
@Stranger6667 Stranger6667 force-pushed the dd/evaluate-2 branch 2 times, most recently from 368ce7c to b629642 on November 17, 2025 at 23:02
Signed-off-by: Dmitry Dygalo <[email protected]>
@Stranger6667 Stranger6667 merged commit 5fa100c into master Nov 17, 2025
47 of 48 checks passed
@Stranger6667 Stranger6667 deleted the dd/evaluate-2 branch on November 17, 2025 at 23:23


Development

Successfully merging this pull request may close these issues:

• [Python] Implement apply & output styles
• verbose output format style
