
Conversation

mavaylon1 (Contributor) commented on Aug 27, 2024

Motivation

What was the reasoning behind this change? Please explain the changes briefly.
Fix #1158
Fix #1163

Add an expandable default for datasets.
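
For context, "expandable" here refers to the underlying HDF5 mechanism of creating a dataset with a `maxshape` that allows it to be resized after it is written. The sketch below uses plain h5py to illustrate that mechanism; it is not the HDMF API changed in this PR (the exact option names there may differ), and the file name `example.h5` is just an illustration.

```python
# Minimal h5py sketch: a dataset created with maxshape=(None,) can grow
# after writing, while a dataset created without maxshape is fixed in size.
import h5py
import numpy as np

with h5py.File("example.h5", "w") as f:
    # Fixed-shape dataset: cannot grow beyond its initial shape.
    fixed = f.create_dataset("fixed", data=np.arange(5))

    # Expandable dataset: maxshape=(None,) lets the first dimension grow.
    expandable = f.create_dataset(
        "expandable", data=np.arange(5), maxshape=(None,), chunks=True
    )

    # Append more data by resizing, then writing into the new region.
    expandable.resize((10,))
    expandable[5:] = np.arange(5, 10)

    print(fixed.maxshape)       # (5,)
    print(expandable.maxshape)  # (None,)
```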

How to test the behavior?

Show how to reproduce the new behavior (can be a bug fix or a new feature)

Checklist

  • Did you update CHANGELOG.md with your changes?
  • Does the PR clearly describe the problem and the solution?
  • Have you reviewed our Contributing Guide?
  • Does the PR use "Fix #XXX" notation to tell GitHub to close the relevant issue numbered XXX when the PR is merged?

mavaylon1 and others added 4 commits August 27, 2024 16:30
* prior

* checkpoint

* possible poc

* fix

* finished tests

* dtype

* fix

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix again

* fix again

* clean up

* coverage

* Update CHANGELOG.md

* Update CHANGELOG.md

* Update h5tools.py

* Update h5tools.py

* clean up for review

* clean up for review

* clean up

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

codecov bot commented Aug 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.68%. Comparing base (12864bb) to head (08bf60b).

Additional details and impacted files
@@            Coverage Diff             @@
##              dev    #1180      +/-   ##
==========================================
- Coverage   91.70%   91.68%   -0.02%     
==========================================
  Files          42       42              
  Lines        9615     9637      +22     
  Branches     1940     1948       +8     
==========================================
+ Hits         8817     8836      +19     
- Misses        519      520       +1     
- Partials      279      281       +2     

mavaylon1 (Contributor, Author) commented:
@rly This PR is essentially done. I still need to go through the pynwb side to confirm that the failures there can be safely deferred to future development. I also want to run the neuroconv tests to make sure those pass as well.



Development

Successfully merging this pull request may close these issues.

[Bug]: Dimension Labels and Expandable Datasets Edge Cases
