Refactor internally to RawBatch and CleanBatch wrapper types
#57
Conversation
Rather than `Raw` and `Clean`, I'd rather something that was more explicit. `StacArrowBatch` and `StacJsonBatch`, perhaps?
I had considered
I'll be out this week so feel free to merge whenever you're ready.

> On Jun 3, 2024, at 11:12 AM, David Bitner ***@***.***> wrote:
@bitner approved this pull request.
I renamed to `StacArrowBatch` and `StacJsonBatch`
I was adding tests for `parse_stac_ndjson_to_arrow` when I discovered that it was inadvertently broken in #53 due to a refactor in the schema inference. It has been confusing to work with untyped `pyarrow.RecordBatch` classes because they're a black box, and we have two distinct data schemas we're working with: one that is as close to the raw JSON as possible, and another that conforms to our STAC GeoParquet schema.

This PR refactors this internally by adding `RawBatch` and `CleanBatch` wrapper types (open to better naming suggestions, but these are not public, so we can easily rename in the future). Both hold an internal `pyarrow.RecordBatch`, but `RawBatch` aligns as closely as possible to the raw STAC JSON representation, while `CleanBatch` aligns to the STAC GeoParquet schema. These wrapper types make it much easier to reason about the shape of the data at different points in the pipeline.
Change list
- `RawBatch` and `CleanBatch` for more reliable internal typing