RecordBatch normalization (flattening) #6758
base: main
Conversation
… iterative function for `RecordBatch`. Not sure which one is better currently.
I had some questions regarding the implementation of this, since the one example from PyArrow doesn't seem to clarify the edge cases here. Normalizing the `Schema` seems fairly straightforward to me; I'm just not sure on:
- Whether the iterative or recursive approach is better (or something I missed)
- Whether `DataType::Struct` is the only `DataType` that requires flattening. To me, it looks like that's the only one that can contain nested `Field`s.

(I'm also not sure if I'm missing something with unwrapping something like a `List<Struct>`.)
Any feedback/help would be appreciated!
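To make the question concrete, here is a standalone sketch (no arrow-rs dependency; `Ty` and `flatten_names` are illustrative names, not arrow-rs APIs) of why `Struct` is the one nested type that fans out into multiple child columns, while a `List` has a single anonymous child and can be left as one column:

```rust
// Toy type tree mirroring the nesting shapes in Arrow's DataType.
enum Ty {
    Primitive(&'static str),   // e.g. "Int64", "Utf8"
    Struct(Vec<(String, Ty)>), // named child fields: these get flattened
    List(Box<Ty>),             // single element child: kept as one column here
}

// Recursively collect flattened column names, joining nested Struct field
// names with `separator`, descending at most `max_level` levels.
fn flatten_names(prefix: &str, ty: &Ty, separator: &str, max_level: usize, out: &mut Vec<String>) {
    match ty {
        Ty::Struct(children) if max_level > 0 => {
            for (name, child) in children {
                let key = format!("{prefix}{separator}{name}");
                flatten_names(&key, child, separator, max_level - 1, out);
            }
        }
        // Primitives, Lists, and Structs beyond max_level stay as one column.
        _ => out.push(prefix.to_string()),
    }
}

fn main() {
    // Field "a" is a struct with a primitive child and a List<Utf8> child.
    let a = Ty::Struct(vec![
        ("b".to_string(), Ty::Primitive("Int64")),
        ("c".to_string(), Ty::List(Box::new(Ty::Primitive("Utf8")))),
    ]);
    let mut out = Vec::new();
    flatten_names("a", &a, ".", usize::MAX, &mut out);
    println!("{out:?}"); // ["a.b", "a.c"]
}
```

Under this reading, `List<Struct>` would not be unwrapped: the list itself is one column, and only struct fields contribute separator-joined names.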
@kszlim can you please help review this PR? You requested the feature and we are currently quite short on review capacity in arrow-rs
@@ -394,6 +396,56 @@ impl RecordBatch {
    )
}

/// Normalize a semi-structured [`RecordBatch`] into a flat table.
///
/// If max_level is 0, normalizes all levels.
Can you please improve this documentation (maybe copy from the pyarrow version)?
- Document what `max_level` means (in addition to what 0 does)
- Document what `separator` does
- Provide an example of flattening a record batch as a doc example?

For example, like https://docs.rs/arrow/latest/arrow/index.html#columnar-format
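One possible shape for that documentation, sketched below; the wording and the `normalize_stub` item are placeholders standing in for the real `RecordBatch::normalize`, not the merged arrow-rs API:

```rust
/// Normalize a semi-structured batch into a flat table.
///
/// * `separator` - inserted between a parent field name and each child field
///   name, so fields `"a"` and `"b"` become column `"a.b"` with `"."`.
/// * `max_level` - maximum depth of nesting to flatten; `0` flattens all
///   levels (mirroring the pandas/polars default).
fn normalize_stub(separator: &str, max_level: usize) -> String {
    // Placeholder body so the doc comment has an item to attach to.
    format!("separator={separator}, max_level={max_level}")
}

fn main() {
    println!("{}", normalize_stub(".", 0));
}
```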
Ah, missed doing this, will do!
@@ -413,6 +413,81 @@ impl Schema {
    &self.metadata
}

/// Returns a new schema, normalized based on the max_level
/// This carries metadata from the parent schema over as well
Likewise, please document the parameters to this function and add a documentation example
Sounds good, thanks!
I'll take a look, though please feel free to disregard anything I say and especially defer to the maintainers.
if max_level == 0 {
    max_level = usize::MAX;
}
if self.num_rows() == 0 {
Is this worth keeping? I could be reading this wrong, but it looks like there's a lot of code strictly to support normalization for the 0 row case (which is likely very rare)?
Yeah, I'm not sure of the best way to handle this. I think the secondary case can be removed, as I missed that even with a new_empty `RecordBatch` it will still have an (empty) columns field, good catch!

For the `usize::MAX` setting, this was because polars/pandas had this, with a default value of 0. However, since Rust does not have default parameters, I wasn't sure of the best way to adapt this. A possible idea could be to set up something like

enum Depth {
    Default, // All levels
    Value(usize),
}

then do some matching? Might be overkill though. And of course, there's the option of removing it altogether, although then a value of 0 would mean no change?
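The enum idea above could be kept thin by converting to a concrete bound once, so the traversal code never matches on it again; a minimal sketch, with `Depth` and `as_limit` as illustrative names rather than arrow-rs API:

```rust
// A depth argument that makes "flatten everything" explicit at call sites,
// instead of overloading the literal 0.
enum Depth {
    Default,      // flatten all levels
    Value(usize), // flatten at most this many levels
}

impl Depth {
    // Collapse to a plain usize bound once, up front; the recursion then
    // only ever compares/decrements a usize.
    fn as_limit(&self) -> usize {
        match self {
            Depth::Default => usize::MAX,
            Depth::Value(n) => *n,
        }
    }
}

fn main() {
    println!("{}", Depth::Value(2).as_limit()); // 2
    println!("{}", Depth::Default.as_limit() == usize::MAX); // true
}
```

This keeps the public signature free of the `0 == unlimited` convention while costing only one match at the top of `normalize`.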
@@ -413,6 +413,81 @@ impl Schema {
    &self.metadata
}

/// Returns a new schema, normalized based on the max_level
/// This carries metadata from the parent schema over as well
pub fn normalize(&self, separator: &str, mut max_level: usize) -> Result<Self, ArrowError> {
I mean this is unused outside of the 0 row case right?
This was to help with the recursion, since using just the helper function would result in the separator as a prefix in the `field.name()`. I do agree that this is not the best option though; maybe I can count this as a vote against the recursion approach? 😆
DataType::Struct(ff) => {
    // Need to zip these in reverse to maintain original order
    for (cff, fff) in c.as_struct().columns().iter().zip(ff.into_iter()).rev() {
        let new_key = format!("{}{}{}", f.name(), separator, fff.name());
Not sure if there's a better way to structure it, but is there a way to keep the field name parts in a `Vec` and create the flattened fields at the end? That would allow you to avoid the repeated `format!` in a deeply nested schema.
Might not be worth the trouble though.
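A minimal sketch of that suggestion, assuming the traversal pushes one name component per level (the path components and `separator` here are illustrative):

```rust
fn main() {
    let separator = ".";

    // Components accumulated while descending a nested struct,
    // one push per level of nesting.
    let mut parts: Vec<&str> = Vec::new();
    parts.push("a");
    parts.push("b");
    parts.push("c");

    // One allocation at the leaf via join, instead of a fresh `format!`
    // (and intermediate String) at every level of the descent.
    let flat_name = parts.join(separator);
    println!("{flat_name}"); // a.b.c
}
```

On the way back up the recursion, a `parts.pop()` would restore the prefix for the next sibling, so one `Vec` can serve the whole traversal.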
I think this is a good point; this is definitely not my favorite way to do this. I'll have to do some testing and think about it some more, but it may be better to construct the queue with the components of the `Field`, then go through and construct all of the `Field`s at the very end.
No problem at all, it's the holiday season! Hope everyone's taking a good break. Appreciate the feedback though! I'll get to work on it :)
Which issue does this PR close?
Closes #6369.
Rationale for this change
Adds normalization (flattening) for `RecordBatch`, with normalization via `Schema`. Based on pandas/pola-rs.
What changes are included in this PR?
Are there any user-facing changes?