Data: Add partition stats writer and reader #11216
base: main
Conversation
Force-pushed from 941505a to 05a80f6.
@Override
@SuppressWarnings("checkstyle:CyclomaticComplexity")
public boolean equals(Object other) {
StructLikeMap was previously handling this implicitly. But now that PartitionStatsRecord wraps PartitionStats for the writers, it needs to override equals and hashCode.
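The point about the wrapper needing value-based equals and hashCode can be illustrated with a minimal sketch. The classes below (Stats, StatsRecord) are hypothetical stand-ins, not the actual Iceberg PartitionStats/PartitionStatsRecord: the wrapper simply delegates equality and hashing to the wrapped object so hash-based maps treat two wrappers of equal stats as the same key.

```java
import java.util.Objects;

public class EqualsSketch {
    // Hypothetical stand-in for PartitionStats: a plain value holder.
    static final class Stats {
        final String partition;
        final long rowCount;

        Stats(String partition, long rowCount) {
            this.partition = partition;
            this.rowCount = rowCount;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Stats)) return false;
            Stats that = (Stats) o;
            return rowCount == that.rowCount && Objects.equals(partition, that.partition);
        }

        @Override
        public int hashCode() {
            return Objects.hash(partition, rowCount);
        }
    }

    // Hypothetical wrapper: without these overrides, identity equality would
    // make every wrapper a distinct map key even for equal stats.
    static final class StatsRecord {
        final Stats wrapped;

        StatsRecord(Stats wrapped) {
            this.wrapped = wrapped;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof StatsRecord)) return false;
            return wrapped.equals(((StatsRecord) o).wrapped);
        }

        @Override
        public int hashCode() {
            return wrapped.hashCode();
        }
    }

    static boolean sameKey(StatsRecord a, StatsRecord b) {
        return a.equals(b) && a.hashCode() == b.hashCode();
    }

    public static void main(String[] args) {
        StatsRecord a = new StatsRecord(new Stats("p=1", 10));
        StatsRecord b = new StatsRecord(new Stats("p=1", 10));
        System.out.println(sameKey(a, b)); // distinct wrappers, equal stats
    }
}
```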
StructLike coercedPartition =
    PartitionUtil.coercePartition(partitionType, spec, file.partition());
StructLike key = keyTemplate.copyFor(coercedPartition);
Record key = coercedPartitionRecord(file, spec, partitionType);
Need Record instead of PartitionData for the writers. Cannot keep this conversion in the data module, as it just wraps the same PartitionStats object.
/** Wraps the {@link PartitionStats} as {@link Record}. Used by generic writers and readers. */
public class PartitionStatsRecord implements Record, StructLike {
  private static final LoadingCache<StructType, Map<String, Integer>> NAME_MAP_CACHE =
Class is similar to GenericRecord, but for a specific partition stats schema.
I'm a little confused why we need a special class for this? GenericRecord should work right? Also Record already implements StructLike so that's unnecessary
I'm a little confused why we need a special class for this? GenericRecord should work right?
I got a review comment previously from Anton that keeping the Record in the public interface of the writers and readers is fragile. So, a new class was introduced which is less fragile (coupled with the partition stats schema and just wraps the PartitionStats).
I'm not sure why we need a special class here still? His comment is just to remove Record from the public interface which has been done. I don't think creating a new special class (which is public) is necessary since the records only exist within private handler code? - @aokolnychyi was in Europe last I checked but when he is back he can check it out.
Also, GenericRecord can't wrap the PartitionStats; it maintains its own data array.
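The distinction being made here is copy vs. view semantics. A minimal sketch with hypothetical stand-in classes (not the actual GenericRecord or PartitionStats): a GenericRecord-style class copies values into its own array at construction, so later changes to the stats are invisible through it, while a delegating wrapper reads through to the underlying object.

```java
public class WrapVsCopy {
    // Hypothetical mutable stats object.
    static final class Stats {
        long rowCount;

        Stats(long rowCount) {
            this.rowCount = rowCount;
        }
    }

    // Copy-based record (how a GenericRecord-style class behaves):
    // values are copied into the record's own array at construction time.
    static final class CopyRecord {
        private final Object[] values;

        CopyRecord(Stats stats) {
            this.values = new Object[] {stats.rowCount};
        }

        Object get(int pos) {
            return values[pos];
        }
    }

    // View-based record: delegates to the wrapped stats, so later updates
    // to the stats are visible through the record.
    static final class ViewRecord {
        private final Stats stats;

        ViewRecord(Stats stats) {
            this.stats = stats;
        }

        Object get(int pos) {
            return stats.rowCount;
        }
    }

    public static void main(String[] args) {
        Stats stats = new Stats(1);
        CopyRecord copy = new CopyRecord(stats);
        ViewRecord view = new ViewRecord(stats);
        stats.rowCount = 2;
        System.out.println(copy.get(0)); // snapshot taken at construction
        System.out.println(view.get(0)); // reads through to the stats
    }
}
```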
Schema schema,
PartitionSpec spec,
int formatVersion,
Map<String, String> properties) {
There was no option to pass the table properties before.
Needed to pass a different file format for the parameterized test.
@aokolnychyi: This PR is ready. But as we discussed previously, this PR wraps the PartitionStats. I will explore adding the internal writers for Parquet and Orc, similar to #11108.
@RussellSpitzer: It would be good to have this in 1.7.0.
I already tried a POC for internal writers on another branch. The problems: b) Also, using PartitionData in StructLikeMap is not working fine; some keys are missing in the map (looks like the equals() logic). If I use Record, it is fine. Maybe in the next version we can have an optimized writer and reader (without converters, using internal readers and writers).
PartitionStats that = (PartitionStats) other;
return Objects.equals(partition, that.partition)
StructLike doesn't have equals; I think you need to use a StructLike comparator here.
We are storing a Record as the partition, from coercedPartitionRecord(). Since GenericRecord has equals implemented, calling Objects.equals works here. Hence, I didn't add comparator logic.
But I agree that one needs to understand the implementation to see that; just by looking at this class, it looks like we need comparator logic. I can update it if it is necessary.
I also tried adding the comparator logic today, passing the comparator of the partition type.
Since we are converting the partition values for the writer in PartitionStatsHandler.statsToRecords(), the comparator expects an integer value for a date column, but we have converted the values to LocalDate, hence the comparison fails.
If I don't use the comparator, Record.equals() is called, which does an array compare and passes.
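The failure mode described above can be sketched without any Iceberg dependency. This is a hypothetical reconstruction (DATE_AS_INT stands in for a type comparator over the internal date representation, days since epoch): the comparator breaks once values are converted to LocalDate, while plain array equality, as a Record's equals would do, still passes.

```java
import java.time.LocalDate;
import java.util.Arrays;
import java.util.Comparator;

public class DateCompareSketch {
    // Hypothetical comparator for a date column whose internal representation
    // is days since epoch: it expects Integer values.
    static final Comparator<Object> DATE_AS_INT =
        Comparator.comparingInt(v -> (Integer) v);

    static boolean comparatorSaysEqual(Object a, Object b) {
        try {
            return DATE_AS_INT.compare(a, b) == 0;
        } catch (ClassCastException e) {
            return false; // converted values (LocalDate) break the comparator
        }
    }

    // Record-style equality: plain array comparison of the field values,
    // which works regardless of whether the value is Integer or LocalDate.
    static boolean arrayEquals(Object[] a, Object[] b) {
        return Arrays.equals(a, b);
    }

    public static void main(String[] args) {
        Object converted = LocalDate.ofEpochDay(10956); // converted value
        System.out.println(comparatorSaysEqual(converted, converted)); // fails
        System.out.println(arrayEquals(
            new Object[] {converted}, new Object[] {converted})); // passes
    }
}
```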
Sorry, but we can't assume a subtype here unless you want to assert and change the field type above. If we say something is StructLike, we can't assume it behaves like a Record (even if, given the current code, we know it won't misbehave). If you want it to be a Record, you need to cast it and assert earlier in the class.
Agree. I have added the assert (Preconditions) to make sure it is always of type Record. Also added a comment on why the type is kept as StructLike instead of Record when it is always a Record:
in the future, when we introduce internal Parquet writers that work with StructLike instead of Record, we won't have to change the method signatures and it will stay compatible.
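A minimal sketch of this design choice, with hypothetical stand-in types (not the Iceberg interfaces): the field stays typed as StructLike for forward compatibility with internal writers, while a runtime check (standing in for Preconditions.checkArgument) enforces that today's generic path always receives a Record.

```java
public class SetterCheckSketch {
    // Hypothetical stand-in for the StructLike interface.
    interface StructLike {
        Object get(int pos);
    }

    // Hypothetical stand-in for a generic Record implementation.
    static final class GenericRecord implements StructLike {
        private final Object[] values;

        GenericRecord(Object... values) {
            this.values = values;
        }

        @Override
        public Object get(int pos) {
            return values[pos];
        }
    }

    // Kept as StructLike so future internal writers (which pass plain
    // StructLike) won't require a signature change.
    private StructLike partition;

    // The current generic path requires a Record, so the setter asserts
    // the subtype up front instead of assuming it later.
    void setPartition(StructLike value) {
        if (!(value instanceof GenericRecord)) {
            throw new IllegalArgumentException(
                "partition must be a GenericRecord for the generic writer path");
        }
        this.partition = value;
    }

    StructLike partition() {
        return partition;
    }

    public static void main(String[] args) {
        SetterCheckSketch holder = new SetterCheckSketch();
        holder.setPartition(new GenericRecord(10956));
        System.out.println(holder.partition().get(0));
    }
}
```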
data/src/test/java/org/apache/iceberg/data/TestPartitionStatsHandler.java
Moving out of 1.7.0 since we still have a bit of discussion here.
Force-pushed from 05a80f6 to ee3b273.
@RussellSpitzer: I have added the assertion for the partition type as you suggested and replied to #11216 (comment). Do you have any more comments for this PR?
I had a conversation with @rdblue today about internal writers. Ryan should have a bit of time to help/guide.
@@ -205,6 +211,8 @@ public <T> T get(int pos, Class<T> javaClass) {
public <T> void set(int pos, T value) {
  switch (pos) {
    case 0:
      Preconditions.checkArgument(
It feels a bit awkward to rely on Record for a nested field while the main object is simply StructLike.
I can keep the member as Record instead of StructLike to avoid this. But since we plan to use internal writers in the future (which use StructLike), we would lose compatibility if we keep the member as Record instead of StructLike.
I don't think it is too awkward, as Record implements StructLike.
@VisibleForTesting
static Iterator<PartitionStatsRecord> statsToRecords(
We are doing a lot of logic here that wouldn't be needed with internal readers and writers. Let's at least estimate the amount of work to get the internal writer for Avro, to begin with. Any thoughts, @rdblue?
Let's at least estimate the amount of work to get the internal writer for Avro
This PR needs internal writers for Parquet and Orc as well, not just Avro.
Considering 1.8.0 is planned for the end of this month and we have holidays coming up next month, I don't want to miss the release train again (like 1.7.0).
We have been waiting for partition stats for a long time (almost a year), and this PR is implemented based on what is available in current Iceberg. I too agree that having internal writers will be nice, but they can be added in the future, and the PR is designed such that we can replace the current writers with internal writers without losing compatibility.
So, I don't see a reason to block the development of this feature.
Merging this PR will complete the milestone for partition stats.
And regarding the effort for internal writers, I tried a POC last time just for Parquet: ajantha-bhat@c209bc9.
It introduced GenericStructParquetWriter, used by GenericStructFileWriterFactory.
My doubt was this: as it uses BaseParquetWriter, which expects LocalDate for the date type instead of int (and so on for other types), should we use BaseParquetWriter with a converter for internal writers, or go and refactor ParquetValueWriters and ColumnWriter?
Also, if we use writers with converters, the StructLike comparator will fail, as it needs an int for date but the final value is a LocalDate.
Do we really need internal writers here? I understand it's an improvement, but I don't think it's a blocker for this PR. The effort has been ongoing for a long time now; I would be more in favor of moving forward soon and planning the internal writers improvement as a second step.
@rdblue, @aokolnychyi, @RussellSpitzer: Can we please conclude on this?
ping @rdblue, @aokolnychyi, @RussellSpitzer
@RussellSpitzer @aokolnychyi I'm reviewing the stale PRs, and this one has been open for months. Do we have a way to move forward? I can do a new review, but at the end of the day, it won't help for the merge (as only committers can merge PRs).
Thanks @ajantha-bhat for your work on partition stats support in Iceberg! That could be reused in Hive as a building block for apache/hive#5498.
@danielcweeks @RussellSpitzer @aokolnychyi would you have some time to take a look at this PR and my proposal (previous comment)?
I just found this PR as I'm desperately looking for this functionality. Thanks @ajantha-bhat! Let's see if the review gets wrapped soon 🤞
* @param branch A branch information to select the required snapshot.
* @return {@link PartitionStatisticsFile} for the given branch.
*/
public static PartitionStatisticsFile computeAndWriteStatsFile(Table table, String branch) {
Wouldn't it be better not to have fat methods with multiple responsibilities? What if we introduce a write method that takes the stats iterator as an argument?
We might not need to execute the complete stats rebuild for all the registered partitions, but only for those that changed in the current snapshot.
The snapshot summary already has a metric for the number of changed partitions; maybe we could extend it with the partition list and re-compute stats only for them, then generate a new stats file based on the previous snapshot's stats with updates to the changed partitions.
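The incremental idea above is essentially a map merge. A minimal sketch under the assumption that stats can be modeled as a partition-path-to-row-count map (the real PartitionStats carry many more fields; mergeStats is a hypothetical helper, not an Iceberg API): start from the previous snapshot's stats and overwrite only the partitions the new snapshot changed.

```java
import java.util.HashMap;
import java.util.Map;

public class IncrementalStatsSketch {
    // Hypothetical incremental recompute: copy the previous snapshot's stats,
    // then overwrite only the entries for partitions the new snapshot changed,
    // instead of rebuilding stats for every registered partition.
    static Map<String, Long> mergeStats(
            Map<String, Long> previousStats,
            Map<String, Long> recomputedChangedPartitions) {
        Map<String, Long> merged = new HashMap<>(previousStats);
        merged.putAll(recomputedChangedPartitions);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Long> prev = new HashMap<>();
        prev.put("day=2024-01-01", 100L);
        prev.put("day=2024-01-02", 200L);

        Map<String, Long> changed = new HashMap<>();
        changed.put("day=2024-01-02", 250L); // only this partition changed

        // Unchanged partitions keep their old stats; changed ones are replaced.
        System.out.println(mergeStats(prev, changed));
    }
}
```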
Hi @ajantha-bhat, could you please check below: it seems that date, time, and timestamp partition values are not properly serialized.
PartitionSpec.partitionToPath(PartitionStatsRecord.unwrap().partition()) throws an exception.
I think, instead of Record(1999-12-31), it should be Record{10956} (full code snippet).
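The expected value in that report follows from dates being represented internally as days since the Unix epoch (1970-01-01). A quick check of the arithmetic with java.time:

```java
import java.time.LocalDate;

public class EpochDayCheck {
    public static void main(String[] args) {
        // A date partition value stored as days since 1970-01-01:
        // 1999-12-31 corresponds to the integer 10956, not a LocalDate.
        long epochDay = LocalDate.of(1999, 12, 31).toEpochDay();
        System.out.println(epochDay); // 10956
        System.out.println(LocalDate.ofEpochDay(10956)); // 1999-12-31
    }
}
```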
@deniskuzZ: Thanks for testing it out. We are working on internal Parquet/Avro/Orc readers and writers; partition stats will use them, so we won't need to go through these converters. I will retest all the data types once I use internal writers for partition stats.
JFYI, once I removed
Introduce APIs to write the partition stats into files in the table's default format, using Iceberg generic writers and readers.