[Iceberg AddFiles] Allow very large batch import #38039
ahmedabu98 merged 6 commits into apache:master from
Conversation
Summary of Changes (Gemini Code Assist): This pull request refactors the Iceberg AddFiles implementation to support high-volume data imports. By introducing an intermediate step that batches data files into Iceberg Manifest files before committing, the system avoids memory bottlenecks on the committing worker. This change improves stability and scalability for pipelines processing millions of files.
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment
Iceberg unit tests are failing with a weird error. They pass fine locally. I think it might be something related to infra?
"For a streaming pipeline, sets the desired number of appended files per commit. Defaults to 100,000 files. "
    + "A commit is triggered when either this or append triggering interval is reached.")
public abstract @Nullable Integer getAppendBatchSize();

@SchemaFieldDescription("The number of data files per manifest (default 10,000 files).")
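To make the two new knobs concrete, here is a minimal, hedged sketch of how a user might supply them when configuring the sink. The key names (`append_batch_size`, `manifest_batch_size`) and the `table` entry are assumptions derived from the option getters above, not confirmed API:

```java
import java.util.Map;

public class AddFilesConfigSketch {
  public static void main(String[] args) {
    // Hypothetical configuration map for the Iceberg sink; key names are
    // assumptions based on the getters in this PR, not a confirmed contract.
    Map<String, Object> config = Map.of(
        "table", "db.events",            // assumed table identifier key
        "append_batch_size", 100_000,    // files per commit (streaming default)
        "manifest_batch_size", 10_000);  // data files per manifest (default)
    System.out.println(config.get("append_batch_size")); // prints 100000
  }
}
```

The defaults match the descriptions in the diff: 100,000 appended files per commit and 10,000 data files per manifest.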
How/why would an end-user customize this?
To tweak the performance of AddFiles, and the performance of table queries.
sdks/java/io/iceberg/src/main/java/org/apache/beam/sdk/io/iceberg/BeamRowWrapper.java
…files-large-batch
It shows the IcebergIO Integration Tests passed, but something associated with "Test Results" failed? I'm not sure how to read this.
So the tests passed, but publishing the results failed (an issue on the GHA side). Will consider this green and merge.
Cherry pick: #38096
The current implementation gathers all data files onto one worker to commit. This can result in OOMs when importing large numbers of files (e.g. millions).

The new implementation batches data files into manifests, then gathers the manifests onto one worker to commit. This is much more manageable for the single committing worker.
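The gain comes from the intermediate grouping step: the committer sees one manifest per batch instead of one element per data file. A minimal, library-free sketch of that batching idea (the class and method names here are illustrative, not the PR's actual code, which uses Iceberg's manifest APIs):

```java
import java.util.ArrayList;
import java.util.List;

public class ManifestBatchingSketch {
  // Groups data-file paths into "manifest" batches of up to manifestSize
  // entries, so only the much smaller manifest list reaches the committer.
  static List<List<String>> batchIntoManifests(List<String> dataFiles, int manifestSize) {
    List<List<String>> manifests = new ArrayList<>();
    for (int i = 0; i < dataFiles.size(); i += manifestSize) {
      int end = Math.min(i + manifestSize, dataFiles.size());
      manifests.add(new ArrayList<>(dataFiles.subList(i, end)));
    }
    return manifests;
  }

  public static void main(String[] args) {
    List<String> files = new ArrayList<>();
    for (int i = 0; i < 25_000; i++) {
      files.add("s3://bucket/data/file-" + i + ".parquet");
    }
    // With 10,000 files per manifest (the PR's default), 25,000 files
    // collapse into just 3 batches on the committing worker.
    List<List<String>> manifests = batchIntoManifests(files, 10_000);
    System.out.println(manifests.size()); // prints 3
  }
}
```

In the real implementation the inner lists would be written out as Iceberg manifest files rather than held in memory, which is what keeps the committing worker's footprint bounded.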
This PR also reverts the in-depth bucket-partition validation, partly because it is too resource intensive, and also because the Spark AddFiles equivalent performs zero validation.