Snowflake bounded source + Flink runner fix #38064
turb wants to merge 2 commits into apache:master
Conversation
Commits:
- … are ready before launching subtasks
- …ed files generated by Snowflake COPY.
Summary of Changes (Gemini Code Assist): This pull request improves the scalability of SnowflakeIO by introducing a BoundedSource, which allows for more efficient parallel processing of staged CSV files. Additionally, it addresses a race condition in the Flink runner's source split enumeration, ensuring that subtasks correctly wait for splits to become available.
Assigning reviewers: R: @ahmedabu98 for label java. Note: the PR bot will only process comments in the main thread (not review comments).
`SnowflakeIO` has several steps, including a `COPY` that outputs partitioned gzipped CSV files in a directory, followed by reading and parsing those staged files. While the first two steps are done by one worker, the last two (reading and parsing) can be parallelized.
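The core idea of the bounded-source approach is that, once the `COPY` completes, the list of staged files can be divided into splits that downstream readers consume in parallel. A minimal sketch in plain Java (hypothetical names; this is not the actual `SnowflakeBoundedSource` code, just an illustration of assigning staged files to reader splits):

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch: distribute staged CSV files produced by a Snowflake
 * COPY across a desired number of reader splits, round-robin.
 */
public class StagedFileSplitter {

    /** Returns {@code desiredSplits} sub-lists; each reader processes one sub-list. */
    public static List<List<String>> split(List<String> stagedFiles, int desiredSplits) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < desiredSplits; i++) {
            splits.add(new ArrayList<>());
        }
        // Round-robin assignment keeps splits balanced even if file sizes are similar.
        for (int i = 0; i < stagedFiles.size(); i++) {
            splits.get(i % desiredSplits).add(stagedFiles.get(i));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<String> files =
            List.of("data_0_0_0.csv.gz", "data_0_0_1.csv.gz", "data_0_0_2.csv.gz");
        List<List<String>> splits = split(files, 2);
        System.out.println(splits); // two splits covering all three files
    }
}
```

In Beam terms, this corresponds roughly to what a `BoundedSource.split` implementation does, which is what lets a runner fan the read out across workers instead of funneling everything through the single worker that ran the `COPY`.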
It appears that Google Dataflow is able to do that (using work stealing?), but Apache Flink (with `--useDataStreamForBatch=true`) propagates the parallelism of steps 1 and 2 to steps 3 and 4, leading to very long processing times for work that could be fully scalable.

This change creates a `SnowflakeBoundedSource`, instead of a simple `DoFn`, to execute the `COPY` and then read the splits.

Doing that surfaced a bug: a race between the appearance of those splits and their reading. It is solved by a change in `LazyFlinkSourceSplitEnumerator` that makes subtasks wait for the splits to be ready. I tested that this still works on Google Dataflow.
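The essence of the race fix can be modeled with a blocking split queue: subtasks that request a split before enumeration has produced any should wait, rather than concluding there is no work and finishing early. The sketch below uses plain Java monitors and hypothetical names (`WaitingSplitAssigner`, `signalNoMoreSplits`); it is a simplified model of the behavior, not the actual `LazyFlinkSourceSplitEnumerator` implementation:

```java
import java.util.ArrayDeque;
import java.util.Optional;
import java.util.Queue;

/**
 * Simplified model of the race fix: a reader subtask blocks in nextSplit()
 * until either a split is available or the enumerator declares it is done,
 * instead of racing ahead of split discovery.
 */
public class WaitingSplitAssigner {
    private final Queue<String> splits = new ArrayDeque<>();
    private boolean done = false;

    /** Called by the enumerator as splits are discovered (e.g. after the COPY). */
    public synchronized void addSplit(String split) {
        splits.add(split);
        notifyAll();
    }

    /** Called once enumeration is complete; wakes any waiting subtasks. */
    public synchronized void signalNoMoreSplits() {
        done = true;
        notifyAll();
    }

    /** Blocks until a split is available or enumeration has finished. */
    public synchronized Optional<String> nextSplit() {
        while (splits.isEmpty() && !done) {
            try {
                wait(); // subtask waits for splits to be ready
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return Optional.empty();
            }
        }
        return Optional.ofNullable(splits.poll());
    }

    public static void main(String[] args) throws Exception {
        WaitingSplitAssigner assigner = new WaitingSplitAssigner();
        // Producer: splits only appear after the (slow) COPY has staged files.
        Thread producer = new Thread(() -> {
            assigner.addSplit("stage/file_0_0_0.csv.gz");
            assigner.addSplit("stage/file_0_0_1.csv.gz");
            assigner.signalNoMoreSplits();
        });
        producer.start();
        // Consumer: blocks until splits exist, instead of finishing early.
        Optional<String> s;
        while ((s = assigner.nextSplit()).isPresent()) {
            System.out.println("reading " + s.get());
        }
        producer.join();
    }
}
```

Without the wait loop, a subtask that polled before the producer ran would see an empty queue and terminate, which is exactly the race the PR describes between the apparition of the splits and their reading.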