Limit global test pipeline options to integration tests #38237
aIbrahiim wants to merge 1 commit into apache:master
Conversation
Summary of Changes (Gemini Code Assist): This pull request refines how test pipeline options are handled within the Apache Beam Python SDK. By ensuring that global test pipeline options are only applied when running integration tests, the change prevents unintended configuration leakage during standard unit test execution, addressing the issue identified in #36127.
Would you mind sharing some context? The test has been flaky (though not perma-red) on HEAD. How is this change related to the flaky test?
Sure, happy to add more context; sorry, I forgot to include it in the PR description as well. The stack trace made it look like the ApproximateUnique snippet test was just timing out / flaking, but I don't think the transform itself was the real problem. What was happening is that those snippet tests use TestPipeline under the hood, and TestPipeline was still inheriting the global `--test-pipeline-options` that pytest sets up for the cloud job. So sometimes a tiny unit snippet test would accidentally run with runner settings meant for heavier integration-style runs, and then hit odd gRPC / deadline errors under load, which matches what was found. The fix is basically: only use those global pytest pipeline options for tests that are actually integration tests (`is_integration_test=True`). That way snippet tests stay on the normal local path, and we don't get this accidental coupling to remote runner config. @Abacn
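The guard being proposed can be sketched roughly like this. This is a hypothetical simplification, not the actual Beam source: the class and attribute names only mirror the ones mentioned in this thread, and the flag values are made up.

```python
import shlex


class TestPipeline:
    # Stand-in for the session-wide value pytest captures from
    # --test-pipeline-options (TestPipelines.pytest_test_pipeline_options
    # in the discussion above). The flags here are illustrative only.
    pytest_test_pipeline_options = "--runner=TestDataflowRunner --project=some-project"

    def __init__(self, is_integration_test=False, argv=None):
        self.is_integration_test = is_integration_test
        self.argv = argv

    def resolve_options(self):
        # Explicit per-test argv always wins.
        if self.argv is not None:
            return self.argv
        # Proposed change: only integration tests fall back to the
        # global pytest-provided options.
        if self.is_integration_test and self.pytest_test_pipeline_options:
            return shlex.split(self.pytest_test_pipeline_options)
        # Unit-style tests (e.g. the snippet tests) stay on the
        # default local runner path.
        return []


print(TestPipeline().resolve_options())                          # []
print(TestPipeline(is_integration_test=True).resolve_options())
```

Before the change, the sketch's fallback would apply to every `TestPipeline`, which is the accidental coupling described above.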
I don't see it being set up for cloud, etc. Do you mean TestPipelines.pytest_test_pipeline_options could get mutated by one test and affect other tests? Besides, it's effectively immutable, as it's a str.
DEADLINE_EXCEEDED comes from the tests running on PrismRunner, which involves gRPC; when the Prism runner gets stuck starting up, etc., there is always a vague DEADLINE_EXCEEDED error.
Yeah, that YAML line doesn't pass cloud runner flags, and the "cloud" bit is indirect: preCommitPy… runs both the normal tox env and the py*-cloud one (wired up in the tox Gradle common.gradle), so it's not obvious from the workflow file alone. And no, I'm not saying the string gets mutated between tests. pytest_test_pipeline_options is basically "whatever pytest got for `--test-pipeline-options` at session start." My change is just about not using that global fallback for a regular unit-style TestPipeline unless it's actually an integration test (`is_integration_test`).
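The "whatever pytest got at session start" behavior can be illustrated with a small argparse stand-in. This is not Beam's actual conftest, just a sketch of the capture semantics being described:

```python
import argparse


def capture_test_pipeline_options(argv):
    """Mimic a conftest reading --test-pipeline-options once at session start."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--test-pipeline-options", default="", dest="opts")
    # parse_known_args ignores the rest of the pytest command line.
    known, _rest = parser.parse_known_args(argv)
    return known.opts


# Whatever the pytest invocation passed is frozen for the whole session...
session_opts = capture_test_pipeline_options(
    ["--test-pipeline-options=--runner=PrismRunner", "-k", "snippets"])
print(session_opts)  # --runner=PrismRunner

# ...and an invocation without the flag leaves it empty, matching the
# "stays empty all the time" observation from the precommit logs below.
print(repr(capture_test_pipeline_options(["-k", "snippets"])))  # ''
```

Nothing mutates the string after session start; the question is only which tests consult it as a fallback.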
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment |
After checking the logs / code, I don't think this is relevant. The PreCommit runs four independent pytest commands, initiated from run_pytest.sh. There are duplicate commands because one runs without extensions (from the normal tox target) and one with cloud dependency extensions (from the *-cloud tox target). TestPipelines.pytest_test_pipeline_options stays empty the whole time. I don't think we want to change the behavior of the flag here; it may break internal / downstream tests.
Yeah, thanks, that makes sense. My investigation assumed global options were leaking, and the logs you showed say they're not in this precommit path, so I agree the TestPipeline change isn't the right fix here.
Fixes: #36127
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
- Update CHANGES.md with noteworthy changes.
- See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.