Conversation

@RocMarshal (Contributor) commented Jan 20, 2026

What is the purpose of the change

[FLINK-38943][runtime] Support Adaptive Partition Selection for RescalePartitioner and RebalancePartitioner

Brief change log

Introduce the following:

  • config options
    • taskmanager.network.adaptive-partitioner.enabled
    • taskmanager.network.adaptive-partitioner.max-traverse-size
  • AdaptiveLoadBasedRecordWriter.java, implementing adaptive partition selection
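
As an illustration, the two new options could be set in the Flink configuration like this (option names taken from the change log above; the values shown are examples, not defaults):

```yaml
# Enable adaptive (load-based) subpartition selection for rescale/rebalance.
taskmanager.network.adaptive-partitioner.enabled: true
# Upper bound on how many candidate subpartitions are compared per record.
# Example value; per the review discussion below, a configured value of 1 is not allowed.
taskmanager.network.adaptive-partitioner.max-traverse-size: 4
```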

Verifying this change

This change added tests and can be verified as follows:

  • AdaptiveLoadBasedRecordWriterTest.java

The benchmark for this change is here

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (yes / no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
  • The serializers: (yes / no / don't know)
  • The runtime per-record code paths (performance sensitive): (yes / no / don't know)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
  • The S3 file system connector: (yes / no / don't know)

Documentation

  • Does this pull request introduce a new feature? (yes / no)
  • If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

@flinkbot (Collaborator) commented Jan 20, 2026

CI report:

Bot commands: The @flinkbot bot supports the following commands:
  • @flinkbot run azure re-run the last Azure build

@RocMarshal (Contributor, Author) commented:

Hi, @davidradl @X-czh Could you help take a look ? thx a lot.

@RocMarshal (Contributor, Author) commented:

@flinkbot run azure

@X-czh (Contributor) commented Jan 20, 2026

@RocMarshal Thanks for the quick contribution. I'll take a look later this week.

@RocMarshal (Contributor, Author) left a comment:

Thanks @davidradl for the review.
I updated the related lines based on your comments.
PTAL ~

@github-actions github-actions bot added the community-reviewed PR has been reviewed by the community. label Jan 21, 2026
@RocMarshal RocMarshal requested a review from davidradl January 22, 2026 12:36
@RocMarshal RocMarshal requested a review from davidradl January 26, 2026 09:42
The review comment below refers to this excerpt from the AdaptiveLoadBasedRecordWriter constructor:

```java
        ResultPartitionWriter writer, long timeout, String taskName, int maxTraverseSize) {
    super(writer, timeout, taskName);
    this.numberOfSubpartitions = writer.getNumberOfSubpartitions();
    this.maxTraverseSize = Math.min(maxTraverseSize, numberOfSubpartitions);
```
@davidradl (Contributor) commented Jan 30, 2026:

I am wondering why we need both numberOfSubpartitions and maxTraverseSize. Why not set numberOfSubpartitions to Math.min(maxTraverseSize, numberOfSubpartitions) and remove private final int maxTraverseSize;? Then you would not need to check maxTraverseSize in the logic, as numberOfSubpartitions would always be the minimum, accounting for maxTraverseSize.

Also, in a previous response to a review comment you said maxTraverseSize could not be 1, but it could end up as 1 if numberOfSubpartitions == 1 due to this Math.min. We should probably check for the numberOfSubpartitions == 1 case and not do adaptive processing.

@RocMarshal (Contributor, Author) replied:

Hi, @davidradl
thanks for your comments.

I am wondering why we need both numberOfSubpartitions and maxTraverseSize. Why not set numberOfSubpartitions to Math.min(maxTraverseSize, numberOfSubpartitions) and remove private final int maxTraverseSize;? Then you would not need to check maxTraverseSize in the logic, as numberOfSubpartitions would always be the minimum, accounting for maxTraverseSize.

numberOfSubpartitions represents the number of downstream partitions that can be written to.

maxTraverseSize, on the other hand, represents the maximum number of partitions that the current partition selector can compare when performing rescale or rebalance.

Based on the above, suppose numberOfSubpartitions = 6 and maxTraverseSize = 2. If the two fields were collapsed into one as suggested, the writer would never send data to 4 of the 6 downstream partitions, which is not the expected behavior.
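
To make the distinction concrete, here is a minimal, hypothetical sketch of load-based selection (not Flink's actual AdaptiveLoadBasedRecordWriter; the backlog counter stands in for whatever load metric the real writer uses). The round-robin cursor still ranges over all numberOfSubpartitions, while only maxTraverseSize candidates are compared per record:

```java
/** Hypothetical sketch of adaptive subpartition selection; not Flink's actual code. */
class AdaptiveSelectorSketch {
    private final int[] backlog;       // pending records per subpartition (stand-in load metric)
    private final int maxTraverseSize; // candidates compared per selection, clamped below
    private int nextIndex;             // round-robin cursor over ALL subpartitions

    AdaptiveSelectorSketch(int numberOfSubpartitions, int maxTraverseSize) {
        this.backlog = new int[numberOfSubpartitions];
        // Clamp: comparing more candidates than there are subpartitions is pointless.
        this.maxTraverseSize = Math.min(maxTraverseSize, numberOfSubpartitions);
    }

    /** Compare up to maxTraverseSize candidates starting at the cursor; pick the least loaded. */
    int selectSubpartition() {
        if (maxTraverseSize <= 1) {
            // Degenerate case (e.g. a single subpartition): plain round-robin, no load check.
            int chosen = nextIndex;
            nextIndex = (nextIndex + 1) % backlog.length;
            return chosen;
        }
        int best = nextIndex;
        for (int i = 1; i < maxTraverseSize; i++) {
            int candidate = (nextIndex + i) % backlog.length;
            if (backlog[candidate] < backlog[best]) {
                best = candidate;
            }
        }
        nextIndex = (best + 1) % backlog.length; // cursor keeps moving over all subpartitions
        return best;
    }

    void recordWritten(int subpartition) {
        backlog[subpartition]++;
    }
}
```

Note that even with maxTraverseSize = 2 and six subpartitions, the advancing cursor eventually visits every subpartition; only the per-record comparison window is bounded, which is why the two values cannot be merged.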

Also, in a previous response to a review comment you said maxTraverseSize could not be 1, but it could end up as 1 if numberOfSubpartitions == 1 due to this Math.min. We should probably check for the numberOfSubpartitions == 1 case and not do adaptive processing.

When the number of downstream partitions is 1, setting maxTraverseSize to a value greater than 1 is meaningless, because there is only one downstream partition. No additional traversal or comparison is needed, and the only available partition can be selected directly.
In addition, when the number of downstream partitions is not 1 and the user explicitly sets maxTraverseSize to 1, this means that under this strategy the next partition is selected directly without any load calculation, and data is written to it immediately. This behavior is equivalent to not enabling the adaptive partition feature.

Therefore, when we previously said that maxTraverseSize cannot be 1, we meant that users are not allowed to configure this option with a value of 1. It does not mean that the internal maxTraverseSize cannot be 1. As explained above, when the internal maxTraverseSize becomes 1, it is caused by the number of downstream partitions being 1.

The number of downstream partitions is not always determined by user operations. For example, when a streaming job enables the adaptive scheduler, the parallelism of each operator or task may differ, which can lead to an uncontrollable number of downstream partitions for certain tasks. As a result, maxTraverseSize inside the writer may become 1 in such cases.
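
The clamping described above can be summarized in a tiny hypothetical helper (mirroring the Math.min in the constructor excerpt under review; the helper name is made up for illustration):

```java
/** Hypothetical helper mirroring the clamping discussed above; not Flink's actual code. */
class TraverseSizeSketch {
    static int effectiveTraverseSize(int configuredMaxTraverseSize, int numberOfSubpartitions) {
        // Configuration-time validation forbids setting the option to 1 (per the discussion),
        // but the effective value may still become 1 when there is only one subpartition.
        return Math.min(configuredMaxTraverseSize, numberOfSubpartitions);
    }
}
```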

Please correct me if I'm wrong. Any input is appreciated!
