
[PWCI] "[v1] dts: refactor flow suite with generator pattern" #600

Open
ovsrobot wants to merge 1 commit into main from series_36976

Conversation

@ovsrobot ovsrobot commented Jan 6, 2026

NOTE: This is an auto submission for "[v1] dts: refactor flow suite with generator pattern".

See "http://patchwork.dpdk.org/project/dpdk/list/?series=36976" for details.

Summary by Sourcery

Refactor the rte_flow test suite to use a data-driven generator pattern that systematically combines protocol layers and actions, and expand coverage to additional protocols and actions while centralizing result reporting.

New Features:

  • Introduce declarative layer, field, and action specifications to auto-generate rte_flow test cases across multiple protocol stacks, including new ICMP, SCTP, and ARP coverage.
  • Add confidence-check logic to verify non-matching packets are handled correctly before running queue, drop, and modify action tests.
  • Implement a comprehensive test result tracking and summary mechanism for the rte_flow suite that records per-case outcomes and logs suite-level statistics.

Enhancements:

  • Replace multiple specialized rte_flow tests with a unified generator-based framework that exercises queue, drop, modify_field, and jump actions over a wide set of multi-layer patterns.
  • Simplify and consolidate queue, drop, modify, and jump test logic into reusable helpers that share TestPmd setup, flow creation/validation, packet sending, and verification workflows.

Summary by CodeRabbit

Release Notes

  • Refactor
    • Restructured RTE Flow test framework with a data-driven, multi-layer architecture. Replaced monolithic test definitions with a generator-based approach supporting flexible combinatorial test case creation. Introduces structured abstractions along with standardized packet building and verification utilities for improved test maintainability and extensibility.


Refactor current flow test suite to utilize a generator
method, which dynamically creates flow rules from a
dictionary of pattern and action fields during runtime.
This allows for much more extensive testing of the flow
API without exponential code growth.

Signed-off-by: Dean Marx <dmarx@iol.unh.edu>
Signed-off-by: 0-day Robot <robot@bytheb.org>
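The dictionary-driven generation the commit message describes can be sketched roughly as follows. This is illustrative only: the dictionary layout, helper name, and field values are hypothetical, not the suite's actual API.

```python
from itertools import product

# Hypothetical pattern spec: one match field per protocol layer,
# each with a list of candidate test values.
PATTERN_FIELDS = {
    "ipv4": {"src": ["192.168.0.1", "10.0.0.1"]},
    "udp": {"dport": [53, 4789]},
}

def generate_flow_rules(pattern_fields):
    """Yield one testpmd-style pattern string per combination of values."""
    per_layer = []
    for proto, fields in pattern_fields.items():
        # With one field per layer, flattening fields x values is
        # equivalent to taking the product over that field's values.
        per_layer.append(
            [f"{proto} {name} is {value}"
             for name, values in fields.items()
             for value in values]
        )
    for combo in product(*per_layer):
        yield "eth / " + " / ".join(combo)

rules = list(generate_flow_rules(PATTERN_FIELDS))
# 2 ipv4 values x 2 udp values -> 4 generated patterns
```

Adding a value to either layer grows the matrix multiplicatively, which is why the commit argues this avoids "exponential code growth": the code stays fixed while the data expands.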

sourcery-ai bot commented Jan 6, 2026

Reviewer's Guide

Refactors the rte_flow DTS test suite to a generator-based framework that models protocol layers and actions as data objects, auto-generates multi-layer flow rules and packets, centralizes TestPmd interaction and verification, and consolidates many specialized tests into a smaller set of parameterized tests with result tracking and confidence checks.

File-Level Changes

Change Details Files
Introduce data-driven abstractions (layers, actions, test cases, results) and a generator to build packets and flow rules combinatorially across multiple protocol stacks.
  • Add PatternField, Layer, Action, FlowTestCase, and FlowTestResult dataclasses to describe match fields, protocol layers, actions, and per-test results.
  • Define LAYERS and LAYER_STACKS covering eth, vlan, ipv4/ipv6, TCP/UDP/SCTP, ICMP, and ARP with associated scapy classes and test values.
  • Define ACTIONS for queue, drop, and modify_field behaviors, including builders for flow action strings, verification parameters, and expected packets.
  • Implement FlowTestGenerator that, given a layer stack and action, computes cartesian products of fields and values, builds flow rules, scapy packets, and expected packets, and returns FlowTestCase objects.
dts/tests/TestSuite_rte_flow.py
Replace the old per-feature runner and helper verification functions with a unified execution engine that uses the new generator and tracks test results.
  • Remove the _runner helper, zip_lists helper, and specialized send/verify helpers in favor of generic _run_tests and per-action verification methods (_verify_queue, _verify_drop, _verify_modify).
  • Centralize TestPmd session handling inside _run_tests, including validation, creation, deletion of flow rules, and dispatch to the appropriate verification method based on test_case.verification_type.
  • Introduce FlowTestResult storage on the test suite, plus logging helpers _log_test_suite_summary and _log_test_case_failures to report passed, skipped, and failed patterns.
  • Handle flow rule validation failures and InteractiveCommandExecutionError by marking tests skipped/failed instead of immediately failing the entire suite, aggregating problems per-pattern.
dts/tests/TestSuite_rte_flow.py
Add pre-test confidence checks to ensure baseline forwarding behavior for drop, queue, and modify actions before running generated tests.
  • Implement _run_confidence_check that creates a simple flow rule per action type and sends a deliberately non-matching packet to assert it is not dropped, mis-queued, or modified.
  • Use TestPmd.verbose output for queue confidence checks to ensure non-matching traffic is not steered to the target queue.
  • Invoke _run_confidence_check from queue_action, drop_action, and modify_field_action before generating and executing the main test matrix.
dts/tests/TestSuite_rte_flow.py
Consolidate many specific test methods into a smaller set of parameterized tests that iterate over multiple layer stacks and actions using the generator.
  • Replace queue_action_ETH/IP/L4/VLAN with a single queue_action test that iterates over LAYER_STACKS, generating queue tests for each stack and queue index 2.
  • Replace drop_action_ETH/IP/L4/VLAN with a single drop_action test that iterates over LAYER_STACKS with the drop action.
  • Replace modify_actions with modify_field_action, which uses the generator to test IPv4 and MAC modify_field operations across selected stacks and group 1 flows.
  • Refactor jump_action into a more focused scenario that explicitly tests a two-group pipeline (group 0 jump to group 1 queue) with matching and non-matching packets and integrates with new result tracking.
  • Remove the egress_rules and priority_attribute tests from the suite, focusing this refactor on ingress multi-layer, modify, and jump behaviors.
dts/tests/TestSuite_rte_flow.py
Adjust imports, payload markers, and minor behavior details to align with the new framework and protocol coverage.
  • Update imports to include ICMP, ARP, SCTP and dataclasses, and remove unused typing and helper imports (Iterator, zip_longest, verify_else_skip).
  • Change Raw payload markers from "xxxxx" to more structured markers like "X" * N or "CONFIDENCE"+padding, and update matching logic to search for these markers via Raw layer and bytes payload.
  • Extend IPv6 handling in _build_multi_layer_packet to set next-header (nh) field appropriately for TCP/UDP/SCTP so generated patterns are consistent.
  • Use SkippedTestException instead of verify_else_skip to signal skipped tests from within _run_tests, recording them in FlowTestResult rather than short-circuiting via assertions.
dts/tests/TestSuite_rte_flow.py
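The cartesian-product step described in the guide above can be approximated in a standalone sketch. `PatternField` mirrors the dataclass named in the summary, but the field values and the pairing of flow-rule fragments with packet keyword arguments are illustrative assumptions, not the suite's actual implementation.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class PatternField:
    # Name used in the flow-rule pattern vs. the scapy packet builder.
    flow_name: str
    packet_name: str
    test_values: list

# Illustrative fields for a single TCP layer (values invented for the sketch).
fields = [
    PatternField("src", "sport", [1234, 8080]),
    PatternField("dst", "dport", [80, 443]),
]

def generate_cases(proto, fields):
    """Yield (flow_rule_fragment, packet_kwargs) per value combination."""
    for values in product(*(f.test_values for f in fields)):
        fragment = " ".join(
            f"{f.flow_name} is {v}" for f, v in zip(fields, values)
        )
        kwargs = {f.packet_name: v for f, v in zip(fields, values)}
        yield f"{proto} {fragment}", kwargs

cases = list(generate_cases("tcp", fields))
# 2 x 2 = 4 cases, each pairing a rule fragment with matching packet kwargs
```

Keeping the flow-rule name and the scapy field name side by side in one record is what lets the generator build the rule and the matching packet from the same data, so the two can never drift apart.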



coderabbitai bot commented Jan 6, 2026

📝 Walkthrough

Walkthrough

The RTE Flow test suite underwent a comprehensive refactoring, transitioning from hardcoded test paths to a data-driven, generator-based architecture. New dataclass abstractions (PatternField, Layer, Action, FlowTestCase, FlowTestResult) and a FlowTestGenerator class enable combinatorial test case generation. Packet builder helpers and reorganized lifecycle hooks replace previous monolithic methods.

Changes

Cohort / File(s) Change Summary
RTE Flow Test Architecture Refactoring
dts/tests/TestSuite_rte_flow.py
Added dataclass-based abstractions (PatternField, Layer, Action, FlowTestCase, FlowTestResult) to enable data-driven test design. Introduced FlowTestGenerator class for combinatorial test case generation. Added layer/action definitions (LAYERS, ACTIONS, LAYER_STACKS) and packet builder helpers. Replaced monolithic test class structure with generator-based entry points and lifecycle hooks (set_up_suite, tear_down_suite). Updated imports and exception handling for SkippedTestException and consolidated verification logic.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Data-driven dreams now flow,
Layer upon action, stack by stack,
From hardcoded paths to generative glow,
The test suite springs forward—no looking back! ✨
Combinatorial magic makes each case bloom,
In one file's wise architectural boom! 🌱

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and specifically describes the main change: refactoring the flow test suite to use a generator pattern.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 4 issues and left some high-level feedback:

  • In _run_tests, when flow_validate returns False you mark the result as skipped but then fall through and still attempt flow_create and verification; add a continue (or similar) after setting the skip state so invalid rules are not created or executed.
  • The drop verification in _verify_drop looks for a Raw payload containing b"XXXXX", but the generator builds packets with "X" * 32 and no marker, so the helper will report packets as dropped even when they are received; align the payload marker used in generated packets and in the drop check.
  • The summary and failure aggregation in _log_test_case_failures hardcode the message "Flow rule passed validation but failed creation" even for other failure reasons (e.g. validation failure or verification failure); consider using the stored failure_reason per result to make the summary accurate and avoid misleading messages.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `_run_tests`, when `flow_validate` returns `False` you mark the result as skipped but then fall through and still attempt `flow_create` and verification; add a `continue` (or similar) after setting the skip state so invalid rules are not created or executed.
- The drop verification in `_verify_drop` looks for a `Raw` payload containing `b"XXXXX"`, but the generator builds packets with `"X" * 32` and no marker, so the helper will report packets as dropped even when they are received; align the payload marker used in generated packets and in the drop check.
- The summary and failure aggregation in `_log_test_case_failures` hardcode the message "Flow rule passed validation but failed creation" even for other failure reasons (e.g. validation failure or verification failure); consider using the stored `failure_reason` per result to make the summary accurate and avoid misleading messages.
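The per-reason aggregation suggested in the last comment could look like the following sketch, with plain dict records standing in for `FlowTestResult` (the record shape and helper name are assumptions for illustration):

```python
from collections import defaultdict

def summarize(results):
    """Group result records by their stored failure_reason instead of a
    hardcoded message, so the summary reflects what actually went wrong."""
    by_reason = defaultdict(list)
    for r in results:
        if r.get("failure_reason"):
            by_reason[r["failure_reason"]].append(r["pattern"])
    return dict(by_reason)

results = [
    {"pattern": "eth/ipv4/udp", "failure_reason": "Flow rule failed validation"},
    {"pattern": "eth/ipv4/tcp", "failure_reason": "Verification failed"},
    {"pattern": "eth/vlan/ipv4", "failure_reason": None},  # passed, not reported
]
summary = summarize(results)
```

Each distinct failure reason then gets its own line in the log, instead of every failure being labeled as a creation failure.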

## Individual Comments

### Comment 1
<location> `dts/tests/TestSuite_rte_flow.py:34-43` </location>
<code_context>
-                except InteractiveCommandExecutionError:
-                    log("Flow rule validation passed, but flow creation failed.")
-                    fail("Failed flow creation")
+@requires_nic_capability(NicCapability.FLOW_CTRL)
+class TestRteFlow(TestSuite):
+    """RTE Flow test suite.

-                if verification_method == self._send_packet_and_verify:
-                    verification_method(packet=packet, *args, **kwargs)
+    This suite consists of 4 test cases:
+    1. Queue Action: Verifies queue actions with multi-layer patterns
+    2. Drop Action: Verifies drop actions with multi-layer patterns
</code_context>

<issue_to_address>
**suggestion (testing):** Previous egress and priority tests have been removed and are no longer covered by the new suite

The new generator-based suite limits coverage to queue, drop, modify_field, and jump, and no longer appears to exercise any `direction="egress"` flows or variations in `priority_level` and their impact on queue selection.

If egress rules and priority handling remain supported/important for rte_flow, please either add focused tests for these behaviors (ideally using the new generator) or document that they are intentionally no longer covered and why. Right now, overall coverage is reduced compared to the prior suite.
</issue_to_address>
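If egress coverage is reinstated, a focused case could reuse the suite's `FlowRule` shape with `direction="egress"`. The sketch below uses a minimal stand-in dataclass and a hypothetical `render` helper; only the testpmd `flow create` command syntax is taken as given.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    # Stand-in for the DTS FlowRule used in the suite.
    direction: str
    pattern: list
    actions: list

    def render(self, port_id: int) -> str:
        # testpmd-style "flow create" command for this rule.
        return (
            f"flow create {port_id} {self.direction} "
            f"pattern {' / '.join(self.pattern)} / end "
            f"actions {' / '.join(self.actions)} / end"
        )

egress_rule = FlowRule(
    direction="egress",
    pattern=["eth", "ipv4 src is 192.168.1.1"],
    actions=["drop"],
)
cmd = egress_rule.render(port_id=0)
```

Because direction is just another field on the rule, the generator could in principle emit each stack twice (ingress and egress) without new test code, if the NIC advertises egress flow support.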

### Comment 2
<location> `dts/tests/TestSuite_rte_flow.py:425-434` </location>
<code_context>
+    def _run_confidence_check(self, action_type: str) -> None:
</code_context>

<issue_to_address>
**issue (testing):** Confidence check is advertised for 'modify' but not actually implemented

`modify_field_action` calls `self._run_confidence_check("modify")`, but `_run_confidence_check` only handles `"drop"` and `"queue"`. For other `action_type` values it no-ops and still logs that the confidence check passed, so the modify tests don’t actually verify the baseline behavior.

Either implement a real confidence check for `"modify"` (e.g., add a modify rule that won’t match and confirm the packet is received unmodified), or stop calling `_run_confidence_check("modify")` and update the docstring accordingly.
</issue_to_address>

### Comment 3
<location> `dts/tests/TestSuite_rte_flow.py:555-560` </location>
<code_context>
-                    received = True
-            verify(received, f"Expected packet was not received on queue {test_queue}")
+                try:
+                    is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
+                    if not is_valid:
+                        result.skipped = True
+                        result.failure_reason = "Flow rule failed validation"
+                        self.test_suite_results.append(result)
+                        self.test_case_results.append(result)
+
+                    try:
</code_context>

<issue_to_address>
**suggestion (bug_risk):** When flow validation fails, tests mark the case as skipped but continue to attempt flow creation and verification

Because you don’t `continue`/`return` after marking the result as skipped, the code still runs `flow_create` and verification for an invalid flow. This can cause misleading results and double-reporting. Add a `continue` (or equivalent early exit) after appending the skipped result so invalid flows are not executed.

Suggested implementation:

```python
                try:
                    is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
                    if not is_valid:
                        result.skipped = True
                        result.failure_reason = "Flow rule failed validation"
                        self.test_suite_results.append(result)
                        self.test_case_results.append(result)
                        continue

                    try:

```

This edit assumes that the shown block is inside a loop (for example, iterating over `port_id` or test cases) where `continue` is valid and correctly skips the rest of the iteration. If it is instead inside a helper method that should exit entirely on invalid flows, you should replace `continue` with `return` (or an appropriate early-exit mechanism) and ensure the function’s return type and callers handle that behavior.
</issue_to_address>

### Comment 4
<location> `dts/tests/TestSuite_rte_flow.py:674-680` </location>
<code_context>
-            test_queue=2,
-        )
+        self._run_confidence_check("queue")
+        for stack in LAYER_STACKS:
+            test_cases = self.generator.generate(
+                layer_names=stack,
+                action_name="queue",
+                action_value=2,
+            )
+            self._run_tests(test_cases)
+        self._log_test_case_failures()

</code_context>

<issue_to_address>
**suggestion (testing):** The generator-based tests may create a very large combinatorial set of cases, which could lead to long runtimes and harder debugging

The generator now takes the Cartesian product of all layer field specs and iterates over all `test_values` for each field across every stack in `LAYER_STACKS`. This can grow very quickly as fields/values are added, risking slow or flaky test runs and many hard-to-triage, near-duplicate failures.

Consider:
- Capping the number of combinations per stack (e.g., sampling representative cases).
- Providing a configuration switch for “smoke” vs “full” modes.
- Making `LAYER_STACKS` and/or field value sets environment-configurable so CI runs a reduced subset while local runs can use the full matrix.

This would keep runtime and noise manageable while preserving coverage benefits from the generator approach.

Suggested implementation:

```python
import os
from dataclasses import dataclass, field
from itertools import product
from typing import Any, Callable, cast

```

```python
        self._run_confidence_check("queue")

        # Determine test mode and per-stack cap from environment.
        # RTE_FLOW_TEST_MODE: "smoke" (default) or "full"
        # RTE_FLOW_MAX_COMBINATIONS_PER_STACK: positive int to cap combinations in "smoke" mode
        # RTE_FLOW_LAYER_STACK_INDICES: optional comma-separated stack indices (e.g. "0,2,3")
        test_mode = os.getenv("RTE_FLOW_TEST_MODE", "smoke").lower()
        max_per_stack_env = os.getenv("RTE_FLOW_MAX_COMBINATIONS_PER_STACK", "").strip()
        max_per_stack = int(max_per_stack_env) if max_per_stack_env.isdigit() else 0

        stack_indices_env = os.getenv("RTE_FLOW_LAYER_STACK_INDICES", "").strip()
        if stack_indices_env:
            try:
                allowed_indices = {
                    int(idx.strip())
                    for idx in stack_indices_env.split(",")
                    if idx.strip() != ""
                }
            except ValueError:
                # Fall back to all stacks if indices are malformed
                allowed_indices = None
        else:
            allowed_indices = None

        for stack_index, stack in enumerate(LAYER_STACKS):
            if allowed_indices is not None and stack_index not in allowed_indices:
                continue

            all_cases = list(
                self.generator.generate(
                    layer_names=stack,
                    action_name="queue",
                    action_value=2,
                )
            )

            # In smoke mode, cap the number of combinations per stack to keep runtime manageable.
            if (
                test_mode == "smoke"
                and max_per_stack > 0
                and len(all_cases) > max_per_stack
            ):
                # Deterministic sub-sampling: evenly stride through the list.
                step = max(1, len(all_cases) // max_per_stack)
                sampled_cases = all_cases[::step][:max_per_stack]
                self._run_tests(sampled_cases)
            else:
                # Full mode, or smoke mode without a configured cap.
                self._run_tests(all_cases)

        self._log_test_case_failures()

```
</issue_to_address>



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI Agents
In @dts/tests/TestSuite_rte_flow.py:
- Around line 499-503: The _verify_drop method's packet payload check uses
p.load which can raise AttributeError on multi-layer packets; update the
comprehension in _verify_drop to access the payload via p[Raw].load (keep the
p.haslayer(Raw) check) so it matches the other tests and avoids direct .load
access; ensure the any(...) line uses p[Raw].load when checking for b"XXXXX" and
leave send_packet_and_capture and verify calls unchanged.
- Around line 554-570: The code currently validates a flow with
testpmd.flow_validate (resulting in setting result.skipped and failure_reason
when false) but does not stop execution, so it still calls testpmd.flow_create;
add a control flow break (a continue) immediately after appending the result
when is_valid is False to avoid attempting testpmd.flow_create on an invalid
rule; ensure this change is placed in the same block that handles the validation
failure (the branch where is_valid is False and result is appended to
self.test_suite_results and self.test_case_results).
- Around line 425-490: The method _run_confidence_check is missing a branch for
action_type == "modify" so non-matching packets are never validated for modify
rules; add an elif for "modify" that mirrors the other branches: construct a
FlowRule with a pattern that would match some other traffic and an actions list
that performs a modification (e.g., "set_field" or "modify"), call
testpmd.flow_create(flow_rule=..., port_id=0), start testpmd, send the
non_matching_packet via send_packet_and_capture, stop testpmd, verify the
received packet(s) still contain the original content/headers (e.g., Raw payload
contains "CONFIDENCE" and IP/MAC unchanged) then delete the flow with
testpmd.flow_delete(flow_id, port_id=0) and call verify to fail if modification
occurred; alternatively, if modifying confidence checks is intentionally
skipped, explicitly handle the "modify" case by logging or raising a clear skip
(e.g., log a skipped confidence check) instead of falling through silently so
intent is clear.
- Around line 601-605: Add a new except block to handle TestCaseVerifyError in
the same try/except region that currently catches SkippedTestException: when
TestCaseVerifyError is raised, set result.failure_reason to "Verification
failed: {error}", append result to self.test_suite_results and
self.test_case_results, and call testpmd.flow_delete(flow_id, port_id=port_id)
to ensure flow rules are cleaned up; place this handler alongside the existing
SkippedTestException handler (referencing TestCaseVerifyError, result,
self.test_suite_results, self.test_case_results, and testpmd.flow_delete) so
verification failures don't leave flow rules behind.
🧹 Nitpick comments (2)
dts/tests/TestSuite_rte_flow.py (2)

141-161: requires field is semantically incorrect for IPv6 stacks.

The tcp, udp, and sctp layers specify requires=["eth", "ipv4"], but LAYER_STACKS includes combinations like ["eth", "ipv6", "tcp"]. While this doesn't cause runtime issues (the requires field isn't currently enforced), it's misleading documentation.

Consider updating to reflect the actual requirement (IP layer, either version):

     "tcp": Layer(
         name="tcp",
         scapy_class=TCP,
         pattern_name="tcp",
         fields=[
             PatternField("sport", "src", [1234, 8080]),
             PatternField("dport", "dst", [80, 443]),
             PatternField("flags", "flags", [2, 16]),
         ],
-        requires=["eth", "ipv4"],
+        requires=["eth"],  # Requires IPv4 or IPv6
     ),

518-535: Consider adding explicit failure if test_packet remains None.

While the early verification at lines 513-516 ensures a matching packet exists, the subsequent field comparisons silently pass if test_packet is None. Adding an explicit assertion would catch any edge cases.

🔎 Proposed defensive check
         test_packet = None
         for pkt in received:
             if pkt.haslayer(Raw) and b"XXXXX" in pkt.load:
                 test_packet = pkt
                 break

+        verify(test_packet is not None, "Failed to locate test packet for modification check")
+
         if IP in expected_packet and test_packet is not None:
             verify(
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cd60dcd and 7f854ee.

📒 Files selected for processing (1)
  • dts/tests/TestSuite_rte_flow.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Sourcery review
🔇 Additional comments (6)
dts/tests/TestSuite_rte_flow.py (6)

13-29: LGTM!

Imports are well-organized and appropriate for the new data-driven architecture. The Scapy layers align with the LAYERS dictionary definitions.


32-105: LGTM!

The dataclass hierarchy provides a clean abstraction for test case generation. The separation of concerns between PatternField, Layer, Action, FlowTestCase, and FlowTestResult is well thought out.


209-223: LGTM!

The expected packet builders correctly clone packets and apply modifications. The layer existence checks (if IP in expected, if Ether in expected) ensure safety when processing different packet types.


279-404: LGTM!

The FlowTestGenerator class implements a clean combinatorial test case generation pattern. The handling of IPv6 next header values (lines 305-307) and the cycling through test values with i % len(field_spec.test_values) (line 366) are correctly implemented.


492-497: LGTM!

The method correctly relies on the caller to start testpmd (done in _run_tests at lines 575-576) and handles queue verification using verbose output.


744-886: LGTM!

The jump_action test method correctly implements a two-stage flow pipeline with proper validation, error handling, and cleanup across all code paths. The positive and negative test cases (matching vs non-matching packets) provide good coverage.

Comment on lines +425 to +490
Removed: _send_packet_and_verify and _send_packet_and_verify_queue (superseded by this method). Added:

def _run_confidence_check(self, action_type: str) -> None:
    """Verify that non-matching packets are unaffected by flow rules.

    Creates a flow rule for the specified action, then sends a packet that
    should NOT match the rule to confirm:
    - For 'drop': non-matching packets ARE received (not dropped)
    - For 'queue': non-matching packets are NOT steered to the target queue
    - For 'modify': non-matching packets arrive unmodified

    This ensures flow rules only affect matching traffic before
    running the actual action tests.

    Args:
        action_type: The action being tested ('drop', 'queue', 'modify').
    """
    non_matching_packet = (
        Ether(src="02:00:00:00:00:00", dst="02:00:00:00:00:01")
        / IP(src="192.168.100.1", dst="192.168.100.2")
        / UDP(sport=9999, dport=9998)
        / Raw(load="CONFIDENCE" + "X" * 22)
    )

    with TestPmd(rx_queues=4, tx_queues=4) as testpmd:
        if action_type == "drop":
            drop_rule = FlowRule(
                direction="ingress",
                pattern=["eth / ipv4 src is 192.168.1.1 / udp dst is 53"],
                actions=["drop"],
            )
            flow_id = testpmd.flow_create(flow_rule=drop_rule, port_id=0)

            testpmd.start()
            received = send_packet_and_capture(non_matching_packet)
            testpmd.stop()
            contains_packet = any(
                p.haslayer(Raw) and b"CONFIDENCE" in bytes(p[Raw].load) for p in received
            )
            testpmd.flow_delete(flow_id, port_id=0)
            verify(
                contains_packet,
                "Confidence check failed: non-matching packet dropped by drop rule",
            )

        elif action_type == "queue":
            queue_rule = FlowRule(
                direction="ingress",
                pattern=[
                    "eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254 "
                    "dst is 10.255.255.253 / udp src is 12345 dst is 54321"
                ],
                actions=["queue index 3"],
            )
            flow_id = testpmd.flow_create(flow_rule=queue_rule, port_id=0)

            testpmd.set_verbose(level=8)
            testpmd.start()
            send_packet_and_capture(non_matching_packet)
            verbose_output = testpmd.extract_verbose_output(testpmd.stop())
            received_on_target = any(p.queue_id == 3 for p in verbose_output)
            testpmd.flow_delete(flow_id, port_id=0)
            verify(
                not received_on_target,
                "Confidence check failed: non-matching packet steered to queue 3",
            )

    log(f"Confidence check passed for '{action_type}' action")

⚠️ Potential issue | 🟡 Minor

Missing confidence check implementation for "modify" action.

_run_confidence_check("modify") is called at line 722, but the method only handles "drop" and "queue" action types. The "modify" case falls through silently without any verification.

Either add the missing implementation or document why it's intentionally skipped:

🔎 Option 1: Add modify check
                 verify(
                     not received_on_target,
                     "Confidence check failed: non-matching packet steered to queue 3",
                 )

+            elif action_type == "modify":
+                # Verify non-matching packets arrive unmodified
+                modify_rule = FlowRule(
+                    direction="ingress",
+                    pattern=["eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254"],
+                    actions=["modify_field op set dst_type ipv4_dst src_type ipv4_src width 32 / queue index 0"],
+                )
+                flow_id = testpmd.flow_create(flow_rule=modify_rule, port_id=0)
+                testpmd.start()
+                received = send_packet_and_capture(non_matching_packet)
+                testpmd.stop()
+                # Verify packet arrived unmodified
+                for pkt in received:
+                    if pkt.haslayer(Raw) and b"CONFIDENCE" in bytes(pkt[Raw].load):
+                        verify(
+                            pkt[IP].dst == non_matching_packet[IP].dst,
+                            "Confidence check failed: non-matching packet was modified",
+                        )
+                        break
+                testpmd.flow_delete(flow_id, port_id=0)
+
         log(f"Confidence check passed for '{action_type}' action")
🔎 Option 2: Skip explicitly
+            elif action_type == "modify":
+                # Modify actions are validated through the actual test cases
+                pass
+
         log(f"Confidence check passed for '{action_type}' action")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def _run_confidence_check(self, action_type: str) -> None:
    """Verify that non-matching packets are unaffected by flow rules.

    Creates a flow rule for the specified action, then sends a packet that
    should NOT match the rule to confirm:
    - For 'drop': non-matching packets ARE received (not dropped)
    - For 'queue': non-matching packets are NOT steered to the target queue
    - For 'modify': non-matching packets arrive unmodified

    This ensures flow rules only affect matching traffic before
    running the actual action tests.

    Args:
        action_type: The action being tested ('drop', 'queue', 'modify').
    """
    non_matching_packet = (
        Ether(src="02:00:00:00:00:00", dst="02:00:00:00:00:01")
        / IP(src="192.168.100.1", dst="192.168.100.2")
        / UDP(sport=9999, dport=9998)
        / Raw(load="CONFIDENCE" + "X" * 22)
    )

    with TestPmd(rx_queues=4, tx_queues=4) as testpmd:
        if action_type == "drop":
            drop_rule = FlowRule(
                direction="ingress",
                pattern=["eth / ipv4 src is 192.168.1.1 / udp dst is 53"],
                actions=["drop"],
            )
            flow_id = testpmd.flow_create(flow_rule=drop_rule, port_id=0)
            testpmd.start()
            received = send_packet_and_capture(non_matching_packet)
            testpmd.stop()
            contains_packet = any(
                p.haslayer(Raw) and b"CONFIDENCE" in bytes(p[Raw].load) for p in received
            )
            testpmd.flow_delete(flow_id, port_id=0)
            verify(
                contains_packet,
                "Confidence check failed: non-matching packet dropped by drop rule",
            )

        elif action_type == "queue":
            queue_rule = FlowRule(
                direction="ingress",
                pattern=[
                    "eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254 "
                    "dst is 10.255.255.253 / udp src is 12345 dst is 54321"
                ],
                actions=["queue index 3"],
            )
            flow_id = testpmd.flow_create(flow_rule=queue_rule, port_id=0)
            testpmd.set_verbose(level=8)
            testpmd.start()
            send_packet_and_capture(non_matching_packet)
            verbose_output = testpmd.extract_verbose_output(testpmd.stop())
            received_on_target = any(p.queue_id == 3 for p in verbose_output)
            testpmd.flow_delete(flow_id, port_id=0)
            verify(
                not received_on_target,
                "Confidence check failed: non-matching packet steered to queue 3",
            )

        elif action_type == "modify":
            # Verify non-matching packets arrive unmodified
            modify_rule = FlowRule(
                direction="ingress",
                pattern=["eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254"],
                actions=[
                    "modify_field op set dst_type ipv4_dst src_type ipv4_src "
                    "width 32 / queue index 0"
                ],
            )
            flow_id = testpmd.flow_create(flow_rule=modify_rule, port_id=0)
            testpmd.start()
            received = send_packet_and_capture(non_matching_packet)
            testpmd.stop()
            # Verify packet arrived unmodified
            for pkt in received:
                if pkt.haslayer(Raw) and b"CONFIDENCE" in bytes(pkt[Raw].load):
                    verify(
                        pkt[IP].dst == non_matching_packet[IP].dst,
                        "Confidence check failed: non-matching packet was modified",
                    )
                    break
            testpmd.flow_delete(flow_id, port_id=0)

    log(f"Confidence check passed for '{action_type}' action")
🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 425-490: the method
_run_confidence_check is missing a branch for action_type == "modify", so
non-matching packets are never validated for modify rules. Add an elif for
"modify" that mirrors the other branches: construct a FlowRule with a pattern
that would match some other traffic and an actions list that performs a
modification (e.g., "set_field" or "modify"), call
testpmd.flow_create(flow_rule=..., port_id=0), start testpmd, send the
non_matching_packet via send_packet_and_capture, stop testpmd, and verify that
the received packet(s) still contain the original content and headers (e.g.,
the Raw payload contains "CONFIDENCE" and the IP/MAC fields are unchanged).
Then delete the flow with testpmd.flow_delete(flow_id, port_id=0) and call
verify to fail if a modification occurred. Alternatively, if the modify
confidence check is intentionally skipped, handle the "modify" case explicitly
(e.g., log a skipped confidence check) instead of falling through silently, so
the intent is clear.

Comment on lines +499 to +503
def _verify_drop(self, packet: Packet, **kwargs: Any) -> None:
"""Verify packet is dropped."""
received = send_packet_and_capture(packet)
contains_packet = any(p.haslayer(Raw) and b"XXXXX" in p.load for p in received)
verify(not contains_packet, "Packet was not dropped")

⚠️ Potential issue | 🟡 Minor

Inconsistent payload access pattern may cause AttributeError.

Line 502 uses p.load but should use p[Raw].load for consistency with the other methods (e.g., lines 460, 514, and 520). Accessing .load directly on a multi-layer packet may not work as expected.

🔎 Proposed fix
     def _verify_drop(self, packet: Packet, **kwargs: Any) -> None:
         """Verify packet is dropped."""
         received = send_packet_and_capture(packet)
-        contains_packet = any(p.haslayer(Raw) and b"XXXXX" in p.load for p in received)
+        contains_packet = any(p.haslayer(Raw) and b"XXXXX" in bytes(p[Raw].load) for p in received)
         verify(not contains_packet, "Packet was not dropped")
🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 499-503: the _verify_drop
method's payload check uses p.load, which can raise AttributeError on
multi-layer packets. Update the comprehension in _verify_drop to access the
payload via p[Raw].load (keeping the p.haslayer(Raw) guard) so it matches the
other tests and avoids direct .load access; ensure the any(...) line uses
p[Raw].load when checking for b"XXXXX", and leave the send_packet_and_capture
and verify calls unchanged.

Comment on lines +554 to +570
try:
is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
if not is_valid:
result.skipped = True
result.failure_reason = "Flow rule failed validation"
self.test_suite_results.append(result)
self.test_case_results.append(result)

try:
flow_id = testpmd.flow_create(
flow_rule=test_case.flow_rule, port_id=port_id
)
except InteractiveCommandExecutionError:
result.failure_reason = "Hardware validated but failed to create flow rule"
self.test_suite_results.append(result)
self.test_case_results.append(result)
continue

⚠️ Potential issue | 🔴 Critical

Missing continue after validation failure causes invalid flow creation attempt.

When flow_validate returns False, the code sets result.skipped = True and appends to results, but then falls through to attempt flow_create anyway. This is likely to cause InteractiveCommandExecutionError or unexpected behavior.

🔎 Proposed fix
                     is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
                     if not is_valid:
                         result.skipped = True
                         result.failure_reason = "Flow rule failed validation"
                         self.test_suite_results.append(result)
                         self.test_case_results.append(result)
+                        continue

                     try:
                         flow_id = testpmd.flow_create(
🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 554-570: the code validates a
flow with testpmd.flow_validate and, when validation fails, sets result.skipped
and failure_reason, but it does not stop execution, so it still calls
testpmd.flow_create. Add a `continue` immediately after appending the result
when is_valid is False to avoid attempting testpmd.flow_create on an invalid
rule; place it in the same block that handles the validation failure (the
branch where is_valid is False and result is appended to
self.test_suite_results and self.test_case_results).

Comment on lines +601 to +605
except SkippedTestException as e:
result.skipped = True
result.failure_reason = f"Skipped: {str(e)}"
self.test_suite_results.append(result)
self.test_case_results.append(result)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (repository: ovsrobot/dpdk):

#!/bin/bash
# Find the file and understand its structure
find . -name "TestSuite_rte_flow.py" -type f

# Search for exception definitions and the verify method
rg "class.*Exception|def verify" --type py -A 2 | head -100

# Find where exceptions are defined in the test framework
find . -name "*.py" -type f | xargs grep -l "SkippedTestException\|TestCaseVerifyError" | head -20

# Check the _run_tests method context around lines 601-605
sed -n '590,620p' dts/tests/TestSuite_rte_flow.py

# Get the verify function implementation
sed -n '1,100p' dts/api/test.py

# Get all exception definitions
cat dts/framework/exception.py

# Check the full _run_tests method to understand the try-except block better
sed -n '550,650p' dts/tests/TestSuite_rte_flow.py

# Find the fail() function implementation
rg "def fail" dts/api/test.py -A 10

# Check if there's a broader exception handler or if the code relies on something else
sed -n '595,615p' dts/tests/TestSuite_rte_flow.py

Add exception handler for verification failures to prevent uncleaned flow rules.

The except SkippedTestException block only catches skipped tests. When verify() raises TestCaseVerifyError (for verification failures), it propagates uncaught, preventing the testpmd.flow_delete() call and leaving flow rules unreleased. Add a separate handler:

Suggested fix
except TestCaseVerifyError as e:
    result.failure_reason = f"Verification failed: {str(e)}"
    self.test_suite_results.append(result)
    self.test_case_results.append(result)
    testpmd.flow_delete(flow_id, port_id=port_id)
🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 601-605: add a new except
block for TestCaseVerifyError in the same try/except region that currently
catches SkippedTestException. When TestCaseVerifyError is raised, set
result.failure_reason to "Verification failed: {error}", append result to
self.test_suite_results and self.test_case_results, and call
testpmd.flow_delete(flow_id, port_id=port_id) so flow rules are cleaned up.
Place this handler alongside the existing SkippedTestException handler so
verification failures don't leave flow rules behind.
