[PWCI] "[v1] dts: refactor flow suite with generator pattern"#600
[PWCI] "[v1] dts: refactor flow suite with generator pattern"#600
Conversation
Refactor current flow test suite to utilize a generator method, which dynamically creates flow rules from a dictionary of pattern and action fields during runtime. This allows for much more extensive testing of the flow API without exponential code growth. Signed-off-by: Dean Marx <dmarx@iol.unh.edu> Signed-off-by: 0-day Robot <robot@bytheb.org>
Reviewer's Guide

Refactors the rte_flow DTS test suite to a generator-based framework that models protocol layers and actions as data objects, auto-generates multi-layer flow rules and packets, centralizes TestPmd interaction and verification, and consolidates many specialized tests into a smaller set of parameterized tests with result tracking and confidence checks.

File-Level Changes
📝 Walkthrough

The RTE Flow test suite underwent a comprehensive refactoring, transitioning from hardcoded test paths to a data-driven, generator-based architecture. New dataclass abstractions (PatternField, Layer, Action, FlowTestCase, FlowTestResult) and a FlowTestGenerator class enable combinatorial test case generation. Packet builder helpers and reorganized lifecycle hooks replace previous monolithic methods.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ 3 checks passed
Hey - I've found 4 issues and left some high-level feedback:
- In `_run_tests`, when `flow_validate` returns `False` you mark the result as skipped but then fall through and still attempt `flow_create` and verification; add a `continue` (or similar) after setting the skip state so invalid rules are not created or executed.
- The drop verification in `_verify_drop` looks for a `Raw` payload containing `b"XXXXX"`, but the generator builds packets with `"X" * 32` and no marker, so the helper will report packets as dropped even when they are received; align the payload marker used in generated packets and in the drop check.
- The summary and failure aggregation in `_log_test_case_failures` hardcode the message "Flow rule passed validation but failed creation" even for other failure reasons (e.g. validation failure or verification failure); consider using the stored `failure_reason` per result to make the summary accurate and avoid misleading messages.
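The per-reason aggregation the last point asks for can be sketched standalone; this is a minimal illustration, not the suite's code — `FlowTestResult` here is a simplified stand-in for the suite's result dataclass, and the summary format is invented:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowTestResult:
    # Simplified stand-in for the suite's result dataclass.
    rule: str
    skipped: bool = False
    failure_reason: Optional[str] = None

def summarize_failures(results: list) -> list:
    """Group failed (non-skipped) results by their stored failure_reason
    instead of emitting one hardcoded message for every failure."""
    reasons = Counter(
        r.failure_reason for r in results if r.failure_reason and not r.skipped
    )
    return [f"{count}x {reason}" for reason, count in reasons.most_common()]

results = [
    FlowTestResult("eth / ipv4", failure_reason="Hardware validated but failed to create flow rule"),
    FlowTestResult("eth / ipv6", failure_reason="Verification failed: wrong queue"),
    FlowTestResult("eth / tcp", skipped=True, failure_reason="Flow rule failed validation"),
]
print(summarize_failures(results))
```

Keyed on the stored reason, the summary distinguishes creation failures from verification failures rather than mislabeling them all.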
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_run_tests`, when `flow_validate` returns `False` you mark the result as skipped but then fall through and still attempt `flow_create` and verification; add a `continue` (or similar) after setting the skip state so invalid rules are not created or executed.
- The drop verification in `_verify_drop` looks for a `Raw` payload containing `b"XXXXX"`, but the generator builds packets with `"X" * 32` and no marker, so the helper will report packets as dropped even when they are received; align the payload marker used in generated packets and in the drop check.
- The summary and failure aggregation in `_log_test_case_failures` hardcode the message "Flow rule passed validation but failed creation" even for other failure reasons (e.g. validation failure or verification failure); consider using the stored `failure_reason` per result to make the summary accurate and avoid misleading messages.
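One way to keep the generator's payloads and `_verify_drop` from drifting apart, as the second point suggests, is a single shared marker constant. A minimal sketch — `PAYLOAD_MARKER`, `build_payload`, and `payload_matches` are illustrative names, not identifiers from the suite:

```python
# Shared constant so packet generation and drop verification use the same marker.
PAYLOAD_MARKER = b"XXXXX"
PAYLOAD_LEN = 32

def build_payload() -> bytes:
    """Build a fixed-size payload that starts with the marker the drop check looks for."""
    return PAYLOAD_MARKER + b"X" * (PAYLOAD_LEN - len(PAYLOAD_MARKER))

def payload_matches(raw_load: bytes) -> bool:
    """The membership test a drop-verification helper would perform on a captured Raw payload."""
    return PAYLOAD_MARKER in raw_load

payload = build_payload()
print(len(payload), payload_matches(payload))
```

With both sides importing the same constant, a marker change in one place cannot silently break the other.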
## Individual Comments
### Comment 1
<location> `dts/tests/TestSuite_rte_flow.py:34-43` </location>
<code_context>
- except InteractiveCommandExecutionError:
- log("Flow rule validation passed, but flow creation failed.")
- fail("Failed flow creation")
+@requires_nic_capability(NicCapability.FLOW_CTRL)
+class TestRteFlow(TestSuite):
+ """RTE Flow test suite.
- if verification_method == self._send_packet_and_verify:
- verification_method(packet=packet, *args, **kwargs)
+ This suite consists of 4 test cases:
+ 1. Queue Action: Verifies queue actions with multi-layer patterns
+ 2. Drop Action: Verifies drop actions with multi-layer patterns
</code_context>
<issue_to_address>
**suggestion (testing):** Previous egress and priority tests have been removed and are no longer covered by the new suite
The new generator-based suite limits coverage to queue, drop, modify_field, and jump, and no longer appears to exercise any `direction="egress"` flows or variations in `priority_level` and their impact on queue selection.
If egress rules and priority handling remain supported/important for rte_flow, please either add focused tests for these behaviors (ideally using the new generator) or document that they are intentionally no longer covered and why. Right now, overall coverage is reduced compared to the prior suite.
</issue_to_address>
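If egress and priority coverage is restored, the rules themselves are straightforward to express as testpmd `flow create` commands. A minimal sketch under stated assumptions — `render_flow_rule` is a hypothetical helper (the suite's `FlowRule` class would normally produce these strings), and the command layout follows testpmd's documented `flow create <port> [priority <n>] <direction> pattern ... / end actions ... / end` shape:

```python
from typing import Optional

def render_flow_rule(port_id: int, direction: str, pattern: str, actions: str,
                     priority: Optional[int] = None) -> str:
    """Render a testpmd flow-create command string, optionally with a priority level."""
    prio = f"priority {priority} " if priority is not None else ""
    return f"flow create {port_id} {prio}{direction} pattern {pattern} / end actions {actions} / end"

# An egress rule and two priority-differentiated ingress queue rules:
print(render_flow_rule(0, "egress", "eth / ipv4", "drop"))
print(render_flow_rule(0, "ingress", "eth / ipv4 src is 192.168.0.1", "queue index 1", priority=0))
print(render_flow_rule(0, "ingress", "eth / ipv4", "queue index 2", priority=1))
```

Generating these through the existing generator (e.g. with a direction/priority parameter) would keep the restored tests consistent with the rest of the suite.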
### Comment 2
<location> `dts/tests/TestSuite_rte_flow.py:425-434` </location>
<code_context>
+ def _run_confidence_check(self, action_type: str) -> None:
</code_context>
<issue_to_address>
**issue (testing):** Confidence check is advertised for 'modify' but not actually implemented
`modify_field_action` calls `self._run_confidence_check("modify")`, but `_run_confidence_check` only handles `"drop"` and `"queue"`. For other `action_type` values it no-ops and still logs that the confidence check passed, so the modify tests don’t actually verify the baseline behavior.
Either implement a real confidence check for `"modify"` (e.g., add a modify rule that won’t match and confirm the packet is received unmodified), or stop calling `_run_confidence_check("modify")` and update the docstring accordingly.
</issue_to_address>
### Comment 3
<location> `dts/tests/TestSuite_rte_flow.py:555-560` </location>
<code_context>
- received = True
- verify(received, f"Expected packet was not received on queue {test_queue}")
+ try:
+ is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
+ if not is_valid:
+ result.skipped = True
+ result.failure_reason = "Flow rule failed validation"
+ self.test_suite_results.append(result)
+ self.test_case_results.append(result)
+
+ try:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** When flow validation fails, tests mark the case as skipped but continue to attempt flow creation and verification
Because you don’t `continue`/`return` after marking the result as skipped, the code still runs `flow_create` and verification for an invalid flow. This can cause misleading results and double-reporting. Add a `continue` (or equivalent early exit) after appending the skipped result so invalid flows are not executed.
Suggested implementation:
```python
try:
is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
if not is_valid:
result.skipped = True
result.failure_reason = "Flow rule failed validation"
self.test_suite_results.append(result)
self.test_case_results.append(result)
continue
try:
```
This edit assumes that the shown block is inside a loop (for example, iterating over `port_id` or test cases) where `continue` is valid and correctly skips the rest of the iteration. If it is instead inside a helper method that should exit entirely on invalid flows, you should replace `continue` with `return` (or an appropriate early-exit mechanism) and ensure the function’s return type and callers handle that behavior.
</issue_to_address>
### Comment 4
<location> `dts/tests/TestSuite_rte_flow.py:674-680` </location>
<code_context>
- test_queue=2,
- )
+ self._run_confidence_check("queue")
+ for stack in LAYER_STACKS:
+ test_cases = self.generator.generate(
+ layer_names=stack,
+ action_name="queue",
+ action_value=2,
+ )
+ self._run_tests(test_cases)
+ self._log_test_case_failures()
</code_context>
<issue_to_address>
**suggestion (testing):** The generator-based tests may create a very large combinatorial set of cases, which could lead to long runtimes and harder debugging
The generator now takes the Cartesian product of all layer field specs and iterates over all `test_values` for each field across every stack in `LAYER_STACKS`. This can grow very quickly as fields/values are added, risking slow or flaky test runs and many hard-to-triage, near-duplicate failures.
Consider:
- Capping the number of combinations per stack (e.g., sampling representative cases).
- Providing a configuration switch for “smoke” vs “full” modes.
- Making `LAYER_STACKS` and/or field value sets environment-configurable so CI runs a reduced subset while local runs can use the full matrix.
This would keep runtime and noise manageable while preserving coverage benefits from the generator approach.
Suggested implementation:
```python
import os
from dataclasses import dataclass, field
from itertools import product
from typing import Any, Callable, cast
```
```python
self._run_confidence_check("queue")
# Determine test mode and per-stack cap from environment.
# RTE_FLOW_TEST_MODE: "smoke" (default) or "full"
# RTE_FLOW_MAX_COMBINATIONS_PER_STACK: positive int to cap combinations in "smoke" mode
# RTE_FLOW_LAYER_STACK_INDICES: optional comma-separated stack indices (e.g. "0,2,3")
test_mode = os.getenv("RTE_FLOW_TEST_MODE", "smoke").lower()
max_per_stack_env = os.getenv("RTE_FLOW_MAX_COMBINATIONS_PER_STACK", "").strip()
max_per_stack = int(max_per_stack_env) if max_per_stack_env.isdigit() else 0
stack_indices_env = os.getenv("RTE_FLOW_LAYER_STACK_INDICES", "").strip()
if stack_indices_env:
try:
allowed_indices = {
int(idx.strip())
for idx in stack_indices_env.split(",")
if idx.strip() != ""
}
except ValueError:
# Fall back to all stacks if indices are malformed
allowed_indices = None
else:
allowed_indices = None
for stack_index, stack in enumerate(LAYER_STACKS):
if allowed_indices is not None and stack_index not in allowed_indices:
continue
all_cases = list(
self.generator.generate(
layer_names=stack,
action_name="queue",
action_value=2,
)
)
# In smoke mode, cap the number of combinations per stack to keep runtime manageable.
if (
test_mode == "smoke"
and max_per_stack > 0
and len(all_cases) > max_per_stack
):
# Deterministic sub-sampling: evenly stride through the list.
step = max(1, len(all_cases) // max_per_stack)
sampled_cases = all_cases[::step][:max_per_stack]
self._run_tests(sampled_cases)
else:
# Full mode, or smoke mode without a configured cap.
self._run_tests(all_cases)
self._log_test_case_failures()
```
</issue_to_address>
Actionable comments posted: 4
🤖 Fix all issues with AI Agents
In @dts/tests/TestSuite_rte_flow.py:
- Around line 499-503: The _verify_drop method's packet payload check uses
p.load which can raise AttributeError on multi-layer packets; update the
comprehension in _verify_drop to access the payload via p[Raw].load (keep the
p.haslayer(Raw) check) so it matches the other tests and avoids direct .load
access; ensure the any(...) line uses p[Raw].load when checking for b"XXXXX" and
leave send_packet_and_capture and verify calls unchanged.
- Around line 554-570: The code currently validates a flow with
testpmd.flow_validate (resulting in setting result.skipped and failure_reason
when false) but does not stop execution, so it still calls testpmd.flow_create;
add a control flow break (a continue) immediately after appending the result
when is_valid is False to avoid attempting testpmd.flow_create on an invalid
rule; ensure this change is placed in the same block that handles the validation
failure (the branch where is_valid is False and result is appended to
self.test_suite_results and self.test_case_results).
- Around line 425-490: The method _run_confidence_check is missing a branch for
action_type == "modify" so non-matching packets are never validated for modify
rules; add an elif for "modify" that mirrors the other branches: construct a
FlowRule with a pattern that would match some other traffic and an actions list
that performs a modification (e.g., "set_field" or "modify"), call
testpmd.flow_create(flow_rule=..., port_id=0), start testpmd, send the
non_matching_packet via send_packet_and_capture, stop testpmd, verify the
received packet(s) still contain the original content/headers (e.g., Raw payload
contains "CONFIDENCE" and IP/MAC unchanged) then delete the flow with
testpmd.flow_delete(flow_id, port_id=0) and call verify to fail if modification
occurred; alternatively, if modifying confidence checks is intentionally
skipped, explicitly handle the "modify" case by logging or raising a clear skip
(e.g., log a skipped confidence check) instead of falling through silently so
intent is clear.
- Around line 601-605: Add a new except block to handle TestCaseVerifyError in
the same try/except region that currently catches SkippedTestException: when
TestCaseVerifyError is raised, set result.failure_reason to "Verification
failed: {error}", append result to self.test_suite_results and
self.test_case_results, and call testpmd.flow_delete(flow_id, port_id=port_id)
to ensure flow rules are cleaned up; place this handler alongside the existing
SkippedTestException handler (referencing TestCaseVerifyError, result,
self.test_suite_results, self.test_case_results, and testpmd.flow_delete) so
verification failures don't leave flow rules behind.
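The try/except/cleanup shape described in the last item can be sketched generically. This is a standalone illustration, not the suite's actual code: `TestCaseVerifyError` is redefined locally as a stand-in for the DTS exception, and `run_one_case`/`cleanup_fn` are hypothetical names:

```python
class TestCaseVerifyError(Exception):
    """Local stand-in for the DTS verification-failure exception."""

def run_one_case(verify_fn, cleanup_fn, result, suite_results, case_results):
    """Record verification failures AND always run cleanup (e.g. flow_delete)."""
    try:
        verify_fn()
    except TestCaseVerifyError as error:
        result["failure_reason"] = f"Verification failed: {error}"
        suite_results.append(result)
        case_results.append(result)
    finally:
        cleanup_fn()  # e.g. testpmd.flow_delete(flow_id, port_id=port_id)

def failing_verify():
    raise TestCaseVerifyError("queue mismatch")

suite, cases = [], []
result = {}
run_one_case(failing_verify, lambda: None, result, suite, cases)
print(result["failure_reason"])
```

The `finally` block is what guarantees flow rules are deleted even when verification raises, which is the leak the comment is about.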
🧹 Nitpick comments (2)
dts/tests/TestSuite_rte_flow.py (2)
141-161: `requires` field is semantically incorrect for IPv6 stacks.

The `tcp`, `udp`, and `sctp` layers specify `requires=["eth", "ipv4"]`, but `LAYER_STACKS` includes combinations like `["eth", "ipv6", "tcp"]`. While this doesn't cause runtime issues (the `requires` field isn't currently enforced), it's misleading documentation. Consider updating to reflect the actual requirement (an IP layer, either version):

```diff
 "tcp": Layer(
     name="tcp",
     scapy_class=TCP,
     pattern_name="tcp",
     fields=[
         PatternField("sport", "src", [1234, 8080]),
         PatternField("dport", "dst", [80, 443]),
         PatternField("flags", "flags", [2, 16]),
     ],
-    requires=["eth", "ipv4"],
+    requires=["eth"],  # Requires IPv4 or IPv6
 ),
```
518-535: Consider adding an explicit failure if `test_packet` remains `None`.

While the early verification at lines 513-516 ensures a matching packet exists, the subsequent field comparisons silently pass if `test_packet` is `None`. Adding an explicit assertion would catch any edge cases.

🔎 Proposed defensive check:

```diff
 test_packet = None
 for pkt in received:
     if pkt.haslayer(Raw) and b"XXXXX" in pkt.load:
         test_packet = pkt
         break
+verify(test_packet is not None, "Failed to locate test packet for modification check")
+
 if IP in expected_packet and test_packet is not None:
     verify(
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
dts/tests/TestSuite_rte_flow.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Sourcery review
🔇 Additional comments (6)
dts/tests/TestSuite_rte_flow.py (6)
13-29: LGTM! Imports are well-organized and appropriate for the new data-driven architecture. The Scapy layers align with the `LAYERS` dictionary definitions.

32-105: LGTM! The dataclass hierarchy provides a clean abstraction for test case generation. The separation of concerns between `PatternField`, `Layer`, `Action`, `FlowTestCase`, and `FlowTestResult` is well thought out.

209-223: LGTM! The expected packet builders correctly clone packets and apply modifications. The layer existence checks (`if IP in expected`, `if Ether in expected`) ensure safety when processing different packet types.

279-404: LGTM! The `FlowTestGenerator` class implements a clean combinatorial test case generation pattern. The handling of IPv6 next header values (lines 305-307) and the cycling through test values with `i % len(field_spec.test_values)` (line 366) are correctly implemented.

492-497: LGTM! The method correctly relies on the caller to start testpmd (done in `_run_tests` at lines 575-576) and handles queue verification using verbose output.

744-886: LGTM! The `jump_action` test method correctly implements a two-stage flow pipeline with proper validation, error handling, and cleanup across all code paths. The positive and negative test cases (matching vs non-matching packets) provide good coverage.
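The value-cycling pattern noted for `FlowTestGenerator` (`i % len(field_spec.test_values)`) can be illustrated standalone. The field names and values below are made up for the sketch, not taken from the suite:

```python
# Hypothetical per-layer field specs: field name -> candidate test values.
LAYER_FIELDS = {
    "ipv4": {"src": ["10.0.0.1", "10.0.0.2"], "dst": ["10.0.1.1"]},
    "udp": {"sport": [1234, 4321], "dport": [53, 443, 8080]},
}

def generate_cases(layer_names: list) -> list:
    """Generate one case per index up to the longest value list;
    i % len(values) wraps shorter lists so every field always has a value."""
    fields = [
        (layer, name, values)
        for layer in layer_names
        for name, values in LAYER_FIELDS[layer].items()
    ]
    n_cases = max(len(values) for _, _, values in fields)
    return [
        {f"{layer}.{name}": values[i % len(values)] for layer, name, values in fields}
        for i in range(n_cases)
    ]

cases = generate_cases(["ipv4", "udp"])
print(len(cases))
```

Cycling keeps case count linear in the longest value list rather than taking a full Cartesian product, which is also why the combinatorial-growth concern raised earlier applies per stack rather than per field.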
(Code context: the `_run_confidence_check` method of dts/tests/TestSuite_rte_flow.py, lines 425-490; the full method appears in the committable suggestion below.)
Missing confidence check implementation for "modify" action.

`_run_confidence_check("modify")` is called at line 722, but the method only handles `"drop"` and `"queue"` action types. The `"modify"` case falls through silently without any verification.

Either add the missing implementation or document why it's intentionally skipped:
🔎 Option 1: Add modify check

```diff
             verify(
                 not received_on_target,
                 "Confidence check failed: non-matching packet steered to queue 3",
             )
+        elif action_type == "modify":
+            # Verify non-matching packets arrive unmodified
+            modify_rule = FlowRule(
+                direction="ingress",
+                pattern=["eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254"],
+                actions=["modify_field op set dst_type ipv4_dst src_type ipv4_src width 32 / queue index 0"],
+            )
+            flow_id = testpmd.flow_create(flow_rule=modify_rule, port_id=0)
+            testpmd.start()
+            received = send_packet_and_capture(non_matching_packet)
+            testpmd.stop()
+            # Verify packet arrived unmodified
+            for pkt in received:
+                if pkt.haslayer(Raw) and b"CONFIDENCE" in bytes(pkt[Raw].load):
+                    verify(
+                        pkt[IP].dst == non_matching_packet[IP].dst,
+                        "Confidence check failed: non-matching packet was modified",
+                    )
+                    break
+            testpmd.flow_delete(flow_id, port_id=0)
+
     log(f"Confidence check passed for '{action_type}' action")
```

🔎 Option 2: Skip explicitly

```diff
+        elif action_type == "modify":
+            # Modify actions are validated through the actual test cases
+            pass
+
     log(f"Confidence check passed for '{action_type}' action")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _run_confidence_check(self, action_type: str) -> None:
    """Verify that non-matching packets are unaffected by flow rules.

    Creates a flow rule for the specified action, then sends a packet that
    should NOT match the rule to confirm:
    - For 'drop': non-matching packets ARE received (not dropped)
    - For 'queue': non-matching packets are NOT steered to the target queue
    - For 'modify': non-matching packets arrive unmodified

    This ensures flow rules only affect matching traffic before
    running the actual action tests.

    Args:
        action_type: The action being tested ('drop', 'queue', 'modify').
    """
    non_matching_packet = (
        Ether(src="02:00:00:00:00:00", dst="02:00:00:00:00:01")
        / IP(src="192.168.100.1", dst="192.168.100.2")
        / UDP(sport=9999, dport=9998)
        / Raw(load="CONFIDENCE" + "X" * 22)
    )

    with TestPmd(rx_queues=4, tx_queues=4) as testpmd:
        if action_type == "drop":
            drop_rule = FlowRule(
                direction="ingress",
                pattern=["eth / ipv4 src is 192.168.1.1 / udp dst is 53"],
                actions=["drop"],
            )
            flow_id = testpmd.flow_create(flow_rule=drop_rule, port_id=0)
            testpmd.start()
            received = send_packet_and_capture(non_matching_packet)
            testpmd.stop()
            contains_packet = any(
                p.haslayer(Raw) and b"CONFIDENCE" in bytes(p[Raw].load) for p in received
            )
            testpmd.flow_delete(flow_id, port_id=0)
            verify(
                contains_packet,
                "Confidence check failed: non-matching packet dropped by drop rule",
            )
        elif action_type == "queue":
            queue_rule = FlowRule(
                direction="ingress",
                pattern=[
                    "eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254 "
                    "dst is 10.255.255.253 / udp src is 12345 dst is 54321"
                ],
                actions=["queue index 3"],
            )
            flow_id = testpmd.flow_create(flow_rule=queue_rule, port_id=0)
            testpmd.set_verbose(level=8)
            testpmd.start()
            send_packet_and_capture(non_matching_packet)
            verbose_output = testpmd.extract_verbose_output(testpmd.stop())
            received_on_target = any(p.queue_id == 3 for p in verbose_output)
            testpmd.flow_delete(flow_id, port_id=0)
            verify(
                not received_on_target,
                "Confidence check failed: non-matching packet steered to queue 3",
            )
        elif action_type == "modify":
            # Verify non-matching packets arrive unmodified
            modify_rule = FlowRule(
                direction="ingress",
                pattern=["eth src is aa:bb:cc:dd:ee:ff / ipv4 src is 10.255.255.254"],
                actions=["modify_field op set dst_type ipv4_dst src_type ipv4_src width 32 / queue index 0"],
            )
            flow_id = testpmd.flow_create(flow_rule=modify_rule, port_id=0)
            testpmd.start()
            received = send_packet_and_capture(non_matching_packet)
            testpmd.stop()
            # Verify packet arrived unmodified
            for pkt in received:
                if pkt.haslayer(Raw) and b"CONFIDENCE" in bytes(pkt[Raw].load):
                    verify(
                        pkt[IP].dst == non_matching_packet[IP].dst,
                        "Confidence check failed: non-matching packet was modified",
                    )
                    break
            testpmd.flow_delete(flow_id, port_id=0)

    log(f"Confidence check passed for '{action_type}' action")
```
```python
def _verify_drop(self, packet: Packet, **kwargs: Any) -> None:
    """Verify packet is dropped."""
    received = send_packet_and_capture(packet)
    contains_packet = any(p.haslayer(Raw) and b"XXXXX" in p.load for p in received)
    verify(not contains_packet, "Packet was not dropped")
```
Inconsistent payload access pattern may cause AttributeError.

Line 502 uses `p.load` but should use `p[Raw].load` for consistency with other methods (e.g., lines 460, 514, 520). Accessing `.load` directly on a multi-layer packet may not work as expected.

🔎 Proposed fix:

```diff
 def _verify_drop(self, packet: Packet, **kwargs: Any) -> None:
     """Verify packet is dropped."""
     received = send_packet_and_capture(packet)
-    contains_packet = any(p.haslayer(Raw) and b"XXXXX" in p.load for p in received)
+    contains_packet = any(p.haslayer(Raw) and b"XXXXX" in bytes(p[Raw].load) for p in received)
     verify(not contains_packet, "Packet was not dropped")
```

📝 Committable suggestion
🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 499 - 503, The _verify_drop
method's packet payload check uses p.load which can raise AttributeError on
multi-layer packets; update the comprehension in _verify_drop to access the
payload via p[Raw].load (keep the p.haslayer(Raw) check) so it matches the other
tests and avoids direct .load access; ensure the any(...) line uses p[Raw].load
when checking for b"XXXXX" and leave send_packet_and_capture and verify calls
unchanged.
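The distinction the comment draws can be illustrated without scapy. The stand-in classes below are hypothetical (real scapy attribute lookup may even delegate into payload layers), but they show why explicit `p[Raw]` indexing is the unambiguous form on a multi-layer packet:

```python
class Raw:
    """Stand-in payload layer (hypothetical; mirrors scapy's Raw only loosely)."""
    def __init__(self, load):
        self.load = load

class Ether:
    """Outermost layer with no .load attribute of its own."""

class Packet:
    """Tiny stand-in for a layered packet; not the scapy API."""
    def __init__(self, layers):
        self.layers = layers

    def haslayer(self, cls):
        return any(isinstance(layer, cls) for layer in self.layers)

    def __getitem__(self, cls):
        # Explicit layer lookup: find the first layer of the requested type.
        for layer in self.layers:
            if isinstance(layer, cls):
                return layer
        raise KeyError(cls)

pkt = Packet([Ether(), Raw(b"XXXXX")])
# pkt.load raises AttributeError here, while explicit indexing is unambiguous:
assert pkt.haslayer(Raw) and b"XXXXX" in pkt[Raw].load
```

The `haslayer` guard plus `p[Raw].load` is the pattern the other verification helpers in the suite already use.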
```python
try:
    is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
    if not is_valid:
        result.skipped = True
        result.failure_reason = "Flow rule failed validation"
        self.test_suite_results.append(result)
        self.test_case_results.append(result)

    try:
        flow_id = testpmd.flow_create(
            flow_rule=test_case.flow_rule, port_id=port_id
        )
    except InteractiveCommandExecutionError:
        result.failure_reason = "Hardware validated but failed to create flow rule"
        self.test_suite_results.append(result)
        self.test_case_results.append(result)
        continue
```
Missing `continue` after validation failure causes invalid flow creation attempt.
When `flow_validate` returns `False`, the code sets `result.skipped = True` and appends to results, but then falls through and attempts `flow_create` anyway. This is likely to cause `InteractiveCommandExecutionError` or other unexpected behavior.
🔎 Proposed fix

```diff
         is_valid = testpmd.flow_validate(flow_rule=test_case.flow_rule, port_id=port_id)
         if not is_valid:
             result.skipped = True
             result.failure_reason = "Flow rule failed validation"
             self.test_suite_results.append(result)
             self.test_case_results.append(result)
+            continue
         try:
             flow_id = testpmd.flow_create(
```

🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 554 - 570, The code currently
validates a flow with testpmd.flow_validate (resulting in setting result.skipped
and failure_reason when false) but does not stop execution, so it still calls
testpmd.flow_create; add a control flow break (a continue) immediately after
appending the result when is_valid is False to avoid attempting
testpmd.flow_create on an invalid rule; ensure this change is placed in the same
block that handles the validation failure (the branch where is_valid is False
and result is appended to self.test_suite_results and self.test_case_results).
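The validate-then-create logic above reduces to a common loop shape. A minimal stdlib-only sketch (stub names, not the DTS API) shows exactly what the missing `continue` changes:

```python
def run_cases(cases, validate, create):
    """Validate each case; 'continue' keeps invalid cases away from create()."""
    results = []
    for case in cases:
        if not validate(case):
            results.append((case, "skipped"))
            continue  # without this, create() would still run on an invalid case
        try:
            results.append((case, create(case)))
        except RuntimeError:
            results.append((case, "create-failed"))
    return results

# Stub callbacks: odd-numbered cases fail validation.
outcomes = run_cases([1, 2, 3], lambda c: c % 2 == 0, lambda c: f"flow-{c}")
assert outcomes == [(1, "skipped"), (2, "flow-2"), (3, "skipped")]
```

Dropping the `continue` would instead record both a "skipped" entry and a creation attempt for the same case, which is the double-reporting the review flags.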
```python
except SkippedTestException as e:
    result.skipped = True
    result.failure_reason = f"Skipped: {str(e)}"
    self.test_suite_results.append(result)
    self.test_case_results.append(result)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# First, let's find the file and understand its structure
find . -name "TestSuite_rte_flow.py" -type f
```

Repository: ovsrobot/dpdk
Length of output: 90

🏁 Script executed:

```shell
#!/bin/bash
# Search for exception definitions and verify method
rg "class.*Exception|def verify" --type py -A 2 | head -100
```

Length of output: 1436

🏁 Script executed:

```shell
#!/bin/bash
# Find where exceptions are defined in the test framework
find . -name "*.py" -type f | xargs grep -l "SkippedTestException\|TestCaseVerifyError" | head -20
```

Length of output: 229

🏁 Script executed:

```shell
#!/bin/bash
# Check the _run_tests method context around lines 601-605
sed -n '590,620p' dts/tests/TestSuite_rte_flow.py
```

Length of output: 1383

🏁 Script executed:

```shell
#!/bin/bash
# Get the verify function implementation
sed -n '1,100p' dts/api/test.py
```

Length of output: 2709

🏁 Script executed:

```shell
#!/bin/bash
# Get all exception definitions
cat dts/framework/exception.py
```

Length of output: 6371

🏁 Script executed:

```shell
#!/bin/bash
# Check the full _run_tests method to understand the try-except block better
sed -n '550,650p' dts/tests/TestSuite_rte_flow.py
```

Length of output: 4548

🏁 Script executed:

```shell
#!/bin/bash
# Find the fail() function implementation
rg "def fail" dts/api/test.py -A 10
```

Length of output: 429

🏁 Script executed:

```shell
#!/bin/bash
# Check if there's a broader exception handler or if the code relies on something else
sed -n '595,615p' dts/tests/TestSuite_rte_flow.py
```

Length of output: 954
Add exception handler for verification failures to prevent uncleaned flow rules.
The `except SkippedTestException` block only catches skipped tests. When `verify()` raises `TestCaseVerifyError` (for verification failures), it propagates uncaught, preventing the `testpmd.flow_delete()` call and leaving flow rules unreleased. Add a separate handler:

Suggested fix

```python
except TestCaseVerifyError as e:
    result.failure_reason = f"Verification failed: {str(e)}"
    self.test_suite_results.append(result)
    self.test_case_results.append(result)
    testpmd.flow_delete(flow_id, port_id=port_id)
```

🤖 Prompt for AI Agents
In @dts/tests/TestSuite_rte_flow.py around lines 601 - 605, Add a new except
block to handle TestCaseVerifyError in the same try/except region that currently
catches SkippedTestException: when TestCaseVerifyError is raised, set
result.failure_reason to "Verification failed: {error}", append result to
self.test_suite_results and self.test_case_results, and call
testpmd.flow_delete(flow_id, port_id=port_id) to ensure flow rules are cleaned
up; place this handler alongside the existing SkippedTestException handler
(referencing TestCaseVerifyError, result, self.test_suite_results,
self.test_case_results, and testpmd.flow_delete) so verification failures don't
leave flow rules behind.
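An alternative to enumerating each exception type is a `try`/`finally` that guarantees the flow rule is deleted no matter what the verification step raises. A hedged stdlib-only sketch with stand-in names (not the DTS API):

```python
class VerifyError(Exception):
    """Stand-in for the framework's verification-failure exception."""

def run_with_cleanup(create, verify_fn, delete):
    """Create a flow, verify it, and delete it even when verification raises."""
    flow_id = create()
    try:
        verify_fn(flow_id)
    finally:
        delete(flow_id)  # runs on success and on any exception

deleted = []

def failing_verify(fid):
    raise VerifyError("payload mismatch")

try:
    run_with_cleanup(lambda: 7, failing_verify, deleted.append)
except VerifyError:
    pass  # the failure still propagates to the result tracker

assert deleted == [7]
```

With this shape, only the result bookkeeping needs per-exception handlers; cleanup no longer depends on catching every exception type individually.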
NOTE: This is an auto submission for "[v1] dts: refactor flow suite with generator pattern".
See "http://patchwork.dpdk.org/project/dpdk/list/?series=36976" for details.
Summary by Sourcery
Refactor the rte_flow test suite to use a data-driven generator pattern that systematically combines protocol layers and actions, and expand coverage to additional protocols and actions while centralizing result reporting.
New Features:
Enhancements:
Summary by CodeRabbit
Release Notes