
[PWCI] "[v2] app/testpmd: fix flow queue job leaks"#614

Open
ovsrobot wants to merge 1 commit into main from series_37005

Conversation


@ovsrobot ovsrobot commented Jan 9, 2026

NOTE: This is an auto submission for "[v2] app/testpmd: fix flow queue job leaks".

See "http://patchwork.dpdk.org/project/dpdk/list/?series=37005" for details.

Summary by Sourcery

Track and flush asynchronous flow queue jobs in testpmd to prevent leaks when using the flow queue API.

Bug Fixes:

  • Ensure all asynchronous flow and indirect action operations are tracked per queue and freed when their completions are pulled.
  • Flush and clean up pending flow queue jobs when flushing flows or when ports are removed to avoid memory leaks and stale state.

Enhancements:

  • Introduce per-queue job lists and shared helper routines to manage and free queue_job structures and their associated resources.
  • Add bounded, iterative flushing of flow queues with logging and assertions to detect drivers that do not return all completions.

Summary by CodeRabbit

  • Bug Fixes

    • Improved resource cleanup and lifecycle management for asynchronous flow operations to prevent potential resource leaks during port flushing operations.
  • Refactor

    • Enhanced internal infrastructure for tracking and managing pending asynchronous flow operations per queue, ensuring consistent resource cleanup across all operation types and paths.


Each enqueued async flow operation in testpmd has an associated
queue_job struct. It is passed as user data and used to determine
the type of operation when operation results are pulled on a given
queue. This information determines the additional handling needed
(e.g., freeing the flow struct or dumping the queried action state).

If async flow operations were enqueued and results were not pulled
before quitting testpmd, these queue_job structs were leaked as reported
by ASAN:

Direct leak of 88 byte(s) in 1 object(s) allocated from:
    #0 0x7f7539084a37 in __interceptor_calloc
        ../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
    #1 0x55a872c8e512 in port_queue_flow_create
        (/download/dpdk/install/bin/dpdk-testpmd+0x4cd512)
    #2 0x55a872c28414 in cmd_flow_cb
        (/download/dpdk/install/bin/dpdk-testpmd+0x467414)
    #3 0x55a8734fa6a3 in __cmdline_parse
        (/download/dpdk/install/bin/dpdk-testpmd+0xd396a3)
    #4 0x55a8734f6130 in cmdline_valid_buffer
        (/download/dpdk/install/bin/dpdk-testpmd+0xd35130)
    #5 0x55a873503b4f in rdline_char_in
        (/download/dpdk/install/bin/dpdk-testpmd+0xd42b4f)
    #6 0x55a8734f62ba in cmdline_in
        (/download/dpdk/install/bin/dpdk-testpmd+0xd352ba)
    #7 0x55a8734f65eb in cmdline_interact
        (/download/dpdk/install/bin/dpdk-testpmd+0xd355eb)
    #8 0x55a872c19b8e in prompt
        (/download/dpdk/install/bin/dpdk-testpmd+0x458b8e)
    #9 0x55a872be425a in main
        (/download/dpdk/install/bin/dpdk-testpmd+0x42325a)
    #10 0x7f7538756d8f in __libc_start_call_main
        ../sysdeps/nptl/libc_start_call_main.h:58

This patch addresses that by registering all queue_job structs for a
given queue on a linked list. Whenever operation results are pulled
and handled, the corresponding queue_job struct is removed from that
list and freed. Before a port is closed, during flow flush, testpmd
pulls all of the expected results (based on the number of queue_job
entries on the list).

Fixes: c9dc038 ("ethdev: add indirect action async query")
Fixes: 99231e4 ("ethdev: add template table resize")
Fixes: 77e7939 ("app/testpmd: add flow rule update command")
Fixes: 3e3edab ("ethdev: add flow quota")
Fixes: 966eb55 ("ethdev: add queue-based API to report aged flow rules")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: 0-day Robot <robot@bytheb.org>

sourcery-ai bot commented Jan 9, 2026

Reviewer's Guide

Adds per-queue tracking and flushing of asynchronous flow operations in testpmd to fix leaks of queue jobs, centralizing job cleanup and ensuring job lists are freed when ports are flushed or removed.

File-Level Changes

Change Details Files
Introduce per-queue job lists on ports to track pending async flow operations and initialize them when configuring flow queues.
  • Add FLOW_QUEUE_FLUSH_MAX_ITERS constant for bounded queue flush attempts
  • Extend struct rte_port with a job_list array of queue_job_list heads, representing pending operations per queue
  • Allocate and initialize port->job_list when configuring flow queues, with one list per queue
app/test-pmd/config.c
app/test-pmd/testpmd.h
Ensure all enqueued queue jobs are linked into the per-queue lists and are properly freed, with shared cleanup logic.
  • Add LIST_ENTRY to struct queue_job and define queue_job_list list head type
  • On every enqueue of async flow or indirect action operation, insert the queue_job into the corresponding per-queue list
  • Factor common job cleanup into port_free_queue_job, handling pf/pia ownership, list removal, and job free
  • Use port_free_queue_job in port_queue_flow_pull and aged-flow destroy paths instead of duplicating free logic and ensure user_data is asserted non-NULL
app/test-pmd/testpmd.h
app/test-pmd/config.c
Add flushing logic for async flow queues that pushes pending ops, polls completions up to a bounded number of iterations, and drains/frees any remaining jobs to avoid leaks.
  • Implement port_flow_queue_job_flush to push the queue, count expected operations from job_list, repeatedly pull completions, free jobs, and log if completions are missing after FLOW_QUEUE_FLUSH_MAX_ITERS
  • Implement port_flow_queues_job_flush to iterate over all queues of a port and flush only queues with non-empty job lists
  • Invoke port_flow_queues_job_flush at the start of port_flow_flush and log failures without aborting subsequent flush logic
app/test-pmd/config.c
Free per-port job_list storage when cleaning up ports to avoid memory leaks.
  • Introduce port_free_job_list helper that frees port->job_list
  • Call port_free_job_list from flush_port_owned_resources so job_list memory is released on port teardown
app/test-pmd/testpmd.c
app/test-pmd/testpmd.h



coderabbitai bot commented Jan 9, 2026

📝 Walkthrough

Walkthrough

This pull request introduces a per-port, per-queue job tracking system using linked lists to manage pending asynchronous flow API operations. Jobs are allocated and enqueued during flow create/update/destroy operations, then flushed via dedicated cleanup functions before the general port flush occurs.

Changes

Cohort / File(s) Summary
Data Structure Integration
app/test-pmd/testpmd.h
Added LIST_ENTRY(queue_job) chain field to queue_job struct for list integration; introduced queue_job_list LIST_HEAD type; added job_list pointer to rte_port struct for per-queue job tracking
Queue Job Management
app/test-pmd/config.c
Introduced FLOW_QUEUE_FLUSH_MAX_ITERS macro; enhanced port_flow_configure to allocate and initialize per-port job_list array; added port_free_queue_job function for proper job removal and resource cleanup; implemented port_flow_queue_job_flush and port_flow_queues_job_flush for flushing pending jobs; updated port_flow_flush to flush queue jobs before flow flush; instrumented multiple flow operations (create, update, destroy, indirect actions) to enqueue jobs into job_list
Port Resource Cleanup
app/test-pmd/testpmd.c
Added port_free_job_list helper function; integrated job_list deallocation into flush_port_owned_resources

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 59.09% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly summarizes the main change: fixing flow queue job leaks in testpmd by implementing per-queue tracking and proper cleanup mechanisms.



@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • In port_flow_queue_job_flush() and its callers, the logging format strings consistently use "queue %u on port %u" but the arguments are passed as (port_id, queue_id) rather than (queue_id, port_id), which will produce misleading logs and should be corrected.
  • When port_flow_configure() fails after allocating port->job_list (for example if rte_flow_configure() returns an error), the newly allocated job_list is leaked; consider freeing port->job_list on the error path or deferring allocation until after a successful configure.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `port_flow_queue_job_flush()` and its callers, the logging format strings consistently use "queue %u on port %u" but the arguments are passed as `(port_id, queue_id)` rather than `(queue_id, port_id)`, which will produce misleading logs and should be corrected.
- When `port_flow_configure()` fails after allocating `port->job_list` (for example if `rte_flow_configure()` returns an error), the newly allocated `job_list` is leaked; consider freeing `port->job_list` on the error path or deferring allocation until after a successful configure.

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
app/test-pmd/config.c (4)

1839-1851: Fix job_list lifetime on flow configure failure / reconfigure.

port_flow_configure() allocates port->job_list before rte_flow_configure(), but if rte_flow_configure() fails you return without freeing port->job_list. Also, if this API can be called more than once per port, you’ll leak/overwrite an existing job_list.

Proposed fix
@@
 	port->queue_nb = nb_queue;
 	port->queue_sz = queue_attr->size;
@@
-	port->job_list = calloc(nb_queue, sizeof(*port->job_list));
+	/* Reconfigure safety: drop old tracking array if present (should be empty). */
+	if (port->job_list != NULL) {
+		free(port->job_list);
+		port->job_list = NULL;
+	}
+
+	port->job_list = calloc(nb_queue, sizeof(*port->job_list));
 	if (port->job_list == NULL) {
 		TESTPMD_LOG(ERR, "Failed to allocate memory for operations tracking on port %u\n",
 			    port_id);
 		return -ENOMEM;
 	}
@@
 	/* Poisoning to make sure PMDs update it in case of error. */
 	memset(&error, 0x66, sizeof(error));
-	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
-		return port_flow_complain(&error);
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error)) {
+		free(port->job_list);
+		port->job_list = NULL;
+		return port_flow_complain(&error);
+	}

3385-3412: Critical: potential OOB access + missing queue bounds check in port_queue_action_handle_query_update().

You assign port = &ports[port_id]; before validating port_id (or even pia). If port_id is invalid / RTE_PORT_ALL, pia becomes NULL and you still index ports[port_id] before returning. Also, you index port->job_list[queue_id] without checking queue_id < port->queue_nb.

Proposed fix
@@
 void
 port_queue_action_handle_query_update(portid_t port_id,
 				      uint32_t queue_id, bool postpone,
 				      uint32_t id,
 				      enum rte_flow_query_update_mode qu_mode,
 				      const struct rte_flow_action *action)
 {
@@
-	struct rte_port *port;
+	struct rte_port *port;
 	struct queue_job *job;
 
-	port = &ports[port_id];
-
-	if (!pia || !pia->handle)
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return;
+
+	if (!pia || !pia->handle)
 		return;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return;
+	}
@@
 	if (ret) {
 		port_flow_complain(&error);
 		free(job);
 	} else {
 		LIST_INSERT_HEAD(&port->job_list[queue_id], job, chain);
 		printf("port-%u: indirect action #%u update-and-query queued\n",
 		       port_id, id);
 	}
 }

3616-3637: Possible hang: aged-flow destroy polling compares against nb_flows instead of current chunk size.

Inside port_queue_aged_flow_destroy(), you enqueue n = min(queue_sz, nb_flows) ops, but you poll with while (success < nb_flows), which can block forever when nb_flows > queue_sz. Also: good that you now free queue_job for each result.

Proposed fix
@@
-		while (success < nb_flows) {
+		while (success < (int)n) {
 			struct queue_job *job;
@@
 			for (i = 0; i < ret; i++) {
@@
 				RTE_ASSERT(job != NULL);
 				port_free_queue_job(job);
 			}
 		}

3785-3794: Add a NULL guard/assert for res[i].user_data before dereferencing job->type.

port_queue_flow_pull() now always calls port_free_queue_job(job), but it reads job->type first and doesn’t assert job != NULL (unlike the aged-flow path). If a PMD ever returns NULL user_data, this will crash.

Proposed fix
@@
 	for (i = 0; i < ret; i++) {
@@
-		job = (struct queue_job *)res[i].user_data;
-		if (job->type == QUEUE_JOB_TYPE_ACTION_QUERY)
+		job = (struct queue_job *)res[i].user_data;
+		RTE_ASSERT(job != NULL);
+		if (job->type == QUEUE_JOB_TYPE_ACTION_QUERY)
 			port_action_handle_query_dump(port_id, job->pia,
 						      &job->query);
 		port_free_queue_job(job);
 	}
🧹 Nitpick comments (4)
app/test-pmd/testpmd.h (1)

375-375: Consider clarifying the comment.

The comment states "per queue" but the declaration struct queue_job_list *job_list; doesn't immediately make it clear this is a pointer to an array of lists (one per queue). Based on the context from the AI summary, this appears to be an array allocated with nb_queue entries.

📝 Suggested comment improvement
-	struct queue_job_list *job_list; /**< Pending async flow API operations, per queue. */
+	struct queue_job_list *job_list; /**< Array of pending async flow API operation lists, one per queue. */
app/test-pmd/testpmd.c (1)

3278-3283: Consider adding defensive validation for remaining jobs.

The function correctly frees the job_list array, relying on prior port_flow_flush (line 3289) to have cleaned up all pending jobs. However, adding a defensive check for any remaining jobs would make the code more robust.

🛡️ Optional defensive validation

While the current design expects all jobs to be cleaned during flush, adding validation would catch potential issues:

 static void
 port_free_job_list(portid_t pi)
 {
 	struct rte_port *port = &ports[pi];
+	
+	/* Defensive check: all jobs should have been cleaned up during flush */
+	if (port->job_list != NULL) {
+		for (queueid_t qi = 0; qi < port->queue_nb; qi++) {
+			if (!LIST_EMPTY(&port->job_list[qi])) {
+				TESTPMD_LOG(WARNING,
+					"Port %u queue %u still has pending jobs during cleanup\n",
+					pi, qi);
+			}
+		}
+	}
 	free(port->job_list);
 }
app/test-pmd/config.c (2)

72-73: Consider making FLOW_QUEUE_FLUSH_MAX_ITERS configurable (or at least justify “10”).

Hard-coding 10 poll iterations may be too low on slower systems / heavy PMDs, and too high for tight shutdown loops. A short comment explaining the rationale (or plumbing a CLI/config) would help future tuning.


3566-3577: port_free_queue_job(): solid consolidation, but it assumes LIST membership unconditionally.

Centralizing the frees/removal is a nice improvement. Just be aware LIST_REMOVE(job, chain) is unsafe if a job is ever freed without being inserted (or is removed twice); callers should consistently assert/guard job != NULL and only call this for tracked jobs.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cd60dcd and 5a8fe1a.

📒 Files selected for processing (3)
  • app/test-pmd/config.c
  • app/test-pmd/testpmd.c
  • app/test-pmd/testpmd.h
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Sourcery review
🔇 Additional comments (5)
app/test-pmd/testpmd.h (2)

283-283: LGTM!

The addition of LIST_ENTRY(queue_job) chain; follows the standard BSD queue.h pattern used elsewhere in this file (e.g., struct port_flow_tunnel at line 295).


292-293: LGTM!

The LIST_HEAD(queue_job_list, queue_job) declaration correctly defines the list head type for managing pending queue jobs.

app/test-pmd/testpmd.c (1)

3295-3295: LGTM!

The call to port_free_job_list(pi) is correctly placed after port_flow_flush(pi) (line 3289), ensuring that all pending async flow operations are completed and jobs are cleaned up before freeing the list array.

app/test-pmd/config.c (2)

2951-2953: Job tracking insertions look consistent and close the original leak.

Enqueue paths now reliably LIST_INSERT_HEAD() the allocated queue_job into the per-queue list after successful async submission, which is what enables the later flush to clean up outstanding jobs.

Also applies to: 2989-2991, 3043-3052, 3177-3181, 3232-3235, 3294-3304, 3369-3372, 3450-3453


4098-4122: Good integration: flushing pending jobs before rte_flow_flush().

Calling port_flow_queues_job_flush() inside port_flow_flush() aligns with the PR goal (avoid leaks when exiting without pulling). Returning ret while still best-effort flushing flows is reasonable for testpmd.

Also applies to: 4138-4159

Comment on lines +4002 to +4096
static int
port_flow_queue_job_flush(portid_t port_id, queueid_t queue_id)
{
	struct rte_flow_op_result *res;
	struct rte_flow_error error;
	unsigned int expected_ops;
	struct rte_port *port;
	struct queue_job *job;
	unsigned int success;
	unsigned int polled;
	int iterations;
	int ret;

	port = &ports[port_id];

	printf("Flushing flow queue %u on port %u\n", port_id, queue_id);

	/* Poisoning to make sure PMDs update it in case of error. */
	memset(&error, 0x44, sizeof(error));
	if (rte_flow_push(port_id, queue_id, &error))
		return port_flow_complain(&error);

	/* Count expected operations. */
	expected_ops = 0;
	LIST_FOREACH(job, &port->job_list[queue_id], chain)
		expected_ops++;

	res = calloc(expected_ops, sizeof(*res));
	if (res == NULL)
		return -ENOMEM;

	polled = 0;
	success = 0;
	iterations = FLOW_QUEUE_FLUSH_MAX_ITERS;
	while (iterations > 0 && expected_ops > 0) {
		/* Poisoning to make sure PMDs update it in case of error. */
		memset(&error, 0x55, sizeof(error));
		ret = rte_flow_pull(port_id, queue_id, res, expected_ops, &error);
		if (ret < 0) {
			port_flow_complain(&error);
			free(res);
			return ret;
		}
		if (ret == 0) {
			/* Prevent infinite loop when driver does not return any completion. */
			iterations--;
			continue;
		}

		expected_ops -= ret;
		polled += ret;
		for (int i = 0; i < ret; i++) {
			if (res[i].status == RTE_FLOW_OP_SUCCESS)
				success++;

			job = (struct queue_job *)res[i].user_data;
			/*
			 * It is assumed that each enqueued async flow operation
			 * has a queue_job entry.
			 */
			RTE_ASSERT(job != NULL);
			port_free_queue_job(job);
		}
	}
	free(res);

	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
	       port_id, queue_id, polled - success, success);

	if (iterations == 0 && expected_ops > 0) {
		/*
		 * Driver was not able to return all completions for flow operations in time.
		 * Log the error and free the queue_job entries to prevent leak.
		 */
		TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
			    port_id, queue_id, expected_ops);

		while (!LIST_EMPTY(&port->job_list[queue_id])) {
			job = LIST_FIRST(&port->job_list[queue_id]);
			port_free_queue_job(job);
		}

		return 0;
	}

	/*
	 * It is assumed that each enqueued async flow operation
	 * has a queue_job entry, so if expected_ops reached zero,
	 * then the queue_job list should be empty.
	 */
	RTE_ASSERT(LIST_EMPTY(&port->job_list[queue_id]));

	return 0;
}

⚠️ Potential issue | 🟡 Minor


Fix swapped log message arguments in three places.

The printf and TESTPMD_LOG calls have reversed arguments: the format strings say "flow queue %u on port %u", but pass port_id, queue_id. They should pass queue_id, port_id instead.

Additionally, consider whether freeing jobs after 10 timeout iterations (when rte_flow_pull() returns 0 repeatedly) is safe. While this prevents leaks, verify that no subsequent operations can reference the freed user_data pointers, even if completions arrive later for those freed jobs.

Proposed fix for swapped printf args
@@
-	printf("Flushing flow queue %u on port %u\n", port_id, queue_id);
+	printf("Flushing flow queue %u on port %u\n", queue_id, port_id);
@@
-	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
-	       port_id, queue_id, polled - success, success);
+	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
+	       queue_id, port_id, polled - success, success);
@@
-		TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
-			    port_id, queue_id, expected_ops);
+		TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
+			    queue_id, port_id, expected_ops);
📝 Committable suggestion


Suggested change

static int
port_flow_queue_job_flush(portid_t port_id, queueid_t queue_id)
{
	struct rte_flow_op_result *res;
	struct rte_flow_error error;
	unsigned int expected_ops;
	struct rte_port *port;
	struct queue_job *job;
	unsigned int success;
	unsigned int polled;
	int iterations;
	int ret;

	port = &ports[port_id];

	printf("Flushing flow queue %u on port %u\n", queue_id, port_id);

	/* Poisoning to make sure PMDs update it in case of error. */
	memset(&error, 0x44, sizeof(error));
	if (rte_flow_push(port_id, queue_id, &error))
		return port_flow_complain(&error);

	/* Count expected operations. */
	expected_ops = 0;
	LIST_FOREACH(job, &port->job_list[queue_id], chain)
		expected_ops++;

	res = calloc(expected_ops, sizeof(*res));
	if (res == NULL)
		return -ENOMEM;

	polled = 0;
	success = 0;
	iterations = FLOW_QUEUE_FLUSH_MAX_ITERS;
	while (iterations > 0 && expected_ops > 0) {
		/* Poisoning to make sure PMDs update it in case of error. */
		memset(&error, 0x55, sizeof(error));
		ret = rte_flow_pull(port_id, queue_id, res, expected_ops, &error);
		if (ret < 0) {
			port_flow_complain(&error);
			free(res);
			return ret;
		}
		if (ret == 0) {
			/* Prevent infinite loop when driver does not return any completion. */
			iterations--;
			continue;
		}

		expected_ops -= ret;
		polled += ret;
		for (int i = 0; i < ret; i++) {
			if (res[i].status == RTE_FLOW_OP_SUCCESS)
				success++;

			job = (struct queue_job *)res[i].user_data;
			/*
			 * It is assumed that each enqueued async flow operation
			 * has a queue_job entry.
			 */
			RTE_ASSERT(job != NULL);
			port_free_queue_job(job);
		}
	}
	free(res);

	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
	       queue_id, port_id, polled - success, success);

	if (iterations == 0 && expected_ops > 0) {
		/*
		 * Driver was not able to return all completions for flow operations in time.
		 * Log the error and free the queue_job entries to prevent leak.
		 */
		TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
			    queue_id, port_id, expected_ops);

		while (!LIST_EMPTY(&port->job_list[queue_id])) {
			job = LIST_FIRST(&port->job_list[queue_id]);
			port_free_queue_job(job);
		}

		return 0;
	}

	/*
	 * It is assumed that each enqueued async flow operation
	 * has a queue_job entry, so if expected_ops reached zero,
	 * then the queue_job list should be empty.
	 */
	RTE_ASSERT(LIST_EMPTY(&port->job_list[queue_id]));

	return 0;
}
