[PWCI] "[v2] app/testpmd: fix flow queue job leaks"#614
Each enqueued async flow operation in testpmd has an associated
queue_job struct. It is passed in user data and used to determine
the type of operation when operation results are pulled on a given
queue. This information informs the necessary additional handling
(e.g., freeing flow struct or dumping the queried action state).
If async flow operations were enqueued and results were not pulled
before quitting testpmd, these queue_job structs were leaked as reported
by ASAN:
Direct leak of 88 byte(s) in 1 object(s) allocated from:
#0 0x7f7539084a37 in __interceptor_calloc
../../../../src/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x55a872c8e512 in port_queue_flow_create
(/download/dpdk/install/bin/dpdk-testpmd+0x4cd512)
#2 0x55a872c28414 in cmd_flow_cb
(/download/dpdk/install/bin/dpdk-testpmd+0x467414)
#3 0x55a8734fa6a3 in __cmdline_parse
(/download/dpdk/install/bin/dpdk-testpmd+0xd396a3)
#4 0x55a8734f6130 in cmdline_valid_buffer
(/download/dpdk/install/bin/dpdk-testpmd+0xd35130)
#5 0x55a873503b4f in rdline_char_in
(/download/dpdk/install/bin/dpdk-testpmd+0xd42b4f)
#6 0x55a8734f62ba in cmdline_in
(/download/dpdk/install/bin/dpdk-testpmd+0xd352ba)
#7 0x55a8734f65eb in cmdline_interact
(/download/dpdk/install/bin/dpdk-testpmd+0xd355eb)
#8 0x55a872c19b8e in prompt
(/download/dpdk/install/bin/dpdk-testpmd+0x458b8e)
#9 0x55a872be425a in main
(/download/dpdk/install/bin/dpdk-testpmd+0x42325a)
#10 0x7f7538756d8f in __libc_start_call_main
../sysdeps/nptl/libc_start_call_main.h:58
This patch addresses that by registering all queue_job structs, for a
given queue, on a linked list. Whenever operation results are pulled
and a result is handled, its queue_job struct is removed from that list
and freed.
Before a port is closed, during flow flush, testpmd will pull
all of the expected results
(based on the number of queue_job entries on the list).
Fixes: c9dc038 ("ethdev: add indirect action async query")
Fixes: 99231e4 ("ethdev: add template table resize")
Fixes: 77e7939 ("app/testpmd: add flow rule update command")
Fixes: 3e3edab ("ethdev: add flow quota")
Fixes: 966eb55 ("ethdev: add queue-based API to report aged flow rules")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: 0-day Robot <robot@bytheb.org>
Reviewer's Guide
Adds per-queue tracking and flushing of asynchronous flow operations in testpmd to fix leaks of queue jobs, centralizing job cleanup and ensuring job lists are freed when ports are flushed or removed.
📝 Walkthrough
This pull request introduces a per-port, per-queue job tracking system using linked lists to manage pending asynchronous flow API operations. Jobs are allocated and enqueued during flow create/update/destroy operations, then flushed via dedicated cleanup functions before the general port flush occurs.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Hey - I've left some high level feedback:
- In `port_flow_queue_job_flush()` and its callers, the logging format strings consistently use "queue %u on port %u" but the arguments are passed as `(port_id, queue_id)` rather than `(queue_id, port_id)`, which will produce misleading logs and should be corrected.
- When `port_flow_configure()` fails after allocating `port->job_list` (for example if `rte_flow_configure()` returns an error), the newly allocated `job_list` is leaked; consider freeing `port->job_list` on the error path or deferring allocation until after a successful configure.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
app/test-pmd/config.c (4)
1839-1851: Fix job_list lifetime on flow configure failure / reconfigure.
`port_flow_configure()` allocates `port->job_list` before `rte_flow_configure()`, but if `rte_flow_configure()` fails you return without freeing `port->job_list`. Also, if this API can be called more than once per port, you'll leak/overwrite an existing `job_list`.
Proposed fix

```diff
@@
 	port->queue_nb = nb_queue;
 	port->queue_sz = queue_attr->size;
@@
-	port->job_list = calloc(nb_queue, sizeof(*port->job_list));
+	/* Reconfigure safety: drop old tracking array if present (should be empty). */
+	if (port->job_list != NULL) {
+		free(port->job_list);
+		port->job_list = NULL;
+	}
+
+	port->job_list = calloc(nb_queue, sizeof(*port->job_list));
 	if (port->job_list == NULL) {
 		TESTPMD_LOG(ERR,
 			    "Failed to allocate memory for operations tracking on port %u\n",
 			    port_id);
 		return -ENOMEM;
 	}
@@
 	/* Poisoning to make sure PMDs update it in case of error. */
 	memset(&error, 0x66, sizeof(error));
-	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
-		return port_flow_complain(&error);
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error)) {
+		free(port->job_list);
+		port->job_list = NULL;
+		return port_flow_complain(&error);
+	}
```
3385-3412: Critical: potential OOB access + missing queue bounds check in port_queue_action_handle_query_update().
You assign `port = &ports[port_id];` before validating `port_id` (or even `pia`). If `port_id` is invalid / `RTE_PORT_ALL`, `pia` becomes NULL and you still index `ports[port_id]` before returning. Also, you index `port->job_list[queue_id]` without checking `queue_id < port->queue_nb`.
Proposed fix

```diff
@@ void port_queue_action_handle_query_update(portid_t port_id, uint32_t queue_id,
 			bool postpone, uint32_t id,
 			enum rte_flow_query_update_mode qu_mode,
 			const struct rte_flow_action *action)
 {
@@
 	struct rte_port *port;
 	struct queue_job *job;
-	port = &ports[port_id];
-
-	if (!pia || !pia->handle)
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return;
+
+	if (!pia || !pia->handle)
 		return;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return;
+	}
@@
 	if (ret) {
 		port_flow_complain(&error);
 		free(job);
 	} else {
 		LIST_INSERT_HEAD(&port->job_list[queue_id], job, chain);
 		printf("port-%u: indirect action #%u update-and-query queued\n",
 		       port_id, id);
 	}
 }
```
3616-3637: Possible hang: aged-flow destroy polling compares against nb_flows instead of current chunk size.
Inside `port_queue_aged_flow_destroy()`, you enqueue `n = min(queue_sz, nb_flows)` ops, but you poll with `while (success < nb_flows)`, which can block forever when `nb_flows > queue_sz`. Also: good that you now free `queue_job` for each result.
Proposed fix

```diff
@@
-	while (success < nb_flows) {
+	while (success < (int)n) {
 		struct queue_job *job;
@@
 		for (i = 0; i < ret; i++) {
@@
 			RTE_ASSERT(job != NULL);
 			port_free_queue_job(job);
 		}
 	}
```
3785-3794: Add a NULL guard/assert for res[i].user_data before dereferencing job->type.
`port_queue_flow_pull()` now always calls `port_free_queue_job(job)`, but it reads `job->type` first and doesn't assert `job != NULL` (unlike the aged-flow path). If a PMD ever returns NULL user_data, this will crash.
Proposed fix

```diff
@@
 		for (i = 0; i < ret; i++) {
@@
-			job = (struct queue_job *)res[i].user_data;
-			if (job->type == QUEUE_JOB_TYPE_ACTION_QUERY)
+			job = (struct queue_job *)res[i].user_data;
+			RTE_ASSERT(job != NULL);
+			if (job->type == QUEUE_JOB_TYPE_ACTION_QUERY)
 				port_action_handle_query_dump(port_id, job->pia,
 							      &job->query);
 			port_free_queue_job(job);
 		}
```
🧹 Nitpick comments (4)
app/test-pmd/testpmd.h (1)
375-375: Consider clarifying the comment.
The comment states "per queue" but the declaration `struct queue_job_list *job_list;` doesn't immediately make it clear this is a pointer to an array of lists (one per queue). Based on the context from the AI summary, this appears to be an array allocated with `nb_queue` entries.
📝 Suggested comment improvement

```diff
-	struct queue_job_list *job_list; /**< Pending async flow API operations, per queue. */
+	struct queue_job_list *job_list; /**< Array of pending async flow API operation lists, one per queue. */
```

app/test-pmd/testpmd.c (1)
3278-3283: Consider adding defensive validation for remaining jobs.
The function correctly frees the `job_list` array, relying on prior `port_flow_flush()` (line 3289) to have cleaned up all pending jobs. However, adding a defensive check for any remaining jobs would make the code more robust.
🛡️ Optional defensive validation
While the current design expects all jobs to be cleaned during flush, adding validation would catch potential issues:

```diff
 static void
 port_free_job_list(portid_t pi)
 {
 	struct rte_port *port = &ports[pi];
+
+	/* Defensive check: all jobs should have been cleaned up during flush */
+	if (port->job_list != NULL) {
+		for (queueid_t qi = 0; qi < port->queue_nb; qi++) {
+			if (!LIST_EMPTY(&port->job_list[qi])) {
+				TESTPMD_LOG(WARNING,
+					    "Port %u queue %u still has pending jobs during cleanup\n",
+					    pi, qi);
+			}
+		}
+	}
 	free(port->job_list);
 }
```

app/test-pmd/config.c (2)
72-73: Consider making FLOW_QUEUE_FLUSH_MAX_ITERS configurable (or at least justify “10”).Hard-coding 10 poll iterations may be too low on slower systems / heavy PMDs, and too high for tight shutdown loops. A short comment explaining the rationale (or plumbing a CLI/config) would help future tuning.
3566-3577: port_free_queue_job(): solid consolidation, but it assumes LIST membership unconditionally.
Centralizing the frees/removal is a nice improvement. Just be aware `LIST_REMOVE(job, chain)` is unsafe if a job is ever freed without being inserted (or is removed twice); callers should consistently assert/guard `job != NULL` and only call this for tracked jobs.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- app/test-pmd/config.c
- app/test-pmd/testpmd.c
- app/test-pmd/testpmd.h
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Sourcery review
🔇 Additional comments (5)
app/test-pmd/testpmd.h (2)
283-283: LGTM!
The addition of `LIST_ENTRY(queue_job) chain;` follows the standard BSD queue.h pattern used elsewhere in this file (e.g., `struct port_flow_tunnel` at line 295).
292-293: LGTM!
The `LIST_HEAD(queue_job_list, queue_job)` declaration correctly defines the list head type for managing pending queue jobs.
app/test-pmd/testpmd.c (1)
3295-3295: LGTM!
The call to `port_free_job_list(pi)` is correctly placed after `port_flow_flush(pi)` (line 3289), ensuring that all pending async flow operations are completed and jobs are cleaned up before freeing the list array.
app/test-pmd/config.c (2)
2951-2953: Job tracking insertions look consistent and close the original leak.
Enqueue paths now reliably `LIST_INSERT_HEAD()` the allocated `queue_job` into the per-queue list after successful async submission, which is what enables the later flush to clean up outstanding jobs.
Also applies to: 2989-2991, 3043-3052, 3177-3181, 3232-3235, 3294-3304, 3369-3372, 3450-3453
4098-4122: Good integration: flushing pending jobs before rte_flow_flush().
Calling `port_flow_queues_job_flush()` inside `port_flow_flush()` aligns with the PR goal (avoid leaks when exiting without pulling). Returning `ret` while still best-effort flushing flows is reasonable for testpmd.
Also applies to: 4138-4159
```c
static int
port_flow_queue_job_flush(portid_t port_id, queueid_t queue_id)
{
	struct rte_flow_op_result *res;
	struct rte_flow_error error;
	unsigned int expected_ops;
	struct rte_port *port;
	struct queue_job *job;
	unsigned int success;
	unsigned int polled;
	int iterations;
	int ret;

	port = &ports[port_id];

	printf("Flushing flow queue %u on port %u\n", port_id, queue_id);

	/* Poisoning to make sure PMDs update it in case of error. */
	memset(&error, 0x44, sizeof(error));
	if (rte_flow_push(port_id, queue_id, &error))
		return port_flow_complain(&error);

	/* Count expected operations. */
	expected_ops = 0;
	LIST_FOREACH(job, &port->job_list[queue_id], chain)
		expected_ops++;

	res = calloc(expected_ops, sizeof(*res));
	if (res == NULL)
		return -ENOMEM;

	polled = 0;
	success = 0;
	iterations = FLOW_QUEUE_FLUSH_MAX_ITERS;
	while (iterations > 0 && expected_ops > 0) {
		/* Poisoning to make sure PMDs update it in case of error. */
		memset(&error, 0x55, sizeof(error));
		ret = rte_flow_pull(port_id, queue_id, res, expected_ops, &error);
		if (ret < 0) {
			port_flow_complain(&error);
			free(res);
			return ret;
		}
		if (ret == 0) {
			/* Prevent infinite loop when driver does not return any completion. */
			iterations--;
			continue;
		}

		expected_ops -= ret;
		polled += ret;
		for (int i = 0; i < ret; i++) {
			if (res[i].status == RTE_FLOW_OP_SUCCESS)
				success++;

			job = (struct queue_job *)res[i].user_data;
			/*
			 * It is assumed that each enqueued async flow operation
			 * has a queue_job entry.
			 */
			RTE_ASSERT(job != NULL);
			port_free_queue_job(job);
		}
	}
	free(res);

	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
	       port_id, queue_id, polled - success, success);

	if (iterations == 0 && expected_ops > 0) {
		/*
		 * Driver was not able to return all completions for flow operations in time.
		 * Log the error and free the queue_job entries to prevent leak.
		 */
		TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
			    port_id, queue_id, expected_ops);

		while (!LIST_EMPTY(&port->job_list[queue_id])) {
			job = LIST_FIRST(&port->job_list[queue_id]);
			port_free_queue_job(job);
		}

		return 0;
	}

	/*
	 * It is assumed that each enqueued async flow operation
	 * has a queue_job entry, so if expected_ops reached zero,
	 * then the queue_job list should be empty.
	 */
	RTE_ASSERT(LIST_EMPTY(&port->job_list[queue_id]));

	return 0;
}
```
Fix swapped log message arguments in three places.
The `printf` and `TESTPMD_LOG` calls have reversed arguments: the format strings say "flow queue %u on port %u", but pass `port_id, queue_id`. They should pass `queue_id, port_id` instead.
Additionally, consider whether freeing jobs after 10 timeout iterations (when rte_flow_pull() returns 0 repeatedly) is safe. While this prevents leaks, verify that no subsequent operations can reference the freed user_data pointers, even if completions arrive later for those freed jobs.
Proposed fix for swapped printf args

```diff
@@
-	printf("Flushing flow queue %u on port %u\n", port_id, queue_id);
+	printf("Flushing flow queue %u on port %u\n", queue_id, port_id);
@@
-	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
-	       port_id, queue_id, polled - success, success);
+	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
+	       queue_id, port_id, polled - success, success);
@@
-	TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
-		    port_id, queue_id, expected_ops);
+	TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
+		    queue_id, port_id, expected_ops);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
static int
port_flow_queue_job_flush(portid_t port_id, queueid_t queue_id)
{
	struct rte_flow_op_result *res;
	struct rte_flow_error error;
	unsigned int expected_ops;
	struct rte_port *port;
	struct queue_job *job;
	unsigned int success;
	unsigned int polled;
	int iterations;
	int ret;

	port = &ports[port_id];

	printf("Flushing flow queue %u on port %u\n", queue_id, port_id);

	/* Poisoning to make sure PMDs update it in case of error. */
	memset(&error, 0x44, sizeof(error));
	if (rte_flow_push(port_id, queue_id, &error))
		return port_flow_complain(&error);

	/* Count expected operations. */
	expected_ops = 0;
	LIST_FOREACH(job, &port->job_list[queue_id], chain)
		expected_ops++;

	res = calloc(expected_ops, sizeof(*res));
	if (res == NULL)
		return -ENOMEM;

	polled = 0;
	success = 0;
	iterations = FLOW_QUEUE_FLUSH_MAX_ITERS;
	while (iterations > 0 && expected_ops > 0) {
		/* Poisoning to make sure PMDs update it in case of error. */
		memset(&error, 0x55, sizeof(error));
		ret = rte_flow_pull(port_id, queue_id, res, expected_ops, &error);
		if (ret < 0) {
			port_flow_complain(&error);
			free(res);
			return ret;
		}
		if (ret == 0) {
			/* Prevent infinite loop when driver does not return any completion. */
			iterations--;
			continue;
		}

		expected_ops -= ret;
		polled += ret;
		for (int i = 0; i < ret; i++) {
			if (res[i].status == RTE_FLOW_OP_SUCCESS)
				success++;

			job = (struct queue_job *)res[i].user_data;
			/*
			 * It is assumed that each enqueued async flow operation
			 * has a queue_job entry.
			 */
			RTE_ASSERT(job != NULL);
			port_free_queue_job(job);
		}
	}
	free(res);

	printf("Flushed flow queue %u on port %u (%u failed, %u succeeded).\n",
	       queue_id, port_id, polled - success, success);

	if (iterations == 0 && expected_ops > 0) {
		/*
		 * Driver was not able to return all completions for flow operations in time.
		 * Log the error and free the queue_job entries to prevent leak.
		 */
		TESTPMD_LOG(ERR, "Unable to fully flush flow queue %u on port %u (left ops %u)\n",
			    queue_id, port_id, expected_ops);

		while (!LIST_EMPTY(&port->job_list[queue_id])) {
			job = LIST_FIRST(&port->job_list[queue_id]);
			port_free_queue_job(job);
		}

		return 0;
	}

	/*
	 * It is assumed that each enqueued async flow operation
	 * has a queue_job entry, so if expected_ops reached zero,
	 * then the queue_job list should be empty.
	 */
	RTE_ASSERT(LIST_EMPTY(&port->job_list[queue_id]));

	return 0;
}
```
NOTE: This is an auto submission for "[v2] app/testpmd: fix flow queue job leaks".
See "http://patchwork.dpdk.org/project/dpdk/list/?series=37005" for details.
Summary by Sourcery
Track and flush asynchronous flow queue jobs in testpmd to prevent leaks when using the flow queue API.