From: Raslan Darawsheh <rasland@nvidia.com>
To: Maayan Kashani <mkashani@nvidia.com>, dev@dpdk.org
Cc: stable@dpdk.org, Bing Zhao <bingz@nvidia.com>,
Dariusz Sosnowski <dsosnowski@nvidia.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Subject: Re: [PATCH] net/mlx5: fix segmentation fault in flow destruction path
Date: Tue, 18 Nov 2025 13:31:43 +0200 [thread overview]
Message-ID: <0c792f20-a091-478e-9ad7-0a004948c473@nvidia.com> (raw)
In-Reply-To: <20251117071536.205328-1-mkashani@nvidia.com>
Hi,
On 17/11/2025 9:15 AM, Maayan Kashani wrote:
> The mlx5_ipool_free() function was called with a NULL pool pointer
> during HW flow destruction, causing a segmentation fault. This occurred
> when flow creation failed and the cleanup path attempted to free
> resources from an uninitialized flow pool.
>
> The crash happened in the following scenario:
> 1. During device start, a default NTA copy action flow is created
> 2. If the flow creation fails, mlx5_flow_hw_list_destroy() is called
> 3. In hw_cmpl_flow_update_or_destroy(), table->flow pool could be NULL
> 4. mlx5_ipool_free(table->flow, flow->idx) was called without checking
> if table->flow is NULL
> 5. Inside mlx5_ipool_free(), accessing pool->cfg.per_core_cache caused
> a segmentation fault due to NULL pointer dereference
>
> The fix adds two layers of protection:
> 1. Add a NULL check for table->flow before calling mlx5_ipool_free() in
> hw_cmpl_flow_update_or_destroy(), consistent with the existing check
> for table->resource on the previous line.
> 2. Add a NULL check for the pool parameter in mlx5_ipool_free() as a
> defensive measure to prevent similar crashes in other code paths.
>
> The fix also renames the 'flow' field in rte_flow_template_table
> to 'flow_pool' for better code readability.
>
> Stack trace of the fault:
> mlx5_ipool_free (pool=0x0) at mlx5_utils.c:753
> hw_cmpl_flow_update_or_destroy at mlx5_flow_hw.c:4481
> mlx5_flow_hw_destroy at mlx5_flow_hw.c:14219
> mlx5_flow_hw_list_destroy at mlx5_flow_hw.c:14279
> flow_hw_list_create at mlx5_flow_hw.c:14415
> mlx5_flow_start_default at mlx5_flow.c:8263
> mlx5_dev_start at mlx5_trigger.c:1420
>
> Fixes: 27d171b88031 ("net/mlx5: abstract flow action and enable reconfigure")
> Cc: stable@dpdk.org
>
> Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
> Acked-by: Bing Zhao <bingz@nvidia.com>
Patch applied to next-net-mlx.

Kindest regards,
Raslan Darawsheh
Thread overview: 2+ messages
2025-11-17 7:15 Maayan Kashani
2025-11-18 11:31 ` Raslan Darawsheh [this message]