From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: <getelson@nvidia.com>, <matan@nvidia.com>, <rasland@nvidia.com>,
<thomas@monjalon.net>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
"Shahaf Shuler" <shahafs@nvidia.com>,
Michael Baum <michaelba@nvidia.com>
Subject: [dpdk-dev] [PATCH] net/mlx5: fix DevX resources memory management.
Date: Tue, 24 Nov 2020 10:10:13 +0200 [thread overview]
Message-ID: <20201124081013.1912-1-getelson@nvidia.com> (raw)
An invalid release order of DevX resources caused a PMD crash:
1. SQ and CQ memory must be unregistered from DevX before it is freed.
2. An SQ object references a CQ object. Hence, the SQ must be destroyed
before the CQ it references.
Fixes: 6deb19e1b2d2 ("net/mlx5: separate Rx queue object creations")
Fixes: 88f2e3f18cc7 ("net/mlx5: rearrange SQ and CQ creation in DevX module")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_devx.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 73ee147246..de9b204075 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -154,14 +154,14 @@ mlx5_rxq_release_devx_rq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
{
struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->rq_dbrec_page;
- if (rxq_ctrl->rxq.wqes) {
- mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
- rxq_ctrl->rxq.wqes = NULL;
- }
if (rxq_ctrl->wq_umem) {
mlx5_glue->devx_umem_dereg(rxq_ctrl->wq_umem);
rxq_ctrl->wq_umem = NULL;
}
+ if (rxq_ctrl->rxq.wqes) {
+ mlx5_free((void *)(uintptr_t)rxq_ctrl->rxq.wqes);
+ rxq_ctrl->rxq.wqes = NULL;
+ }
if (dbr_page) {
claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
mlx5_os_get_umem_id(dbr_page->umem),
@@ -181,14 +181,14 @@ mlx5_rxq_release_devx_cq_resources(struct mlx5_rxq_ctrl *rxq_ctrl)
{
struct mlx5_devx_dbr_page *dbr_page = rxq_ctrl->cq_dbrec_page;
- if (rxq_ctrl->rxq.cqes) {
- rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
- rxq_ctrl->rxq.cqes = NULL;
- }
if (rxq_ctrl->cq_umem) {
mlx5_glue->devx_umem_dereg(rxq_ctrl->cq_umem);
rxq_ctrl->cq_umem = NULL;
}
+ if (rxq_ctrl->rxq.cqes) {
+ rte_free((void *)(uintptr_t)rxq_ctrl->rxq.cqes);
+ rxq_ctrl->rxq.cqes = NULL;
+ }
if (dbr_page) {
claim_zero(mlx5_release_dbr(&rxq_ctrl->priv->dbrpgs,
mlx5_os_get_umem_id(dbr_page->umem),
@@ -1174,8 +1174,8 @@ mlx5_txq_release_devx_cq_resources(struct mlx5_txq_obj *txq_obj)
static void
mlx5_txq_release_devx_resources(struct mlx5_txq_obj *txq_obj)
{
- mlx5_txq_release_devx_cq_resources(txq_obj);
mlx5_txq_release_devx_sq_resources(txq_obj);
+ mlx5_txq_release_devx_cq_resources(txq_obj);
}
/**
--
2.29.2
2020-11-24 8:10 Gregory Etelson [this message]
2020-11-24 10:15 ` Matan Azrad
2020-11-24 22:18 ` Thomas Monjalon