From: Wenjing Qiao <wenjing.qiao@intel.com>
To: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com
Cc: dev@dpdk.org, Wenjing Qiao <wenjing.qiao@intel.com>,
	stable@dpdk.org, Christopher Pau <christopher.pau@intel.com>
Subject: [PATCH v2 10/15] common/idpf: fix memory leaks in ctrlq functions
Date: Fri, 21 Apr 2023 04:40:38 -0400
Message-ID: <20230421084043.135503-11-wenjing.qiao@intel.com>
In-Reply-To: <20230421084043.135503-1-wenjing.qiao@intel.com>

idpf_init_hw needs to free its q_info once the control queues have
been created.
idpf_clean_arq_element needs to return buffers via
idpf_ctlq_post_rx_buffs, and must not touch the event descriptor when
idpf_ctlq_recv reports that no message is available.
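
For context, a minimal sketch of the first leak, assuming q_info is
allocated with idpf_calloc only to describe the default control queues
to idpf_ctlq_init (abridged; the allocation shape is inferred from the
idpf_free added below, not copied verbatim from the function):

    q_info = (struct idpf_ctlq_create_info *)
             idpf_calloc(hw, 2, sizeof(struct idpf_ctlq_create_info));
    if (q_info == NULL)
        return -ENOMEM;

    /* ... fill q_info and create the queues ... */
    status = idpf_ctlq_init(hw, 2, q_info);
    if (status != 0) {
        idpf_free(hw, q_info);
        return status;
    }

    /* q_info is no longer referenced once the queues exist; without
     * the idpf_free below it leaks on every successful init.
     */
    idpf_free(hw, q_info);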

Fixes: fb4ac04e9bfa ("common/idpf: introduce common library")
Cc: stable@dpdk.org

Signed-off-by: Christopher Pau <christopher.pau@intel.com>
Signed-off-by: Wenjing Qiao <wenjing.qiao@intel.com>
---
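Notes:
    As a usage illustration (not part of the patch): a hypothetical
    caller of idpf_clean_arq_element after this fix. Here buf,
    handle_event and the IDPF_DFLT_MBX_BUF_SIZE constant are
    assumptions, and the event field names follow the arq_event_info
    layout used elsewhere in the base code:

        struct idpf_arq_event_info event;
        u16 pending;
        int ret;

        event.buf_len = IDPF_DFLT_MBX_BUF_SIZE; /* assumed constant */
        event.msg_buf = buf;                    /* caller-owned buffer */

        ret = idpf_clean_arq_element(hw, &event, &pending);
        if (ret == -ENOMSG)
            return; /* ring empty: nothing received, nothing to repost */
        if (ret == 0) {
            /* The indirect DMA buffer the message arrived in has
             * already been handed back to the ARQ ring through
             * idpf_ctlq_post_rx_buffs; before this fix the ring
             * drained by one buffer per indirect message.
             */
            handle_event(&event); /* hypothetical dispatch */
        }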
 drivers/common/idpf/base/idpf_common.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 69e3b32f85..de82c3458f 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -130,6 +130,8 @@ int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
 	hw->mac.addr[4] = 0x03;
 	hw->mac.addr[5] = 0x14;
 
+	idpf_free(hw, q_info);
+
 	return 0;
 }
 
@@ -219,6 +221,7 @@ bool idpf_check_asq_alive(struct idpf_hw *hw)
 int idpf_clean_arq_element(struct idpf_hw *hw,
 			   struct idpf_arq_event_info *e, u16 *pending)
 {
+	struct idpf_dma_mem *dma_mem = NULL;
 	struct idpf_ctlq_msg msg = { 0 };
 	int status;
 	u16 msg_data_len;
@@ -226,6 +229,8 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
 	*pending = 1;
 
 	status = idpf_ctlq_recv(hw->arq, pending, &msg);
+	if (status == -ENOMSG)
+		goto exit;
 
 	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
 	e->desc.opcode = msg.opcode;
@@ -240,7 +245,14 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
 		msg_data_len = msg.data_len;
 		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
 			    IDPF_DMA_TO_NONDMA);
+		dma_mem = msg.ctx.indirect.payload;
+	} else {
+		*pending = 0;
 	}
+
+	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
+
+exit:
 	return status;
 }
 
-- 
2.25.1

