DPDK patches and discussions
From: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
To: dev@dpdk.org, Ferruh Yigit <ferruh.yigit@intel.com>
Cc: Shahaf Shuler <shahafs@mellanox.com>
Subject: [dpdk-dev] [PATCH v2 1/8] net/mlx5: avoid reusing old queue's mbuf on reconfigure
Date: Wed, 23 Aug 2017 10:15:05 +0200	[thread overview]
Message-ID: <06164fc7b495f9bf8f6f58199f23bc5d8927d363.1503475999.git.nelio.laranjeiro@6wind.com> (raw)
In-Reply-To: <cover.1503475999.git.nelio.laranjeiro@6wind.com>

Stop reusing mbufs from the old queue when an Rx queue is reconfigured and
allocate all of them anew instead. This prepares the merge of the fake mbuf
allocation needed by the vector code into rxq_alloc_elts(), where all mbufs
of the queue should be allocated.
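
For reference, after this change the per-element loop of rxq_alloc_elts()
boils down to the following (a simplified sketch paraphrasing the diff
below; the sketch name is made up, and error unrolling plus the WQE scatter
setup are omitted):

    static int
    rxq_alloc_elts_sketch(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n)
    {
            unsigned int i;

            for (i = 0; i != elts_n; ++i) {
                    /* Always take a fresh mbuf from the queue's mempool.
                     * rte_pktmbuf_alloc() returns a reset mbuf with
                     * refcnt == 1, so the rte_pktmbuf_reset() /
                     * rte_pktmbuf_refcnt_update() fix-up that the old
                     * reuse path required is no longer needed. */
                    struct rte_mbuf *buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);

                    if (buf == NULL)
                            return ENOMEM;
                    (*rxq_ctrl->rxq.elts)[i] = buf;
            }
            return 0;
    }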

Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
Acked-by: Yongseok Koh <yskoh@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 23 +++--------------------
 1 file changed, 3 insertions(+), 20 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 74387a7..550e648 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -666,16 +666,12 @@ rxq_trim_elts(struct rxq *rxq)
  *   Pointer to RX queue structure.
  * @param elts_n
  *   Number of elements to allocate.
- * @param[in] pool
- *   If not NULL, fetch buffers from this array instead of allocating them
- *   with rte_pktmbuf_alloc().
  *
  * @return
  *   0 on success, errno value on failure.
  */
 static int
-rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
-	       struct rte_mbuf *(*pool)[])
+rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n)
 {
 	const unsigned int sges_n = 1 << rxq_ctrl->rxq.sges_n;
 	unsigned int i;
@@ -687,12 +683,7 @@ rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
 		volatile struct mlx5_wqe_data_seg *scat =
 			&(*rxq_ctrl->rxq.wqes)[i];
 
-		buf = (pool != NULL) ? (*pool)[i] : NULL;
-		if (buf != NULL) {
-			rte_pktmbuf_reset(buf);
-			rte_pktmbuf_refcnt_update(buf, 1);
-		} else
-			buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);
+		buf = rte_pktmbuf_alloc(rxq_ctrl->rxq.mp);
 		if (buf == NULL) {
 			ERROR("%p: empty mbuf pool", (void *)rxq_ctrl);
 			ret = ENOMEM;
@@ -725,7 +716,6 @@ rxq_alloc_elts(struct rxq_ctrl *rxq_ctrl, unsigned int elts_n,
 	assert(ret == 0);
 	return 0;
 error:
-	assert(pool == NULL);
 	elts_n = i;
 	for (i = 0; (i != elts_n); ++i) {
 		if ((*rxq_ctrl->rxq.elts)[i] != NULL)
@@ -1064,14 +1054,7 @@ rxq_ctrl_setup(struct rte_eth_dev *dev, struct rxq_ctrl *rxq_ctrl,
 		      (void *)dev, strerror(ret));
 		goto error;
 	}
-	/* Reuse buffers from original queue if possible. */
-	if (rxq_ctrl->rxq.elts_n) {
-		assert(1 << rxq_ctrl->rxq.elts_n == desc);
-		assert(rxq_ctrl->rxq.elts != tmpl.rxq.elts);
-		rxq_trim_elts(&rxq_ctrl->rxq);
-		ret = rxq_alloc_elts(&tmpl, desc, rxq_ctrl->rxq.elts);
-	} else
-		ret = rxq_alloc_elts(&tmpl, desc, NULL);
+	ret = rxq_alloc_elts(&tmpl, desc);
 	if (ret) {
 		ERROR("%p: RXQ allocation failed: %s",
 		      (void *)dev, strerror(ret));
-- 
2.1.4

Thread overview: 25+ messages
2017-08-01 12:09 [dpdk-dev] [PATCH 0/5] net/mlx5: cleanups Nelio Laranjeiro
2017-08-01 12:09 ` [dpdk-dev] [PATCH 1/5] net/mlx5: remove flow drop useless if branches Nelio Laranjeiro
2017-08-01 12:09 ` [dpdk-dev] [PATCH 2/5] net/mlx5: remove pedantic pragma Nelio Laranjeiro
2017-08-17 14:38   ` Ferruh Yigit
2017-08-22  9:10     ` Nélio Laranjeiro
2017-08-01 12:09 ` [dpdk-dev] [PATCH 3/5] net/mlx5: fix non working secondary process by removing it Nelio Laranjeiro
2017-08-17 14:38   ` Ferruh Yigit
2017-08-22  9:08     ` Nélio Laranjeiro
2017-08-01 12:09 ` [dpdk-dev] [PATCH 4/5] net/mlx5: remove multiple drop RSS queues Nelio Laranjeiro
2017-08-17 14:38   ` Ferruh Yigit
2017-08-22  8:59     ` Nélio Laranjeiro
2017-08-01 12:09 ` [dpdk-dev] [PATCH 5/5] net/mlx5: remove old MLNX_OFED 3.3 verification Nelio Laranjeiro
2017-08-17 14:38   ` Ferruh Yigit
2017-08-22  8:25     ` Nélio Laranjeiro
2017-08-02 15:36 ` [dpdk-dev] [PATCH 0/5] net/mlx5: cleanups Nélio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 0/8] " Nelio Laranjeiro
2017-08-23 10:07   ` Ferruh Yigit
2017-08-23  8:15 ` Nelio Laranjeiro [this message]
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 2/8] net/mlx5: prepare vector Rx ring at setup time Nelio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 3/8] net/mlx5: cleanup Rx ring in free functions Nelio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 4/8] net/mlx5: remove flow drop useless if branches Nelio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 5/8] net/mlx5: remove pedantic pragma Nelio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 6/8] net/mlx5: fix non working secondary process by removing it Nelio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 7/8] net/mlx5: remove multiple drop RSS queues Nelio Laranjeiro
2017-08-23  8:15 ` [dpdk-dev] [PATCH v2 8/8] net/mlx5: remove old MLNX_OFED 3.3 verification Nelio Laranjeiro
