DPDK patches and discussions
* [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf
@ 2021-05-27  8:17 Feifei Wang
  2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Feifei Wang @ 2021-05-27  8:17 UTC (permalink / raw)
  Cc: dev, nd, Feifei Wang

For the i40e Tx path, free the buffers in bulk when mbuf fast free
mode is enabled. This can significantly improve performance.
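
For reference, a minimal sketch of the underlying idea (illustrative
only; this helper is not part of the patches):

------------------------------------------------------------------------------------------------
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Return a Tx burst's mbufs to their pool with one bulk operation
 * instead of one rte_mempool_put() per mbuf. Only valid under the
 * DEV_TX_OFFLOAD_MBUF_FAST_FREE contract: all mbufs of the queue come
 * from a single mempool and have refcnt == 1.
 */
static inline void
bulk_free_burst(struct rte_mbuf **pkts, uint16_t n)
{
	if (n > 0)
		rte_mempool_put_bulk(pkts[0]->pool, (void **)pkts, n);
}
------------------------------------------------------------------------------------------------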

Feifei Wang (2):
  net/i40e: improve performance for scalar Tx
  net/i40e: improve performance for vector Tx

 drivers/net/i40e/i40e_rxtx.c            |  5 ++++-
 drivers/net/i40e/i40e_rxtx_vec_common.h | 11 +++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

-- 
2.25.1



* [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-05-27  8:17 [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
@ 2021-05-27  8:17 ` Feifei Wang
  2021-06-22  6:07   ` Xing, Beilei
  2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 2/2] net/i40e: improve performance for vector Tx Feifei Wang
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Feifei Wang @ 2021-05-27  8:17 UTC (permalink / raw)
  To: Beilei Xing; +Cc: dev, nd, Feifei Wang, Ruifeng Wang

For the i40e scalar Tx path, if the mbuf fast free mode (MBUF_FAST_FREE)
is implemented, it means that, per queue, all mbufs come from the same
mempool and have refcnt = 1.

Thus we can free the buffers in bulk when mbuf fast free mode is
enabled.
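
As background, mbuf fast free is an application-side contract; a minimal
sketch of how an application opts in (hypothetical helper; port_id and
nb_txd are placeholders, error handling omitted):

------------------------------------------------------------------------------------------------
#include <string.h>
#include <rte_ethdev.h>

static void
setup_fast_free_port(uint16_t port_id, uint16_t nb_txd)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	struct rte_eth_txconf txconf;

	memset(&port_conf, 0, sizeof(port_conf));
	rte_eth_dev_info_get(port_id, &dev_info);

	/* the application promises that, per Tx queue, all mbufs come
	 * from one mempool and are referenced nowhere else (refcnt == 1) */
	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
		port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;

	rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	txconf = dev_info.default_txconf;
	txconf.offloads = port_conf.txmode.offloads;
	rte_eth_tx_queue_setup(port_id, 0, nb_txd, rte_socket_id(),
			&txconf);
}
------------------------------------------------------------------------------------------------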

For the scalar path on Arm platforms:
on N1SDP, performance is improved by 7.8%;
on ThunderX2, performance is improved by 6.7%.

For the scalar path on the x86 platform,
performance is improved by 6%.

Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
---
 drivers/net/i40e/i40e_rxtx.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 6c58decece..fe7b20f750 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1295,6 +1295,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 {
 	struct i40e_tx_entry *txep;
 	uint16_t i;
+	struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
 
 	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
 			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
@@ -1308,9 +1309,11 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 
 	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
-			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
+			free[i] = txep->mbuf;
 			txep->mbuf = NULL;
 		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free,
+					txq->tx_rs_thresh);
 	} else {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_pktmbuf_free_seg(txep->mbuf);
-- 
2.25.1



* [dpdk-dev] [PATCH v1 2/2] net/i40e: improve performance for vector Tx
  2021-05-27  8:17 [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
  2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
@ 2021-05-27  8:17 ` Feifei Wang
  2021-06-22  1:52 ` [dpdk-dev] Re: [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
  2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
  3 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2021-05-27  8:17 UTC (permalink / raw)
  To: Beilei Xing; +Cc: dev, nd, Feifei Wang, Ruifeng Wang

For the i40e vector Tx path, even if the Tx offload is set to mbuf fast
free mode, no fast free operation is actually executed. To fix this, add
mbuf fast free mode to the vector Tx path.

Furthermore, for the i40e vector Tx path, fast free mode means that, per
queue, all mbufs come from the same mempool and have refcnt = 1. Thus we
can free the buffers in bulk when mbuf fast free mode is enabled.
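
In other words, the existing vector free path always runs
rte_pktmbuf_prefree_seg() on each mbuf, which the fast free contract
makes redundant. A condensed sketch of the resulting decision
(illustrative and simplified from the diff below; the generic branch
here skips the per-pool grouping the real code performs):

------------------------------------------------------------------------------------------------
/* assumes n <= RTE_I40E_TX_MAX_FREE_BUF_SZ, the size of free[] */
if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
	/* contract: one pool, refcnt == 1, direct mbufs, so the
	 * per-mbuf rte_pktmbuf_prefree_seg() check can be skipped
	 * and the whole burst freed at once */
	for (i = 0; i < n; i++) {
		free[i] = txep[i].mbuf;
		txep[i].mbuf = NULL;
	}
	rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
} else {
	/* generic path: each segment must pass the refcount/clone
	 * check before it may be returned to its pool */
	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
		if (m != NULL)
			rte_mempool_put(m->pool, m);
		txep[i].mbuf = NULL;
	}
}
------------------------------------------------------------------------------------------------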

For the vector path on Arm platforms:
on N1SDP, performance is improved by 18.4%;
on ThunderX2, performance is improved by 23%.

For the vector path on the x86 platform:
no performance change.

Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
---
 drivers/net/i40e/i40e_rxtx_vec_common.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 16fcf0aec6..f52ed98d62 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -99,6 +99,16 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  * tx_next_dd - (tx_rs_thresh-1)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < n; i++) {
+			free[i] = txep[i].mbuf;
+			txep[i].mbuf = NULL;
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+		goto done;
+	}
+
 	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
 	if (likely(m != NULL)) {
 		free[0] = m;
@@ -126,6 +136,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 		}
 	}
 
+done:
 	/* buffers were freed, update counters */
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
 	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-- 
2.25.1



* [dpdk-dev] Re: [PATCH v1 0/2] net/i40e: improve free mbuf
  2021-05-27  8:17 [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
  2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
  2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 2/2] net/i40e: improve performance for vector Tx Feifei Wang
@ 2021-06-22  1:52 ` Feifei Wang
  2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
  3 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2021-06-22  1:52 UTC (permalink / raw)
  To: Feifei Wang, qi.z.zhang; +Cc: dev, nd, nd

Hi, Qi

Can you help review these patches?
Thanks very much.

Best Regards
Feifei

> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Thursday, May 27, 2021 4:17 PM
> Cc: dev@dpdk.org; nd <nd@arm.com>; Feifei Wang
> <Feifei.Wang2@arm.com>
> Subject: [PATCH v1 0/2] net/i40e: improve free mbuf
> 
> For the i40e Tx path, free the buffers in bulk when mbuf fast free mode
> is enabled. This can significantly improve performance.
> 
> Feifei Wang (2):
>   net/i40e: improve performance for scalar Tx
>   net/i40e: improve performance for vector Tx
> 
>  drivers/net/i40e/i40e_rxtx.c            |  5 ++++-
>  drivers/net/i40e/i40e_rxtx_vec_common.h | 11 +++++++++++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> --
> 2.25.1



* Re: [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
@ 2021-06-22  6:07   ` Xing, Beilei
  2021-06-22  9:58     ` [dpdk-dev] Re: " Feifei Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Xing, Beilei @ 2021-06-22  6:07 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev, nd, Ruifeng Wang



> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Thursday, May 27, 2021 4:17 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; nd@arm.com; Feifei Wang <feifei.wang2@arm.com>;
> Ruifeng Wang <ruifeng.wang@arm.com>
> Subject: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> For the i40e scalar Tx path, if the mbuf fast free mode (MBUF_FAST_FREE)
> is implemented, it means that, per queue, all mbufs come from the same
> mempool and have refcnt = 1.
> 
> Thus we can free the buffers in bulk when mbuf fast free mode is
> enabled.
> 
> For the scalar path on Arm platforms:
> on N1SDP, performance is improved by 7.8%; on ThunderX2, performance is
> improved by 6.7%.
> 
> For the scalar path on the x86 platform,
> performance is improved by 6%.
> 
> Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> ---
>  drivers/net/i40e/i40e_rxtx.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decece..fe7b20f750 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1295,6 +1295,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)  {
>  	struct i40e_tx_entry *txep;
>  	uint16_t i;
> +	struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> 
>  	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
>  			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> @@ -1308,9 +1309,11 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
> 
>  	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
>  		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> -			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
> +			free[i] = txep->mbuf;

The tx_rs_thresh can be 'nb_desc - 3', so if tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ, there'll be an out-of-bounds write, right?
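
Concretely (illustrative values, assuming the v1 code above):

------------------------------------------------------------------------------------------------
/* size of the on-stack array in i40e_tx_free_bufs() */
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];	/* 64 entries */

/* tx_rs_thresh is bounded only by the ring size, so e.g. 128 is a
 * legal configuration; the gather loop would then write
 * free[64]..free[127], past the end of the array. */
------------------------------------------------------------------------------------------------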

>  			txep->mbuf = NULL;
>  		}
> +		rte_mempool_put_bulk(free[0]->pool, (void **)free,
> +					txq->tx_rs_thresh);
>  	} else {
>  		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
>  			rte_pktmbuf_free_seg(txep->mbuf);
> --
> 2.25.1



* [dpdk-dev] Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-06-22  6:07   ` Xing, Beilei
@ 2021-06-22  9:58     ` Feifei Wang
  2021-06-22 10:08       ` Feifei Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Feifei Wang @ 2021-06-22  9:58 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev, nd, Ruifeng Wang, nd

Hi, Beilei

Thanks for your comments, please see below.

> -----Original Message-----
> From: Xing, Beilei <beilei.xing@intel.com>
> Sent: Tuesday, June 22, 2021 2:08 PM
> To: Feifei Wang <Feifei.Wang2@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>
> Subject: RE: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> 
> 
> > -----Original Message-----
> > From: Feifei Wang <feifei.wang2@arm.com>
> > Sent: Thursday, May 27, 2021 4:17 PM
> > To: Xing, Beilei <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; nd@arm.com; Feifei Wang <feifei.wang2@arm.com>;
> > Ruifeng Wang <ruifeng.wang@arm.com>
> > Subject: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> >
> > For the i40e scalar Tx path, if the mbuf fast free mode (MBUF_FAST_FREE)
> > is implemented, it means that, per queue, all mbufs come from the same
> > mempool and have refcnt = 1.
> >
> > Thus we can free the buffers in bulk when mbuf fast free mode is
> > enabled.
> >
> > For the scalar path on Arm platforms:
> > on N1SDP, performance is improved by 7.8%; on ThunderX2, performance
> > is improved by 6.7%.
> >
> > For the scalar path on the x86 platform,
> > performance is improved by 6%.
> >
> > Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > ---
> >  drivers/net/i40e/i40e_rxtx.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > b/drivers/net/i40e/i40e_rxtx.c index
> > 6c58decece..fe7b20f750 100644
> > --- a/drivers/net/i40e/i40e_rxtx.c
> > +++ b/drivers/net/i40e/i40e_rxtx.c
> > @@ -1295,6 +1295,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)  {
> >  	struct i40e_tx_entry *txep;
> >  	uint16_t i;
> > +	struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> >
> >  	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> >
> 	rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != @@ -1308,9
> +1309,11
> > @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
> >
> >  	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
> >  		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> > -			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
> > +			free[i] = txep->mbuf;
> 
> The tx_rs_thresh can be 'nb_desc - 3', so if tx_rs_thresh >
> RTE_I40E_TX_MAX_FREE_BUF_SZ, there'll be an out-of-bounds write, right?

Actually, tx_rs_thresh <= tx_free_thresh < nb_desc - 3 (see
i40e_dev_tx_queue_setup). However, that does not constrain the
relationship between tx_rs_thresh and RTE_I40E_TX_MAX_FREE_BUF_SZ.

Furthermore, I think you are right that tx_rs_thresh can be greater than
RTE_I40E_TX_MAX_FREE_BUF_SZ in tx_simple_mode (i40e_set_tx_function_flag).

Thus, in the scalar path, we can change the code like this:
---------------------------------------------------------------------------------------------------------------
int n = txq->tx_rs_thresh;
int32_t i = 0, j = 0;
const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];

For FAST_FREE_MODE:
	
if (k) {
	for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
			j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
		for (i = 0; i <RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
			free[i] = txep->mbuf;
			txep->mbuf = NULL;
		}
		rte_mempool_put_bulk(free[0]->pool, (void **)free,
					RTE_I40E_TX_MAX_FREE_BUF_SZ);
	}
} else {
	for (i = 0; i < m; ++i, ++txep) {
		free[i] = txep->mbuf;
		txep->mbuf = NULL;
	}
	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
}
---------------------------------------------------------------------------------------------------------------

Best Regards
Feifei


* [dpdk-dev] Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-06-22  9:58     ` [dpdk-dev] Re: " Feifei Wang
@ 2021-06-22 10:08       ` Feifei Wang
  2021-06-23  7:02         ` [dpdk-dev] " Xing, Beilei
  0 siblings, 1 reply; 16+ messages in thread
From: Feifei Wang @ 2021-06-22 10:08 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev, nd, Ruifeng Wang, nd

Sorry, there was a mistake in the code; it should be:
------------------------------------------------------------------------------------------------
int n = txq->tx_rs_thresh;
int32_t i = 0, j = 0;
const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ;
struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];

For FAST_FREE_MODE:

if (k) {
	for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
			j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
		for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
			free[i] = txep->mbuf;
			txep->mbuf = NULL;
		}
		rte_mempool_put_bulk(free[0]->pool, (void **)free,
					RTE_I40E_TX_MAX_FREE_BUF_SZ);
	}
}

if (m) {
	for (i = 0; i < m; ++i, ++txep) {
		free[i] = txep->mbuf;
		txep->mbuf = NULL;
	}
	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
}
------------------------------------------------------------------------------------------------
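
For reference, the chunking arithmetic (full chunks k plus remainder m)
can be exercised in isolation; a minimal self-contained sketch
(hypothetical test code, not part of the patch; ALIGN_FLOOR stands in
for RTE_ALIGN_FLOOR, and the loop bound matches the v3 patch below):

------------------------------------------------------------------------------------------------
#include <assert.h>
#include <stdint.h>

#define FREE_BUF_SZ 64  /* stands in for RTE_I40E_TX_MAX_FREE_BUF_SZ */
/* RTE_ALIGN_FLOOR equivalent for a power-of-two alignment */
#define ALIGN_FLOOR(v, a) ((v) & ~((uint32_t)(a) - 1))

/* count how many mbufs the chunked loops would free for a given n */
static uint32_t
chunked_free_count(uint32_t n)
{
	const uint32_t k = ALIGN_FLOOR(n, FREE_BUF_SZ); /* full chunks   */
	const uint32_t m = n % FREE_BUF_SZ;             /* the remainder */
	uint32_t freed = 0, j;

	for (j = 0; j != k; j += FREE_BUF_SZ)
		freed += FREE_BUF_SZ; /* one bulk put of FREE_BUF_SZ mbufs */
	if (m)
		freed += m;           /* one final bulk put of m mbufs     */
	return freed;
}

int main(void)
{
	assert(chunked_free_count(32) == 32);   /* remainder only     */
	assert(chunked_free_count(64) == 64);   /* one full chunk     */
	assert(chunked_free_count(200) == 200); /* k = 192 plus m = 8 */
	return 0;
}
------------------------------------------------------------------------------------------------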


* Re: [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-06-22 10:08       ` Feifei Wang
@ 2021-06-23  7:02         ` Xing, Beilei
  2021-06-25  9:40           ` [dpdk-dev] Re: " Feifei Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Xing, Beilei @ 2021-06-23  7:02 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev, nd, Ruifeng Wang, nd



> -----Original Message-----
> From: Feifei Wang <Feifei.Wang2@arm.com>
> Sent: Tuesday, June 22, 2021 6:08 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> Sorry, there was a mistake in the code; it should be:
> ------------------------------------------------------------------------------------------------
> int n = txq->tx_rs_thresh;
>  int32_t i = 0, j = 0;
> const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
> const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ; struct rte_mbuf
> *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> 
> For FAST_FREE_MODE:
> 
> if (k) {
>  	for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
>  			j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
> 		for (i = 0; i <RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
> 			free[i] = txep->mbuf;
> 			txep->mbuf = NULL;
> 		}
>  		rte_mempool_put_bulk(free[0]->pool, (void **)free,
>  					RTE_I40E_TX_MAX_FREE_BUF_SZ);
>  	}
>  }
> 
> if (m) {
> 	for (i = 0; i < m; ++i, ++txep) {
> 		free[i] = txep->mbuf;
> 		txep->mbuf = NULL;
> 	}
> 	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
> }
> ------------------------------------------------------------------------------------------------

There seems to be no logical problem, but the code looks heavy due to the nested for loops.
Did you measure performance with this change when tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ?


* [dpdk-dev] Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-06-23  7:02         ` [dpdk-dev] " Xing, Beilei
@ 2021-06-25  9:40           ` Feifei Wang
  2021-06-28  2:27             ` [dpdk-dev] " Xing, Beilei
  0 siblings, 1 reply; 16+ messages in thread
From: Feifei Wang @ 2021-06-25  9:40 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev, nd, Ruifeng Wang, nd, nd

<snip>

> > int n = txq->tx_rs_thresh;
> >  int32_t i = 0, j = 0;
> > const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ; struct rte_mbuf
> > *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> >
> > For FAST_FREE_MODE:
> >
> > if (k) {
> >  	for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
> >  			j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
> > 		for (i = 0; i <RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
> > 			free[i] = txep->mbuf;
> > 			txep->mbuf = NULL;
> > 		}
> >  		rte_mempool_put_bulk(free[0]->pool, (void **)free,
> >  					RTE_I40E_TX_MAX_FREE_BUF_SZ);
> >  	}
> >  }
> >
> > if (m) {
> > 	for (i = 0; i < m; ++i, ++txep) {
> > 		free[i] = txep->mbuf;
> > 		txep->mbuf = NULL;
> > 	}
> > 	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
> > }

> There seems to be no logical problem, but the code looks heavy due to
> the nested for loops. Did you measure performance with this change when
> tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ?

Sorry for my late reply. It took me some time to run the tests for this
path; my results follow.

First, I came up with another way to solve this bug and compared it with
the "loop" approach (size of 'free' is 64): set the size of 'free' to a
large constant. We know tx_rs_thresh < ring_desc_size <
I40E_MAX_RING_DESC (4096), so we can directly define:
struct rte_mbuf *free[I40E_MAX_RING_DESC];
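
For scale (illustrative arithmetic, assuming 8-byte mbuf pointers):

------------------------------------------------------------------------------------------------
/* on-stack footprint of the two variants of free[]:
 *   loop:        free[RTE_I40E_TX_MAX_FREE_BUF_SZ] =   64 * 8 B = 512 B
 *   large array: free[I40E_MAX_RING_DESC]          = 4096 * 8 B = 32 KiB
 * the larger array touches far more stack (and cache) per call, which
 * may explain why it loses to the loop in some of the results below
 */
------------------------------------------------------------------------------------------------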

[1] Test config:
MRR test: two ports & bi-directional flows & one core
RX API: i40e_recv_pkts_bulk_alloc
TX API: i40e_xmit_pkts_simple
ring_descs_size: 1024
RTE_I40E_TX_MAX_FREE_BUF_SZ: 64

[2] Scheme:
tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH
tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH
tx_rs_thresh <= tx_free_thresh < nb_tx_desc
So we change the value of 'tx_rs_thresh' by adjusting I40E_DEFAULT_TX_RSBIT_THRESH.

[3] Test results (performance improvement):

In x86:
tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
1. mempool_put (base)                 0         0          0
2. mempool_put_bulk: loop            +4.7%     +5.6%      +7.0%
3. mempool_put_bulk: large 'free'    +3.8%     +2.3%      -2.0%
   (free[I40E_MAX_RING_DESC])

In Arm:
N1SDP:
tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
1. mempool_put (base)                 0         0          0
2. mempool_put_bulk: loop            +7.9%     +9.1%      +2.9%
3. mempool_put_bulk: large 'free'    +7.1%     +8.7%      +3.4%
   (free[I40E_MAX_RING_DESC])

ThunderX2:
tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
1. mempool_put (base)                 0         0          0
2. mempool_put_bulk: loop            +7.6%     +10.5%     +7.6%
3. mempool_put_bulk: large 'free'    +1.7%     +18.4%     +10.2%
   (free[I40E_MAX_RING_DESC])

As a result, I feel the 'loop' variant is the better choice, and
according to the tests it does not seem very heavy.
What are your views? Looking forward to your reply.
Thanks a lot.


* Re: [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-06-25  9:40           ` [dpdk-dev] Re: " Feifei Wang
@ 2021-06-28  2:27             ` Xing, Beilei
  2021-06-28  2:28               ` [dpdk-dev] Re: " Feifei Wang
  0 siblings, 1 reply; 16+ messages in thread
From: Xing, Beilei @ 2021-06-28  2:27 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev, nd, Ruifeng Wang, nd, nd



> -----Original Message-----
> From: Feifei Wang <Feifei.Wang2@arm.com>
> Sent: Friday, June 25, 2021 5:40 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>; nd <nd@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> <snip>
> 
> > > int n = txq->tx_rs_thresh;
> > >  int32_t i = 0, j = 0;
> > > const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > > const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ; struct rte_mbuf
> > > *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> > >
> > > For FAST_FREE_MODE:
> > >
> > > if (k) {
> > >  	for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
> > >  			j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
> > > 		for (i = 0; i <RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
> > > 			free[i] = txep->mbuf;
> > > 			txep->mbuf = NULL;
> > > 		}
> > >  		rte_mempool_put_bulk(free[0]->pool, (void **)free,
> > >  					RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > >  	}
> > >  }
> > >
> > > if (m) {
> > > 	for (i = 0; i < m; ++i, ++txep) {
> > > 		free[i] = txep->mbuf;
> > > 		txep->mbuf = NULL;
> > > 	}
> > > 	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
> > > }
> 
> > There seems to be no logical problem, but the code looks heavy due to
> > the nested for loops. Did you measure performance with this change
> > when tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ?
> 
> Sorry for my late reply. It took me some time to run the tests for this
> path; my results follow.
> 
> First, I came up with another way to solve this bug and compared it with
> the "loop" approach (size of 'free' is 64): set the size of 'free' to a
> large constant. We know tx_rs_thresh < ring_desc_size <
> I40E_MAX_RING_DESC (4096), so we can directly define:
> struct rte_mbuf *free[I40E_MAX_RING_DESC];
> 
> [1] Test config:
> MRR test: two ports & bi-directional flows & one core
> RX API: i40e_recv_pkts_bulk_alloc
> TX API: i40e_xmit_pkts_simple
> ring_descs_size: 1024
> RTE_I40E_TX_MAX_FREE_BUF_SZ: 64
> 
> [2] Scheme:
> tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH
> tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH
> tx_rs_thresh <= tx_free_thresh < nb_tx_desc
> So we change the value of 'tx_rs_thresh' by adjusting
> I40E_DEFAULT_TX_RSBIT_THRESH.
> 
> [3] Test results (performance improvement):
> 
> In x86:
> tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
> 1. mempool_put (base)                 0         0          0
> 2. mempool_put_bulk: loop            +4.7%     +5.6%      +7.0%
> 3. mempool_put_bulk: large 'free'    +3.8%     +2.3%      -2.0%
>    (free[I40E_MAX_RING_DESC])
> 
> In Arm:
> N1SDP:
> tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
> 1. mempool_put (base)                 0         0          0
> 2. mempool_put_bulk: loop            +7.9%     +9.1%      +2.9%
> 3. mempool_put_bulk: large 'free'    +7.1%     +8.7%      +3.4%
>    (free[I40E_MAX_RING_DESC])
> 
> ThunderX2:
> tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
> 1. mempool_put (base)                 0         0          0
> 2. mempool_put_bulk: loop            +7.6%     +10.5%     +7.6%
> 3. mempool_put_bulk: large 'free'    +1.7%     +18.4%     +10.2%
>    (free[I40E_MAX_RING_DESC])
> 
> As a result, I feel the 'loop' variant is the better choice, and
> according to the tests it does not seem very heavy.
> What are your views? Looking forward to your reply.
> Thanks a lot.

Thanks for your patch and tests.
It looks OK to me; please send v2.


* [dpdk-dev] Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
  2021-06-28  2:27             ` [dpdk-dev] " Xing, Beilei
@ 2021-06-28  2:28               ` Feifei Wang
  0 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2021-06-28  2:28 UTC (permalink / raw)
  To: Xing, Beilei; +Cc: dev, nd, Ruifeng Wang, nd, nd, nd


> -----Original Message-----
> From: Xing, Beilei <beilei.xing@intel.com>
> Sent: Monday, June 28, 2021 10:27 AM
> To: Feifei Wang <Feifei.Wang2@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>; nd <nd@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
> 
> 
> 
> > -----Original Message-----
> > From: Feifei Wang <Feifei.Wang2@arm.com>
> > Sent: Friday, June 25, 2021 5:40 PM
> > To: Xing, Beilei <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; nd <nd@arm.com>; Ruifeng Wang
> > <Ruifeng.Wang@arm.com>; nd <nd@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH v1 1/2] net/i40e: improve performance for scalar
> > Tx
> >
> > <snip>
> >
> > > > int n = txq->tx_rs_thresh;
> > > >  int32_t i = 0, j = 0;
> > > > const int32_t k = RTE_ALIGN_FLOOR(n,
> RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > > > const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ; struct
> rte_mbuf
> > > > *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> > > >
> > > > For FAST_FREE_MODE:
> > > >
> > > > if (k) {
> > > >  	for (j = 0; j != k - RTE_I40E_TX_MAX_FREE_BUF_SZ;
> > > >  			j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
> > > > 		for (i = 0; i <RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
> > > > 			free[i] = txep->mbuf;
> > > > 			txep->mbuf = NULL;
> > > > 		}
> > > >  		rte_mempool_put_bulk(free[0]->pool, (void **)free,
> > > >  					RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > > >  	}
> > > >  }
> > > >
> > > > if (m) {
> > > > 	for (i = 0; i < m; ++i, ++txep) {
> > > > 		free[i] = txep->mbuf;
> > > > 		txep->mbuf = NULL;
> > > > 	}
> > > > 	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
> > > > }
> >
> > > There seems to be no logical problem, but the code looks heavy due
> > > to the nested for loops. Did you measure performance with this
> > > change when tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ?
> >
> > Sorry for my late reply. It took me some time to run the tests for
> > this path; my results follow.
> >
> > First, I came up with another way to solve this bug and compared it
> > with the "loop" approach (size of 'free' is 64): set the size of
> > 'free' to a large constant. We know tx_rs_thresh < ring_desc_size <
> > I40E_MAX_RING_DESC (4096), so we can directly define:
> > struct rte_mbuf *free[I40E_MAX_RING_DESC];
> >
> > [1] Test config:
> > MRR test: two ports & bi-directional flows & one core
> > RX API: i40e_recv_pkts_bulk_alloc
> > TX API: i40e_xmit_pkts_simple
> > ring_descs_size: 1024
> > RTE_I40E_TX_MAX_FREE_BUF_SZ: 64
> >
> > [2] Scheme:
> > tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH
> > tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH
> > tx_rs_thresh <= tx_free_thresh < nb_tx_desc
> > So we change the value of 'tx_rs_thresh' by adjusting
> > I40E_DEFAULT_TX_RSBIT_THRESH.
> >
> > [3] Test results (performance improvement):
> >
> > In x86:
> > tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
> > 1. mempool_put (base)                 0         0          0
> > 2. mempool_put_bulk: loop            +4.7%     +5.6%      +7.0%
> > 3. mempool_put_bulk: large 'free'    +3.8%     +2.3%      -2.0%
> >    (free[I40E_MAX_RING_DESC])
> >
> > In Arm:
> > N1SDP:
> > tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
> > 1. mempool_put (base)                 0         0          0
> > 2. mempool_put_bulk: loop            +7.9%     +9.1%      +2.9%
> > 3. mempool_put_bulk: large 'free'    +7.1%     +8.7%      +3.4%
> >    (free[I40E_MAX_RING_DESC])
> >
> > ThunderX2:
> > tx_rs_thresh/tx_free_thresh          32/32     256/256    512/512
> > 1. mempool_put (base)                 0         0          0
> > 2. mempool_put_bulk: loop            +7.6%     +10.5%     +7.6%
> > 3. mempool_put_bulk: large 'free'    +1.7%     +18.4%     +10.2%
> >    (free[I40E_MAX_RING_DESC])
> >
> > As a result, I feel the 'loop' variant is the better choice, and
> > according to the tests it does not seem very heavy.
> > What are your views? Looking forward to your reply.
> > Thanks a lot.
> 
> Thanks for your patch and tests.
> It looks OK to me; please send v2.
Thanks for the review; I will send the v2 version.


* [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx
  2021-05-27  8:17 [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
                   ` (2 preceding siblings ...)
  2021-06-22  1:52 ` [dpdk-dev] Re: [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
@ 2021-06-30  6:40 ` Feifei Wang
  2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
                     ` (2 more replies)
  3 siblings, 3 replies; 16+ messages in thread
From: Feifei Wang @ 2021-06-30  6:40 UTC (permalink / raw)
  Cc: dev, nd, Feifei Wang

For the i40e Tx path, free the buffers in bulk when mbuf fast free
mode is enabled. This can significantly improve performance.

v2:
1. fix bug when tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ (Beilei)

v3:
1. change variable names for readability (Beilei)

Feifei Wang (2):
  net/i40e: improve performance for scalar Tx
  net/i40e: improve performance for vector Tx

 drivers/net/i40e/i40e_rxtx.c            | 30 ++++++++++++++++++++-----
 drivers/net/i40e/i40e_rxtx_vec_common.h | 11 +++++++++
 2 files changed, 35 insertions(+), 6 deletions(-)

-- 
2.25.1



* [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx
  2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
@ 2021-06-30  6:40   ` Feifei Wang
  2021-06-30  6:59     ` Xing, Beilei
  2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 2/2] net/i40e: improve performance for vector Tx Feifei Wang
  2021-07-01 12:34   ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Zhang, Qi Z
  2 siblings, 1 reply; 16+ messages in thread
From: Feifei Wang @ 2021-06-30  6:40 UTC (permalink / raw)
  To: Beilei Xing; +Cc: dev, nd, Feifei Wang, Ruifeng Wang

For the i40e scalar Tx path, if the mbuf fast free mode (MBUF_FAST_FREE)
is implemented, it means that, per queue, all mbufs come from the same
mempool and have refcnt = 1.

Thus we can free the buffers in bulk when mbuf fast free mode is
enabled.

Following are the test results with this patch:

MRR L3FWD Test:
two ports & bi-directional flows & one core
RX API: i40e_recv_pkts_bulk_alloc
TX API: i40e_xmit_pkts_simple
ring_descs_size = 1024;
RTE_I40E_TX_MAX_FREE_BUF_SZ = 64;
tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;

For the scalar path on Arm platforms with the default 'tx_rs_thresh':
on N1SDP, performance is improved by 7.9%;
on ThunderX2, performance is improved by 7.6%.

For the scalar path on the x86 platform with the default 'tx_rs_thresh':
performance is improved by 4.7%.

Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 drivers/net/i40e/i40e_rxtx.c | 30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 6c58decece..0d3482a9d2 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1294,22 +1294,40 @@ static __rte_always_inline int
 i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 {
 	struct i40e_tx_entry *txep;
-	uint16_t i;
+	uint16_t tx_rs_thresh = txq->tx_rs_thresh;
+	uint16_t i = 0, j = 0;
+	struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
+	const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh, RTE_I40E_TX_MAX_FREE_BUF_SZ);
+	const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
 
 	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
 			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
 			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
 		return 0;
 
-	txep = &(txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)]);
+	txep = &txq->sw_ring[txq->tx_next_dd - (tx_rs_thresh - 1)];
 
-	for (i = 0; i < txq->tx_rs_thresh; i++)
+	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
 	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
-		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
-			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
-			txep->mbuf = NULL;
+		if (k) {
+			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
+				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
+					free[i] = txep->mbuf;
+					txep->mbuf = NULL;
+				}
+				rte_mempool_put_bulk(free[0]->pool, (void **)free,
+						RTE_I40E_TX_MAX_FREE_BUF_SZ);
+			}
+		}
+
+		if (m) {
+			for (i = 0; i < m; ++i, ++txep) {
+				free[i] = txep->mbuf;
+				txep->mbuf = NULL;
+			}
+			rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
 		}
 	} else {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
-- 
2.25.1



* [dpdk-dev] [PATCH v3 2/2] net/i40e: improve performance for vector Tx
  2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
  2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
@ 2021-06-30  6:40   ` Feifei Wang
  2021-07-01 12:34   ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Zhang, Qi Z
  2 siblings, 0 replies; 16+ messages in thread
From: Feifei Wang @ 2021-06-30  6:40 UTC (permalink / raw)
  To: Beilei Xing; +Cc: dev, nd, Feifei Wang, Ruifeng Wang

For the i40e vector Tx path, even if the Tx offload is set to mbuf fast
free mode, no fast free operation is actually executed. To fix this, add
mbuf fast free mode to the vector Tx path.

Furthermore, for the i40e vector Tx path, fast free mode means that, per
queue, all mbufs come from the same mempool and have refcnt = 1. Thus we
can free the buffers in bulk when mbuf fast free mode is enabled.

For the vector path on Arm platforms:
on N1SDP, performance is improved by 18.4%;
on ThunderX2, performance is improved by 23%.

For the vector path on the x86 platform:
no performance change.

Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 drivers/net/i40e/i40e_rxtx_vec_common.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index 16fcf0aec6..f52ed98d62 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -99,6 +99,16 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  * tx_next_dd - (tx_rs_thresh-1)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
+
+	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+		for (i = 0; i < n; i++) {
+			free[i] = txep[i].mbuf;
+			txep[i].mbuf = NULL;
+		}
+		rte_mempool_put_bulk(free[0]->pool, (void **)free, n);
+		goto done;
+	}
+
 	m = rte_pktmbuf_prefree_seg(txep[0].mbuf);
 	if (likely(m != NULL)) {
 		free[0] = m;
@@ -126,6 +136,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 		}
 	}
 
+done:
 	/* buffers were freed, update counters */
 	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
 	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
-- 
2.25.1



* Re: [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx
  2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
@ 2021-06-30  6:59     ` Xing, Beilei
  0 siblings, 0 replies; 16+ messages in thread
From: Xing, Beilei @ 2021-06-30  6:59 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev, nd, Ruifeng Wang



> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Wednesday, June 30, 2021 2:41 PM
> To: Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; nd@arm.com; Feifei Wang <feifei.wang2@arm.com>;
> Ruifeng Wang <ruifeng.wang@arm.com>
> Subject: [PATCH v3 1/2] net/i40e: improve performance for scalar Tx
> 
> For the i40e scalar Tx path, if the mbuf fast free mode (MBUF_FAST_FREE)
> is implemented, it means that, per queue, all mbufs come from the same
> mempool and have refcnt = 1.
> 
> Thus we can free the buffers in bulk when mbuf fast free mode is
> enabled.
> 
> Following are the test results with this patch:
> 
> MRR L3FWD test:
> two ports & bi-directional flows & one core
> RX API: i40e_recv_pkts_bulk_alloc
> TX API: i40e_xmit_pkts_simple
> ring_descs_size = 1024;
> RTE_I40E_TX_MAX_FREE_BUF_SZ = 64;
> tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH = 32;
> tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH = 32;
> 
> For the scalar path on Arm platforms with the default 'tx_rs_thresh':
> on N1SDP, performance is improved by 7.9%; on ThunderX2, performance is
> improved by 7.6%.
> 
> For the scalar path on the x86 platform with the default 'tx_rs_thresh':
> performance is improved by 4.7%.
> 
> Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  drivers/net/i40e/i40e_rxtx.c | 30 ++++++++++++++++++++++++------
>  1 file changed, 24 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index
> 6c58decece..0d3482a9d2 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -1294,22 +1294,40 @@ static __rte_always_inline int
> i40e_tx_free_bufs(struct i40e_tx_queue *txq)  {
>  	struct i40e_tx_entry *txep;
> -	uint16_t i;
> +	uint16_t tx_rs_thresh = txq->tx_rs_thresh;
> +	uint16_t i = 0, j = 0;
> +	struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> +	const uint16_t k = RTE_ALIGN_FLOOR(tx_rs_thresh,
> RTE_I40E_TX_MAX_FREE_BUF_SZ);
> +	const uint16_t m = tx_rs_thresh % RTE_I40E_TX_MAX_FREE_BUF_SZ;
> 
>  	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
>  			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> 
> 	rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
>  		return 0;
> 
> -	txep = &(txq->sw_ring[txq->tx_next_dd - (txq->tx_rs_thresh - 1)]);
> +	txep = &txq->sw_ring[txq->tx_next_dd - (tx_rs_thresh - 1)];
> 
> -	for (i = 0; i < txq->tx_rs_thresh; i++)
> +	for (i = 0; i < tx_rs_thresh; i++)
>  		rte_prefetch0((txep + i)->mbuf);
> 
>  	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
> -		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> -			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
> -			txep->mbuf = NULL;
> +		if (k) {
> +			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ)
> {
> +				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ;
> ++i, ++txep) {
> +					free[i] = txep->mbuf;
> +					txep->mbuf = NULL;
> +				}
> +				rte_mempool_put_bulk(free[0]->pool, (void
> **)free,
> +
> 	RTE_I40E_TX_MAX_FREE_BUF_SZ);
> +			}
> +		}
> +
> +		if (m) {
> +			for (i = 0; i < m; ++i, ++txep) {
> +				free[i] = txep->mbuf;
> +				txep->mbuf = NULL;
> +			}
> +			rte_mempool_put_bulk(free[0]->pool, (void **)free,
> m);
>  		}
>  	} else {
>  		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
> --
> 2.25.1
Acked-by: Beilei Xing <beilei.xing@intel.com>



* Re: [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx
  2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
  2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
  2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 2/2] net/i40e: improve performance for vector Tx Feifei Wang
@ 2021-07-01 12:34   ` Zhang, Qi Z
  2 siblings, 0 replies; 16+ messages in thread
From: Zhang, Qi Z @ 2021-07-01 12:34 UTC (permalink / raw)
  To: Feifei Wang; +Cc: dev, nd



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Feifei Wang
> Sent: Wednesday, June 30, 2021 2:41 PM
> Cc: dev@dpdk.org; nd@arm.com; Feifei Wang <feifei.wang2@arm.com>
> Subject: [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx
> 
> For the i40e Tx path, free the buffers in bulk when mbuf fast free mode
> is enabled. This can significantly improve performance.
> 
> v2:
> 1. fix bug when tx_rs_thresh > RTE_I40E_TX_MAX_FREE_BUF_SZ (Beilei)
> 
> v3:
> 1. change variable names for readability (Beilei)
> 
> Feifei Wang (2):
>   net/i40e: improve performance for scalar Tx
>   net/i40e: improve performance for vector Tx
> 
>  drivers/net/i40e/i40e_rxtx.c            | 30 ++++++++++++++++++++-----
>  drivers/net/i40e/i40e_rxtx_vec_common.h | 11 +++++++++
>  2 files changed, 35 insertions(+), 6 deletions(-)
> 
> --
> 2.25.1

Applied to dpdk-next-net-intel.

Thanks
Qi


Thread overview: 16+ messages
2021-05-27  8:17 [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
2021-06-22  6:07   ` Xing, Beilei
2021-06-22  9:58     ` [dpdk-dev] Re: " Feifei Wang
2021-06-22 10:08       ` Feifei Wang
2021-06-23  7:02         ` [dpdk-dev] " Xing, Beilei
2021-06-25  9:40           ` [dpdk-dev] Re: " Feifei Wang
2021-06-28  2:27             ` [dpdk-dev] " Xing, Beilei
2021-06-28  2:28               ` [dpdk-dev] Re: " Feifei Wang
2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 2/2] net/i40e: improve performance for vector Tx Feifei Wang
2021-06-22  1:52 ` [dpdk-dev] Re: [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
2021-06-30  6:59     ` Xing, Beilei
2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 2/2] net/i40e: improve performance for vector Tx Feifei Wang
2021-07-01 12:34   ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Zhang, Qi Z
