DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Min Hu (Connor)" <humin29@huawei.com>, <dev@dpdk.org>
Cc: <thomas@monjalon.net>
Subject: Re: [PATCH 1/2] net/hns3: optimized Tx performance by mbuf fast free
Date: Mon, 15 Nov 2021 17:30:59 +0000	[thread overview]
Message-ID: <91dc58ee-64fc-b520-9716-d7e0fc9d34d9@intel.com> (raw)
In-Reply-To: <20211111133859.13705-2-humin29@huawei.com>

On 11/11/2021 1:38 PM, Min Hu (Connor) wrote:
> From: Chengwen Feng <fengchengwen@huawei.com>
> 
> Currently the vector and simple xmit algorithms don't support multi-segment
> mbufs, so when the MBUF_FAST_FREE Tx offload is enabled the driver can invoke
> rte_mempool_put_bulk() to free Tx mbufs in this situation.
> 
> In the testpmd single-core MAC forwarding scenario, performance is improved
> by 8% at 64B on the Kunpeng920 platform.
> 

'RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE' already seems to be announced in
'tx_offload_capa'; was that announcement wrong?
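
For reference, a minimal sketch of the bulk-free path the commit message
describes; the helper name and batching details are illustrative assumptions,
only rte_mempool_put_bulk() and the fast-free preconditions come from DPDK
itself:

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Under RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE the application guarantees that
   * every transmitted mbuf is non-segmented, has refcnt == 1 and comes from
   * the same mempool, so a completed batch can be returned with one bulk
   * call instead of per-mbuf frees.
   */
  static inline void
  tx_fast_free_bufs(struct rte_mbuf **pkts, uint16_t nb_pkts)
  {
          if (nb_pkts == 0)
                  return;
          /* All mbufs share pkts[0]->pool under the fast-free contract. */
          rte_mempool_put_bulk(pkts[0]->pool, (void **)pkts, nb_pkts);
  }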

> Cc: stable@dpdk.org
> 
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
> ---
>   drivers/net/hns3/hns3_rxtx.c     | 11 +++++++++++
>   drivers/net/hns3/hns3_rxtx.h     |  2 ++
>   drivers/net/hns3/hns3_rxtx_vec.h |  9 +++++++++
>   3 files changed, 22 insertions(+)
> 

Can you please update 'doc/guides/nics/features/hns3.ini' to announce the
"Free Tx mbuf on demand" feature?

> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index d26e262335..78227a139f 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -3059,6 +3059,8 @@ hns3_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
>   	txq->min_tx_pkt_len = hw->min_tx_pkt_len;
>   	txq->tso_mode = hw->tso_mode;
>   	txq->udp_cksum_mode = hw->udp_cksum_mode;
> +	txq->mbuf_fast_free_en = !!(dev->data->dev_conf.txmode.offloads &
> +				    DEV_TX_OFFLOAD_MBUF_FAST_FREE);

Can you please use the updated macro name, 'RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE'?
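
That is, the assignment in the hunk above would keep the same logic and only
switch to the renamed macro, roughly:

  	txq->mbuf_fast_free_en = !!(dev->data->dev_conf.txmode.offloads &
  				    RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);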


Thread overview: 14+ messages
2021-11-11 13:38 [PATCH 0/2] performance optimized for hns3 PMD Min Hu (Connor)
2021-11-11 13:38 ` [PATCH 1/2] net/hns3: optimized Tx performance by mbuf fast free Min Hu (Connor)
2021-11-15 17:30   ` Ferruh Yigit [this message]
2021-11-16  1:24     ` Min Hu (Connor)
2021-11-11 13:38 ` [PATCH 2/2] net/hns3: optimized Tx performance Min Hu (Connor)
2021-11-15 17:32   ` Ferruh Yigit
2021-11-16  1:22 ` [PATCH v2 0/2] performance optimized for hns3 PMD Min Hu (Connor)
2021-11-16  1:22   ` [PATCH v2 1/2] net/hns3: optimized Tx performance by mbuf fast free Min Hu (Connor)
2021-11-16  1:22   ` [PATCH v2 2/2] net/hns3: optimized Tx performance Min Hu (Connor)
2021-11-16 14:36   ` [PATCH v2 0/2] performance optimized for hns3 PMD Ferruh Yigit
2021-11-16 15:04     ` Fengchengwen
2021-11-16 15:12     ` humin (Q)
2021-11-16 15:38     ` Ferruh Yigit
2021-11-16 15:43   ` Ferruh Yigit
