DPDK patches and discussions
From: Feifei Wang <Feifei.Wang2@arm.com>
To: "Xing, Beilei" <beilei.xing@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>,
	Ruifeng Wang <Ruifeng.Wang@arm.com>, nd <nd@arm.com>,
	nd <nd@arm.com>
Subject: [dpdk-dev] Re: [PATCH v1 1/2] net/i40e: improve performance for scalar Tx
Date: Fri, 25 Jun 2021 09:40:22 +0000	[thread overview]
Message-ID: <AM9PR08MB691581600F6F82EA1A1B0855C8069@AM9PR08MB6915.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <MN2PR11MB38075B90591282ADB37E06F7F7089@MN2PR11MB3807.namprd11.prod.outlook.com>

<snip>

> > int n = txq->tx_rs_thresh;
> > int32_t i = 0, j = 0;
> > const int32_t k = RTE_ALIGN_FLOOR(n, RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > const int32_t m = n % RTE_I40E_TX_MAX_FREE_BUF_SZ;
> > struct rte_mbuf *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> >
> > For FAST_FREE_MODE:
> >
> > if (k) {
> > 	for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
> > 		for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
> > 			free[i] = txep->mbuf;
> > 			txep->mbuf = NULL;
> > 		}
> > 		rte_mempool_put_bulk(free[0]->pool, (void **)free,
> > 					RTE_I40E_TX_MAX_FREE_BUF_SZ);
> > 	}
> > }
> >
> > if (m) {
> > 	for (i = 0; i < m; ++i, ++txep) {
> > 		free[i] = txep->mbuf;
> > 		txep->mbuf = NULL;
> > 	}
> > 	rte_mempool_put_bulk(free[0]->pool, (void **)free, m);
> > }

> Seems no logical problem, but the code looks heavy due to for loops.
> Did you run performance with this change when tx_rs_thresh >
> RTE_I40E_TX_MAX_FREE_BUF_SZ?

Sorry for my late reply. It took me some time to run the tests for this patch, and
my results follow.

First, I came up with another way to solve this bug and compared it with the "loop"
approach (where the size of 'free' is 64): simply declare 'free' with a large constant
size. We know that tx_rs_thresh < ring_desc_size <= I40E_MAX_RING_DESC (4096), so we can
directly define:
struct rte_mbuf *free[I40E_MAX_RING_DESC];

[1] Test config:
MRR test: two ports, bi-directional flows, one core
Rx API: i40e_recv_pkts_bulk_alloc
Tx API: i40e_xmit_pkts_simple
ring_desc_size: 1024
RTE_I40E_TX_MAX_FREE_BUF_SZ: 64

[2] Scheme:
tx_rs_thresh = I40E_DEFAULT_TX_RSBIT_THRESH
tx_free_thresh = I40E_DEFAULT_TX_FREE_THRESH
tx_rs_thresh <= tx_free_thresh < nb_tx_desc
So we change the value of 'tx_rs_thresh' by adjusting I40E_DEFAULT_TX_RSBIT_THRESH.

[3] Test results (performance improvement vs. base):

On x86:
tx_rs_thresh/tx_free_thresh              32/32    256/256    512/512
1. mempool_put (base)                        0          0          0
2. mempool_put_bulk: loop                +4.7%      +5.6%      +7.0%
3. mempool_put_bulk: large 'free'        +3.8%      +2.3%      -2.0%
   (free[I40E_MAX_RING_DESC])

On Arm:
N1SDP:
tx_rs_thresh/tx_free_thresh              32/32    256/256    512/512
1. mempool_put (base)                        0          0          0
2. mempool_put_bulk: loop                +7.9%      +9.1%      +2.9%
3. mempool_put_bulk: large 'free'        +7.1%      +8.7%      +3.4%
   (free[I40E_MAX_RING_DESC])

ThunderX2:
tx_rs_thresh/tx_free_thresh              32/32    256/256    512/512
1. mempool_put (base)                        0          0          0
2. mempool_put_bulk: loop                +7.6%     +10.5%      +7.6%
3. mempool_put_bulk: large 'free'        +1.7%     +18.4%     +10.2%
   (free[I40E_MAX_RING_DESC])

As a result, I feel the "loop" approach may be better, and it does not look very heavy
according to the tests.
What are your views? I look forward to your reply.
Thanks a lot.


Thread overview: 16+ messages
2021-05-27  8:17 [dpdk-dev] [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
2021-06-22  6:07   ` Xing, Beilei
2021-06-22  9:58     ` [dpdk-dev] Re: " Feifei Wang
2021-06-22 10:08       ` Feifei Wang
2021-06-23  7:02         ` [dpdk-dev] " Xing, Beilei
2021-06-25  9:40           ` Feifei Wang [this message]
2021-06-28  2:27             ` Xing, Beilei
2021-06-28  2:28               ` [dpdk-dev] Re: " Feifei Wang
2021-05-27  8:17 ` [dpdk-dev] [PATCH v1 2/2] net/i40e: improve performance for vector Tx Feifei Wang
2021-06-22  1:52 ` [dpdk-dev] Re: [PATCH v1 0/2] net/i40e: improve free mbuf Feifei Wang
2021-06-30  6:40 ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Feifei Wang
2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: improve performance for scalar Tx Feifei Wang
2021-06-30  6:59     ` Xing, Beilei
2021-06-30  6:40   ` [dpdk-dev] [PATCH v3 2/2] net/i40e: improve performance for vector Tx Feifei Wang
2021-07-01 12:34   ` [dpdk-dev] [PATCH v3 0/2] net/i40e: improve free mbuf for Tx Zhang, Qi Z
