DPDK patches and discussions
From: Feifei Wang <Feifei.Wang2@arm.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>, nd <nd@arm.com>
Subject: Re: Re: [RFC PATCH v1 0/4] Direct re-arming of buffers on receive side
Date: Tue, 28 Feb 2023 06:52:08 +0000
Message-ID: <AS8PR08MB77184B4EB961E0ED302C8486C8AC9@AS8PR08MB7718.eurprd08.prod.outlook.com>
In-Reply-To: <AS8PR08MB77180D81A851D524DF890690C8AC9@AS8PR08MB7718.eurprd08.prod.outlook.com>

CC to the right e-mail address.

> > I also have some concerns about how useful this API will be in real life,
> > and whether the use case is worth the complexity it brings.
> > And it looks like too much low-level detail for the application.
> 
> Concerns about direct rearm:
> 1. An earlier version of the design required the rxq/txq pairing to be done
> before starting the data plane threads. This required the user to know the
> direction of the packet flow in advance, which limited the use cases.
> 
> In the latest version, direct-rearm mode is packaged as a separate API.
> This allows the user to change the rxq/txq pairing at runtime in the data plane,
> according to the application's analysis of the packet flow, for example:
> ------------------------------------------------------------------------------
> Step 1: the upper application analyses the flow direction;
> Step 2: rxq_rearm_data = rte_eth_rx_get_rearm_data(rx_portid, rx_queueid);
> Step 3: rte_eth_dev_direct_rearm(rx_portid, rx_queueid, tx_portid, tx_queueid, rxq_rearm_data);
> Step 4: rte_eth_rx_burst(rx_portid, rx_queueid);
> Step 5: rte_eth_tx_burst(tx_portid, tx_queueid);
> ------------------------------------------------------------------------------
> The above lets the user change the rxq/txq pairing at runtime, without needing
> to know the flow direction in advance. This effectively expands the direct-rearm
> use scenarios (a data plane loop along these lines is sketched just below).
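As a rough illustration only, the following sketch strings the five steps above into a polling loop. It assumes the proposed RFC signatures exactly as written above (rte_eth_rx_get_rearm_data() returning the rearm data, rte_eth_dev_direct_rearm() consuming it), plus a purely hypothetical app_select_tx_queue() helper standing in for the application's flow analysis; none of this is taken verbatim from the patches.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

/* Hypothetical application helper: picks the Tx queue currently paired
 * with the given Rx queue, based on the application's flow analysis. */
void app_select_tx_queue(uint16_t rx_portid, uint16_t rx_queueid,
			 uint16_t *tx_portid, uint16_t *tx_queueid);

static void
app_fwd_loop(uint16_t rx_portid, uint16_t rx_queueid)
{
	struct rte_mbuf *pkts[BURST];
	struct rte_eth_rxq_rearm_data rxq_rearm_data;
	uint16_t tx_portid, tx_queueid, nb_rx;

	/* Step 2: fetch the rearm data of the Rx queue once up front. */
	rxq_rearm_data = rte_eth_rx_get_rearm_data(rx_portid, rx_queueid);

	for (;;) {
		/* Step 1: decide, at runtime, which Tx queue this Rx queue
		 * is currently paired with (hypothetical helper). */
		app_select_tx_queue(rx_portid, rx_queueid, &tx_portid, &tx_queueid);

		/* Step 3: refill the Rx buffer ring from that Tx queue's
		 * already-transmitted buffers. */
		rte_eth_dev_direct_rearm(rx_portid, rx_queueid,
					 tx_portid, tx_queueid, rxq_rearm_data);

		/* Steps 4-5: the normal burst path is unchanged. */
		nb_rx = rte_eth_rx_burst(rx_portid, rx_queueid, pkts, BURST);
		if (nb_rx > 0)
			rte_eth_tx_burst(tx_portid, tx_queueid, pkts, nb_rx);
	}
}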
> 
> 2. The earlier version of direct rearm broke the independence between
> the Rx and Tx paths.
> In the latest version, we use a structure to let Rx and Tx interact, for example:
> ------------------------------------------------------------------------------
> struct rte_eth_rxq_rearm_data {
>         struct rte_mbuf **buf_ring;  /**< Buffer ring of Rx queue. */
>         uint16_t *refill_head;       /**< Head of buffer ring refilling descriptors. */
>         uint16_t *receive_tail;      /**< Tail of buffer ring receiving pkts. */
>         uint16_t nb_buf;             /**< Configured number of buffers in the ring. */
> } rxq_rearm_data;
> 
> data path:
> 	/* Get direct-rearm info for a receive queue of an Ethernet device. */
> 	rxq_rearm_data = rte_eth_rx_get_rearm_data(rx_portid, rx_queueid);
> 	rte_eth_dev_direct_rearm(rx_portid, rx_queueid, tx_portid, tx_queueid, rxq_rearm_data) {
> 
> 		/* Use Tx used buffers to refill the Rx buffer ring in direct rearm mode. */
> 		nb_rearm = rte_eth_tx_fill_sw_ring(tx_portid, tx_queueid, rxq_rearm_data);
> 
> 		/* Flush Rx descriptors in direct rearm mode. */
> 		rte_eth_rx_flush_descs(rx_portid, rx_queueid, nb_rearm);
> 	}
> 	rte_eth_rx_burst(rx_portid, rx_queueid);
> 	rte_eth_tx_burst(tx_portid, tx_queueid);
> ------------------------------------------------------------------------------
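To make the Rx/Tx interaction through this structure more concrete, here is a rough conceptual sketch of what a Tx-side fill routine such as rte_eth_tx_fill_sw_ring() could do with the rearm data. It is not the i40e/ixgbe code from the patches; it assumes a power-of-two buf_ring size, a free-running *refill_head index, and that the Tx driver has already gathered its completed mbufs into a done[] array.

#include <stdint.h>
#include <rte_mbuf.h>

/* Conceptual sketch only; see the assumptions in the paragraph above. */
static uint16_t
tx_fill_sw_ring_sketch(struct rte_eth_rxq_rearm_data *r,
		       struct rte_mbuf **done, uint16_t nb_done)
{
	uint16_t head = *r->refill_head;
	uint16_t mask = r->nb_buf - 1;	/* assumes nb_buf is a power of two */
	uint16_t i;

	for (i = 0; i < nb_done; i++) {
		/* Hand the Tx-freed mbuf straight to the Rx buffer ring,
		 * skipping the usual round trip through the mempool. */
		r->buf_ring[(head + i) & mask] = done[i];
	}

	/* Publish the new refill head so the Rx side can program the matching
	 * descriptors afterwards (rte_eth_rx_flush_descs() in the flow above). */
	*r->refill_head = head + nb_done;

	return nb_done;
}

The only things the Tx side needs from the Rx side are the addresses published in the rearm data, which is what allows the pairing described next to cross PMDs from different vendors.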
> Furthermore, this means direct-rearm usage is no longer limited to a single PMD:
> it can move buffers between PMDs from different vendors, and the buffers can even
> be placed anywhere in your Rx buffer ring as long as the address of the buffer
> ring can be provided.
> In the latest version, we enable direct rearm in the i40e and ixgbe PMDs, and we
> also tried using the i40e driver on Rx with the ixgbe driver on Tx, achieving a
> 7-9% performance improvement from direct rearm.
> 
> 3. Difference between direct rearm, the ZC API used with the mempool, and the
> general path.
> For the general path:
>                 Rx: 32 pkts memcpy from mempool cache to rx_sw_ring
>                 Tx: 32 pkts memcpy from tx_sw_ring to a temporary variable +
>                     32 pkts memcpy from the temporary variable to mempool cache
> For the ZC API used with the mempool:
>                 Rx: 32 pkts memcpy from mempool cache to rx_sw_ring
>                 Tx: 32 pkts memcpy from tx_sw_ring to zero-copy mempool cache
>                 Refer link:
> http://patches.dpdk.org/project/dpdk/patch/20230221055205.22984-2-kamalakshitha.aligeri@arm.com/
> For direct rearm:
>                 Rx/Tx: 32 pkts memcpy from tx_sw_ring to rx_sw_ring
> Thus, in one loop, compared to the general path, direct rearm saves 32+32=64
> pkts of memcpy; compared to the ZC API used with the mempool, it saves 32 pkts
> of memcpy per loop.
> So direct rearm has its own benefits.
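In per-burst terms, for bursts of 32 packets as above, that works out to roughly 32 + (32 + 32) = 96 packets' worth of memcpy per loop for the general path, 32 + 32 = 64 for the ZC mempool API, and 32 for direct rearm, which is where the 64- and 32-packet savings quoted above come from.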
> 
> 4. Performance test and real cases
> For the performance test, in l3fwd we achieve a performance improvement of up
> to 15% on an Arm server.
> For real cases, we have enabled direct rearm in VPP and achieved a performance
> improvement.
> 

Thread overview: 67+ messages
2021-12-24 16:46 Feifei Wang
2021-12-24 16:46 ` [RFC PATCH v1 1/4] net/i40e: enable direct re-arm mode Feifei Wang
2021-12-24 16:46 ` [RFC PATCH v1 2/4] ethdev: add API for " Feifei Wang
2021-12-24 19:38   ` Stephen Hemminger
2021-12-26  9:49     ` Re: " Feifei Wang
2021-12-26 10:31       ` Morten Brørup
2021-12-24 16:46 ` [RFC PATCH v1 3/4] net/i40e: add direct re-arm mode internal API Feifei Wang
2021-12-24 16:46 ` [RFC PATCH v1 4/4] examples/l3fwd: give an example for direct rearm mode Feifei Wang
2021-12-26 10:25 ` [RFC PATCH v1 0/4] Direct re-arming of buffers on receive side Morten Brørup
2021-12-28  6:55   ` Re: " Feifei Wang
2022-01-18 15:51     ` Ferruh Yigit
2022-01-18 16:53       ` Thomas Monjalon
2022-01-18 17:27         ` Morten Brørup
2022-01-27  5:24           ` Honnappa Nagarahalli
2022-01-27 16:45             ` Ananyev, Konstantin
2022-02-02 19:46               ` Honnappa Nagarahalli
2022-01-27  5:16         ` Honnappa Nagarahalli
2023-02-28  6:43       ` Re: " Feifei Wang
2023-02-28  6:52         ` Feifei Wang [this message]
2022-01-27  4:06   ` Honnappa Nagarahalli
2022-01-27 17:13     ` Morten Brørup
2022-01-28 11:29     ` Morten Brørup
2023-03-23 10:43 ` [PATCH v4 0/3] Recycle buffers from Tx to Rx Feifei Wang
2023-03-23 10:43   ` [PATCH v4 1/3] ethdev: add API for buffer recycle mode Feifei Wang
2023-03-23 11:41     ` Morten Brørup
2023-03-29  2:16       ` Feifei Wang
2023-03-23 10:43   ` [PATCH v4 2/3] net/i40e: implement recycle buffer mode Feifei Wang
2023-03-23 10:43   ` [PATCH v4 3/3] net/ixgbe: " Feifei Wang
2023-03-30  6:29 ` [PATCH v5 0/3] Recycle buffers from Tx to Rx Feifei Wang
2023-03-30  6:29   ` [PATCH v5 1/3] ethdev: add API for buffer recycle mode Feifei Wang
2023-03-30  7:19     ` Morten Brørup
2023-03-30  9:31       ` Feifei Wang
2023-03-30 15:15         ` Morten Brørup
2023-03-30 15:58         ` Morten Brørup
2023-04-26  6:59           ` Feifei Wang
2023-04-19 14:46     ` Ferruh Yigit
2023-04-26  7:29       ` Feifei Wang
2023-03-30  6:29   ` [PATCH v5 2/3] net/i40e: implement recycle buffer mode Feifei Wang
2023-03-30  6:29   ` [PATCH v5 3/3] net/ixgbe: " Feifei Wang
2023-04-19 14:46     ` Ferruh Yigit
2023-04-26  7:36       ` Feifei Wang
2023-03-30 15:04   ` [PATCH v5 0/3] Recycle buffers from Tx to Rx Stephen Hemminger
2023-04-03  2:48     ` Feifei Wang
2023-04-19 14:56   ` Ferruh Yigit
2023-04-25  7:57     ` Feifei Wang
2023-05-25  9:45 ` [PATCH v6 0/4] Recycle mbufs from Tx queue to Rx queue Feifei Wang
2023-05-25  9:45   ` [PATCH v6 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-05-25 15:08     ` Morten Brørup
2023-05-31  6:10       ` Feifei Wang
2023-06-05 12:53     ` Константин Ананьев
2023-06-06  2:55       ` Feifei Wang
2023-06-06  7:10         ` Konstantin Ananyev
2023-06-06  7:31           ` Feifei Wang
2023-06-06  8:34             ` Konstantin Ananyev
2023-06-07  0:00               ` Ferruh Yigit
2023-06-12  3:25                 ` Feifei Wang
2023-05-25  9:45   ` [PATCH v6 2/4] net/i40e: implement " Feifei Wang
2023-06-05 13:02     ` Константин Ананьев
2023-06-06  3:16       ` Feifei Wang
2023-06-06  7:18         ` Konstantin Ananyev
2023-06-06  7:58           ` Feifei Wang
2023-06-06  8:27             ` Konstantin Ananyev
2023-06-12  3:05               ` Feifei Wang
2023-05-25  9:45   ` [PATCH v6 3/4] net/ixgbe: " Feifei Wang
2023-05-25  9:45   ` [PATCH v6 4/4] app/testpmd: add recycle mbufs engine Feifei Wang
2023-06-05 13:08     ` Константин Ананьев
2023-06-06  6:32       ` Feifei Wang
