From: Bruce Richardson <bruce.richardson@intel.com>
To: Chengwen Feng <fengchengwen@huawei.com>
Cc: <thomas@monjalon.net>, <dev@dpdk.org>, <kevin.laatz@intel.com>
Subject: Re: [PATCH] examples/dma: support DMA dequeue when no packet received
Date: Mon, 25 Jul 2022 11:01:26 +0100 [thread overview]
Message-ID: <Yt5p9qgMWovoQbEW@bricha3-MOBL.ger.corp.intel.com> (raw)
In-Reply-To: <20220725081212.4473-1-fengchengwen@huawei.com>
On Mon, Jul 25, 2022 at 04:12:12PM +0800, Chengwen Feng wrote:
> Currently the example uses DMA in asynchronous mode, as follows:
> nb_rx = rte_eth_rx_burst();
> if (nb_rx == 0)
> continue;
> ...
> dma_enqueue(); // enqueue copy requests for the received packets
> nb_cpl = dma_dequeue(); // get packets whose copies have completed
> ...
>
> There is no waiting inside dma_dequeue(), which is why it's called
> asynchronous. If no packets are received, dma_dequeue() is not called,
> but some packets enqueued in the last cycle may still be in the DMA
> queue. As a result, when the traffic is stopped, the sent and received
> packet counts are unbalanced from the perspective of the traffic
> generator.
>
> This patch performs a DMA dequeue even when no packets are received,
> which helps to judge the test result by comparing the sent and received
> packet counts on the traffic generator side.
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
> ---
> examples/dma/dmafwd.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/examples/dma/dmafwd.c b/examples/dma/dmafwd.c
> index 67b5a9b22b..e3fe226dff 100644
> --- a/examples/dma/dmafwd.c
> +++ b/examples/dma/dmafwd.c
> @@ -408,7 +408,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
>  		nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
>  			pkts_burst, MAX_PKT_BURST);
> 
> -		if (nb_rx == 0)
> +		if (nb_rx == 0 && copy_mode != COPY_MODE_DMA_NUM)
>  			continue;
> 
>  		port_statistics.rx[rx_config->rxtx_port] += nb_rx;
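(To restate the flow being described: a minimal sketch, not the exact
dmafwd.c code, and the dma_enqueue()/dma_dequeue() arguments below are
shown only for illustration.)

	for (;;) {
		nb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst, MAX_PKT_BURST);
		if (nb_rx == 0)
			continue;	/* nothing received, so completions left over
					 * from the previous burst are never polled */

		/* submit copy requests for the received packets */
		dma_enqueue(pkts_burst, pkts_burst_copy, nb_rx, dev_id);

		/* poll, without waiting, for copies that have completed */
		nb_cpl = dma_dequeue(pkts_burst, pkts_burst_copy, MAX_PKT_BURST, dev_id);

		/* ... hand the nb_cpl completed packets to the TX path ... */
	}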
With this change, we would work through all the receive packet
processing code, calling all its functions, just with a packet count of
zero. I therefore wonder if it would be cleaner to do the dma_dequeue
immediately here on receiving zero, and then jump to handle those
dequeued packets. Something like the diff below.
/Bruce
@@ -408,8 +408,13 @@ dma_rx_port(struct rxtx_port_config *rx_config)
 		nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
 			pkts_burst, MAX_PKT_BURST);

-		if (nb_rx == 0)
+		if (nb_rx == 0) {
+			if (copy_mode == COPY_MODE_DMA_NUM &&
+					(nb_rx = dma_dequeue(pkts_burst, pkts_burst_copy,
+						MAX_PKT_BURST, rx_config->dmadev_ids[i])) > 0)
+				goto handle_tx;
 			continue;
+		}

 		port_statistics.rx[rx_config->rxtx_port] += nb_rx;

@@ -450,6 +455,7 @@ dma_rx_port(struct rxtx_port_config *rx_config)
 					pkts_burst_copy[j]);
 		}

+handle_tx:
 		rte_mempool_put_bulk(dma_pktmbuf_pool,
 				(void *)pkts_burst, nb_rx);
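Putting the two hunks together, the per-queue receive path would look
roughly like this (a sketch only, untested, using the dma_dequeue()
signature from the diff above):

	nb_rx = rte_eth_rx_burst(rx_config->rxtx_port, i,
			pkts_burst, MAX_PKT_BURST);

	if (nb_rx == 0) {
		/* no new packets: in DMA mode, still poll for copies
		 * completed from earlier bursts and send those out */
		if (copy_mode == COPY_MODE_DMA_NUM &&
				(nb_rx = dma_dequeue(pkts_burst, pkts_burst_copy,
					MAX_PKT_BURST, rx_config->dmadev_ids[i])) > 0)
			goto handle_tx;
		continue;
	}

	port_statistics.rx[rx_config->rxtx_port] += nb_rx;

	/* ... existing copy submission and completion handling ... */

handle_tx:
	/* return the source mbufs and pass the nb_rx copies on towards TX */
	rte_mempool_put_bulk(dma_pktmbuf_pool, (void *)pkts_burst, nb_rx);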