From: "Ferriter, Cian" <cian.ferriter@intel.com>
To: "Yigit, Ferruh" <ferruh.yigit@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] net/pcap: fix infinite Rx with large files
Date: Thu, 4 Feb 2021 16:03:56 +0000 [thread overview]
Message-ID: <BYAPR11MB3751BBA920A0E798F0E5044FEDB39@BYAPR11MB3751.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20210203154920.2449179-1-ferruh.yigit@intel.com>
Hi Ferruh,
This fixes the issue I was seeing. Now an error is reported rather than failing silently.
I have one piece of feedback, inline below, about the wording of that particular error message, which you can take or leave; I'm happy for you to upstream this fix either way.
Acked-by: Cian Ferriter <cian.ferriter@intel.com>
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Wednesday 3 February 2021 15:49
> To: Ferriter, Cian <cian.ferriter@intel.com>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org; stable@dpdk.org
> Subject: [PATCH] net/pcap: fix infinite Rx with large files
>
> Packet forwarding is not working when the infinite Rx feature is used
> with large .pcap files that have a high number of packets.
>
> The problem is that the number of allocated mbufs is less than the
> infinite Rx ring size, and all mbufs are consumed to fill the ring, so
> there are no mbufs left for forwarding.
>
> The current logic cannot detect that the infinite Rx ring is not
> completely filled and that no more mbufs are left, so setup continues,
> which leads to a silent failure of packet forwarding.
>
> There isn't much that can be done when there are not enough mbufs for
> the given .pcap file, so additional checks are added to detect the case
> and fail explicitly with an error log.
>
> Bugzilla ID: 595
> Fixes: a3f5252e5cbd ("net/pcap: enable infinitely Rx a pcap file")
> Cc: stable@dpdk.org
>
> Reported-by: Cian Ferriter <cian.ferriter@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
> drivers/net/pcap/rte_eth_pcap.c | 40 ++++++++++++++++++++-------------
> 1 file changed, 25 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
> index ff02ade70d1a..98f80368ca1d 100644
> --- a/drivers/net/pcap/rte_eth_pcap.c
> +++ b/drivers/net/pcap/rte_eth_pcap.c
> @@ -735,6 +735,17 @@ eth_stats_reset(struct rte_eth_dev *dev)
>  	return 0;
>  }
>
> +static inline void
> +infinite_rx_ring_free(struct rte_ring *pkts)
> +{
> +	struct rte_mbuf *bufs;
> +
> +	while (!rte_ring_dequeue(pkts, (void **)&bufs))
> +		rte_pktmbuf_free(bufs);
> +
> +	rte_ring_free(pkts);
> +}
> +
>  static int
>  eth_dev_close(struct rte_eth_dev *dev)
>  {
> @@ -753,7 +764,6 @@ eth_dev_close(struct rte_eth_dev *dev)
>  	if (internals->infinite_rx) {
>  		for (i = 0; i < dev->data->nb_rx_queues; i++) {
>  			struct pcap_rx_queue *pcap_q = &internals->rx_queue[i];
> -			struct rte_mbuf *pcap_buf;
>
>  			/*
>  			 * 'pcap_q->pkts' can be NULL if 'eth_dev_close()'
> @@ -762,11 +772,7 @@ eth_dev_close(struct rte_eth_dev *dev)
>  			if (pcap_q->pkts == NULL)
>  				continue;
>
> -			while (!rte_ring_dequeue(pcap_q->pkts,
> -					(void **)&pcap_buf))
> -				rte_pktmbuf_free(pcap_buf);
> -
> -			rte_ring_free(pcap_q->pkts);
> +			infinite_rx_ring_free(pcap_q->pkts);
>  		}
>  	}
>
> @@ -835,21 +841,25 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>  		while (eth_pcap_rx(pcap_q, bufs, 1)) {
>  			/* Check for multiseg mbufs. */
>  			if (bufs[0]->nb_segs != 1) {
> -				rte_pktmbuf_free(*bufs);
> -
> -				while (!rte_ring_dequeue(pcap_q->pkts,
> -						(void **)bufs))
> -					rte_pktmbuf_free(*bufs);
> -
> -				rte_ring_free(pcap_q->pkts);
> -				PMD_LOG(ERR, "Multiseg mbufs are not supported in infinite_rx "
> -						"mode.");
> +				infinite_rx_ring_free(pcap_q->pkts);
> +				PMD_LOG(ERR,
> +					"Multiseg mbufs are not supported in infinite_rx mode.");
>  				return -EINVAL;
>  			}
>
>  			rte_ring_enqueue_bulk(pcap_q->pkts,
>  					(void * const *)bufs, 1, NULL);
>  		}
> +
> +		if (rte_ring_count(pcap_q->pkts) < pcap_pkt_count) {
> +			infinite_rx_ring_free(pcap_q->pkts);
> +			PMD_LOG(ERR,
> +				"Not enough mbuf to fill the infinite_rx ring. "
> +				"At least %" PRIu64 " mbufs per queue is required to fill the ring",
> +				pcap_pkt_count);
[Cian Ferriter]
So we can say that the issue is either too many packets in the PCAP or too few mbufs for the ring. What can the user do about this?
They can use a PCAP with fewer packets.
Can they change how many mbufs are available, e.g. by passing more memory or some other option? (Rough sketch of one option below.)
Should we mention these remedies, or is that outside the scope of an error message?
As I mentioned, I'm happy for you to upstream either way.
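On the mbuf side, for what it's worth: with testpmd the pool can simply be made larger (I believe --total-num-mbufs does it), and an application creating its own pool could size it from the pcap packet count. A purely illustrative sketch, not part of this patch (the function name, the headroom value, and the assumption that the packet count is already known are all mine):

#include <rte_lcore.h>
#include <rte_mbuf.h>

/*
 * Create an mbuf pool large enough for the infinite_rx ring plus some
 * headroom for the regular datapath. The pcap PMD fills the ring from
 * the pool given at Rx queue setup, so the pool must hold at least as
 * many mbufs as the file has packets.
 */
static struct rte_mempool *
pcap_sized_pool(uint32_t nb_pcap_pkts)
{
	uint32_t nb_mbufs = nb_pcap_pkts + 1024;	/* headroom */

	return rte_pktmbuf_pool_create("pcap_rx_pool", nb_mbufs,
			256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
}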
> +			return -EINVAL;
> +		}
> +
>  		/*
>  		 * Reset the stats for this queue since eth_pcap_rx calls above
>  		 * didn't result in the application receiving packets.
> --
> 2.29.2
Thread overview: 7+ messages
2021-02-03 15:49 Ferruh Yigit
2021-02-04 16:03 ` Ferriter, Cian [this message]
2021-02-04 16:28 ` Ferruh Yigit
2021-02-04 17:01 ` Ferriter, Cian
2021-02-04 16:51 ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
2021-02-04 17:02 ` Ferriter, Cian
2021-02-04 17:12 ` Ferruh Yigit