DPDK patches and discussions
From: "Ferriter, Cian" <cian.ferriter@intel.com>
To: "Yigit, Ferruh" <ferruh.yigit@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] net/pcap: fix infinite Rx with large files
Date: Thu, 4 Feb 2021 17:02:33 +0000	[thread overview]
Message-ID: <BYAPR11MB37516F00406057B99CB152A4EDB39@BYAPR11MB3751.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20210204165103.2355136-1-ferruh.yigit@intel.com>

The new error message looks great.

As I've already given my ack, I'm happy for this to be applied.
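
For anyone wanting to reproduce the original failure, the mbuf shortfall can be provoked by pointing an infinite-Rx pcap vdev at a capture larger than the mbuf pool. The path and mbuf count below are illustrative placeholders, not values from this thread:

```shell
# Illustrative only: large.pcap and the mbuf count are placeholders.
# With fewer mbufs than packets in the capture, the v2 patch makes
# queue setup fail explicitly instead of silently dropping forwarding.
dpdk-testpmd \
    --vdev 'net_pcap0,rx_pcap=/path/to/large.pcap,infinite_rx=1' \
    -- --total-num-mbufs=2048 --forward-mode=mac
```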

> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Thursday 4 February 2021 16:51
> To: Ferriter, Cian <cian.ferriter@intel.com>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org; stable@dpdk.org
> Subject: [PATCH v2] net/pcap: fix infinite Rx with large files
> 
> Packet forwarding is not working when the infinite Rx feature is used
> with large .pcap files that have a high number of packets.
> 
> The problem is that the number of allocated mbufs is less than the
> infinite Rx ring size, and all mbufs are consumed to fill the ring, so
> there is no mbuf left for forwarding.
> 
> The current logic cannot detect that the infinite Rx ring is not filled
> completely and that no more mbufs are left, so setup continues, which
> leads to a silent failure in packet forwarding.
> 
> There isn't much that can be done when there are not enough mbufs for
> the given .pcap file, so additional checks are added to detect the case
> and fail explicitly with an error log.
> 
> Bugzilla ID: 595
> Fixes: a3f5252e5cbd ("net/pcap: enable infinitely Rx a pcap file")
> Cc: stable@dpdk.org
> 
> Reported-by: Cian Ferriter <cian.ferriter@intel.com>
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Cian Ferriter <cian.ferriter@intel.com>
> ---
> v2:
> * Updated log message
> ---
>  drivers/net/pcap/rte_eth_pcap.c | 40 ++++++++++++++++++++-------------
>  1 file changed, 25 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
> index c7751b7ba742..90f5d75ea87f 100644
> --- a/drivers/net/pcap/rte_eth_pcap.c
> +++ b/drivers/net/pcap/rte_eth_pcap.c
> @@ -735,6 +735,17 @@ eth_stats_reset(struct rte_eth_dev *dev)
>  	return 0;
>  }
> 
> +static inline void
> +infinite_rx_ring_free(struct rte_ring *pkts)
> +{
> +	struct rte_mbuf *bufs;
> +
> +	while (!rte_ring_dequeue(pkts, (void **)&bufs))
> +		rte_pktmbuf_free(bufs);
> +
> +	rte_ring_free(pkts);
> +}
> +
>  static int
>  eth_dev_close(struct rte_eth_dev *dev)
>  {
> @@ -753,7 +764,6 @@ eth_dev_close(struct rte_eth_dev *dev)
>  	if (internals->infinite_rx) {
>  		for (i = 0; i < dev->data->nb_rx_queues; i++) {
>  			struct pcap_rx_queue *pcap_q = &internals->rx_queue[i];
> -			struct rte_mbuf *pcap_buf;
> 
>  			/*
>  			 * 'pcap_q->pkts' can be NULL if 'eth_dev_close()'
> @@ -762,11 +772,7 @@ eth_dev_close(struct rte_eth_dev *dev)
>  			if (pcap_q->pkts == NULL)
>  				continue;
> 
> -			while (!rte_ring_dequeue(pcap_q->pkts,
> -					(void **)&pcap_buf))
> -				rte_pktmbuf_free(pcap_buf);
> -
> -			rte_ring_free(pcap_q->pkts);
> +			infinite_rx_ring_free(pcap_q->pkts);
>  		}
>  	}
> 
> @@ -835,21 +841,25 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
>  		while (eth_pcap_rx(pcap_q, bufs, 1)) {
>  			/* Check for multiseg mbufs. */
>  			if (bufs[0]->nb_segs != 1) {
> -				rte_pktmbuf_free(*bufs);
> -
> -				while (!rte_ring_dequeue(pcap_q->pkts,
> -						(void **)bufs))
> -					rte_pktmbuf_free(*bufs);
> -
> -				rte_ring_free(pcap_q->pkts);
> -				PMD_LOG(ERR, "Multiseg mbufs are not supported in infinite_rx "
> -						"mode.");
> +				infinite_rx_ring_free(pcap_q->pkts);
> +				PMD_LOG(ERR,
> +					"Multiseg mbufs are not supported in infinite_rx mode.");
>  				return -EINVAL;
>  			}
> 
>  			rte_ring_enqueue_bulk(pcap_q->pkts,
>  					(void * const *)bufs, 1, NULL);
>  		}
> +
> +		if (rte_ring_count(pcap_q->pkts) < pcap_pkt_count) {
> +			infinite_rx_ring_free(pcap_q->pkts);
> +			PMD_LOG(ERR,
> +				"Not enough mbufs to accommodate packets in pcap file. "
> +				"At least %" PRIu64 " mbufs per queue is required.",
> +				pcap_pkt_count);
> +			return -EINVAL;
> +		}
> +
>  		/*
>  		 * Reset the stats for this queue since eth_pcap_rx calls above
>  		 * didn't result in the application receiving packets.
> --
> 2.29.2



Thread overview: 7+ messages
2021-02-03 15:49 [dpdk-dev] [PATCH] " Ferruh Yigit
2021-02-04 16:03 ` Ferriter, Cian
2021-02-04 16:28   ` Ferruh Yigit
2021-02-04 17:01     ` Ferriter, Cian
2021-02-04 16:51 ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
2021-02-04 17:02   ` Ferriter, Cian [this message]
2021-02-04 17:12     ` Ferruh Yigit
