DPDK patches and discussions
From: Alan Elder <alan.elder@microsoft.com>
To: Long Li <longli@microsoft.com>,
	Ferruh Yigit <ferruh.yigit@amd.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: "dev@dpdk.org" <dev@dpdk.org>, stephen <stephen@networkplumber.org>
Subject: RE: [PATCH v2] net/netvsc: fix number Tx queues > Rx queues
Date: Tue, 19 Mar 2024 14:19:40 +0000	[thread overview]
Message-ID: <PA4PR83MB05265B4A616639BF0F0A734D972C2@PA4PR83MB0526.EURPRD83.prod.outlook.com> (raw)
In-Reply-To: <SJ1PR21MB3457C3F4261C263951DE9F4BCE2B2@SJ1PR21MB3457.namprd21.prod.outlook.com>

Thanks for the feedback Long.

I've made both changes you suggested, plus one additional change: don't try to allocate an mbuf if the pool is null.

This means that if a packet is received on an Rx queue that isn't being polled, it shows up as an "mbuf allocation failed" error rather than causing a segfault.
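As an illustration only (not the actual hunk in v3), the guard would sit in the receive path right before the mbuf allocation, along these lines; the exact function and the counter used here are my assumptions:

	/* Placeholder Rx queues created only to carry Tx completions have no
	 * mbuf pool; count the packet as an allocation failure and drop it
	 * rather than dereferencing a NULL pool.
	 */
	if (unlikely(rxq->mb_pool == NULL)) {
		struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];

		dev->data->rx_mbuf_alloc_failed++;
		return;
	}

	m = rte_pktmbuf_alloc(rxq->mb_pool);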

Cheers,
Alan

> -----Original Message-----
> From: Long Li <longli@microsoft.com>
> Sent: Tuesday, March 12, 2024 7:09 PM
> To: Alan Elder <alan.elder@microsoft.com>; Ferruh Yigit
> <ferruh.yigit@amd.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; stephen <stephen@networkplumber.org>
> Subject: RE: [PATCH v2] net/netvsc: fix number Tx queues > Rx queues
> 
> > diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
> > index 9bf1ec5509..c0aaeaa972 100644
> > --- a/drivers/net/netvsc/hn_rxtx.c
> > +++ b/drivers/net/netvsc/hn_rxtx.c
> > @@ -243,6 +243,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  {
> >  	struct hn_data *hv = dev->data->dev_private;
> >  	struct hn_tx_queue *txq;
> > +	struct hn_rx_queue *rxq;
> >  	char name[RTE_MEMPOOL_NAMESIZE];
> >  	uint32_t tx_free_thresh;
> >  	int err = -ENOMEM;
> > @@ -301,6 +302,22 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
> >  		goto error;
> >  	}
> >
> > +	/*
> > +	 * If there are more Tx queues than Rx queues, allocate rx_queues
> > +	 * with event buffer so that Tx completion messages can still be
> > +	 * received
> > +	 */
> > +	if (queue_idx >= dev->data->nb_rx_queues) {
> > +		rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
> 
> Need to check if rxq is NULL.
> 
> > +		/*
> > +		 * Don't allocate mbuf pool or rx ring.  RSS is always configured
> > +		 * to ensure packets aren't received by this Rx queue.
> > +		 */
> > +		rxq->mb_pool = NULL;
> > +		rxq->rx_ring = NULL;
> > +		dev->data->rx_queues[queue_idx] = rxq;
> > +	}
> > +
> >  	txq->agg_szmax  = RTE_MIN(hv->chim_szmax, hv->rndis_agg_size);
> >  	txq->agg_pktmax = hv->rndis_agg_pkts;
> >  	txq->agg_align  = hv->rndis_agg_align;
> > @@ -354,6 +371,17 @@ static void hn_txd_put(struct hn_tx_queue *txq, struct hn_txdesc *txd)
> >  	rte_mempool_put(txq->txdesc_pool, txd);
> >  }
> >
> > +static void
> > +hn_rx_queue_free_common(struct hn_rx_queue *rxq)
> > +{
> > +	if (!rxq)
> > +		return;
> > +
> > +	rte_free(rxq->rxbuf_info);
> > +	rte_free(rxq->event_buf);
> > +	rte_free(rxq);
> > +}
> > +
> >  void
> >  hn_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
> >  {
> > @@ -364,6 +392,13 @@ hn_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
> >  	if (!txq)
> >  		return;
> >
> > +	/*
> > +	 * Free any Rx queues allocated for a Tx queue without a corresponding
> > +	 * Rx queue
> > +	 */
> > +	if (qid >= dev->data->nb_rx_queues)
> > +		hn_rx_queue_free_common(dev->data->rx_queues[qid]);
> > +
> >  	rte_mempool_free(txq->txdesc_pool);
> >
> >  	rte_memzone_free(txq->tx_rndis_mz);
> > @@ -942,6 +977,13 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
> >  	if (queue_idx == 0) {
> >  		rxq = hv->primary;
> >  	} else {
> > +		/*
> > +		 * If the number of Tx queues was previously greater than
> > +		 * the number of Rx queues, we may already have allocated
> > +		 * an rxq. If so, free it now before allocating a new one.
> > +		 */
> > +		hn_rx_queue_free_common(dev->data->rx_queues[queue_idx]);
> 
> This logic seems strange. How about checking whether rxq is already allocated,
> and allocating it only if it isn't?
> 
> Something like:
> 
> if (!dev->data->rx_queues[queue_idx])
> 	rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
> 
> 
> 
> Thanks,
> 
> Long
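
For reference, a rough sketch of how both comments could be folded together; it reuses the names from the patch above but is illustrative only, not the actual v3 diff:

	/* In hn_dev_tx_queue_setup(): keep the placeholder rxq only if the
	 * allocation succeeded, otherwise fail the Tx queue setup cleanly.
	 */
	if (queue_idx >= dev->data->nb_rx_queues) {
		rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
		if (!rxq) {
			err = -ENOMEM;
			goto error;
		}
		/* RSS keeps data traffic off this queue, so no mbuf pool or
		 * rx ring is needed; the queue only receives Tx completions.
		 */
		rxq->mb_pool = NULL;
		rxq->rx_ring = NULL;
		dev->data->rx_queues[queue_idx] = rxq;
	}

	/* In hn_dev_rx_queue_setup(): reuse a previously allocated placeholder
	 * rxq for this index instead of freeing and reallocating it.
	 */
	if (dev->data->rx_queues[queue_idx])
		rxq = dev->data->rx_queues[queue_idx];
	else
		rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);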


Thread overview: 22+ messages
2024-02-29 19:29 [PATCH] " Alan Elder
2024-02-29 21:53 ` Stephen Hemminger
2024-03-01  2:03   ` Long Li
2024-03-08 18:21     ` Alan Elder
2024-03-08 18:09 ` [PATCH v2] " Alan Elder
2024-03-11 22:31   ` Ferruh Yigit
2024-03-12 19:08   ` Long Li
2024-03-19 14:16     ` [PATCH v3] " Alan Elder
2024-03-19 18:40       ` Long Li
2024-04-11 11:38       ` Ferruh Yigit
2024-04-11 20:45         ` [EXTERNAL] " Alan Elder
2024-04-12 10:23           ` Ferruh Yigit
2024-04-12 16:50             ` Alan Elder
2024-04-15 17:54               ` Ferruh Yigit
2024-04-15 14:40       ` [PATCH v4] " Alan Elder
2024-04-15 18:11         ` Ferruh Yigit
2024-04-17 23:45           ` Long Li
2024-05-01  7:43         ` Morten Brørup
2024-05-20 13:52           ` Ferruh Yigit
2024-10-03 22:55         ` Stephen Hemminger
2024-10-17 19:21           ` Long Li
2024-03-19 14:19     ` Alan Elder [this message]
