DPDK patches and discussions
* [PATCH] net/gve: fix RX buffer size alignment
@ 2023-11-11  0:34 Joshua Washington
  2023-11-11  4:18 ` Ferruh Yigit
  2023-11-13 23:12 ` [PATCH v2] " Joshua Washington
  0 siblings, 2 replies; 6+ messages in thread
From: Joshua Washington @ 2023-11-11  0:34 UTC (permalink / raw)
  To: Junfeng Guo, Jeroen de Borst, Rushil Gupta, Joshua Washington,
	Xiaoyun Li
  Cc: dev, stable, Ferruh Yigit

In GVE, both queue formats have RX buffer size alignment requirements
which are not respected whenever the mbuf size is greater than the
minimum required by DPDK (2048 + 128). This causes the driver to break
silently in initialization, and no queues are created, leading to no
network traffic.

This change aims to remedy this by restricting the RX receive buffer
sizes to valid sizes for their respective queue formats.

Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
 drivers/net/gve/gve_ethdev.c |  5 ++++-
 drivers/net/gve/gve_ethdev.h | 22 +++++++++++++++++++++-
 drivers/net/gve/gve_rx.c     | 10 +++++++++-
 drivers/net/gve/gve_rx_dqo.c |  9 ++++++++-
 4 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index eb3bc7e151..43b4ab523d 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -296,7 +296,10 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_queues = priv->max_nb_rxq;
 	dev_info->max_tx_queues = priv->max_nb_txq;
-	dev_info->min_rx_bufsize = GVE_MIN_BUF_SIZE;
+	if (gve_is_gqi(priv))
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_GQI;
+	else
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_DQO;
 	dev_info->max_rx_pktlen = priv->max_mtu + RTE_ETHER_HDR_LEN;
 	dev_info->max_mtu = priv->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 755ee8ad15..0cc3b176f9 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -20,7 +20,13 @@
 #define GVE_DEFAULT_TX_RS_THRESH     32
 #define GVE_TX_MAX_FREE_SZ          512
 
-#define GVE_MIN_BUF_SIZE	    1024
+#define GVE_RX_BUF_ALIGN_DQO        128
+#define GVE_RX_MIN_BUF_SIZE_DQO    1024
+#define GVE_RX_MAX_BUF_SIZE_DQO    ((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
+
+#define GVE_RX_BUF_ALIGN_GQI       2048
+#define GVE_RX_MIN_BUF_SIZE_GQI    2048
+#define GVE_RX_MAX_BUF_SIZE_GQI    4096
 
 #define GVE_TX_CKSUM_OFFLOAD_MASK (		\
 		RTE_MBUF_F_TX_L4_MASK  |	\
@@ -337,6 +343,20 @@ gve_clear_device_rings_ok(struct gve_priv *priv)
 				&priv->state_flags);
 }
 
+static inline int
+gve_validate_rx_buffer_size(struct gve_priv *priv, uint16_t rx_buffer_size)
+{
+	uint16_t min_rx_buffer_size = gve_is_gqi(priv) ?
+		GVE_RX_MIN_BUF_SIZE_GQI : GVE_RX_MIN_BUF_SIZE_DQO;
+	if (rx_buffer_size < min_rx_buffer_size) {
+		PMD_DRV_LOG(ERR, "mbuf size must be at least %hu bytes",
+			    min_rx_buffer_size);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 int
 gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
 		   unsigned int socket_id, const struct rte_eth_rxconf *conf,
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index b8c92ccda0..0049c6428d 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -301,6 +301,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -344,7 +345,14 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len = rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	mbuf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	err = gve_validate_rx_buffer_size(hw, mbuf_len);
+	if (err)
+		goto err_rxq;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_GQI,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_GQI));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring", sizeof(struct rte_mbuf *) * nb_desc,
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 7e7ddac48e..2ec6135705 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -220,6 +220,7 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -264,8 +265,14 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len =
+	mbuf_len =
 		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	err = gve_validate_rx_buffer_size(hw, mbuf_len);
+	if (err)
+		goto free_rxq;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_DQO,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_DQO));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring",
-- 
2.42.0.869.gea05f2083d-goog


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH] net/gve: fix RX buffer size alignment
  2023-11-11  0:34 [PATCH] net/gve: fix RX buffer size alignment Joshua Washington
@ 2023-11-11  4:18 ` Ferruh Yigit
  2023-11-13 22:47   ` Joshua Washington
  2023-11-13 23:12 ` [PATCH v2] " Joshua Washington
  1 sibling, 1 reply; 6+ messages in thread
From: Ferruh Yigit @ 2023-11-11  4:18 UTC (permalink / raw)
  To: Joshua Washington, Junfeng Guo, Jeroen de Borst, Rushil Gupta,
	Xiaoyun Li
  Cc: dev, stable

On 11/11/2023 12:34 AM, Joshua Washington wrote:
> In GVE, both queue formats have RX buffer size alignment requirements
> which are not respected whenever the mbuf size is greater than the
> minimum required by DPDK (2048 + 128).
>

Hi Joshua,

We don't have a way to inform the application about the alignment
requirement, so drivers enforce it, as you are doing in this patch.

But I am not clear on what "minimum required by DPDK" means, since the
application can provide smaller mbufs.
It is also not clear why this alignment causes a problem only with mbuf
sizes bigger than 2048 + 128 bytes. Can you please clarify?

> This causes the driver to break
> silently in initialization, and no queues are created, leading to no
> network traffic.
> 
> This change aims to remedy this by restricting the RX receive buffer
> sizes to valid sizes for their respective queue formats.
> 
> Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
> Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
> Cc: junfeng.guo@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Joshua Washington <joshwash@google.com>
> Reviewed-by: Rushil Gupta <rushilg@google.com>

<...>

> @@ -337,6 +343,20 @@ gve_clear_device_rings_ok(struct gve_priv *priv)
>  				&priv->state_flags);
>  }
>  
> +static inline int
> +gve_validate_rx_buffer_size(struct gve_priv *priv, uint16_t rx_buffer_size)
> +{
> +	uint16_t min_rx_buffer_size = gve_is_gqi(priv) ?
> +		GVE_RX_MIN_BUF_SIZE_GQI : GVE_RX_MIN_BUF_SIZE_DQO;
> +	if (rx_buffer_size < min_rx_buffer_size) {
> +		PMD_DRV_LOG(ERR, "mbuf size must be at least %hu bytes",
> +			    min_rx_buffer_size);
> +		return -EINVAL;
> +	}
> +
>

When 'dev_info->min_rx_bufsize' is set correctly, the above check is
already done at the ethdev level; can you please check
'rte_eth_check_rx_mempool()'?


> +	return 0;
> +}
> +
>  int
>  gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
>  		   unsigned int socket_id, const struct rte_eth_rxconf *conf,
> diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
> index b8c92ccda0..0049c6428d 100644
> --- a/drivers/net/gve/gve_rx.c
> +++ b/drivers/net/gve/gve_rx.c
> @@ -301,6 +301,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
>  	const struct rte_memzone *mz;
>  	struct gve_rx_queue *rxq;
>  	uint16_t free_thresh;
> +	uint32_t mbuf_len;
>  	int err = 0;
>  
>  	if (nb_desc != hw->rx_desc_cnt) {
> @@ -344,7 +345,14 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
>  	rxq->hw = hw;
>  	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
>  
> -	rxq->rx_buf_len = rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
> +	mbuf_len =
> +		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
> +	err = gve_validate_rx_buffer_size(hw, mbuf_len);
> +	if (err)
> +		goto err_rxq;
> +	rxq->rx_buf_len =
> +		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_GQI,
> +			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_GQI));
>

Just for your info, this release added 'dev_info.max_rx_bufsize', and an
ethdev-layer note is logged [1] if the user provides an mbuf size bigger
than this value. The ethdev-layer note is mainly for memory optimization,
but the above check is required in the driver.

[1]
https://git.dpdk.org/dpdk/commit/?id=75c7849a9dcca356985fdb87f2d11cae135dfb1a



* Re: [PATCH] net/gve: fix RX buffer size alignment
  2023-11-11  4:18 ` Ferruh Yigit
@ 2023-11-13 22:47   ` Joshua Washington
  0 siblings, 0 replies; 6+ messages in thread
From: Joshua Washington @ 2023-11-13 22:47 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Junfeng Guo, Jeroen de Borst, Rushil Gupta, Xiaoyun Li, dev, stable


Hello Ferruh,

> But I am not clear on what "minimum required by DPDK" means, since the
> application can provide smaller mbufs.
> It is also not clear why this alignment causes a problem only with mbuf
> sizes bigger than 2048 + 128 bytes. Can you please clarify?
>

My apologies, the statement "minimum required by DPDK" is a typo; it should
probably say "minimum recommended", per
https://doc.dpdk.org/api/rte__mbuf__core_8h.html#a185c46bcdbfa90f6c50a4b037a93313f.
The GVE GQ driver is one which requires a packet buffer size of at least
2K. This alignment issue manifests in a different way when the mbuf size is
smaller than the minimum supported by the device, but it is an issue
nonetheless. I will fix the wording in the commit description in an updated
patch.

> When 'dev_info->min_rx_bufsize' is set correctly, the above check is
> already done at the ethdev level; can you please check
> 'rte_eth_check_rx_mempool()'?
>

This validation path does seem to be hit when running testpmd:

# dpdk-testpmd -- -a --stats-period=1 --forward-mode=txonly --rxq=$N
--txq=$N --nb-cores=$(($N + 1)) --mbuf-size=10
...
mb_pool_0 mbuf_data_room_size 10 < 1152 (128 + 1024)
Fail to configure port 0 rx queues
Port 0 is closed
EAL: Error - exiting with code: 1
  Cause: Start ports failed

I can remove this check from the driver, as it is redundant.

> Just for your info, this release added 'dev_info.max_rx_bufsize', and an
> ethdev-layer note is logged [1] if the user provides an mbuf size bigger
> than this value. The ethdev-layer note is mainly for memory optimization,
> but the above check is required in the driver.
>
> [1]
>
> https://git.dpdk.org/dpdk/commit/?id=75c7849a9dcca356985fdb87f2d11cae135dfb1a


If I were to add GVE support for max buffer size to this patch, how would
that interact with backports? Is it possible to include only parts of a
patch in a backport?




-- 

Joshua Washington | Software Engineer | joshwash@google.com | (414) 366-4423



* [PATCH v2] net/gve: fix RX buffer size alignment
  2023-11-11  0:34 [PATCH] net/gve: fix RX buffer size alignment Joshua Washington
  2023-11-11  4:18 ` Ferruh Yigit
@ 2023-11-13 23:12 ` Joshua Washington
  2023-11-14  2:41   ` Guo, Junfeng
  1 sibling, 1 reply; 6+ messages in thread
From: Joshua Washington @ 2023-11-13 23:12 UTC (permalink / raw)
  To: Junfeng Guo, Jeroen de Borst, Rushil Gupta, Joshua Washington,
	Xiaoyun Li
  Cc: dev, stable, Ferruh Yigit

In GVE, both queue formats have RX buffer size alignment requirements
which will not always be respected when a user specifies an mbuf size.
Assuming that an mbuf size is greater than the DPDK recommended default
(2048 + 128), if the buffer size is not properly aligned with what the
device expects, the device will silently fail to create any transmit or
receive queues.

Because no queues are created, there is no network traffic for the DPDK
program, and errors like the following are returned when attempting to
destroy queues:

gve_adminq_parse_err(): AQ command failed with status -11
gve_stop_tx_queues(): failed to destroy txqs
gve_adminq_parse_err(): AQ command failed with status -11
gve_stop_rx_queues(): failed to destroy rxqs

This change aims to remedy this by restricting the RX receive buffer
sizes to valid sizes for their respective queue formats, including both
alignment and minimum and maximum supported buffer sizes.

Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
 drivers/net/gve/gve_ethdev.c | 5 ++++-
 drivers/net/gve/gve_ethdev.h | 8 +++++++-
 drivers/net/gve/gve_rx.c     | 7 ++++++-
 drivers/net/gve/gve_rx_dqo.c | 6 +++++-
 4 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index eb3bc7e151..43b4ab523d 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -296,7 +296,10 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_queues = priv->max_nb_rxq;
 	dev_info->max_tx_queues = priv->max_nb_txq;
-	dev_info->min_rx_bufsize = GVE_MIN_BUF_SIZE;
+	if (gve_is_gqi(priv))
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_GQI;
+	else
+		dev_info->min_rx_bufsize = GVE_RX_MIN_BUF_SIZE_DQO;
 	dev_info->max_rx_pktlen = priv->max_mtu + RTE_ETHER_HDR_LEN;
 	dev_info->max_mtu = priv->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 755ee8ad15..37f2b60845 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -20,7 +20,13 @@
 #define GVE_DEFAULT_TX_RS_THRESH     32
 #define GVE_TX_MAX_FREE_SZ          512
 
-#define GVE_MIN_BUF_SIZE	    1024
+#define GVE_RX_BUF_ALIGN_DQO        128
+#define GVE_RX_MIN_BUF_SIZE_DQO    1024
+#define GVE_RX_MAX_BUF_SIZE_DQO    ((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
+
+#define GVE_RX_BUF_ALIGN_GQI       2048
+#define GVE_RX_MIN_BUF_SIZE_GQI    2048
+#define GVE_RX_MAX_BUF_SIZE_GQI    4096
 
 #define GVE_TX_CKSUM_OFFLOAD_MASK (		\
 		RTE_MBUF_F_TX_L4_MASK  |	\
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index b8c92ccda0..36a1b73c65 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -301,6 +301,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -344,7 +345,11 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len = rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	mbuf_len =
+		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_GQI,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_GQI));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring", sizeof(struct rte_mbuf *) * nb_desc,
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 7e7ddac48e..422784e7e0 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -220,6 +220,7 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	const struct rte_memzone *mz;
 	struct gve_rx_queue *rxq;
 	uint16_t free_thresh;
+	uint32_t mbuf_len;
 	int err = 0;
 
 	if (nb_desc != hw->rx_desc_cnt) {
@@ -264,8 +265,11 @@ gve_rx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
 	rxq->hw = hw;
 	rxq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[rxq->ntfy_id].id)];
 
-	rxq->rx_buf_len =
+	mbuf_len =
 		rte_pktmbuf_data_room_size(rxq->mpool) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len =
+		RTE_MIN((uint16_t)GVE_RX_MAX_BUF_SIZE_DQO,
+			RTE_ALIGN_FLOOR(mbuf_len, GVE_RX_BUF_ALIGN_DQO));
 
 	/* Allocate software ring */
 	rxq->sw_ring = rte_zmalloc_socket("gve rx sw ring",
-- 
2.42.0.869.gea05f2083d-goog



* RE: [PATCH v2] net/gve: fix RX buffer size alignment
  2023-11-13 23:12 ` [PATCH v2] " Joshua Washington
@ 2023-11-14  2:41   ` Guo, Junfeng
  2023-11-14 12:56     ` Ferruh Yigit
  0 siblings, 1 reply; 6+ messages in thread
From: Guo, Junfeng @ 2023-11-14  2:41 UTC (permalink / raw)
  To: Joshua Washington, Jeroen de Borst, Rushil Gupta, Li, Xiaoyun
  Cc: dev, stable, Ferruh Yigit



> -----Original Message-----
> From: Joshua Washington <joshwash@google.com>
> Sent: Tuesday, November 14, 2023 07:12
> To: Guo, Junfeng <junfeng.guo@intel.com>; Jeroen de Borst
> <jeroendb@google.com>; Rushil Gupta <rushilg@google.com>; Joshua
> Washington <joshwash@google.com>; Li, Xiaoyun <xiaoyun.li@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org; Ferruh Yigit <ferruh.yigit@amd.com>
> Subject: [PATCH v2] net/gve: fix RX buffer size alignment
> 
> In GVE, both queue formats have RX buffer size alignment requirements
> which will not always be respected when a user specifies an mbuf size.
> Assuming that an mbuf size is greater than the DPDK recommended default
> (2048 + 128), if the buffer size is not properly aligned with what the
> device expects, the device will silently fail to create any transmit or
> receive queues.
> 
> Because no queues are created, there is no network traffic for the DPDK
> program, and errors like the following are returned when attempting to
> destroy queues:
> 
> gve_adminq_parse_err(): AQ command failed with status -11
> gve_stop_tx_queues(): failed to destroy txqs
> gve_adminq_parse_err(): AQ command failed with status -11
> gve_stop_rx_queues(): failed to destroy rxqs
> 
> This change aims to remedy this by restricting the RX receive buffer
> sizes to valid sizes for their respective queue formats, including both
> alignment and minimum and maximum supported buffer sizes.
> 
> Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
> Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
> Cc: junfeng.guo@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Joshua Washington <joshwash@google.com>
> Reviewed-by: Rushil Gupta <rushilg@google.com>

Acked-by: Junfeng Guo <junfeng.guo@intel.com>

Regards,
Junfeng Guo

> ---
>  drivers/net/gve/gve_ethdev.c | 5 ++++-
>  drivers/net/gve/gve_ethdev.h | 8 +++++++-
>  drivers/net/gve/gve_rx.c     | 7 ++++++-
>  drivers/net/gve/gve_rx_dqo.c | 6 +++++-
>  4 files changed, 22 insertions(+), 4 deletions(-)
> 
> --
> 2.42.0.869.gea05f2083d-goog



* Re: [PATCH v2] net/gve: fix RX buffer size alignment
  2023-11-14  2:41   ` Guo, Junfeng
@ 2023-11-14 12:56     ` Ferruh Yigit
  0 siblings, 0 replies; 6+ messages in thread
From: Ferruh Yigit @ 2023-11-14 12:56 UTC (permalink / raw)
  To: Guo, Junfeng, Joshua Washington, Jeroen de Borst, Rushil Gupta,
	Li, Xiaoyun
  Cc: dev, stable

On 11/14/2023 2:41 AM, Guo, Junfeng wrote:
> 
> 
>> -----Original Message-----
>> From: Joshua Washington <joshwash@google.com>
>> Sent: Tuesday, November 14, 2023 07:12
>> To: Guo, Junfeng <junfeng.guo@intel.com>; Jeroen de Borst
>> <jeroendb@google.com>; Rushil Gupta <rushilg@google.com>; Joshua
>> Washington <joshwash@google.com>; Li, Xiaoyun <xiaoyun.li@intel.com>
>> Cc: dev@dpdk.org; stable@dpdk.org; Ferruh Yigit <ferruh.yigit@amd.com>
>> Subject: [PATCH v2] net/gve: fix RX buffer size alignment
>>
>> In GVE, both queue formats have RX buffer size alignment requirements
>> which will not always be respected when a user specifies an mbuf size.
>> Assuming that an mbuf size is greater than the DPDK recommended default
>> (2048 + 128), if the buffer size is not properly aligned with what the
>> device expects, the device will silently fail to create any transmit or
>> receive queues.
>>
>> Because no queues are created, there is no network traffic for the DPDK
>> program, and errors like the following are returned when attempting to
>> destroy queues:
>>
>> gve_adminq_parse_err(): AQ command failed with status -11
>> gve_stop_tx_queues(): failed to destroy txqs
>> gve_adminq_parse_err(): AQ command failed with status -11
>> gve_stop_rx_queues(): failed to destroy rxqs
>>
>> This change aims to remedy this by restricting the RX receive buffer
>> sizes to valid sizes for their respective queue formats, including both
>> alignment and minimum and maximum supported buffer sizes.
>>
>> Fixes: 4bec2d0b5572 ("net/gve: support queue operations")
>> Fixes: 1dc00f4fc74b ("net/gve: add Rx queue setup for DQO")
>> Cc: junfeng.guo@intel.com
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Joshua Washington <joshwash@google.com>
>> Reviewed-by: Rushil Gupta <rushilg@google.com>
> 
> Acked-by: Junfeng Guo <junfeng.guo@intel.com>
> 

Applied to dpdk-next-net/main, thanks.



end of thread, other threads:[~2023-11-14 12:57 UTC | newest]
