From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 17 Jul 2024 10:56:18 -0700
In-Reply-To: <20240717175619.3159026-1-joshwash@google.com>
References: <20240717175619.3159026-1-joshwash@google.com>
X-Mailer: git-send-email 2.45.2.993.g49e7a77208-goog
Message-ID: <20240717175619.3159026-4-joshwash@google.com>
Subject: [PATCH 3/4] net/gve: add min ring size support
From: Joshua Washington <joshwash@google.com>
To: Jeroen de Borst, Rushil Gupta, Joshua Washington
Cc: dev@dpdk.org, Ferruh Yigit, Harshitha Ramamurthy
List-Id: DPDK patches and discussions

This change adds support for parsing the minimum ring size from the
modify_ring_size device option. Like the maximum ring size, this field
helps enable altering the ring size on the GQ driver.

Note that passing the ring size bounds from the device is optional. If
the device does not pass minimum ring sizes, they are set to static
default values.
Signed-off-by: Joshua Washington
Reviewed-by: Rushil Gupta
Reviewed-by: Harshitha Ramamurthy
---
 drivers/net/gve/base/gve_adminq.c | 45 ++++++++++++++++++++-----------
 drivers/net/gve/base/gve_adminq.h | 13 ++++++---
 drivers/net/gve/gve_ethdev.c      | 28 ++++++++++++++-----
 drivers/net/gve/gve_ethdev.h      | 37 ++++++++++++++++---------
 drivers/net/gve/gve_rx.c          |  6 ++---
 drivers/net/gve/gve_tx.c          |  6 ++---
 6 files changed, 92 insertions(+), 43 deletions(-)

diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index c25fefbd0f..72c05c8237 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -15,6 +15,9 @@
 #define GVE_DEVICE_OPTION_TOO_BIG_FMT "Length of %s option larger than expected. Possible older version of guest driver."
 
+static void gve_set_min_desc_cnt(struct gve_priv *priv,
+		struct gve_device_option_modify_ring *dev_op_modify_ring);
+
 static
 struct gve_device_option *gve_get_next_option(struct gve_device_descriptor *descriptor,
 					      struct gve_device_option *option)
@@ -107,7 +110,9 @@ void gve_parse_device_option(struct gve_priv *priv,
 		*dev_op_dqo_rda = RTE_PTR_ADD(option, sizeof(*option));
 		break;
 	case GVE_DEV_OPT_ID_MODIFY_RING:
-		if (option_length < sizeof(**dev_op_modify_ring) ||
+		/* Min ring size bound is optional. */
+		if (option_length < (sizeof(**dev_op_modify_ring) -
+					sizeof(struct gve_ring_size_bound)) ||
 		    req_feat_mask != GVE_DEV_OPT_REQ_FEAT_MASK_MODIFY_RING) {
 			PMD_DRV_LOG(WARNING, GVE_DEVICE_OPTION_ERROR_FMT,
 				    "Modify Ring",
@@ -123,6 +128,10 @@ void gve_parse_device_option(struct gve_priv *priv,
 				    "Modify Ring");
 		}
 		*dev_op_modify_ring = RTE_PTR_ADD(option, sizeof(*option));
+
+		/* Min ring size included; set the minimum ring size. */
+		if (option_length == sizeof(**dev_op_modify_ring))
+			gve_set_min_desc_cnt(priv, *dev_op_modify_ring);
 		break;
 	case GVE_DEV_OPT_ID_JUMBO_FRAMES:
 		if (option_length < sizeof(**dev_op_jumbo_frames) ||
@@ -686,16 +695,17 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues)
 static int gve_set_desc_cnt(struct gve_priv *priv,
 			    struct gve_device_descriptor *descriptor)
 {
-	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
-	if (priv->tx_desc_cnt * sizeof(priv->txqs[0]->tx_desc_ring[0])
+	priv->default_tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
+	if (priv->default_tx_desc_cnt * sizeof(priv->txqs[0]->tx_desc_ring[0])
 	    < PAGE_SIZE) {
-		PMD_DRV_LOG(ERR, "Tx desc count %d too low", priv->tx_desc_cnt);
+		PMD_DRV_LOG(ERR, "Tx desc count %d too low",
+			    priv->default_tx_desc_cnt);
 		return -EINVAL;
 	}
-	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
-	if (priv->rx_desc_cnt * sizeof(priv->rxqs[0]->rx_desc_ring[0])
+	priv->default_rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
+	if (priv->default_rx_desc_cnt * sizeof(priv->rxqs[0]->rx_desc_ring[0])
 	    < PAGE_SIZE) {
-		PMD_DRV_LOG(ERR, "Rx desc count %d too low", priv->rx_desc_cnt);
+		PMD_DRV_LOG(ERR, "Rx desc count %d too low", priv->default_rx_desc_cnt);
 		return -EINVAL;
 	}
 	return 0;
@@ -706,14 +716,22 @@ gve_set_desc_cnt_dqo(struct gve_priv *priv,
 		     const struct gve_device_descriptor *descriptor,
 		     const struct gve_device_option_dqo_rda *dev_op_dqo_rda)
 {
-	priv->tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
+	priv->default_tx_desc_cnt = be16_to_cpu(descriptor->tx_queue_entries);
 	priv->tx_compq_size = be16_to_cpu(dev_op_dqo_rda->tx_comp_ring_entries);
-	priv->rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
+	priv->default_rx_desc_cnt = be16_to_cpu(descriptor->rx_queue_entries);
 	priv->rx_bufq_size = be16_to_cpu(dev_op_dqo_rda->rx_buff_ring_entries);
 	return 0;
 }
 
+static void
+gve_set_min_desc_cnt(struct gve_priv *priv,
+		     struct gve_device_option_modify_ring *modify_ring)
+{
+	priv->min_rx_desc_cnt = be16_to_cpu(modify_ring->min_ring_size.rx);
+	priv->min_tx_desc_cnt = be16_to_cpu(modify_ring->min_ring_size.tx);
+}
+
 static void
 gve_set_max_desc_cnt(struct gve_priv *priv,
 	const struct gve_device_option_modify_ring *modify_ring)
@@ -725,8 +743,8 @@ gve_set_max_desc_cnt(struct gve_priv *priv,
 		priv->max_tx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
 		return;
 	}
-	priv->max_rx_desc_cnt = modify_ring->max_rx_ring_size;
-	priv->max_tx_desc_cnt = modify_ring->max_tx_ring_size;
+	priv->max_rx_desc_cnt = be16_to_cpu(modify_ring->max_ring_size.rx);
+	priv->max_tx_desc_cnt = be16_to_cpu(modify_ring->max_ring_size.tx);
 }
 
 static void gve_enable_supported_features(struct gve_priv *priv,
@@ -737,6 +755,7 @@ static void gve_enable_supported_features(struct gve_priv *priv,
 	if (dev_op_modify_ring &&
 	    (supported_features_mask & GVE_SUP_MODIFY_RING_MASK)) {
 		PMD_DRV_LOG(INFO, "MODIFY RING device option enabled.");
+		/* Min ring size set separately by virtue of it being optional. */
 		gve_set_max_desc_cnt(priv, dev_op_modify_ring);
 	}
 
@@ -819,10 +838,6 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	if (err)
 		goto free_device_descriptor;
 
-	/* Default max to current in case modify ring size option is disabled. */
-	priv->max_tx_desc_cnt = priv->tx_desc_cnt;
-	priv->max_rx_desc_cnt = priv->rx_desc_cnt;
-
 	priv->max_registered_pages =
 				be64_to_cpu(descriptor->max_registered_pages);
 	mtu = be16_to_cpu(descriptor->mtu);
diff --git a/drivers/net/gve/base/gve_adminq.h b/drivers/net/gve/base/gve_adminq.h
index ff69f74d69..6a3d4691b5 100644
--- a/drivers/net/gve/base/gve_adminq.h
+++ b/drivers/net/gve/base/gve_adminq.h
@@ -110,13 +110,20 @@ struct gve_device_option_dqo_rda {
 
 GVE_CHECK_STRUCT_LEN(8, gve_device_option_dqo_rda);
 
+struct gve_ring_size_bound {
+	__be16 rx;
+	__be16 tx;
+};
+
+GVE_CHECK_STRUCT_LEN(4, gve_ring_size_bound);
+
 struct gve_device_option_modify_ring {
 	__be32 supported_features_mask;
-	__be16 max_rx_ring_size;
-	__be16 max_tx_ring_size;
+	struct gve_ring_size_bound max_ring_size;
+	struct gve_ring_size_bound min_ring_size;
 };
 
-GVE_CHECK_STRUCT_LEN(8, gve_device_option_modify_ring);
+GVE_CHECK_STRUCT_LEN(12, gve_device_option_modify_ring);
 
 struct gve_device_option_jumbo_frames {
 	__be32 supported_features_mask;
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index ca92277a68..603644735d 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -508,17 +508,21 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.offloads = 0,
 	};
 
-	dev_info->default_rxportconf.ring_size = priv->rx_desc_cnt;
+	dev_info->default_rxportconf.ring_size = priv->default_rx_desc_cnt;
 	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
-		.nb_max = gve_is_gqi(priv) ? priv->rx_desc_cnt : GVE_MAX_QUEUE_SIZE_DQO,
-		.nb_min = priv->rx_desc_cnt,
+		.nb_max = gve_is_gqi(priv) ?
+			priv->default_rx_desc_cnt :
+			GVE_MAX_QUEUE_SIZE_DQO,
+		.nb_min = priv->default_rx_desc_cnt,
 		.nb_align = 1,
 	};
 
-	dev_info->default_txportconf.ring_size = priv->tx_desc_cnt;
+	dev_info->default_txportconf.ring_size = priv->default_tx_desc_cnt;
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
-		.nb_max = gve_is_gqi(priv) ? priv->tx_desc_cnt : GVE_MAX_QUEUE_SIZE_DQO,
-		.nb_min = priv->tx_desc_cnt,
+		.nb_max = gve_is_gqi(priv) ?
+			priv->default_tx_desc_cnt :
+			GVE_MAX_QUEUE_SIZE_DQO,
+		.nb_min = priv->default_tx_desc_cnt,
 		.nb_align = 1,
 	};
 
@@ -1088,6 +1092,15 @@ gve_setup_device_resources(struct gve_priv *priv)
 	return err;
 }
 
+static void
+gve_set_default_ring_size_bounds(struct gve_priv *priv)
+{
+	priv->max_tx_desc_cnt = GVE_DEFAULT_MAX_RING_SIZE;
+	priv->max_rx_desc_cnt = GVE_DEFAULT_MAX_RING_SIZE;
+	priv->min_tx_desc_cnt = GVE_DEFAULT_MIN_TX_RING_SIZE;
+	priv->min_rx_desc_cnt = GVE_DEFAULT_MIN_RX_RING_SIZE;
+}
+
 static int
 gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 {
@@ -1106,6 +1119,9 @@ gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
 		goto free_adminq;
 	}
 
+	/* Set default descriptor counts */
+	gve_set_default_ring_size_bounds(priv);
+
 	if (skip_describe_device)
 		goto setup_device;
 
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 393a4362c9..c417a0b31c 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -15,19 +15,23 @@
 /* TODO: this is a workaround to ensure that Tx complq is enough */
 #define DQO_TX_MULTIPLIER 4
 
-#define GVE_DEFAULT_RX_FREE_THRESH	64
-#define GVE_DEFAULT_TX_FREE_THRESH	32
-#define GVE_DEFAULT_TX_RS_THRESH	32
-#define GVE_TX_MAX_FREE_SZ		512
+#define GVE_DEFAULT_MAX_RING_SIZE	1024
+#define GVE_DEFAULT_MIN_RX_RING_SIZE	512
+#define GVE_DEFAULT_MIN_TX_RING_SIZE	256
 
-#define GVE_RX_BUF_ALIGN_DQO		128
-#define GVE_RX_MIN_BUF_SIZE_DQO		1024
-#define GVE_RX_MAX_BUF_SIZE_DQO		((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
-#define GVE_MAX_QUEUE_SIZE_DQO		4096
+#define GVE_DEFAULT_RX_FREE_THRESH	64
+#define GVE_DEFAULT_TX_FREE_THRESH	32
+#define GVE_DEFAULT_TX_RS_THRESH	32
+#define GVE_TX_MAX_FREE_SZ		512
 
-#define GVE_RX_BUF_ALIGN_GQI		2048
-#define GVE_RX_MIN_BUF_SIZE_GQI		2048
-#define GVE_RX_MAX_BUF_SIZE_GQI		4096
+#define GVE_RX_BUF_ALIGN_DQO		128
+#define GVE_RX_MIN_BUF_SIZE_DQO		1024
+#define GVE_RX_MAX_BUF_SIZE_DQO		((16 * 1024) - GVE_RX_BUF_ALIGN_DQO)
+#define GVE_MAX_QUEUE_SIZE_DQO		4096
+
+#define GVE_RX_BUF_ALIGN_GQI		2048
+#define GVE_RX_MIN_BUF_SIZE_GQI		2048
+#define GVE_RX_MAX_BUF_SIZE_GQI		4096
 
 #define GVE_RSS_HASH_KEY_SIZE 40
 #define GVE_RSS_INDIR_SIZE 128
@@ -234,10 +238,17 @@ struct gve_priv {
 	const struct rte_memzone *cnt_array_mz;
 
 	uint16_t num_event_counters;
+
+	/* TX ring size default and limits. */
+	uint16_t default_tx_desc_cnt;
 	uint16_t max_tx_desc_cnt;
+	uint16_t min_tx_desc_cnt;
+
+	/* RX ring size default and limits. */
+	uint16_t default_rx_desc_cnt;
 	uint16_t max_rx_desc_cnt;
-	uint16_t tx_desc_cnt; /* txq size */
-	uint16_t rx_desc_cnt; /* rxq size */
+	uint16_t min_rx_desc_cnt;
+
 	uint16_t tx_pages_per_qpl; /* Only valid for DQO_RDA queue format */
 
diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
index d2c6920406..43cb368be9 100644
--- a/drivers/net/gve/gve_rx.c
+++ b/drivers/net/gve/gve_rx.c
@@ -304,11 +304,11 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
 	uint32_t mbuf_len;
 	int err = 0;
 
-	if (nb_desc != hw->rx_desc_cnt) {
+	if (nb_desc != hw->default_rx_desc_cnt) {
 		PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.",
-			    hw->rx_desc_cnt);
+			    hw->default_rx_desc_cnt);
 	}
-	nb_desc = hw->rx_desc_cnt;
+	nb_desc = hw->default_rx_desc_cnt;
 
 	/* Free memory if needed. */
 	if (dev->data->rx_queues[queue_id]) {
diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
index 70d3ef060c..8c255bd0f2 100644
--- a/drivers/net/gve/gve_tx.c
+++ b/drivers/net/gve/gve_tx.c
@@ -559,11 +559,11 @@ gve_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
 	uint16_t free_thresh;
 	int err = 0;
 
-	if (nb_desc != hw->tx_desc_cnt) {
+	if (nb_desc != hw->default_tx_desc_cnt) {
 		PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.",
-			    hw->tx_desc_cnt);
+			    hw->default_tx_desc_cnt);
 	}
-	nb_desc = hw->tx_desc_cnt;
+	nb_desc = hw->default_tx_desc_cnt;
 
 	/* Free memory if needed. */
 	if (dev->data->tx_queues[queue_id]) {
-- 
2.45.2.803.g4e1b14247a-goog