From: Junfeng Guo <junfeng.guo@intel.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com,
ferruh.yigit@amd.com, beilei.xing@intel.com
Cc: dev@dpdk.org, xiaoyun.li@intel.com, helin.zhang@intel.com,
Junfeng Guo <junfeng.guo@intel.com>,
Rushil Gupta <rushilg@google.com>,
Jordan Kimbrough <jrkim@google.com>,
Jeroen de Borst <jeroendb@google.com>
Subject: [RFC v3 01/10] net/gve: add Tx queue setup for DQO
Date: Fri, 17 Feb 2023 15:32:19 +0800
Message-ID: <20230217073228.340815-2-junfeng.guo@intel.com>
In-Reply-To: <20230217073228.340815-1-junfeng.guo@intel.com>

Add support for the tx_queue_setup_dqo op.

The DQO format uses a submission and completion queue pair for each
Tx/Rx queue. Note that with the DQO format, all descriptors, doorbells,
and counters are written in little-endian.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Signed-off-by: Rushil Gupta <rushilg@google.com>
Signed-off-by: Jordan Kimbrough <jrkim@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
---
.mailmap | 3 +
MAINTAINERS | 3 +
drivers/net/gve/base/gve.h | 3 +-
drivers/net/gve/base/gve_desc_dqo.h | 6 +-
drivers/net/gve/base/gve_osdep.h | 6 +-
drivers/net/gve/gve_ethdev.c | 19 ++-
drivers/net/gve/gve_ethdev.h | 35 +++++-
drivers/net/gve/gve_tx_dqo.c | 184 ++++++++++++++++++++++++++++
drivers/net/gve/meson.build | 3 +-
9 files changed, 248 insertions(+), 14 deletions(-)
create mode 100644 drivers/net/gve/gve_tx_dqo.c
diff --git a/.mailmap b/.mailmap
index 2af8606181..abfb09039e 100644
--- a/.mailmap
+++ b/.mailmap
@@ -579,6 +579,7 @@ Jens Freimann <jfreimann@redhat.com> <jfreiman@redhat.com>
Jeremy Plsek <jplsek@iol.unh.edu>
Jeremy Spewock <jspewock@iol.unh.edu>
Jerin Jacob <jerinj@marvell.com> <jerin.jacob@caviumnetworks.com> <jerinjacobk@gmail.com>
+Jeroen de Borst <jeroendb@google.com>
Jerome Jutteau <jerome.jutteau@outscale.com>
Jerry Hao OS <jerryhao@os.amperecomputing.com>
Jerry Lilijun <jerry.lilijun@huawei.com>
@@ -643,6 +644,7 @@ Jonathan Erb <jonathan.erb@banduracyber.com>
Jon DeVree <nuxi@vault24.org>
Jon Loeliger <jdl@netgate.com>
Joongi Kim <joongi@an.kaist.ac.kr>
+Jordan Kimbrough <jrkim@google.com>
Jørgen Østergaard Sloth <jorgen.sloth@xci.dk>
Jörg Thalheim <joerg@thalheim.io>
Joseph Richard <joseph.richard@windriver.com>
@@ -1148,6 +1150,7 @@ Roy Franz <roy.franz@cavium.com>
Roy Pledge <roy.pledge@nxp.com>
Roy Shterman <roy.shterman@vastdata.com>
Ruifeng Wang <ruifeng.wang@arm.com>
+Rushil Gupta <rushilg@google.com>
Ryan E Hall <ryan.e.hall@intel.com>
Sabyasachi Sengupta <sabyasg@hpe.com>
Sachin Saxena <sachin.saxena@nxp.com> <sachin.saxena@oss.nxp.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index 3495946d0f..0b04fe20f2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -703,6 +703,9 @@ F: doc/guides/nics/features/enic.ini
Google Virtual Ethernet
M: Junfeng Guo <junfeng.guo@intel.com>
+M: Jeroen de Borst <jeroendb@google.com>
+M: Rushil Gupta <rushilg@google.com>
+M: Jordan Kimbrough <jrkim@google.com>
F: drivers/net/gve/
F: doc/guides/nics/gve.rst
F: doc/guides/nics/features/gve.ini
diff --git a/drivers/net/gve/base/gve.h b/drivers/net/gve/base/gve.h
index 2dc4507acb..22d175910d 100644
--- a/drivers/net/gve/base/gve.h
+++ b/drivers/net/gve/base/gve.h
@@ -1,12 +1,13 @@
/* SPDX-License-Identifier: MIT
* Google Virtual Ethernet (gve) driver
- * Copyright (C) 2015-2022 Google, Inc.
+ * Copyright (C) 2015-2023 Google, Inc.
*/
#ifndef _GVE_H_
#define _GVE_H_
#include "gve_desc.h"
+#include "gve_desc_dqo.h"
#define GVE_VERSION "1.3.0"
#define GVE_VERSION_PREFIX "GVE-"
diff --git a/drivers/net/gve/base/gve_desc_dqo.h b/drivers/net/gve/base/gve_desc_dqo.h
index ee1afdecb8..431abac424 100644
--- a/drivers/net/gve/base/gve_desc_dqo.h
+++ b/drivers/net/gve/base/gve_desc_dqo.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT
* Google Virtual Ethernet (gve) driver
- * Copyright (C) 2015-2022 Google, Inc.
+ * Copyright (C) 2015-2023 Google, Inc.
*/
/* GVE DQO Descriptor formats */
@@ -13,10 +13,6 @@
#define GVE_TX_MAX_HDR_SIZE_DQO 255
#define GVE_TX_MIN_TSO_MSS_DQO 88
-#ifndef __LITTLE_ENDIAN_BITFIELD
-#error "Only little endian supported"
-#endif
-
/* Basic TX descriptor (DTYPE 0x0C) */
struct gve_tx_pkt_desc_dqo {
__le64 buf_addr;
diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h
index 7cb73002f4..71759d254f 100644
--- a/drivers/net/gve/base/gve_osdep.h
+++ b/drivers/net/gve/base/gve_osdep.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2022 Intel Corporation
+ * Copyright(C) 2022-2023 Intel Corporation
*/
#ifndef _GVE_OSDEP_H_
@@ -35,6 +35,10 @@ typedef rte_be16_t __be16;
typedef rte_be32_t __be32;
typedef rte_be64_t __be64;
+typedef rte_le16_t __le16;
+typedef rte_le32_t __le32;
+typedef rte_le64_t __le64;
+
typedef rte_iova_t dma_addr_t;
#define ETH_MIN_MTU RTE_ETHER_MIN_MTU
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index 06d1b796c8..a02a48ef11 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2022 Intel Corporation
+ * Copyright(C) 2022-2023 Intel Corporation
*/
#include "gve_ethdev.h"
@@ -299,6 +299,7 @@ gve_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
dev_info->default_txconf = (struct rte_eth_txconf) {
.tx_free_thresh = GVE_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = GVE_DEFAULT_TX_RS_THRESH,
.offloads = 0,
};
@@ -420,6 +421,17 @@ static const struct eth_dev_ops gve_eth_dev_ops = {
.mtu_set = gve_dev_mtu_set,
};
+static const struct eth_dev_ops gve_eth_dev_ops_dqo = {
+ .dev_configure = gve_dev_configure,
+ .dev_start = gve_dev_start,
+ .dev_stop = gve_dev_stop,
+ .dev_close = gve_dev_close,
+ .dev_infos_get = gve_dev_info_get,
+ .tx_queue_setup = gve_tx_queue_setup_dqo,
+ .link_update = gve_link_update,
+ .mtu_set = gve_dev_mtu_set,
+};
+
static void
gve_free_counter_array(struct gve_priv *priv)
{
@@ -662,8 +674,6 @@ gve_dev_init(struct rte_eth_dev *eth_dev)
rte_be32_t *db_bar;
int err;
- eth_dev->dev_ops = &gve_eth_dev_ops;
-
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
@@ -699,10 +709,11 @@ gve_dev_init(struct rte_eth_dev *eth_dev)
return err;
if (gve_is_gqi(priv)) {
+ eth_dev->dev_ops = &gve_eth_dev_ops;
eth_dev->rx_pkt_burst = gve_rx_burst;
eth_dev->tx_pkt_burst = gve_tx_burst;
} else {
- PMD_DRV_LOG(ERR, "DQO_RDA is not implemented and will be added in the future");
+ eth_dev->dev_ops = &gve_eth_dev_ops_dqo;
}
eth_dev->data->mac_addrs = &priv->dev_addr;
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 64e571bcae..c4b66acb0a 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2022 Intel Corporation
+ * Copyright(C) 2022-2023 Intel Corporation
*/
#ifndef _GVE_ETHDEV_H_
@@ -11,6 +11,9 @@
#include "base/gve.h"
+/* TODO: this is a workaround to ensure that Tx complq is enough */
+#define DQO_TX_MULTIPLIER 4
+
/*
* Following macros are derived from linux/pci_regs.h, however,
* we can't simply include that header here, as there is no such
@@ -25,7 +28,8 @@
#define PCI_MSIX_FLAGS_QSIZE 0x07FF /* Table size */
#define GVE_DEFAULT_RX_FREE_THRESH 512
-#define GVE_DEFAULT_TX_FREE_THRESH 256
+#define GVE_DEFAULT_TX_FREE_THRESH 32
+#define GVE_DEFAULT_TX_RS_THRESH 32
#define GVE_TX_MAX_FREE_SZ 512
#define GVE_MIN_BUF_SIZE 1024
@@ -50,6 +54,13 @@ union gve_tx_desc {
struct gve_tx_seg_desc seg; /* subsequent descs for a packet */
};
+/* Tx desc for DQO format */
+union gve_tx_desc_dqo {
+ struct gve_tx_pkt_desc_dqo pkt;
+ struct gve_tx_tso_context_desc_dqo tso_ctx;
+ struct gve_tx_general_context_desc_dqo general_ctx;
+};
+
/* Offload features */
union gve_tx_offload {
uint64_t data;
@@ -78,8 +89,10 @@ struct gve_tx_queue {
uint32_t tx_tail;
uint16_t nb_tx_desc;
uint16_t nb_free;
+ uint16_t nb_used;
uint32_t next_to_clean;
uint16_t free_thresh;
+ uint16_t rs_thresh;
/* Only valid for DQO_QPL queue format */
uint16_t sw_tail;
@@ -107,6 +120,17 @@ struct gve_tx_queue {
const struct rte_memzone *qres_mz;
struct gve_queue_resources *qres;
+ /* newly added for DQO */
+ volatile union gve_tx_desc_dqo *tx_ring;
+ struct gve_tx_compl_desc *compl_ring;
+ const struct rte_memzone *compl_ring_mz;
+ uint64_t compl_ring_phys_addr;
+ uint32_t complq_tail;
+ uint16_t sw_size;
+ uint8_t cur_gen_bit;
+ uint32_t last_desc_cleaned;
+ void **txqs;
+
/* Only valid for DQO_RDA queue format */
struct gve_tx_queue *complq;
@@ -319,4 +343,11 @@ gve_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
uint16_t
gve_tx_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+/* Below functions are used for DQO */
+
+int
+gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *conf);
+
#endif /* _GVE_ETHDEV_H_ */
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
new file mode 100644
index 0000000000..acf4ee2952
--- /dev/null
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022-2023 Intel Corporation
+ */
+
+#include "gve_ethdev.h"
+#include "base/gve_adminq.h"
+
+static int
+check_tx_thresh_dqo(uint16_t nb_desc, uint16_t tx_rs_thresh,
+ uint16_t tx_free_thresh)
+{
+ if (tx_rs_thresh >= (nb_desc - 2)) {
+ PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 2",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_free_thresh >= (nb_desc - 3)) {
+ PMD_DRV_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 3.",
+ tx_free_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_rs_thresh > tx_free_thresh) {
+ PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+ "equal to tx_free_thresh (%u).",
+ tx_rs_thresh, tx_free_thresh);
+ return -EINVAL;
+ }
+ if ((nb_desc % tx_rs_thresh) != 0) {
+ PMD_DRV_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+ "number of TX descriptors (%u).",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void
+gve_reset_txq_dqo(struct gve_tx_queue *txq)
+{
+ struct rte_mbuf **sw_ring;
+ uint32_t size, i;
+
+ if (txq == NULL) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ size = txq->nb_tx_desc * sizeof(union gve_tx_desc_dqo);
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->tx_ring)[i] = 0;
+
+ size = txq->sw_size * sizeof(struct gve_tx_compl_desc);
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->compl_ring)[i] = 0;
+
+ sw_ring = txq->sw_ring;
+ for (i = 0; i < txq->sw_size; i++)
+ sw_ring[i] = NULL;
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ txq->last_desc_cleaned = 0;
+ txq->sw_tail = 0;
+ txq->nb_free = txq->nb_tx_desc - 1;
+
+ txq->complq_tail = 0;
+ txq->cur_gen_bit = 1;
+}
+
+int
+gve_tx_queue_setup_dqo(struct rte_eth_dev *dev, uint16_t queue_id,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *conf)
+{
+ struct gve_priv *hw = dev->data->dev_private;
+ const struct rte_memzone *mz;
+ struct gve_tx_queue *txq;
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+ uint16_t sw_size;
+ int err = 0;
+
+ if (nb_desc != hw->tx_desc_cnt) {
+ PMD_DRV_LOG(WARNING, "gve doesn't support nb_desc config, use hw nb_desc %u.",
+ hw->tx_desc_cnt);
+ }
+ nb_desc = hw->tx_desc_cnt;
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("gve txq",
+ sizeof(struct gve_tx_queue),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ /* need to check free_thresh here */
+ free_thresh = conf->tx_free_thresh ?
+ conf->tx_free_thresh : GVE_DEFAULT_TX_FREE_THRESH;
+ rs_thresh = conf->tx_rs_thresh ?
+ conf->tx_rs_thresh : GVE_DEFAULT_TX_RS_THRESH;
+ if (check_tx_thresh_dqo(nb_desc, rs_thresh, free_thresh))
+ return -EINVAL;
+
+ txq->nb_tx_desc = nb_desc;
+ txq->free_thresh = free_thresh;
+ txq->rs_thresh = rs_thresh;
+ txq->queue_id = queue_id;
+ txq->port_id = dev->data->port_id;
+ txq->ntfy_id = queue_id;
+ txq->hw = hw;
+ txq->ntfy_addr = &hw->db_bar2[rte_be_to_cpu_32(hw->irq_dbs[txq->ntfy_id].id)];
+
+ /* Allocate software ring */
+ sw_size = nb_desc * DQO_TX_MULTIPLIER;
+ txq->sw_ring = rte_zmalloc_socket("gve tx sw ring",
+ sw_size * sizeof(struct rte_mbuf *),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq->sw_ring == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ err = -ENOMEM;
+ goto free_txq;
+ }
+ txq->sw_size = sw_size;
+
+ /* Allocate TX hardware ring descriptors. */
+ mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_id,
+ nb_desc * sizeof(union gve_tx_desc_dqo),
+ PAGE_SIZE, socket_id);
+ if (mz == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX");
+ err = -ENOMEM;
+ goto free_txq_sw_ring;
+ }
+ txq->tx_ring = (union gve_tx_desc_dqo *)mz->addr;
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->mz = mz;
+
+ /* Allocate TX completion ring descriptors. */
+ mz = rte_eth_dma_zone_reserve(dev, "tx_compl_ring", queue_id,
+ sw_size * sizeof(struct gve_tx_compl_desc),
+ PAGE_SIZE, socket_id);
+ if (mz == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
+ err = -ENOMEM;
+ goto free_txq_mz;
+ }
+ txq->compl_ring = (struct gve_tx_compl_desc *)mz->addr;
+ txq->compl_ring_phys_addr = mz->iova;
+ txq->compl_ring_mz = mz;
+ txq->txqs = dev->data->tx_queues;
+
+ mz = rte_eth_dma_zone_reserve(dev, "txq_res", queue_id,
+ sizeof(struct gve_queue_resources),
+ PAGE_SIZE, socket_id);
+ if (mz == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to reserve DMA memory for TX resource");
+ err = -ENOMEM;
+ goto free_txq_cq_mz;
+ }
+ txq->qres = (struct gve_queue_resources *)mz->addr;
+ txq->qres_mz = mz;
+
+ gve_reset_txq_dqo(txq);
+
+ dev->data->tx_queues[queue_id] = txq;
+
+ return 0;
+
+free_txq_cq_mz:
+ rte_memzone_free(txq->compl_ring_mz);
+free_txq_mz:
+ rte_memzone_free(txq->mz);
+free_txq_sw_ring:
+ rte_free(txq->sw_ring);
+free_txq:
+ rte_free(txq);
+ return err;
+}
diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build
index af0010c01c..a699432160 100644
--- a/drivers/net/gve/meson.build
+++ b/drivers/net/gve/meson.build
@@ -1,5 +1,5 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2022 Intel Corporation
+# Copyright(C) 2022-2023 Intel Corporation
if is_windows
build = false
@@ -11,6 +11,7 @@ sources = files(
'base/gve_adminq.c',
'gve_rx.c',
'gve_tx.c',
+ 'gve_tx_dqo.c',
'gve_ethdev.c',
)
includes += include_directories('base')
--
2.34.1
Thread overview: 36+ messages
2023-01-18 2:53 [RFC 0/8] gve PMD enhancement Junfeng Guo
2023-01-18 2:53 ` [RFC 1/8] net/gve: add Rx queue setup for DQO Junfeng Guo
2023-01-18 2:53 ` [RFC 2/8] net/gve: support device start and close " Junfeng Guo
2023-01-18 2:53 ` [RFC 3/8] net/gve: support queue release and stop " Junfeng Guo
2023-01-18 2:53 ` [RFC 4/8] net/gve: support basic Tx data path " Junfeng Guo
2023-01-18 2:53 ` [RFC 5/8] net/gve: support basic Rx " Junfeng Guo
2023-01-18 2:53 ` [RFC 6/8] net/gve: support basic stats " Junfeng Guo
2023-01-18 2:53 ` [RFC 7/8] net/gve: support jumbo frame for GQI Junfeng Guo
2023-01-18 2:53 ` [RFC 8/8] net/gve: add AdminQ command to verify driver compatibility Junfeng Guo
2023-01-25 13:37 ` [RFC 0/8] gve PMD enhancement Li, Xiaoyun
2023-01-30 6:26 ` [RFC v2 0/9] " Junfeng Guo
2023-01-30 6:26 ` [RFC v2 1/9] net/gve: add Tx queue setup for DQO Junfeng Guo
2023-01-30 6:26 ` [RFC v2 2/9] net/gve: add Rx " Junfeng Guo
2023-01-30 6:26 ` [RFC v2 3/9] net/gve: support device start and close " Junfeng Guo
2023-01-30 6:26 ` [RFC v2 4/9] net/gve: support queue release and stop " Junfeng Guo
2023-01-30 6:26 ` [RFC v2 5/9] net/gve: support basic Tx data path " Junfeng Guo
2023-01-30 6:26 ` [RFC v2 6/9] net/gve: support basic Rx " Junfeng Guo
2023-01-30 18:32 ` Honnappa Nagarahalli
2023-01-30 6:26 ` [RFC v2 7/9] net/gve: support basic stats " Junfeng Guo
2023-01-30 18:27 ` Honnappa Nagarahalli
2023-01-30 6:26 ` [RFC v2 8/9] net/gve: support jumbo frame for GQI Junfeng Guo
2023-01-30 6:26 ` [RFC v2 9/9] net/gve: add AdminQ command to verify driver compatibility Junfeng Guo
2023-02-17 7:32 ` [RFC v3 00/10] gve PMD enhancement Junfeng Guo
2023-02-17 7:32 ` Junfeng Guo [this message]
2023-02-17 7:32 ` [RFC v3 02/10] net/gve: add Rx queue setup for DQO Junfeng Guo
2023-02-17 7:32 ` [RFC v3 03/10] net/gve: support device start and close " Junfeng Guo
2023-02-17 7:32 ` [RFC v3 04/10] net/gve: support queue release and stop " Junfeng Guo
2023-02-17 7:32 ` [RFC v3 05/10] net/gve: support basic Tx data path " Junfeng Guo
2023-02-17 7:32 ` [RFC v3 06/10] net/gve: support basic Rx " Junfeng Guo
2023-02-17 15:17 ` Honnappa Nagarahalli
2023-02-23 5:32 ` Guo, Junfeng
2023-02-17 7:32 ` [RFC v3 07/10] net/gve: support basic stats " Junfeng Guo
2023-02-17 15:28 ` Honnappa Nagarahalli
2023-02-17 7:32 ` [RFC v3 08/10] net/gve: enable Tx checksum offload " Junfeng Guo
2023-02-17 7:32 ` [RFC v3 09/10] net/gve: support jumbo frame for GQI Junfeng Guo
2023-02-17 7:32 ` [RFC v3 10/10] net/gve: add AdminQ command to verify driver compatibility Junfeng Guo