* [PATCH v4 01/15] net/xsc: add xsc PMD framework
From: WanRenyong @ 2025-01-03 15:04 UTC
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Add the xsc PMD framework, documentation and build infrastructure,
supporting PCI probe.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
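Illustrative note (not part of the patch): once built, the probe path can
be exercised with any ethdev application; the PCI address below is
hypothetical:

    dpdk-testpmd -a 0000:01:00.0 -- -i

rte_eth_dev_pci_generic_probe() allocates the rte_eth_dev with a
struct xsc_ethdev_priv private area and then calls xsc_ethdev_init()
to bind the ethdev and PCI handles together.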
.mailmap | 5 ++
MAINTAINERS | 10 +++
doc/guides/nics/features/xsc.ini | 9 +++
doc/guides/nics/index.rst | 1 +
doc/guides/nics/xsc.rst | 31 +++++++++
doc/guides/rel_notes/release_25_03.rst | 4 ++
drivers/net/meson.build | 1 +
drivers/net/xsc/meson.build | 11 ++++
drivers/net/xsc/xsc_defs.h | 15 +++++
drivers/net/xsc/xsc_ethdev.c | 89 ++++++++++++++++++++++++++
drivers/net/xsc/xsc_ethdev.h | 15 +++++
drivers/net/xsc/xsc_log.h | 24 +++++++
12 files changed, 215 insertions(+)
create mode 100644 doc/guides/nics/features/xsc.ini
create mode 100644 doc/guides/nics/xsc.rst
create mode 100644 drivers/net/xsc/meson.build
create mode 100644 drivers/net/xsc/xsc_defs.h
create mode 100644 drivers/net/xsc/xsc_ethdev.c
create mode 100644 drivers/net/xsc/xsc_ethdev.h
create mode 100644 drivers/net/xsc/xsc_log.h
diff --git a/.mailmap b/.mailmap
index 818798273f..18293215c3 100644
--- a/.mailmap
+++ b/.mailmap
@@ -370,6 +370,7 @@ Dongdong Liu <liudongdong3@huawei.com>
Dongsheng Rong <rongdongsheng@baidu.com>
Dongsu Han <dongsuh@cs.cmu.edu>
Dong Wang <dong1.wang@intel.com>
+Dongwei Xu <xudw@yunsilicon.com>
Dongyang Pan <197020236@qq.com>
Dong Zhou <dongzhou@nvidia.com> <dongz@mellanox.com>
Don Provan <dprovan@bivio.net>
@@ -1062,6 +1063,7 @@ Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Naga Harish K S V <s.v.naga.harish.k@intel.com>
Naga Suresh Somarowthu <naga.sureshx.somarowthu@intel.com>
Nalla Pradeep <pnalla@marvell.com>
+Na Na <nana@yunsilicon.com>
Na Na <nana.nn@alibaba-inc.com>
Nan Chen <whutchennan@gmail.com>
Nandini Persad <nandinipersad361@gmail.com>
@@ -1306,6 +1308,7 @@ Ronak Doshi <ronak.doshi@broadcom.com> <doshir@vmware.com>
Ron Beider <rbeider@amazon.com>
Ronghua Zhang <rzhang@vmware.com>
RongQiang Xie <xie.rongqiang@zte.com.cn>
+Rong Qian <qianr@yunsilicon.com>
RongQing Li <lirongqing@baidu.com>
Rongwei Liu <rongweil@nvidia.com>
Rory Sexton <rory.sexton@intel.com>
@@ -1633,6 +1636,7 @@ Waldemar Dworakowski <waldemar.dworakowski@intel.com>
Walter Heymans <walter.heymans@corigine.com>
Wang Sheng-Hui <shhuiw@gmail.com>
Wangyu (Eric) <seven.wangyu@huawei.com>
+WanRenyong <wanry@yunsilicon.com>
Waterman Cao <waterman.cao@intel.com>
Wathsala Vithanage <wathsala.vithanage@arm.com>
Weichun Chen <weichunx.chen@intel.com>
@@ -1686,6 +1690,7 @@ Xiaonan Zhang <xiaonanx.zhang@intel.com>
Xiao Wang <xiao.w.wang@intel.com>
Xiaoxiao Zeng <xiaoxiaox.zeng@intel.com>
Xiaoxin Peng <xiaoxin.peng@broadcom.com>
+Xiaoxiong Zhang <zhangxx@yunsilicon.com>
Xiaoyu Min <jackmin@nvidia.com> <jackmin@mellanox.com>
Xiaoyun Li <xiaoyun.li@intel.com>
Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
diff --git a/MAINTAINERS b/MAINTAINERS
index 60bdcce543..3426658486 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1075,6 +1075,16 @@ F: drivers/net/avp/
F: doc/guides/nics/avp.rst
F: doc/guides/nics/features/avp.ini
+Yunsilicon xsc
+M: WanRenyong <wanry@yunsilicon.com>
+M: Na Na <nana@yunsilicon.com>
+M: Rong Qian <qianr@yunsilicon.com>
+M: Xiaoxiong Zhang <zhangxx@yunsilicon.com>
+M: Dongwei Xu <xudw@yunsilicon.com>
+F: drivers/net/xsc/
+F: doc/guides/nics/xsc.rst
+F: doc/guides/nics/features/xsc.ini
+
ZTE zxdh - EXPERIMENTAL
M: Junlong Wang <wang.junlong1@zte.com.cn>
M: Lijie Shan <shan.lijie@zte.com.cn>
diff --git a/doc/guides/nics/features/xsc.ini b/doc/guides/nics/features/xsc.ini
new file mode 100644
index 0000000000..b5c44ce535
--- /dev/null
+++ b/doc/guides/nics/features/xsc.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'xsc' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux = Y
+ARMv8 = Y
+x86-64 = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 50688d9f64..10a2eca3b0 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -70,4 +70,5 @@ Network Interface Controller Drivers
vhost
virtio
vmxnet3
+ xsc
zxdh
diff --git a/doc/guides/nics/xsc.rst b/doc/guides/nics/xsc.rst
new file mode 100644
index 0000000000..8e189db541
--- /dev/null
+++ b/doc/guides/nics/xsc.rst
@@ -0,0 +1,31 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2024 Yunsilicon Technology Co., Ltd
+
+XSC Poll Mode Driver
+====================
+
+The xsc PMD (**librte_net_xsc**) provides poll mode driver support for
+10/25/50/100/200 Gbps Yunsilicon metaScale Series Network Adapters.
+
+Supported NICs
+--------------
+
+The following Yunsilicon device models are supported by the xsc driver:
+
+ - metaScale-200S
+ - metaScale-200
+ - metaScale-100Q
+ - metaScale-50
+
+Prerequisites
+-------------
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+- Learn about Yunsilicon metaScale Series NICs at
+ `<https://www.yunsilicon.com/#/productInformation>`_.
+
+Limitations or Known Issues
+---------------------------
+32-bit architectures are not supported.
+Windows and BSD are not supported yet.
diff --git a/doc/guides/rel_notes/release_25_03.rst b/doc/guides/rel_notes/release_25_03.rst
index 426dfcd982..6f766add72 100644
--- a/doc/guides/rel_notes/release_25_03.rst
+++ b/doc/guides/rel_notes/release_25_03.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added Yunsilicon xsc net driver [EXPERIMENTAL].**
+
+ * Added the PMD for Yunsilicon metaScale series NICs.
+
Removed Items
-------------
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index dafd637ba4..c1ca7b0b39 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -63,6 +63,7 @@ drivers = [
'vhost',
'virtio',
'vmxnet3',
+ 'xsc',
'zxdh',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
new file mode 100644
index 0000000000..84a09a23de
--- /dev/null
+++ b/drivers/net/xsc/meson.build
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2025 Yunsilicon Technology Co., Ltd.
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64bit Linux'
+endif
+
+sources = files(
+ 'xsc_ethdev.c',
+)
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
new file mode 100644
index 0000000000..7c91d3443f
--- /dev/null
+++ b/drivers/net/xsc/xsc_defs.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef XSC_DEFS_H_
+#define XSC_DEFS_H_
+
+#define XSC_PCI_VENDOR_ID 0x1f67
+#define XSC_PCI_DEV_ID_MS 0x1111
+#define XSC_PCI_DEV_ID_MSVF 0x1112
+#define XSC_PCI_DEV_ID_MVH 0x1151
+#define XSC_PCI_DEV_ID_MVHVF 0x1152
+#define XSC_PCI_DEV_ID_MVS 0x1153
+
+#endif /* XSC_DEFS_H_ */
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
new file mode 100644
index 0000000000..a7dca46127
--- /dev/null
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#include <net/if.h>
+#include <ethdev_pci.h>
+
+#include "xsc_log.h"
+#include "xsc_defs.h"
+#include "xsc_ethdev.h"
+
+static int
+xsc_ethdev_init(struct rte_eth_dev *eth_dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ priv->eth_dev = eth_dev;
+ priv->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+ return 0;
+}
+
+static int
+xsc_ethdev_uninit(struct rte_eth_dev *eth_dev)
+{
+ RTE_SET_USED(eth_dev);
+
+ PMD_INIT_FUNC_TRACE();
+
+ return 0;
+}
+
+static int
+xsc_ethdev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = rte_eth_dev_pci_generic_probe(pci_dev,
+ sizeof(struct xsc_ethdev_priv),
+ xsc_ethdev_init);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to probe ethdev: %s", pci_dev->name);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int
+xsc_ethdev_pci_remove(struct rte_pci_device *pci_dev)
+{
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = rte_eth_dev_pci_generic_remove(pci_dev, xsc_ethdev_uninit);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Could not remove ethdev: %s", pci_dev->name);
+ return ret;
+ }
+
+ return 0;
+}
+
+static const struct rte_pci_id xsc_ethdev_pci_id_map[] = {
+ { RTE_PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_PCI_DEV_ID_MS) },
+ { RTE_PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_PCI_DEV_ID_MSVF) },
+ { RTE_PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_PCI_DEV_ID_MVH) },
+ { RTE_PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_PCI_DEV_ID_MVHVF) },
+ { RTE_PCI_DEVICE(XSC_PCI_VENDOR_ID, XSC_PCI_DEV_ID_MVS) },
+ { RTE_PCI_DEVICE(0, 0) },
+};
+
+static struct rte_pci_driver xsc_ethdev_pci_driver = {
+ .id_table = xsc_ethdev_pci_id_map,
+ .probe = xsc_ethdev_pci_probe,
+ .remove = xsc_ethdev_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_xsc, xsc_ethdev_pci_driver);
+RTE_PMD_REGISTER_PCI_TABLE(net_xsc, xsc_ethdev_pci_id_map);
+
+RTE_LOG_REGISTER_SUFFIX(xsc_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(xsc_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/xsc/xsc_ethdev.h b/drivers/net/xsc/xsc_ethdev.h
new file mode 100644
index 0000000000..508f5a86de
--- /dev/null
+++ b/drivers/net/xsc/xsc_ethdev.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_ETHDEV_H_
+#define _XSC_ETHDEV_H_
+
+struct xsc_ethdev_priv {
+ struct rte_eth_dev *eth_dev;
+ struct rte_pci_device *pci_dev;
+};
+
+#define TO_XSC_ETHDEV_PRIV(dev) ((struct xsc_ethdev_priv *)(dev)->data->dev_private)
+
+#endif /* _XSC_ETHDEV_H_ */
diff --git a/drivers/net/xsc/xsc_log.h b/drivers/net/xsc/xsc_log.h
new file mode 100644
index 0000000000..16de436edb
--- /dev/null
+++ b/drivers/net/xsc/xsc_log.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_LOG_H_
+#define _XSC_LOG_H_
+
+#include <rte_log.h>
+
+extern int xsc_logtype_init;
+extern int xsc_logtype_driver;
+
+#define RTE_LOGTYPE_XSC_INIT xsc_logtype_init
+#define RTE_LOGTYPE_XSC_DRV xsc_logtype_driver
+
+#define PMD_INIT_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, XSC_INIT, "%s(): ", __func__, __VA_ARGS__)
+
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+
+#define PMD_DRV_LOG(level, ...) \
+ RTE_LOG_LINE_PREFIX(level, XSC_DRV, "%s(): ", __func__, __VA_ARGS__)
+
+#endif /* _XSC_LOG_H_ */
--
2.25.1
* [PATCH v4 02/15] net/xsc: add xsc device initialization
From: WanRenyong @ 2025-01-03 15:04 UTC
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
The XSC device is a hardware abstraction layer device serving as a
handle for interacting with the hardware.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
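Illustrative note (not part of the patch): the device arguments parsed in
xsc_dev_args_parse() can be passed through EAL; the PCI address is
hypothetical:

    dpdk-testpmd -a 0000:01:00.0,nic_mode=0,flow_mode=7,pph_mode=0 -- -i

These values match the defaults applied when an argument is omitted:
nic_mode=0 is XSC_NIC_MODE_LEGACY, flow_mode=7 is XSC_DEV_DEF_FLOW_MODE
and pph_mode=0 is XSC_PPH_NONE.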
drivers/net/xsc/meson.build | 1 +
drivers/net/xsc/xsc_defs.h | 16 ++++
drivers/net/xsc/xsc_dev.c | 181 +++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_dev.h | 131 +++++++++++++++++++++++++
drivers/net/xsc/xsc_ethdev.c | 16 +++-
drivers/net/xsc/xsc_ethdev.h | 3 +
6 files changed, 347 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/xsc/xsc_dev.c
create mode 100644 drivers/net/xsc/xsc_dev.h
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
index 84a09a23de..683a1f6632 100644
--- a/drivers/net/xsc/meson.build
+++ b/drivers/net/xsc/meson.build
@@ -8,4 +8,5 @@ endif
sources = files(
'xsc_ethdev.c',
+ 'xsc_dev.c',
)
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index 7c91d3443f..60244425cd 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -12,4 +12,20 @@
#define XSC_PCI_DEV_ID_MVHVF 0x1152
#define XSC_PCI_DEV_ID_MVS 0x1153
+#define XSC_VFREP_BASE_LOGICAL_PORT 1081
+
+enum xsc_nic_mode {
+ XSC_NIC_MODE_LEGACY,
+ XSC_NIC_MODE_SWITCHDEV,
+ XSC_NIC_MODE_SOC,
+};
+
+enum xsc_pph_type {
+ XSC_PPH_NONE = 0,
+ XSC_RX_PPH = 0x1,
+ XSC_TX_PPH = 0x2,
+ XSC_VFREP_PPH = 0x4,
+ XSC_UPLINK_PPH = 0x8,
+};
+
#endif /* XSC_DEFS_H_ */
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
new file mode 100644
index 0000000000..1b8a84baa6
--- /dev/null
+++ b/drivers/net/xsc/xsc_dev.c
@@ -0,0 +1,181 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#include <stdlib.h>
+#include <unistd.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <limits.h>
+#include <sys/mman.h>
+
+#include <rte_pci.h>
+#include <rte_bus_pci.h>
+#include <bus_pci_driver.h>
+#include <rte_kvargs.h>
+#include <rte_eal_paging.h>
+#include <rte_bitops.h>
+
+#include "xsc_log.h"
+#include "xsc_defs.h"
+#include "xsc_dev.h"
+
+#define XSC_DEV_DEF_FLOW_MODE 7
+
+TAILQ_HEAD(xsc_dev_ops_list, xsc_dev_ops);
+static struct xsc_dev_ops_list dev_ops_list = TAILQ_HEAD_INITIALIZER(dev_ops_list);
+
+static const struct xsc_dev_ops *
+xsc_dev_ops_get(enum rte_pci_kernel_driver kdrv)
+{
+ const struct xsc_dev_ops *ops;
+
+ TAILQ_FOREACH(ops, &dev_ops_list, entry) {
+ if (ops->kdrv == kdrv)
+ return ops;
+ }
+
+ return NULL;
+}
+
+void
+xsc_dev_ops_register(struct xsc_dev_ops *new_ops)
+{
+ struct xsc_dev_ops *ops;
+
+ TAILQ_FOREACH(ops, &dev_ops_list, entry) {
+ if (ops->kdrv == new_ops->kdrv) {
+ PMD_DRV_LOG(ERR, "xsc dev ops exists, kdrv=%d", new_ops->kdrv);
+ return;
+ }
+ }
+
+ TAILQ_INSERT_TAIL(&dev_ops_list, new_ops, entry);
+}
+
+int
+xsc_dev_close(struct xsc_dev *xdev, int __rte_unused repr_id)
+{
+ return xdev->dev_ops->dev_close(xdev);
+}
+
+static int
+xsc_dev_alloc_vfos_info(struct xsc_dev *xdev)
+{
+ struct xsc_hwinfo *hwinfo;
+ int base_lp = 0;
+
+ if (xsc_dev_is_vf(xdev))
+ return 0;
+
+ hwinfo = &xdev->hwinfo;
+ if (hwinfo->pcie_no == 1) {
+ xdev->vfrep_offset = hwinfo->func_id -
+ hwinfo->pcie1_pf_funcid_base +
+ hwinfo->pcie0_pf_funcid_top -
+ hwinfo->pcie0_pf_funcid_base + 1;
+ } else {
+ xdev->vfrep_offset = hwinfo->func_id - hwinfo->pcie0_pf_funcid_base;
+ }
+
+ base_lp = XSC_VFREP_BASE_LOGICAL_PORT;
+ if (xdev->devargs.nic_mode == XSC_NIC_MODE_LEGACY)
+ base_lp += xdev->vfrep_offset;
+ xdev->vfos_logical_in_port = base_lp;
+ return 0;
+}
+
+static void
+xsc_dev_args_parse(struct xsc_dev *xdev, struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+ struct xsc_devargs *xdevargs = &xdev->devargs;
+ const char *tmp;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (kvlist == NULL)
+ return;
+
+ tmp = rte_kvargs_get(kvlist, XSC_PPH_MODE_ARG);
+ if (tmp != NULL)
+ xdevargs->pph_mode = atoi(tmp);
+ else
+ xdevargs->pph_mode = XSC_PPH_NONE;
+
+ tmp = rte_kvargs_get(kvlist, XSC_NIC_MODE_ARG);
+ if (tmp != NULL)
+ xdevargs->nic_mode = atoi(tmp);
+ else
+ xdevargs->nic_mode = XSC_NIC_MODE_LEGACY;
+
+ tmp = rte_kvargs_get(kvlist, XSC_FLOW_MODE_ARG);
+ if (tmp != NULL)
+ xdevargs->flow_mode = atoi(tmp);
+ else
+ xdevargs->flow_mode = XSC_DEV_DEF_FLOW_MODE;
+
+ rte_kvargs_free(kvlist);
+}
+
+void
+xsc_dev_uninit(struct xsc_dev *xdev)
+{
+ PMD_INIT_FUNC_TRACE();
+ xsc_dev_close(xdev, XSC_DEV_REPR_ID_INVALID);
+ rte_free(xdev);
+}
+
+int
+xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **xdev)
+{
+ struct xsc_dev *d;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ d = rte_zmalloc(NULL, sizeof(*d), RTE_CACHE_LINE_SIZE);
+ if (d == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory for xsc_dev");
+ return -ENOMEM;
+ }
+
+ d->dev_ops = xsc_dev_ops_get(pci_dev->kdrv);
+ if (d->dev_ops == NULL) {
+ PMD_DRV_LOG(ERR, "Could not get dev_ops, kdrv=%d", pci_dev->kdrv);
+ return -ENODEV;
+ }
+
+ d->pci_dev = pci_dev;
+
+ if (d->dev_ops->dev_init)
+ d->dev_ops->dev_init(d);
+
+ xsc_dev_args_parse(d, pci_dev->device.devargs);
+
+ ret = xsc_dev_alloc_vfos_info(d);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to alloc vfos info");
+ ret = -EINVAL;
+ goto hwinfo_init_fail;
+ }
+
+ *xdev = d;
+
+ return 0;
+
+hwinfo_init_fail:
+ xsc_dev_uninit(d);
+ return ret;
+}
+
+bool
+xsc_dev_is_vf(struct xsc_dev *xdev)
+{
+ uint16_t device_id = xdev->pci_dev->id.device_id;
+
+ if (device_id == XSC_PCI_DEV_ID_MSVF ||
+ device_id == XSC_PCI_DEV_ID_MVHVF)
+ return true;
+
+ return false;
+}
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
new file mode 100644
index 0000000000..7eae78d9bf
--- /dev/null
+++ b/drivers/net/xsc/xsc_dev.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_DEV_H_
+#define _XSC_DEV_H_
+
+#include <rte_ethdev.h>
+#include <ethdev_driver.h>
+#include <rte_interrupts.h>
+#include <rte_bitmap.h>
+#include <rte_malloc.h>
+#include <bus_pci_driver.h>
+
+#include "xsc_defs.h"
+#include "xsc_log.h"
+
+#define XSC_PPH_MODE_ARG "pph_mode"
+#define XSC_NIC_MODE_ARG "nic_mode"
+#define XSC_FLOW_MODE_ARG "flow_mode"
+
+#define XSC_FUNCID_TYPE_MASK 0x1c000
+#define XSC_FUNCID_MASK 0x3fff
+
+#define XSC_DEV_PCT_IDX_INVALID 0xFFFFFFFF
+#define XSC_DEV_REPR_ID_INVALID 0x7FFFFFFF
+
+struct xsc_hwinfo {
+ uint8_t valid; /* 1: current phy info is valid, 0: invalid */
+ uint32_t pcie_no; /* PCIe number, 0 or 1 */
+ uint32_t func_id; /* pf glb func id */
+ uint32_t pcie_host; /* host pcie number */
+ uint32_t mac_phy_port; /* mac port */
+ uint32_t funcid_to_logic_port_off; /* port func id offset */
+ uint16_t lag_id;
+ uint16_t raw_qp_id_base;
+ uint16_t raw_rss_qp_id_base;
+ uint16_t pf0_vf_funcid_base;
+ uint16_t pf0_vf_funcid_top;
+ uint16_t pf1_vf_funcid_base;
+ uint16_t pf1_vf_funcid_top;
+ uint16_t pcie0_pf_funcid_base;
+ uint16_t pcie0_pf_funcid_top;
+ uint16_t pcie1_pf_funcid_base;
+ uint16_t pcie1_pf_funcid_top;
+ uint16_t lag_port_start;
+ uint16_t raw_tpe_qp_num;
+ int send_seg_num;
+ int recv_seg_num;
+ uint8_t on_chip_tbl_vld;
+ uint8_t dma_rw_tbl_vld;
+ uint8_t pct_compress_vld;
+ uint32_t chip_version;
+ uint32_t hca_core_clock;
+ uint8_t mac_bit;
+ uint8_t esw_mode;
+};
+
+struct xsc_devargs {
+ int nic_mode;
+ int flow_mode;
+ int pph_mode;
+};
+
+struct xsc_repr_info {
+ int repr_id;
+ enum xsc_port_type port_type;
+ int pf_bond;
+
+ uint32_t ifindex;
+ const char *phys_dev_name;
+ uint32_t funcid;
+
+ uint16_t logical_port;
+ uint16_t local_dstinfo;
+ uint16_t peer_logical_port;
+ uint16_t peer_dstinfo;
+};
+
+struct xsc_repr_port {
+ struct xsc_dev *xdev;
+ struct xsc_repr_info info;
+ void *drv_data;
+ struct xsc_dev_pct_list def_pct_list;
+};
+
+struct xsc_dev_config {
+ uint8_t pph_flag;
+ uint8_t hw_csum;
+ uint8_t tso;
+ uint32_t tso_max_payload_sz;
+};
+
+struct xsc_dev {
+ struct rte_pci_device *pci_dev;
+ const struct xsc_dev_ops *dev_ops;
+ struct xsc_devargs devargs;
+ struct xsc_hwinfo hwinfo;
+ struct rte_eth_link pf_dev_link;
+ uint32_t link_speed_capa;
+ int vfos_logical_in_port;
+ int vfrep_offset;
+
+ struct rte_intr_handle *intr_handle;
+ struct xsc_repr_port *repr_ports;
+ int num_repr_ports; /* PF and VF representor ports num */
+ int ifindex;
+ int port_id; /* Port ID of the probing ethdev */
+ void *dev_priv;
+ char name[RTE_ETH_NAME_MAX_LEN];
+ void *bar_addr;
+ void *jumbo_buffer_pa;
+ void *jumbo_buffer_va;
+ uint64_t bar_len;
+ int ctrl_fd;
+};
+
+struct xsc_dev_ops {
+ TAILQ_ENTRY(xsc_dev_ops) entry;
+ enum rte_pci_kernel_driver kdrv;
+ int (*dev_init)(struct xsc_dev *xdev);
+ int (*dev_close)(struct xsc_dev *xdev);
+};
+
+void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
+int xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **dev);
+void xsc_dev_uninit(struct xsc_dev *xdev);
+int xsc_dev_close(struct xsc_dev *xdev, int repr_id);
+bool xsc_dev_is_vf(struct xsc_dev *xdev);
+
+#endif /* _XSC_DEV_H_ */
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index a7dca46127..4bdc70507f 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -13,22 +13,32 @@ static int
xsc_ethdev_init(struct rte_eth_dev *eth_dev)
{
struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(eth_dev);
+ int ret;
PMD_INIT_FUNC_TRACE();
priv->eth_dev = eth_dev;
priv->pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+ ret = xsc_dev_init(priv->pci_dev, &priv->xdev);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to initialize xsc device");
+ return ret;
+ }
+ priv->xdev->port_id = eth_dev->data->port_id;
+
return 0;
}
static int
xsc_ethdev_uninit(struct rte_eth_dev *eth_dev)
{
- RTE_SET_USED(eth_dev);
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(eth_dev);
PMD_INIT_FUNC_TRACE();
+ xsc_dev_uninit(priv->xdev);
+
return 0;
}
@@ -84,6 +94,10 @@ static struct rte_pci_driver xsc_ethdev_pci_driver = {
RTE_PMD_REGISTER_PCI(net_xsc, xsc_ethdev_pci_driver);
RTE_PMD_REGISTER_PCI_TABLE(net_xsc, xsc_ethdev_pci_id_map);
+RTE_PMD_REGISTER_PARAM_STRING(net_xsc,
+ XSC_PPH_MODE_ARG "=<x>"
+ XSC_NIC_MODE_ARG "=<x>"
+ XSC_FLOW_MODE_ARG "=<x>");
RTE_LOG_REGISTER_SUFFIX(xsc_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(xsc_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/xsc/xsc_ethdev.h b/drivers/net/xsc/xsc_ethdev.h
index 508f5a86de..05040f8865 100644
--- a/drivers/net/xsc/xsc_ethdev.h
+++ b/drivers/net/xsc/xsc_ethdev.h
@@ -5,9 +5,12 @@
#ifndef _XSC_ETHDEV_H_
#define _XSC_ETHDEV_H_
+#include "xsc_dev.h"
+
struct xsc_ethdev_priv {
struct rte_eth_dev *eth_dev;
struct rte_pci_device *pci_dev;
+ struct xsc_dev *xdev;
};
#define TO_XSC_ETHDEV_PRIV(dev) ((struct xsc_ethdev_priv *)(dev)->data->dev_private)
--
2.25.1
* [PATCH v4 03/15] net/xsc: add xsc mailbox
From: WanRenyong @ 2025-01-03 15:04 UTC
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
The XSC mailbox is a mechanism for interaction between the PMD and the firmware.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
Signed-off-by: Rong Qian <qianr@yunsilicon.com>
---
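Illustrative sketch (not part of the patch): a caller issues a firmware
command by filling the big-endian inbox header and handing both buffers
to xsc_vfio_mbox_exec(), which copies the request into the command
queue, rings the doorbell and polls for completion:

    /* Sketch: query HCA capabilities through the mailbox. */
    static int
    xsc_query_hca_cap_sketch(struct xsc_dev *xdev)
    {
        struct xsc_cmd_query_hca_cap_mbox_in in = { 0 };
        struct xsc_cmd_query_hca_cap_mbox_out out = { 0 };

        in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_QUERY_HCA_CAP);
        in.hdr.ver = rte_cpu_to_be_16(XSC_CMD_QUERY_HCA_CAP_V1);

        /* Returns 0 on success; out.hca_cap then holds the caps. */
        return xsc_vfio_mbox_exec(xdev, &in, sizeof(in), &out, sizeof(out));
    }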
drivers/net/xsc/meson.build | 1 +
drivers/net/xsc/xsc_cmd.h | 387 ++++++++++++++++++
drivers/net/xsc/xsc_defs.h | 2 +
drivers/net/xsc/xsc_vfio_mbox.c | 691 ++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_vfio_mbox.h | 142 +++++++
5 files changed, 1223 insertions(+)
create mode 100644 drivers/net/xsc/xsc_cmd.h
create mode 100644 drivers/net/xsc/xsc_vfio_mbox.c
create mode 100644 drivers/net/xsc/xsc_vfio_mbox.h
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
index 683a1f6632..df4c8ea499 100644
--- a/drivers/net/xsc/meson.build
+++ b/drivers/net/xsc/meson.build
@@ -9,4 +9,5 @@ endif
sources = files(
'xsc_ethdev.c',
'xsc_dev.c',
+ 'xsc_vfio_mbox.c',
)
diff --git a/drivers/net/xsc/xsc_cmd.h b/drivers/net/xsc/xsc_cmd.h
new file mode 100644
index 0000000000..433dcd0afa
--- /dev/null
+++ b/drivers/net/xsc/xsc_cmd.h
@@ -0,0 +1,387 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_CMD_H_
+#define _XSC_CMD_H_
+
+#include <sys/types.h>
+#include <unistd.h>
+#include <string.h>
+#include <dirent.h>
+#include <net/if.h>
+
+#define XSC_BOARD_SN_LEN 32
+#define XSC_CMD_QUERY_HCA_CAP_V1 1
+
+enum xsc_cmd_opcode {
+ XSC_CMD_OP_QUERY_HCA_CAP = 0x100,
+ XSC_CMD_OP_CREATE_CQ = 0x400,
+ XSC_CMD_OP_DESTROY_CQ = 0x401,
+ XSC_CMD_OP_CREATE_QP = 0x500,
+ XSC_CMD_OP_DESTROY_QP = 0x501,
+ XSC_CMD_OP_RTR2RTS_QP = 0x504,
+ XSC_CMD_OP_QP_2RST = 0x50A,
+ XSC_CMD_OP_CREATE_MULTI_QP = 0x515,
+ XSC_CMD_OP_MODIFY_NIC_HCA = 0x812,
+ XSC_CMD_OP_MODIFY_RAW_QP = 0x81f,
+ XSC_CMD_OP_EXEC_NP = 0x900,
+ XSC_CMD_OP_SET_MTU = 0x1100,
+ XSC_CMD_OP_QUERY_ETH_MAC = 0x1101,
+ XSC_CMD_OP_MAX
+};
+
+enum xsc_cmd_status {
+ XSC_CMD_SUCC = 0,
+ XSC_CMD_FAIL,
+ XSC_CMD_TIMEOUT,
+};
+
+struct xsc_cmd_inbox_hdr {
+ rte_be16_t opcode;
+ uint8_t rsvd[4];
+ rte_be16_t ver;
+};
+
+struct xsc_cmd_outbox_hdr {
+ uint8_t status;
+ uint8_t rsvd[5];
+ rte_be16_t ver;
+};
+
+struct xsc_cmd_fw_version {
+ uint8_t major;
+ uint8_t minor;
+ rte_be16_t patch;
+ rte_be32_t tweak;
+ uint8_t extra_flag;
+ uint8_t rsv[7];
+};
+
+struct xsc_cmd_hca_cap {
+ uint8_t rsvd1[12];
+ uint8_t send_seg_num;
+ uint8_t send_wqe_shift;
+ uint8_t recv_seg_num;
+ uint8_t recv_wqe_shift;
+ uint8_t log_max_srq_sz;
+ uint8_t log_max_qp_sz;
+ uint8_t log_max_mtt;
+ uint8_t log_max_qp;
+ uint8_t log_max_strq_sz;
+ uint8_t log_max_srqs;
+ uint8_t rsvd2[2];
+ uint8_t log_max_tso;
+ uint8_t log_max_cq_sz;
+ uint8_t rsvd3;
+ uint8_t log_max_cq;
+ uint8_t log_max_eq_sz;
+ uint8_t log_max_mkey;
+ uint8_t log_max_msix;
+ uint8_t log_max_eq;
+ uint8_t max_indirection;
+ uint8_t log_max_mrw_sz;
+ uint8_t log_max_bsf_list_sz;
+ uint8_t log_max_klm_list_sz;
+ uint8_t rsvd4;
+ uint8_t log_max_ra_req_dc;
+ uint8_t rsvd5;
+ uint8_t log_max_ra_res_dc;
+ uint8_t rsvd6;
+ uint8_t log_max_ra_req_qp;
+ uint8_t log_max_qp_depth;
+ uint8_t log_max_ra_res_qp;
+ rte_be16_t max_vfs;
+ rte_be16_t raweth_qp_id_end;
+ rte_be16_t raw_tpe_qp_num;
+ rte_be16_t max_qp_count;
+ rte_be16_t raweth_qp_id_base;
+ uint8_t rsvd7;
+ uint8_t local_ca_ack_delay;
+ uint8_t max_num_eqs;
+ uint8_t num_ports;
+ uint8_t log_max_msg;
+ uint8_t mac_port;
+ rte_be16_t raweth_rss_qp_id_base;
+ rte_be16_t stat_rate_support;
+ uint8_t rsvd8[2];
+ rte_be64_t flags;
+ uint8_t rsvd9;
+ uint8_t uar_sz;
+ uint8_t rsvd10;
+ uint8_t log_pg_sz;
+ rte_be16_t bf_log_bf_reg_size;
+ rte_be16_t msix_base;
+ rte_be16_t msix_num;
+ rte_be16_t max_desc_sz_sq;
+ uint8_t rsvd11[2];
+ rte_be16_t max_desc_sz_rq;
+ uint8_t rsvd12[2];
+ rte_be16_t max_desc_sz_sq_dc;
+ uint8_t rsvd13[4];
+ rte_be16_t max_qp_mcg;
+ uint8_t rsvd14;
+ uint8_t log_max_mcg;
+ uint8_t rsvd15;
+ uint8_t log_max_pd;
+ uint8_t rsvd16;
+ uint8_t log_max_xrcd;
+ uint8_t rsvd17[40];
+ rte_be32_t uar_page_sz;
+ uint8_t rsvd18[8];
+ rte_be32_t hw_feature_flag;
+ rte_be16_t pf0_vf_funcid_base;
+ rte_be16_t pf0_vf_funcid_top;
+ rte_be16_t pf1_vf_funcid_base;
+ rte_be16_t pf1_vf_funcid_top;
+ rte_be16_t pcie0_pf_funcid_base;
+ rte_be16_t pcie0_pf_funcid_top;
+ rte_be16_t pcie1_pf_funcid_base;
+ rte_be16_t pcie1_pf_funcid_top;
+ uint8_t log_msx_atomic_size_qp;
+ uint8_t pcie_host;
+ uint8_t rsvd19;
+ uint8_t log_msx_atomic_size_dc;
+ uint8_t board_sn[XSC_BOARD_SN_LEN];
+ uint8_t max_tc;
+ uint8_t mac_bit;
+ rte_be16_t funcid_to_logic_port;
+ uint8_t rsvd20[6];
+ uint8_t nif_port_num;
+ uint8_t reg_mr_via_cmdq;
+ rte_be32_t hca_core_clock;
+ rte_be32_t max_rwq_indirection_tables;
+ rte_be32_t max_rwq_indirection_table_size;
+ rte_be32_t chip_ver_h;
+ rte_be32_t chip_ver_m;
+ rte_be32_t chip_ver_l;
+ rte_be32_t hotfix_num;
+ rte_be32_t feature_flag;
+ rte_be32_t rx_pkt_len_max;
+ rte_be32_t glb_func_id;
+ rte_be64_t tx_db;
+ rte_be64_t rx_db;
+ rte_be64_t complete_db;
+ rte_be64_t complete_reg;
+ rte_be64_t event_db;
+ rte_be32_t qp_rate_limit_min;
+ rte_be32_t qp_rate_limit_max;
+ struct xsc_cmd_fw_version fw_ver;
+ uint8_t lag_logic_port_ofst;
+ rte_be64_t max_mr_size;
+ rte_be16_t max_cmd_in_len;
+ rte_be16_t max_cmd_out_len;
+};
+
+struct xsc_cmd_query_hca_cap_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ rte_be16_t cpu_num;
+ uint8_t rsvd[6];
+};
+
+struct xsc_cmd_query_hca_cap_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsvd[8];
+ struct xsc_cmd_hca_cap hca_cap;
+};
+
+struct xsc_cmd_cq_context {
+ uint16_t eqn;
+ uint16_t pa_num;
+ uint16_t glb_func_id;
+ uint8_t log_cq_sz;
+ uint8_t cq_type;
+};
+
+struct xsc_cmd_create_cq_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ struct xsc_cmd_cq_context ctx;
+ uint64_t pas[];
+};
+
+struct xsc_cmd_create_cq_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint32_t cqn;
+ uint8_t rsvd[4];
+};
+
+struct xsc_cmd_destroy_cq_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ uint32_t cqn;
+ uint8_t rsvd[4];
+};
+
+struct xsc_cmd_destroy_cq_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsvd[8];
+};
+
+struct xsc_cmd_create_qp_request {
+ rte_be16_t input_qpn;
+ rte_be16_t pa_num;
+ uint8_t qp_type;
+ uint8_t log_sq_sz;
+ uint8_t log_rq_sz;
+ uint8_t dma_direct;
+ rte_be32_t pdn;
+ rte_be16_t cqn_send;
+ rte_be16_t cqn_recv;
+ rte_be16_t glb_funcid;
+ uint8_t page_shift;
+ uint8_t rsvd;
+ rte_be64_t pas[];
+};
+
+struct xsc_cmd_create_qp_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ struct xsc_cmd_create_qp_request req;
+};
+
+struct xsc_cmd_create_qp_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint32_t qpn;
+ uint8_t rsvd[4];
+};
+
+struct xsc_cmd_create_multiqp_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ rte_be16_t qp_num;
+ uint8_t qp_type;
+ uint8_t rsvd;
+ rte_be32_t req_len;
+ uint8_t data[];
+};
+
+struct xsc_cmd_create_multiqp_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ rte_be32_t qpn_base;
+};
+
+struct xsc_cmd_destroy_qp_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ rte_be32_t qpn;
+ uint8_t rsvd[4];
+};
+
+struct xsc_cmd_destroy_qp_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsvd[8];
+};
+
+struct xsc_cmd_qp_context {
+ rte_be32_t remote_qpn;
+ rte_be32_t cqn_send;
+ rte_be32_t cqn_recv;
+ rte_be32_t next_send_psn;
+ rte_be32_t next_recv_psn;
+ rte_be32_t pdn;
+ rte_be16_t src_udp_port;
+ rte_be16_t path_id;
+ uint8_t mtu_mode;
+ uint8_t lag_sel;
+ uint8_t lag_sel_en;
+ uint8_t retry_cnt;
+ uint8_t rnr_retry;
+ uint8_t dscp;
+ uint8_t state;
+ uint8_t hop_limit;
+ uint8_t dmac[6];
+ uint8_t smac[6];
+ rte_be32_t dip[4];
+ rte_be32_t sip[4];
+ rte_be16_t ip_type;
+ rte_be16_t grp_id;
+ uint8_t vlan_valid;
+ uint8_t dci_cfi_prio_sl;
+ rte_be16_t vlan_id;
+ uint8_t qp_out_port;
+ uint8_t pcie_no;
+ rte_be16_t lag_id;
+ rte_be16_t func_id;
+ rte_be16_t rsvd;
+};
+
+struct xsc_cmd_modify_qp_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ rte_be32_t qpn;
+ struct xsc_cmd_qp_context ctx;
+ uint8_t no_need_wait;
+};
+
+struct xsc_cmd_modify_qp_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsvd[8];
+};
+
+struct xsc_cmd_modify_raw_qp_request {
+ uint16_t qpn;
+ uint16_t lag_id;
+ uint16_t func_id;
+ uint8_t dma_direct;
+ uint8_t prio;
+ uint8_t qp_out_port;
+ uint8_t rsvd[7];
+};
+
+struct xsc_cmd_modify_raw_qp_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ uint8_t pcie_no;
+ uint8_t rsv[7];
+ struct xsc_cmd_modify_raw_qp_request req;
+};
+
+struct xsc_cmd_modify_raw_qp_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsvd[8];
+};
+
+struct xsc_cmd_set_mtu_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ rte_be16_t mtu;
+ rte_be16_t rx_buf_sz_min;
+ uint8_t mac_port;
+ uint8_t rsvd;
+};
+
+struct xsc_cmd_set_mtu_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+};
+
+struct xsc_cmd_query_eth_mac_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ uint8_t index;
+};
+
+struct xsc_cmd_query_eth_mac_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t mac[6];
+};
+
+struct xsc_cmd_nic_attr {
+ rte_be16_t caps;
+ rte_be16_t caps_mask;
+ uint8_t mac_addr[6];
+};
+
+struct xsc_cmd_rss_modify_attr {
+ uint8_t caps_mask;
+ uint8_t rss_en;
+ rte_be16_t rqn_base;
+ rte_be16_t rqn_num;
+ uint8_t hfunc;
+ rte_be32_t hash_tmpl;
+ uint8_t hash_key[52];
+};
+
+struct xsc_cmd_modify_nic_hca_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ struct xsc_cmd_nic_attr nic;
+ struct xsc_cmd_rss_modify_attr rss;
+};
+
+struct xsc_cmd_modify_nic_hca_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsvd[4];
+};
+
+#endif /* _XSC_CMD_H_ */
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index 60244425cd..a4b36685a6 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -5,6 +5,8 @@
#ifndef XSC_DEFS_H_
#define XSC_DEFS_H_
+#define XSC_PAGE_SIZE 4096
+
#define XSC_PCI_VENDOR_ID 0x1f67
#define XSC_PCI_DEV_ID_MS 0x1111
#define XSC_PCI_DEV_ID_MSVF 0x1112
diff --git a/drivers/net/xsc/xsc_vfio_mbox.c b/drivers/net/xsc/xsc_vfio_mbox.c
new file mode 100644
index 0000000000..b1bb06feb8
--- /dev/null
+++ b/drivers/net/xsc/xsc_vfio_mbox.c
@@ -0,0 +1,691 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+#include <rte_malloc.h>
+#include <bus_pci_driver.h>
+
+#include "xsc_vfio_mbox.h"
+#include "xsc_log.h"
+
+#define XSC_MBOX_BUF_NUM 2048
+#define XSC_MBOX_BUF_CACHE_SIZE 256
+#define XSC_CMDQ_DEPTH_LOG 5
+#define XSC_CMDQ_ELEMENT_SIZE_LOG 6
+#define XSC_CMDQ_REQ_TYPE 7
+#define XSC_CMDQ_WAIT_TIMEOUT 10 /* seconds */
+#define XSC_CMDQ_WAIT_DELAY_MS 100
+#define XSC_CMD_OP_DUMMY 0x10d
+
+#define XSC_PF_CMDQ_ELEMENT_SZ 0x1020020
+#define XSC_PF_CMDQ_REQ_BASE_H_ADDR 0x1022000
+#define XSC_PF_CMDQ_REQ_BASE_L_ADDR 0x1024000
+#define XSC_PF_CMDQ_RSP_BASE_H_ADDR 0x102a000
+#define XSC_PF_CMDQ_RSP_BASE_L_ADDR 0x102c000
+#define XSC_PF_CMDQ_REQ_PID 0x1026000
+#define XSC_PF_CMDQ_REQ_CID 0x1028000
+#define XSC_PF_CMDQ_RSP_PID 0x102e000
+#define XSC_PF_CMDQ_RSP_CID 0x1030000
+#define XSC_PF_CMDQ_DEPTH 0x1020028
+
+#define XSC_VF_CMDQ_REQ_BASE_H_ADDR 0x0
+#define XSC_VF_CMDQ_REQ_BASE_L_ADDR 0x4
+#define XSC_VF_CMDQ_RSP_BASE_H_ADDR 0x10
+#define XSC_VF_CMDQ_RSP_BASE_L_ADDR 0x14
+#define XSC_VF_CMDQ_REQ_PID 0x8
+#define XSC_VF_CMDQ_REQ_CID 0xc
+#define XSC_VF_CMDQ_RSP_PID 0x18
+#define XSC_VF_CMDQ_RSP_CID 0x1c
+#define XSC_VF_CMDQ_ELEMENT_SZ 0x28
+#define XSC_VF_CMDQ_DEPTH 0x2c
+
+static const char * const xsc_cmd_error[] = {
+ "xsc cmd success",
+ "xsc cmd fail",
+ "xsc cmd timeout"
+};
+
+static struct xsc_cmdq_config xsc_pf_config = {
+ .req_pid_addr = XSC_PF_CMDQ_REQ_PID,
+ .req_cid_addr = XSC_PF_CMDQ_REQ_CID,
+ .rsp_pid_addr = XSC_PF_CMDQ_RSP_PID,
+ .rsp_cid_addr = XSC_PF_CMDQ_RSP_CID,
+ .req_h_addr = XSC_PF_CMDQ_REQ_BASE_H_ADDR,
+ .req_l_addr = XSC_PF_CMDQ_REQ_BASE_L_ADDR,
+ .rsp_h_addr = XSC_PF_CMDQ_RSP_BASE_H_ADDR,
+ .rsp_l_addr = XSC_PF_CMDQ_RSP_BASE_L_ADDR,
+ .elt_sz_addr = XSC_PF_CMDQ_ELEMENT_SZ,
+ .depth_addr = XSC_PF_CMDQ_DEPTH,
+};
+
+static struct xsc_cmdq_config xsc_vf_config = {
+ .req_pid_addr = XSC_VF_CMDQ_REQ_PID,
+ .req_cid_addr = XSC_VF_CMDQ_REQ_CID,
+ .rsp_pid_addr = XSC_VF_CMDQ_RSP_PID,
+ .rsp_cid_addr = XSC_VF_CMDQ_RSP_CID,
+ .req_h_addr = XSC_VF_CMDQ_REQ_BASE_H_ADDR,
+ .req_l_addr = XSC_VF_CMDQ_REQ_BASE_L_ADDR,
+ .rsp_h_addr = XSC_VF_CMDQ_RSP_BASE_H_ADDR,
+ .rsp_l_addr = XSC_VF_CMDQ_RSP_BASE_L_ADDR,
+ .elt_sz_addr = XSC_VF_CMDQ_ELEMENT_SZ,
+ .depth_addr = XSC_VF_CMDQ_DEPTH,
+};
+
+static void
+xsc_cmdq_config_init(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ if (!xsc_dev_is_vf(xdev))
+ cmdq->config = &xsc_pf_config;
+ else
+ cmdq->config = &xsc_vf_config;
+}
+
+static void
+xsc_cmdq_rsp_cid_update(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ uint32_t rsp_pid;
+
+ cmdq->rsp_cid = rte_read32((uint8_t *)xdev->bar_addr + cmdq->config->rsp_cid_addr);
+ rsp_pid = rte_read32((uint8_t *)xdev->bar_addr + cmdq->config->rsp_pid_addr);
+ if (rsp_pid != cmdq->rsp_cid) {
+ PMD_DRV_LOG(INFO, "Update cid(%u) to latest pid(%u)",
+ cmdq->rsp_cid, rsp_pid);
+ cmdq->rsp_cid = rsp_pid;
+ rte_write32(cmdq->rsp_cid, (uint8_t *)xdev->bar_addr + cmdq->config->rsp_cid_addr);
+ }
+}
+
+static void
+xsc_cmdq_depth_set(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ cmdq->depth_n = XSC_CMDQ_DEPTH_LOG;
+ cmdq->depth_m = (1 << XSC_CMDQ_DEPTH_LOG) - 1;
+ rte_write32(1 << cmdq->depth_n, (uint8_t *)xdev->bar_addr + cmdq->config->depth_addr);
+}
+
+static int
+xsc_cmdq_elt_size_check(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ uint32_t elts_n;
+
+ elts_n = rte_read32((uint8_t *)xdev->bar_addr + cmdq->config->elt_sz_addr);
+ if (elts_n != XSC_CMDQ_ELEMENT_SIZE_LOG) {
+ PMD_DRV_LOG(ERR, "The cmdq elt size log(%u) is error, should be %u",
+ elts_n, XSC_CMDQ_ELEMENT_SIZE_LOG);
+ rte_errno = ENODEV;
+ return -1;
+ }
+
+ return 0;
+}
+
+static void
+xsc_cmdq_req_base_addr_set(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ uint32_t h_addr, l_addr;
+
+ h_addr = (uint32_t)(cmdq->req_mz->iova >> 32);
+ l_addr = (uint32_t)(cmdq->req_mz->iova);
+ rte_write32(h_addr, (uint8_t *)xdev->bar_addr + cmdq->config->req_h_addr);
+ rte_write32(l_addr, (uint8_t *)xdev->bar_addr + cmdq->config->req_l_addr);
+}
+
+static void
+xsc_cmdq_rsp_base_addr_set(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ uint32_t h_addr, l_addr;
+
+ h_addr = (uint32_t)(cmdq->rsp_mz->iova >> 32);
+ l_addr = (uint32_t)(cmdq->rsp_mz->iova);
+ rte_write32(h_addr, (uint8_t *)xdev->bar_addr + cmdq->config->rsp_h_addr);
+ rte_write32(l_addr, (uint8_t *)xdev->bar_addr + cmdq->config->rsp_l_addr);
+}
+
+static void
+xsc_cmdq_mbox_free(struct xsc_dev *xdev, struct xsc_cmdq_mbox *mbox)
+{
+ struct xsc_cmdq_mbox *next, *head;
+ struct xsc_vfio_priv *priv = (struct xsc_vfio_priv *)xdev->dev_priv;
+
+ head = mbox;
+ while (head != NULL) {
+ next = head->next;
+ if (head->buf != NULL)
+ rte_mempool_put(priv->cmdq->mbox_buf_pool, head->buf);
+ free(head);
+ head = next;
+ }
+}
+
+static struct xsc_cmdq_mbox *
+xsc_cmdq_mbox_alloc(struct xsc_dev *xdev)
+{
+ struct xsc_cmdq_mbox *mbox;
+ int ret;
+ struct xsc_vfio_priv *priv = (struct xsc_vfio_priv *)xdev->dev_priv;
+
+ mbox = (struct xsc_cmdq_mbox *)malloc(sizeof(*mbox));
+ if (mbox == NULL) {
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ memset(mbox, 0, sizeof(struct xsc_cmdq_mbox));
+
+ ret = rte_mempool_get(priv->cmdq->mbox_buf_pool, (void **)&mbox->buf);
+ if (ret != 0)
+ goto error;
+ mbox->buf_dma = rte_mempool_virt2iova(mbox->buf);
+ memset(mbox->buf, 0, sizeof(struct xsc_cmdq_mbox_buf));
+ mbox->next = NULL;
+
+ return mbox;
+
+error:
+ xsc_cmdq_mbox_free(xdev, mbox);
+ return NULL;
+}
+
+static struct xsc_cmdq_mbox *
+xsc_cmdq_mbox_alloc_bulk(struct xsc_dev *xdev, int n)
+{
+ int i;
+ struct xsc_cmdq_mbox *head = NULL;
+ struct xsc_cmdq_mbox *mbox;
+ struct xsc_cmdq_mbox_buf *mbox_buf;
+
+ for (i = 0; i < n; i++) {
+ mbox = xsc_cmdq_mbox_alloc(xdev);
+ if (mbox == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc mailbox");
+ goto error;
+ }
+
+ mbox_buf = mbox->buf;
+ mbox->next = head;
+ mbox_buf->next = rte_cpu_to_be_64(mbox->next ? mbox->next->buf_dma : 0);
+ mbox_buf->block_num = rte_cpu_to_be_32(n - i - 1);
+ head = mbox;
+ }
+
+ return head;
+
+error:
+ xsc_cmdq_mbox_free(xdev, head);
+ return NULL;
+}
+
+static void
+xsc_cmdq_req_msg_free(struct xsc_dev *xdev, struct xsc_cmdq_req_msg *msg)
+{
+ struct xsc_cmdq_mbox *head;
+
+ if (msg == NULL)
+ return;
+
+ head = msg->next;
+ xsc_cmdq_mbox_free(xdev, head);
+ free(msg);
+}
+
+static struct xsc_cmdq_req_msg *
+xsc_cmdq_req_msg_alloc(struct xsc_dev *xdev, int len)
+{
+ struct xsc_cmdq_req_msg *msg;
+ struct xsc_cmdq_mbox *head = NULL;
+ int cmd_len, nb_mbox;
+
+ msg = (struct xsc_cmdq_req_msg *)malloc(sizeof(*msg));
+ if (msg == NULL) {
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ memset(msg, 0, sizeof(*msg));
+
+ cmd_len = len - RTE_MIN(sizeof(msg->hdr.data), (uint32_t)len);
+ nb_mbox = (cmd_len + XSC_CMDQ_DATA_SIZE - 1) / XSC_CMDQ_DATA_SIZE;
+ head = xsc_cmdq_mbox_alloc_bulk(xdev, nb_mbox);
+ if (head == NULL && nb_mbox != 0)
+ goto error;
+
+ msg->next = head;
+ msg->len = len;
+
+ return msg;
+
+error:
+ xsc_cmdq_req_msg_free(xdev, msg);
+ return NULL;
+}
+
+static void
+xsc_cmdq_rsp_msg_free(struct xsc_dev *xdev, struct xsc_cmdq_rsp_msg *msg)
+{
+ struct xsc_cmdq_mbox *head;
+
+ if (msg == NULL)
+ return;
+
+ head = msg->next;
+ xsc_cmdq_mbox_free(xdev, head);
+ free(msg);
+}
+
+static struct xsc_cmdq_rsp_msg *
+xsc_cmdq_rsp_msg_alloc(struct xsc_dev *xdev, int len)
+{
+ struct xsc_cmdq_rsp_msg *msg;
+ struct xsc_cmdq_mbox *head = NULL;
+ int cmd_len, nb_mbox;
+
+ msg = (struct xsc_cmdq_rsp_msg *)malloc(sizeof(*msg));
+ if (msg == NULL) {
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ memset(msg, 0, sizeof(*msg));
+
+ cmd_len = len - RTE_MIN(sizeof(msg->hdr.data), (uint32_t)len);
+ nb_mbox = (cmd_len + XSC_CMDQ_DATA_SIZE - 1) / XSC_CMDQ_DATA_SIZE;
+ head = xsc_cmdq_mbox_alloc_bulk(xdev, nb_mbox);
+ if (head == NULL && nb_mbox != 0)
+ goto error;
+
+ msg->next = head;
+ msg->len = len;
+
+ return msg;
+
+error:
+ xsc_cmdq_rsp_msg_free(xdev, msg);
+ return NULL;
+}
+
+static void
+xsc_cmdq_msg_destruct(struct xsc_dev *xdev,
+ struct xsc_cmdq_req_msg **req_msg,
+ struct xsc_cmdq_rsp_msg **rsp_msg)
+{
+ xsc_cmdq_req_msg_free(xdev, *req_msg);
+ xsc_cmdq_rsp_msg_free(xdev, *rsp_msg);
+ *req_msg = NULL;
+ *rsp_msg = NULL;
+}
+
+static int
+xsc_cmdq_msg_construct(struct xsc_dev *xdev,
+ struct xsc_cmdq_req_msg **req_msg, int in_len,
+ struct xsc_cmdq_rsp_msg **rsp_msg, int out_len)
+{
+ *req_msg = xsc_cmdq_req_msg_alloc(xdev, in_len);
+ if (*req_msg == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc xsc cmd request msg");
+ goto error;
+ }
+
+ *rsp_msg = xsc_cmdq_rsp_msg_alloc(xdev, out_len);
+ if (*rsp_msg == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc xsc cmd response msg");
+ goto error;
+ }
+
+ return 0;
+
+error:
+ xsc_cmdq_msg_destruct(xdev, req_msg, rsp_msg);
+ return -1;
+}
+
+static int
+xsc_cmdq_req_msg_copy(struct xsc_cmdq_req_msg *req_msg, void *data_in, int in_len)
+{
+ struct xsc_cmdq_mbox_buf *mbox_buf;
+ struct xsc_cmdq_mbox *mbox;
+ int copy;
+ uint8_t *data = data_in;
+
+ if (req_msg == NULL || data == NULL)
+ return -1;
+
+ copy = RTE_MIN((uint32_t)in_len, sizeof(req_msg->hdr.data));
+ memcpy(req_msg->hdr.data, data, copy);
+
+ in_len -= copy;
+ data += copy;
+
+ mbox = req_msg->next;
+ while (in_len > 0) {
+ if (mbox == NULL)
+ return -1;
+
+ copy = RTE_MIN(in_len, XSC_CMDQ_DATA_SIZE);
+ mbox_buf = mbox->buf;
+ memcpy(mbox_buf->data, data, copy);
+ mbox_buf->owner_status = 0;
+ data += copy;
+ in_len -= copy;
+ mbox = mbox->next;
+ }
+
+ return 0;
+}
+
+static int
+xsc_cmdq_rsp_msg_copy(void *data_out, struct xsc_cmdq_rsp_msg *rsp_msg, int out_len)
+{
+ struct xsc_cmdq_mbox_buf *mbox_buf;
+ struct xsc_cmdq_mbox *mbox;
+ int copy;
+ uint8_t *data = data_out;
+
+ if (data == NULL || rsp_msg == NULL)
+ return -1;
+
+ copy = RTE_MIN((uint32_t)out_len, sizeof(rsp_msg->hdr.data));
+ memcpy(data, rsp_msg->hdr.data, copy);
+ out_len -= copy;
+ data += copy;
+
+ mbox = rsp_msg->next;
+ while (out_len > 0) {
+ if (mbox == NULL)
+ return -1;
+ copy = RTE_MIN(out_len, XSC_CMDQ_DATA_SIZE);
+ mbox_buf = mbox->buf;
+ if (!mbox_buf->owner_status)
+ PMD_DRV_LOG(ERR, "Failed to check cmd owner");
+ memcpy(data, mbox_buf->data, copy);
+ data += copy;
+ out_len -= copy;
+ mbox = mbox->next;
+ }
+
+ return 0;
+}
+
+static enum xsc_cmd_status
+xsc_cmdq_wait_completion(struct xsc_dev *xdev, struct xsc_cmdq_rsp_msg *rsp_msg)
+{
+ struct xsc_vfio_priv *priv = (struct xsc_vfio_priv *)xdev->dev_priv;
+ struct xsc_cmd_queue *cmdq = priv->cmdq;
+ volatile struct xsc_cmdq_rsp_layout *rsp_lay;
+ struct xsc_cmd_outbox_hdr *out_hdr = (struct xsc_cmd_outbox_hdr *)rsp_msg->hdr.data;
+ int count = (XSC_CMDQ_WAIT_TIMEOUT * 1000) / XSC_CMDQ_WAIT_DELAY_MS;
+ uint32_t rsp_pid;
+ uint8_t cmd_status;
+ uint32_t i;
+
+ while (count-- > 0) {
+ rsp_pid = rte_read32((uint8_t *)xdev->bar_addr + cmdq->config->rsp_pid_addr);
+ if (rsp_pid == cmdq->rsp_cid) {
+ rte_delay_ms(XSC_CMDQ_WAIT_DELAY_MS);
+ continue;
+ }
+
+ rsp_lay = cmdq->rsp_lay + cmdq->rsp_cid;
+ if (cmdq->owner_learn == 0) {
+ /* First time learning owner_bit from hardware */
+ cmdq->owner_bit = rsp_lay->owner_bit;
+ cmdq->owner_learn = 1;
+ }
+
+ /* Waiting for dma to complete */
+ if (cmdq->owner_bit != rsp_lay->owner_bit)
+ continue;
+
+ for (i = 0; i < XSC_CMDQ_RSP_INLINE_SIZE; i++)
+ rsp_msg->hdr.data[i] = rsp_lay->out[i];
+
+ cmdq->rsp_cid = (cmdq->rsp_cid + 1) & cmdq->depth_m;
+ rte_write32(cmdq->rsp_cid, (uint8_t *)xdev->bar_addr + cmdq->config->rsp_cid_addr);
+
+ /* Change owner bit */
+ if (cmdq->rsp_cid == 0)
+ cmdq->owner_bit = !cmdq->owner_bit;
+
+ cmd_status = out_hdr->status;
+ if (cmd_status != 0)
+ return XSC_CMD_FAIL;
+ return XSC_CMD_SUCC;
+ }
+
+ return XSC_CMD_TIMEOUT;
+}
+
+static int
+xsc_cmdq_dummy_invoke(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq, uint32_t start, int num)
+{
+ struct xsc_cmdq_dummy_mbox_in in;
+ struct xsc_cmdq_dummy_mbox_out out;
+ struct xsc_cmdq_req_msg *req_msg = NULL;
+ struct xsc_cmdq_rsp_msg *rsp_msg = NULL;
+ struct xsc_cmdq_req_layout *req_lay;
+ int in_len = sizeof(in);
+ int out_len = sizeof(out);
+ int ret, i;
+ uint32_t start_pid = start;
+
+ memset(&in, 0, sizeof(in));
+ in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_DUMMY);
+
+ ret = xsc_cmdq_msg_construct(xdev, &req_msg, in_len, &rsp_msg, out_len);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to construct cmd msg for dummy exec");
+ return -1;
+ }
+
+ ret = xsc_cmdq_req_msg_copy(req_msg, &in, in_len);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to copy cmd buf to request msg for dummy exec");
+ goto error;
+ }
+
+ rte_spinlock_lock(&cmdq->lock);
+
+ for (i = 0; i < num; i++) {
+ req_lay = cmdq->req_lay + start_pid;
+ memset(req_lay, 0, sizeof(*req_lay));
+ memcpy(req_lay->in, req_msg->hdr.data, sizeof(req_lay->in));
+ req_lay->inlen = rte_cpu_to_be_32(req_msg->len);
+ req_lay->outlen = rte_cpu_to_be_32(rsp_msg->len);
+ req_lay->sig = 0xff;
+ req_lay->idx = 0;
+ req_lay->type = XSC_CMDQ_REQ_TYPE;
+ start_pid = (start_pid + 1) & cmdq->depth_m;
+ }
+
+ /* Ring doorbell after the descriptor is valid */
+ rte_write32(cmdq->req_pid, (uint8_t *)xdev->bar_addr + cmdq->config->req_pid_addr);
+
+ ret = xsc_cmdq_wait_completion(xdev, rsp_msg);
+ rte_spinlock_unlock(&cmdq->lock);
+
+error:
+ xsc_cmdq_msg_destruct(xdev, &req_msg, &rsp_msg);
+ return ret;
+}
+
+static int
+xsc_cmdq_req_status_restore(struct xsc_dev *xdev, struct xsc_cmd_queue *cmdq)
+{
+ uint32_t req_pid, req_cid;
+ uint32_t cnt;
+
+ req_pid = rte_read32((uint8_t *)xdev->bar_addr + cmdq->config->req_pid_addr);
+ req_cid = rte_read32((uint8_t *)xdev->bar_addr + cmdq->config->req_cid_addr);
+
+ if (req_pid >= (uint32_t)(1 << cmdq->depth_n) ||
+ req_cid >= (uint32_t)(1 << cmdq->depth_n)) {
+ PMD_DRV_LOG(ERR, "Request pid %u and cid %u must be less than %u",
+ req_pid, req_cid, 1 << cmdq->depth_n);
+ return -1;
+ }
+
+ cmdq->req_pid = req_pid;
+ if (req_pid == req_cid)
+ return 0;
+
+ cnt = (req_pid > req_cid) ? (req_pid - req_cid) :
+ ((1 << cmdq->depth_n) + req_pid - req_cid);
+ if (xsc_cmdq_dummy_invoke(xdev, cmdq, req_cid, cnt) != 0) {
+ PMD_DRV_LOG(ERR, "Failed to dummy invoke xsc cmd");
+ return -1;
+ }
+
+ return 0;
+}
+
+void
+xsc_vfio_mbox_destroy(struct xsc_cmd_queue *cmdq)
+{
+ if (cmdq == NULL)
+ return;
+
+ rte_memzone_free(cmdq->req_mz);
+ rte_memzone_free(cmdq->rsp_mz);
+ rte_mempool_free(cmdq->mbox_buf_pool);
+ rte_free(cmdq);
+}
+
+int
+xsc_vfio_mbox_init(struct xsc_dev *xdev)
+{
+ struct xsc_cmd_queue *cmdq;
+ struct xsc_vfio_priv *priv = (struct xsc_vfio_priv *)xdev->dev_priv;
+ char name[RTE_MEMZONE_NAMESIZE] = { 0 };
+ uint32_t size;
+
+ cmdq = rte_zmalloc(NULL, sizeof(*cmdq), RTE_CACHE_LINE_SIZE);
+ if (cmdq == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory for xsc_cmd_queue");
+ return -1;
+ }
+
+ snprintf(name, RTE_MEMZONE_NAMESIZE, "%s_cmdq", xdev->pci_dev->device.name);
+ size = (1 << XSC_CMDQ_DEPTH_LOG) * sizeof(struct xsc_cmdq_req_layout);
+ cmdq->req_mz = rte_memzone_reserve_aligned(name,
+ size, SOCKET_ID_ANY,
+ RTE_MEMZONE_IOVA_CONTIG,
+ XSC_PAGE_SIZE);
+ if (cmdq->req_mz == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory for cmd queue");
+ goto error;
+ }
+ cmdq->req_lay = cmdq->req_mz->addr;
+
+ snprintf(name, RTE_MEMZONE_NAMESIZE, "%s_cmd_cq", xdev->pci_dev->device.name);
+ size = (1 << XSC_CMDQ_DEPTH_LOG) * sizeof(struct xsc_cmdq_rsp_layout);
+ cmdq->rsp_mz = rte_memzone_reserve_aligned(name,
+ size, SOCKET_ID_ANY,
+ RTE_MEMZONE_IOVA_CONTIG,
+ XSC_PAGE_SIZE);
+ if (cmdq->rsp_mz == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc memory for cmd cq");
+ goto error;
+ }
+ cmdq->rsp_lay = cmdq->rsp_mz->addr;
+
+ snprintf(name, RTE_MEMZONE_NAMESIZE, "%s_mempool", xdev->pci_dev->device.name);
+ cmdq->mbox_buf_pool = rte_mempool_create(name, XSC_MBOX_BUF_NUM,
+ sizeof(struct xsc_cmdq_mbox_buf),
+ XSC_MBOX_BUF_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (cmdq->mbox_buf_pool == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to create mailbox buf pool");
+ goto error;
+ }
+
+ xsc_cmdq_config_init(xdev, cmdq);
+ xsc_cmdq_rsp_cid_update(xdev, cmdq);
+ xsc_cmdq_depth_set(xdev, cmdq);
+ if (xsc_cmdq_elt_size_check(xdev, cmdq) != 0)
+ goto error;
+
+ xsc_cmdq_req_base_addr_set(xdev, cmdq);
+ xsc_cmdq_rsp_base_addr_set(xdev, cmdq);
+ /* Check request status and restore it */
+ if (xsc_cmdq_req_status_restore(xdev, cmdq) != 0)
+ goto error;
+
+ rte_spinlock_init(&cmdq->lock);
+ priv->cmdq = cmdq;
+ return 0;
+
+error:
+ xsc_vfio_mbox_destroy(cmdq);
+ return -1;
+}
+
+static enum xsc_cmd_status
+xsc_cmdq_invoke(struct xsc_dev *xdev, struct xsc_cmdq_req_msg *req_msg,
+ struct xsc_cmdq_rsp_msg *rsp_msg)
+{
+ struct xsc_vfio_priv *priv = (struct xsc_vfio_priv *)xdev->dev_priv;
+ struct xsc_cmd_queue *cmdq = priv->cmdq;
+ struct xsc_cmdq_req_layout *req_lay;
+ enum xsc_cmd_status status = XSC_CMD_FAIL;
+
+ rte_spinlock_lock(&cmdq->lock);
+ req_lay = cmdq->req_lay + cmdq->req_pid;
+ memset(req_lay, 0, sizeof(*req_lay));
+ memcpy(req_lay->in, req_msg->hdr.data, sizeof(req_lay->in));
+ if (req_msg->next != NULL)
+ req_lay->in_ptr = rte_cpu_to_be_64(req_msg->next->buf_dma);
+ req_lay->inlen = rte_cpu_to_be_32(req_msg->len);
+
+ if (rsp_msg->next != NULL)
+ req_lay->out_ptr = rte_cpu_to_be_64(rsp_msg->next->buf_dma);
+ req_lay->outlen = rte_cpu_to_be_32(rsp_msg->len);
+
+ req_lay->sig = 0xff;
+ req_lay->idx = 0;
+ req_lay->type = XSC_CMDQ_REQ_TYPE;
+
+ /* Ring doorbell after the descriptor is valid */
+ cmdq->req_pid = (cmdq->req_pid + 1) & cmdq->depth_m;
+ rte_write32(cmdq->req_pid, (uint8_t *)xdev->bar_addr + cmdq->config->req_pid_addr);
+
+ status = xsc_cmdq_wait_completion(xdev, rsp_msg);
+ rte_spinlock_unlock(&cmdq->lock);
+
+ return status;
+}
+
+int
+xsc_vfio_mbox_exec(struct xsc_dev *xdev, void *data_in,
+ int in_len, void *data_out, int out_len)
+{
+ struct xsc_cmdq_req_msg *req_msg = NULL;
+ struct xsc_cmdq_rsp_msg *rsp_msg = NULL;
+ int ret;
+ enum xsc_cmd_status status;
+
+ ret = xsc_cmdq_msg_construct(xdev, &req_msg, in_len, &rsp_msg, out_len);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to construct cmd msg");
+ return -1;
+ }
+
+ ret = xsc_cmdq_req_msg_copy(req_msg, data_in, in_len);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to copy cmd buf to request msg");
+ goto error;
+ }
+
+ status = xsc_cmdq_invoke(xdev, req_msg, rsp_msg);
+ if (status != XSC_CMD_SUCC) {
+ PMD_DRV_LOG(ERR, "Failed to invoke xsc cmd, %s",
+ xsc_cmd_error[status]);
+ ret = -1;
+ goto error;
+ }
+
+ ret = xsc_cmdq_rsp_msg_copy(data_out, rsp_msg, out_len);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to copy response msg to out data");
+ goto error;
+ }
+
+error:
+ xsc_cmdq_msg_destruct(xdev, &req_msg, &rsp_msg);
+ return ret;
+}
diff --git a/drivers/net/xsc/xsc_vfio_mbox.h b/drivers/net/xsc/xsc_vfio_mbox.h
new file mode 100644
index 0000000000..49ca84f7ec
--- /dev/null
+++ b/drivers/net/xsc/xsc_vfio_mbox.h
@@ -0,0 +1,142 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_CMDQ_H_
+#define _XSC_CMDQ_H_
+
+#include <rte_common.h>
+#include <rte_mempool.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+#include <rte_byteorder.h>
+#include <rte_io.h>
+
+#include "xsc_dev.h"
+#include "xsc_cmd.h"
+
+#define XSC_CMDQ_DATA_SIZE 512
+#define XSC_CMDQ_REQ_INLINE_SIZE 8
+#define XSC_CMDQ_RSP_INLINE_SIZE 14
+
+struct xsc_cmdq_config {
+ uint32_t req_pid_addr;
+ uint32_t req_cid_addr;
+ uint32_t rsp_pid_addr;
+ uint32_t rsp_cid_addr;
+ uint32_t req_h_addr;
+ uint32_t req_l_addr;
+ uint32_t rsp_h_addr;
+ uint32_t rsp_l_addr;
+ uint32_t elt_sz_addr;
+ uint32_t depth_addr;
+};
+
+struct xsc_cmd_queue {
+ struct xsc_cmdq_req_layout *req_lay;
+ struct xsc_cmdq_rsp_layout *rsp_lay;
+ const struct rte_memzone *req_mz;
+ const struct rte_memzone *rsp_mz;
+ uint32_t req_pid;
+ uint32_t rsp_cid;
+ uint8_t owner_bit; /* CMDQ owner bit */
+ uint8_t owner_learn; /* Learn owner bit from HW */
+ uint8_t depth_n; /* Log 2 of CMDQ depth */
+ uint8_t depth_m; /* CMDQ depth mask */
+ struct rte_mempool *mbox_buf_pool; /* CMDQ data pool */
+ struct xsc_cmdq_config *config;
+ rte_spinlock_t lock;
+};
+
+struct xsc_cmdq_mbox_buf {
+ uint8_t data[XSC_CMDQ_DATA_SIZE];
+ uint8_t rsv0[48];
+ rte_be64_t next; /* Next buf dma addr */
+ rte_be32_t block_num;
+ uint8_t owner_status;
+ uint8_t token;
+ uint8_t ctrl_sig;
+ uint8_t sig;
+};
+
+struct xsc_cmdq_mbox {
+ struct xsc_cmdq_mbox_buf *buf;
+ rte_iova_t buf_dma;
+ struct xsc_cmdq_mbox *next;
+};
+
+/* CMDQ request msg inline */
+struct xsc_cmdq_req_hdr {
+ rte_be32_t data[XSC_CMDQ_REQ_INLINE_SIZE];
+};
+
+struct xsc_cmdq_req_msg {
+ uint32_t len;
+ struct xsc_cmdq_req_hdr hdr;
+ struct xsc_cmdq_mbox *next;
+};
+
+/* CMDQ response msg inline */
+struct xsc_cmdq_rsp_hdr {
+ rte_be32_t data[XSC_CMDQ_RSP_INLINE_SIZE];
+};
+
+struct xsc_cmdq_rsp_msg {
+ uint32_t len;
+ struct xsc_cmdq_rsp_hdr hdr;
+ struct xsc_cmdq_mbox *next;
+};
+
+/* HW will use this for some records (e.g. vf_id) */
+struct xsc_cmdq_rsv {
+ uint16_t vf_id;
+ uint8_t rsv[2];
+};
+
+/* CMDQ request entry layout */
+struct xsc_cmdq_req_layout {
+ struct xsc_cmdq_rsv rsv0;
+ rte_be32_t inlen;
+ rte_be64_t in_ptr;
+ rte_be32_t in[XSC_CMDQ_REQ_INLINE_SIZE];
+ rte_be64_t out_ptr;
+ rte_be32_t outlen;
+ uint8_t token;
+ uint8_t sig;
+ uint8_t idx;
+ uint8_t type:7;
+ uint8_t owner_bit:1;
+};
+
+/* CMDQ response entry layout */
+struct xsc_cmdq_rsp_layout {
+ struct xsc_cmdq_rsv rsv0;
+ rte_be32_t out[XSC_CMDQ_RSP_INLINE_SIZE];
+ uint8_t token;
+ uint8_t sig;
+ uint8_t idx;
+ uint8_t type:7;
+ uint8_t owner_bit:1;
+};
+
+struct xsc_cmdq_dummy_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ uint8_t rsv[8];
+};
+
+struct xsc_cmdq_dummy_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ uint8_t rsv[8];
+};
+
+struct xsc_vfio_priv {
+ struct xsc_cmd_queue *cmdq;
+};
+
+int xsc_vfio_mbox_init(struct xsc_dev *xdev);
+void xsc_vfio_mbox_destroy(struct xsc_cmd_queue *cmdq);
+int xsc_vfio_mbox_exec(struct xsc_dev *xdev,
+ void *data_in, int in_len,
+ void *data_out, int out_len);
+
+#endif /* _XSC_CMDQ_H_ */
--
2.25.1
* [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
From: WanRenyong @ 2025-01-03 15:04 UTC
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
The XSC PMD is designed to support both VFIO and private kernel drivers.
This commit adds xsc dev ops to support the VFIO driver.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
Signed-off-by: Na Na <nana@yunsilicon.com>
---
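Illustrative sketch (not part of the patch): a backend plugs into the
core by registering a struct xsc_dev_ops keyed on the kernel driver
type, following the xsc_dev_ops_register() mechanism from patch 02
(the function bodies below are stubs):

    static int xsc_vfio_init_stub(struct xsc_dev *xdev) { (void)xdev; return 0; }
    static int xsc_vfio_close_stub(struct xsc_dev *xdev) { (void)xdev; return 0; }

    static struct xsc_dev_ops xsc_vfio_ops_sketch = {
        .kdrv = RTE_PCI_KDRV_VFIO,
        .dev_init = xsc_vfio_init_stub,
        .dev_close = xsc_vfio_close_stub,
    };

    /* A constructor runs before EAL probing, so the ops are already
     * registered when xsc_dev_init() looks them up by pci_dev->kdrv. */
    RTE_INIT(xsc_vfio_ops_sketch_register)
    {
        xsc_dev_ops_register(&xsc_vfio_ops_sketch);
    }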
drivers/net/xsc/meson.build | 1 +
drivers/net/xsc/xsc_defs.h | 8 +
drivers/net/xsc/xsc_dev.h | 32 ++
drivers/net/xsc/xsc_rxtx.h | 102 +++++
drivers/net/xsc/xsc_vfio.c | 750 ++++++++++++++++++++++++++++++++++++
5 files changed, 893 insertions(+)
create mode 100644 drivers/net/xsc/xsc_rxtx.h
create mode 100644 drivers/net/xsc/xsc_vfio.c
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
index df4c8ea499..4e20b30438 100644
--- a/drivers/net/xsc/meson.build
+++ b/drivers/net/xsc/meson.build
@@ -10,4 +10,5 @@ sources = files(
'xsc_ethdev.c',
'xsc_dev.c',
'xsc_vfio_mbox.c',
+ 'xsc_vfio.c',
)
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index a4b36685a6..8fd59133bc 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -16,6 +16,14 @@
#define XSC_VFREP_BASE_LOGICAL_PORT 1081
+#define XSC_PF_TX_DB_ADDR 0x4802000
+#define XSC_PF_RX_DB_ADDR 0x4804000
+#define XSC_PF_CQ_DB_ADDR 0x2120000
+
+#define XSC_VF_RX_DB_ADDR 0x8d4
+#define XSC_VF_TX_DB_ADDR 0x8d0
+#define XSC_VF_CQ_DB_ADDR 0x8c4
+
enum xsc_nic_mode {
XSC_NIC_MODE_LEGACY,
XSC_NIC_MODE_SWITCHDEV,
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index 7eae78d9bf..deeddeb7f1 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -14,6 +14,7 @@
#include "xsc_defs.h"
#include "xsc_log.h"
+#include "xsc_rxtx.h"
#define XSC_PPH_MODE_ARG "pph_mode"
#define XSC_NIC_MODE_ARG "nic_mode"
@@ -25,6 +26,18 @@
#define XSC_DEV_PCT_IDX_INVALID 0xFFFFFFFF
#define XSC_DEV_REPR_ID_INVALID 0x7FFFFFFF
+enum xsc_queue_type {
+ XSC_QUEUE_TYPE_RDMA_RC = 0,
+ XSC_QUEUE_TYPE_RDMA_MAD = 1,
+ XSC_QUEUE_TYPE_RAW = 2,
+ XSC_QUEUE_TYPE_VIRTIO_NET = 3,
+ XSC_QUEUE_TYPE_VIRTIO_BLK = 4,
+ XSC_QUEUE_TYPE_RAW_TPE = 5,
+ XSC_QUEUE_TYPE_RAW_TSO = 6,
+ XSC_QUEUE_TYPE_RAW_TX = 7,
+ XSC_QUEUE_TYPE_INVALID = 0xFF,
+};
+
struct xsc_hwinfo {
uint8_t valid; /* 1: current phy info is valid, 0 : invalid */
uint32_t pcie_no; /* pcie number , 0 or 1 */
@@ -120,6 +133,25 @@ struct xsc_dev_ops {
enum rte_pci_kernel_driver kdrv;
int (*dev_init)(struct xsc_dev *xdev);
int (*dev_close)(struct xsc_dev *xdev);
+ int (*get_mac)(struct xsc_dev *xdev, uint8_t *mac);
+ int (*set_link_up)(struct xsc_dev *xdev);
+ int (*set_link_down)(struct xsc_dev *xdev);
+ int (*link_update)(struct xsc_dev *xdev, uint8_t funcid_type, int wait_to_complete);
+ int (*set_mtu)(struct xsc_dev *xdev, uint16_t mtu);
+ int (*destroy_qp)(void *qp);
+ int (*destroy_cq)(void *cq);
+ int (*modify_qp_status)(struct xsc_dev *xdev,
+ uint32_t qpn, int num, int opcode);
+ int (*modify_qp_qostree)(struct xsc_dev *xdev, uint16_t qpn);
+
+ int (*rx_cq_create)(struct xsc_dev *xdev, struct xsc_rx_cq_params *cq_params,
+ struct xsc_rx_cq_info *cq_info);
+ int (*tx_cq_create)(struct xsc_dev *xdev, struct xsc_tx_cq_params *cq_params,
+ struct xsc_tx_cq_info *cq_info);
+ int (*tx_qp_create)(struct xsc_dev *xdev, struct xsc_tx_qp_params *qp_params,
+ struct xsc_tx_qp_info *qp_info);
+ int (*mailbox_exec)(struct xsc_dev *xdev, void *data_in,
+ int in_len, void *data_out, int out_len);
};
void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
diff --git a/drivers/net/xsc/xsc_rxtx.h b/drivers/net/xsc/xsc_rxtx.h
new file mode 100644
index 0000000000..725a5f18d1
--- /dev/null
+++ b/drivers/net/xsc/xsc_rxtx.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_RXTX_H_
+#define _XSC_RXTX_H_
+
+#include <rte_byteorder.h>
+
+struct xsc_wqe_data_seg {
+ union {
+ struct {
+ uint8_t in_line:1;
+ uint8_t rsv0:7;
+ };
+ struct {
+ rte_le32_t rsv1:1;
+ rte_le32_t seg_len:31;
+ rte_le32_t lkey;
+ rte_le64_t va;
+ };
+ struct {
+ uint8_t rsv2:1;
+ uint8_t len:7;
+ uint8_t in_line_data[15];
+ };
+ };
+} __rte_packed;
+
+struct xsc_cqe {
+ union {
+ uint8_t msg_opcode;
+ struct {
+ uint8_t error_code:7;
+ uint8_t is_error:1;
+ };
+ };
+ rte_le16_t qp_id:15;
+ rte_le16_t rsv:1;
+ uint8_t se:1;
+ uint8_t has_pph:1;
+ uint8_t type:1;
+ uint8_t with_immdt:1;
+ uint8_t csum_err:4;
+ rte_le32_t imm_data;
+ rte_le32_t msg_len;
+ rte_le32_t vni;
+ rte_le32_t tsl;
+ rte_le32_t tsh:16;
+ rte_le32_t wqe_id:16;
+ rte_le16_t rsv2[3];
+ rte_le16_t rsv3:15;
+ rte_le16_t owner:1;
+} __rte_packed;
+
+struct xsc_tx_cq_params {
+ uint16_t port_id;
+ uint16_t qp_id;
+ uint16_t elts_n;
+};
+
+struct xsc_tx_cq_info {
+ void *cq;
+ void *cqes;
+ uint32_t *cq_db;
+ uint32_t cqn;
+ uint16_t cqe_s;
+ uint16_t cqe_n;
+};
+
+struct xsc_tx_qp_params {
+ void *cq;
+ uint64_t tx_offloads;
+ uint16_t port_id;
+ uint16_t qp_id;
+ uint16_t elts_n;
+};
+
+struct xsc_tx_qp_info {
+ void *qp;
+ void *wqes;
+ uint32_t *qp_db;
+ uint32_t qpn;
+ uint16_t tso_en;
+ uint16_t wqe_n;
+};
+
+struct xsc_rx_cq_params {
+ uint16_t port_id;
+ uint16_t qp_id;
+ uint16_t wqe_s;
+};
+
+struct xsc_rx_cq_info {
+ void *cq;
+ void *cqes;
+ uint32_t *cq_db;
+ uint32_t cqn;
+ uint16_t cqe_n;
+};
+
+#endif /* _XSC_RXTX_H_ */
diff --git a/drivers/net/xsc/xsc_vfio.c b/drivers/net/xsc/xsc_vfio.c
new file mode 100644
index 0000000000..1142aedeac
--- /dev/null
+++ b/drivers/net/xsc/xsc_vfio.c
@@ -0,0 +1,750 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+
+#include <rte_pci.h>
+#include <ethdev_pci.h>
+#include <rte_bus_pci.h>
+#include <rte_bitops.h>
+
+#include "xsc_defs.h"
+#include "xsc_vfio_mbox.h"
+#include "xsc_ethdev.h"
+#include "xsc_rxtx.h"
+
+#define XSC_FEATURE_ONCHIP_FT_MASK RTE_BIT32(4)
+#define XSC_FEATURE_DMA_RW_TBL_MASK RTE_BIT32(8)
+#define XSC_FEATURE_PCT_EXP_MASK RTE_BIT32(19)
+#define XSC_HOST_PCIE_NO_DEFAULT 0
+#define XSC_SOC_PCIE_NO_DEFAULT 1
+
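+/* SW-to-HW frame length conversion: 14-byte Ethernet header plus device overhead */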
+#define XSC_SW2HW_MTU(mtu) ((mtu) + 14 + 4)
+#define XSC_SW2HW_RX_PKT_LEN(mtu) ((mtu) + 14 + 256)
+
+enum xsc_cq_type {
+ XSC_CQ_TYPE_NORMAL = 0,
+ XSC_CQ_TYPE_VIRTIO = 1,
+};
+
+struct xsc_vfio_cq {
+ const struct rte_memzone *mz;
+ struct xsc_dev *xdev;
+ uint32_t cqn;
+};
+
+struct xsc_vfio_qp {
+ const struct rte_memzone *mz;
+ struct xsc_dev *xdev;
+ uint32_t qpn;
+};
+
+static void
+xsc_vfio_pcie_no_init(struct xsc_hwinfo *hwinfo)
+{
+ uint32_t func_id = hwinfo->func_id;
+
+ if (func_id >= hwinfo->pf0_vf_funcid_base &&
+ func_id <= hwinfo->pf0_vf_funcid_top)
+ hwinfo->pcie_no = hwinfo->pcie_host;
+ else if (func_id >= hwinfo->pf1_vf_funcid_base &&
+ func_id <= hwinfo->pf1_vf_funcid_top)
+ hwinfo->pcie_no = hwinfo->pcie_host;
+ else if (func_id >= hwinfo->pcie0_pf_funcid_base &&
+ func_id <= hwinfo->pcie0_pf_funcid_top)
+ hwinfo->pcie_no = XSC_HOST_PCIE_NO_DEFAULT;
+ else
+ hwinfo->pcie_no = XSC_SOC_PCIE_NO_DEFAULT;
+}
+
+static int
+xsc_vfio_hwinfo_init(struct xsc_dev *xdev)
+{
+ int ret;
+ uint32_t feature;
+ int in_len, out_len, cmd_len;
+ struct xsc_cmd_query_hca_cap_mbox_in *in;
+ struct xsc_cmd_query_hca_cap_mbox_out *out;
+ struct xsc_cmd_hca_cap *hca_cap;
+
+ in_len = sizeof(struct xsc_cmd_query_hca_cap_mbox_in);
+ out_len = sizeof(struct xsc_cmd_query_hca_cap_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc dev hwinfo cmd memory");
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+
+ memset(in, 0, cmd_len);
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_QUERY_HCA_CAP);
+ in->hdr.ver = rte_cpu_to_be_16(XSC_CMD_QUERY_HCA_CAP_V1);
+ out = (struct xsc_cmd_query_hca_cap_mbox_out *)in;
+
+ ret = xsc_vfio_mbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to get dev hwinfo, err=%d, out.status=%u",
+ ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ ret = -rte_errno;
+ goto exit;
+ }
+
+ hca_cap = &out->hca_cap;
+ xdev->hwinfo.valid = 1;
+ xdev->hwinfo.func_id = rte_be_to_cpu_32(hca_cap->glb_func_id);
+ xdev->hwinfo.pcie_host = hca_cap->pcie_host;
+ xdev->hwinfo.mac_phy_port = hca_cap->mac_port;
+ xdev->hwinfo.funcid_to_logic_port_off = rte_be_to_cpu_16(hca_cap->funcid_to_logic_port);
+ xdev->hwinfo.raw_qp_id_base = rte_be_to_cpu_16(hca_cap->raweth_qp_id_base);
+ xdev->hwinfo.raw_rss_qp_id_base = rte_be_to_cpu_16(hca_cap->raweth_rss_qp_id_base);
+ xdev->hwinfo.pf0_vf_funcid_base = rte_be_to_cpu_16(hca_cap->pf0_vf_funcid_base);
+ xdev->hwinfo.pf0_vf_funcid_top = rte_be_to_cpu_16(hca_cap->pf0_vf_funcid_top);
+ xdev->hwinfo.pf1_vf_funcid_base = rte_be_to_cpu_16(hca_cap->pf1_vf_funcid_base);
+ xdev->hwinfo.pf1_vf_funcid_top = rte_be_to_cpu_16(hca_cap->pf1_vf_funcid_top);
+ xdev->hwinfo.pcie0_pf_funcid_base = rte_be_to_cpu_16(hca_cap->pcie0_pf_funcid_base);
+ xdev->hwinfo.pcie0_pf_funcid_top = rte_be_to_cpu_16(hca_cap->pcie0_pf_funcid_top);
+ xdev->hwinfo.pcie1_pf_funcid_base = rte_be_to_cpu_16(hca_cap->pcie1_pf_funcid_base);
+ xdev->hwinfo.pcie1_pf_funcid_top = rte_be_to_cpu_16(hca_cap->pcie1_pf_funcid_top);
+ xdev->hwinfo.lag_port_start = hca_cap->lag_logic_port_ofst;
+ xdev->hwinfo.raw_tpe_qp_num = rte_be_to_cpu_16(hca_cap->raw_tpe_qp_num);
+ xdev->hwinfo.send_seg_num = hca_cap->send_seg_num;
+ xdev->hwinfo.recv_seg_num = hca_cap->recv_seg_num;
+ feature = rte_be_to_cpu_32(hca_cap->feature_flag);
+ xdev->hwinfo.on_chip_tbl_vld = (feature & XSC_FEATURE_ONCHIP_FT_MASK) ? 1 : 0;
+ xdev->hwinfo.dma_rw_tbl_vld = (feature & XSC_FEATURE_DMA_RW_TBL_MASK) ? 1 : 0;
+ xdev->hwinfo.pct_compress_vld = (feature & XSC_FEATURE_PCT_EXP_MASK) ? 1 : 0;
+ xdev->hwinfo.chip_version = rte_be_to_cpu_32(hca_cap->chip_ver_l);
+ xdev->hwinfo.hca_core_clock = rte_be_to_cpu_32(hca_cap->hca_core_clock);
+ xdev->hwinfo.mac_bit = hca_cap->mac_bit;
+ xsc_vfio_pcie_no_init(&xdev->hwinfo);
+
+exit:
+ free(in);
+ return ret;
+}
+
+static int
+xsc_vfio_dev_open(struct xsc_dev *xdev)
+{
+ struct rte_pci_addr *addr = &xdev->pci_dev->addr;
+ struct xsc_vfio_priv *priv;
+
+ snprintf(xdev->name, PCI_PRI_STR_SIZE, PCI_PRI_FMT,
+ addr->domain, addr->bus, addr->devid, addr->function);
+
+ priv = rte_zmalloc(NULL, sizeof(*priv), RTE_CACHE_LINE_SIZE);
+ if (priv == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc xsc vfio priv");
+ return -ENOMEM;
+ }
+
+ xdev->dev_priv = (void *)priv;
+ return 0;
+}
+
+static int
+xsc_vfio_bar_init(struct xsc_dev *xdev)
+{
+ int ret;
+
+ ret = rte_pci_map_device(xdev->pci_dev);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to map pci device");
+ return -EINVAL;
+ }
+
+ xdev->bar_len = xdev->pci_dev->mem_resource[0].len;
+ xdev->bar_addr = (void *)xdev->pci_dev->mem_resource[0].addr;
+ if (xdev->bar_addr == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to attach dev(%s) bar", xdev->pci_dev->device.name);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int
+xsc_vfio_dev_close(struct xsc_dev *xdev)
+{
+ struct xsc_vfio_priv *vfio_priv = (struct xsc_vfio_priv *)xdev->dev_priv;
+
+ /* cmdq may be NULL when init fails before the mailbox is created */
+ if (vfio_priv->cmdq != NULL)
+ xsc_vfio_mbox_destroy(vfio_priv->cmdq);
+ rte_free(vfio_priv);
+
+ return 0;
+}
+
+static int
+xsc_vfio_destroy_qp(void *qp)
+{
+ int ret;
+ int in_len, out_len, cmd_len;
+ struct xsc_cmd_destroy_qp_mbox_in *in;
+ struct xsc_cmd_destroy_qp_mbox_out *out;
+ struct xsc_vfio_qp *data = (struct xsc_vfio_qp *)qp;
+
+ in_len = sizeof(struct xsc_cmd_destroy_qp_mbox_in);
+ out_len = sizeof(struct xsc_cmd_destroy_qp_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc qp destroy cmd memory");
+ return -rte_errno;
+ }
+ memset(in, 0, cmd_len);
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_DESTROY_QP);
+ in->qpn = rte_cpu_to_be_32(data->qpn);
+ out = (struct xsc_cmd_destroy_qp_mbox_out *)in;
+ ret = xsc_vfio_mbox_exec(data->xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to destroy qp, type=%d, err=%d, out.status=%u",
+ XSC_QUEUE_TYPE_RAW, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ ret = -rte_errno;
+ goto exit;
+ }
+
+ rte_memzone_free(data->mz);
+ rte_free(qp);
+
+exit:
+ free(in);
+ return ret;
+}
+
+static int
+xsc_vfio_destroy_cq(void *cq)
+{
+ int ret;
+ int in_len, out_len, cmd_len;
+ struct xsc_cmd_destroy_cq_mbox_in *in;
+ struct xsc_cmd_destroy_cq_mbox_out *out;
+ struct xsc_vfio_cq *data = (struct xsc_vfio_cq *)cq;
+
+ in_len = sizeof(struct xsc_cmd_destroy_cq_mbox_in);
+ out_len = sizeof(struct xsc_cmd_destroy_cq_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc cq destroy cmd memory");
+ return -rte_errno;
+ }
+ memset(in, 0, cmd_len);
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_DESTROY_CQ);
+ in->cqn = rte_cpu_to_be_32(data->cqn);
+ out = (struct xsc_cmd_destroy_cq_mbox_out *)in;
+ ret = xsc_vfio_mbox_exec(data->xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to destroy cq, type=%d, err=%d, out.status=%u",
+ XSC_QUEUE_TYPE_RAW, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ ret = -rte_errno;
+ goto exit;
+ }
+
+ rte_memzone_free(data->mz);
+ rte_free(cq);
+
+exit:
+ free(in);
+ return ret;
+}
+
+static int
+xsc_vfio_set_mtu(struct xsc_dev *xdev, uint16_t mtu)
+{
+ struct xsc_cmd_set_mtu_mbox_in in;
+ struct xsc_cmd_set_mtu_mbox_out out;
+ int ret;
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+ in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_SET_MTU);
+ in.mtu = rte_cpu_to_be_16(XSC_SW2HW_MTU(mtu));
+ in.rx_buf_sz_min = rte_cpu_to_be_16(XSC_SW2HW_RX_PKT_LEN(mtu));
+ in.mac_port = (uint8_t)xdev->hwinfo.mac_phy_port;
+
+ ret = xsc_vfio_mbox_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (ret != 0 || out.hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to set mtu, port=%d, err=%d, out.status=%u",
+ xdev->port_id, ret, out.hdr.status);
+ rte_errno = ENOEXEC;
+ ret = -rte_errno;
+ }
+
+ return ret;
+}
+
+static int
+xsc_vfio_get_mac(struct xsc_dev *xdev, uint8_t *mac)
+{
+ struct xsc_cmd_query_eth_mac_mbox_in in;
+ struct xsc_cmd_query_eth_mac_mbox_out out;
+ int ret;
+
+ memset(&in, 0, sizeof(in));
+ memset(&out, 0, sizeof(out));
+ in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_QUERY_ETH_MAC);
+ ret = xsc_vfio_mbox_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (ret != 0 || out.hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to get mtu, port=%d, err=%d, out.status=%u",
+ xdev->port_id, ret, out.hdr.status);
+ rte_errno = ENOEXEC;
+ return -rte_errno;
+ }
+
+ memcpy(mac, out.mac, RTE_ETHER_ADDR_LEN);
+
+ return 0;
+}
+
+static int
+xsc_vfio_modify_qp_status(struct xsc_dev *xdev, uint32_t qpn, int num, int opcode)
+{
+ int i, ret = 0;
+ int in_len, out_len, cmd_len;
+ struct xsc_cmd_modify_qp_mbox_in *in;
+ struct xsc_cmd_modify_qp_mbox_out *out;
+
+ in_len = sizeof(struct xsc_cmd_modify_qp_mbox_in);
+ out_len = sizeof(struct xsc_cmd_modify_qp_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc cmdq qp modify status");
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+
+ memset(in, 0, cmd_len);
+ out = (struct xsc_cmd_modify_qp_mbox_out *)in;
+
+ for (i = 0; i < num; i++) {
+ in->hdr.opcode = rte_cpu_to_be_16(opcode);
+ in->hdr.ver = 0;
+ in->qpn = rte_cpu_to_be_32(qpn + i);
+ in->no_need_wait = 1;
+
+ ret = xsc_vfio_mbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Modify qp status failed, qpn=%d, err=%d, out.status=%u",
+ qpn + i, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ ret = -rte_errno;
+ goto exit;
+ }
+ }
+
+exit:
+ free(in);
+ return ret;
+}
+
+static int
+xsc_vfio_modify_qp_qostree(struct xsc_dev *xdev, uint16_t qpn)
+{
+ int ret;
+ int in_len, out_len, cmd_len;
+ struct xsc_cmd_modify_raw_qp_mbox_in *in;
+ struct xsc_cmd_modify_raw_qp_mbox_out *out;
+
+ in_len = sizeof(struct xsc_cmd_modify_raw_qp_mbox_in);
+ out_len = sizeof(struct xsc_cmd_modify_raw_qp_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc cmdq qp modify qostree");
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+
+ memset(in, 0, cmd_len);
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_MODIFY_RAW_QP);
+ in->req.prio = 0;
+ in->req.qp_out_port = 0xFF;
+ in->req.lag_id = rte_cpu_to_be_16(xdev->hwinfo.lag_id);
+ in->req.func_id = rte_cpu_to_be_16(xdev->hwinfo.func_id);
+ in->req.dma_direct = 0;
+ in->req.qpn = rte_cpu_to_be_16(qpn);
+ out = (struct xsc_cmd_modify_raw_qp_mbox_out *)in;
+
+ ret = xsc_vfio_mbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Filed to modify qp qostree, qpn=%d, err=%d, out.status=%u",
+ qpn, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ ret = -rte_errno;
+ goto exit;
+ }
+
+exit:
+ free(in);
+ return ret;
+}
+
+static int
+xsc_vfio_rx_cq_create(struct xsc_dev *xdev, struct xsc_rx_cq_params *cq_params,
+ struct xsc_rx_cq_info *cq_info)
+{
+ int ret;
+ int pa_len;
+ uint16_t i;
+ uint16_t pa_num;
+ uint8_t log_cq_sz;
+ uint16_t cqe_n;
+ uint32_t cqe_total_sz;
+ int in_len, out_len, cmd_len;
+ char name[RTE_ETH_NAME_MAX_LEN] = { 0 };
+ uint16_t port_id = cq_params->port_id;
+ uint16_t idx = cq_params->qp_id;
+ struct xsc_vfio_cq *cq;
+ const struct rte_memzone *cq_pas = NULL;
+ volatile struct xsc_cqe (*cqes)[];
+ struct xsc_cmd_create_cq_mbox_in *in = NULL;
+ struct xsc_cmd_create_cq_mbox_out *out = NULL;
+
+ cqe_n = cq_params->wqe_s;
+ log_cq_sz = rte_log2_u32(cqe_n);
+ cqe_total_sz = cqe_n * sizeof(struct xsc_cqe);
+ pa_num = (cqe_total_sz + XSC_PAGE_SIZE - 1) / XSC_PAGE_SIZE;
+ pa_len = sizeof(uint64_t) * pa_num;
+ in_len = sizeof(struct xsc_cmd_create_cq_mbox_in) + pa_len;
+ out_len = sizeof(struct xsc_cmd_create_cq_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+
+ cq = rte_zmalloc(NULL, sizeof(struct xsc_vfio_cq), 0);
+ if (cq == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc rx cq memory");
+ return -rte_errno;
+ }
+
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc rx cq exec cmd memory");
+ goto error;
+ }
+ memset(in, 0, cmd_len);
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_CREATE_CQ);
+ in->ctx.eqn = 0;
+ in->ctx.pa_num = rte_cpu_to_be_16(pa_num);
+ in->ctx.glb_func_id = rte_cpu_to_be_16((uint16_t)xdev->hwinfo.func_id);
+ in->ctx.log_cq_sz = log_cq_sz;
+ in->ctx.cq_type = XSC_CQ_TYPE_NORMAL;
+
+ snprintf(name, sizeof(name), "mz_cqe_mem_rx_%u_%u", port_id, idx);
+ cq_pas = rte_memzone_reserve_aligned(name,
+ (XSC_PAGE_SIZE * pa_num),
+ SOCKET_ID_ANY,
+ 0, XSC_PAGE_SIZE);
+ if (cq_pas == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc rx cq pas memory");
+ goto error;
+ }
+ cq->mz = cq_pas;
+
+ for (i = 0; i < pa_num; i++)
+ in->pas[i] = rte_cpu_to_be_64(cq_pas->iova + i * XSC_PAGE_SIZE);
+
+ out = (struct xsc_cmd_create_cq_mbox_out *)in;
+ ret = xsc_vfio_mbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR,
+ "Failed to exec rx cq create cmd, port id=%d, err=%d, out.status=%u",
+ port_id, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ goto error;
+ }
+
+ cq_info->cq = (void *)cq;
+ cq_info->cqe_n = log_cq_sz;
+ cqes = (volatile struct xsc_cqe (*)[])(uintptr_t)cq_pas->addr;
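+ /* Pre-set the owner bit on every CQE so ownership starts in a known state */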
+ for (i = 0; i < (1 << cq_info->cqe_n); i++)
+ (&(*cqes)[i])->owner = 1;
+ cq_info->cqes = (void *)cqes;
+ if (xsc_dev_is_vf(xdev))
+ cq_info->cq_db = (uint32_t *)((uint8_t *)xdev->bar_addr + XSC_VF_CQ_DB_ADDR);
+ else
+ cq_info->cq_db = (uint32_t *)((uint8_t *)xdev->bar_addr + XSC_PF_CQ_DB_ADDR);
+ cq_info->cqn = rte_be_to_cpu_32(out->cqn);
+ cq->cqn = cq_info->cqn;
+ cq->xdev = xdev;
+ PMD_DRV_LOG(INFO, "Port id=%d, Rx cqe_n:%d, cqn:%d",
+ port_id, cq_info->cqe_n, cq_info->cqn);
+
+ free(in);
+ return 0;
+
+error:
+ free(in);
+ rte_memzone_free(cq_pas);
+ rte_free(cq);
+ return -rte_errno;
+}
+
+static int
+xsc_vfio_tx_cq_create(struct xsc_dev *xdev, struct xsc_tx_cq_params *cq_params,
+ struct xsc_tx_cq_info *cq_info)
+{
+ struct xsc_vfio_cq *cq = NULL;
+ char name[RTE_ETH_NAME_MAX_LEN] = {0};
+ struct xsc_cmd_create_cq_mbox_in *in = NULL;
+ struct xsc_cmd_create_cq_mbox_out *out = NULL;
+ const struct rte_memzone *cq_pas = NULL;
+ struct xsc_cqe *cqes;
+ int in_len, out_len, cmd_len;
+ uint16_t pa_num;
+ uint16_t log_cq_sz;
+ int ret = 0;
+ int cqe_s = 1 << cq_params->elts_n;
+ uint64_t iova;
+ int i;
+
+ cq = rte_zmalloc(NULL, sizeof(struct xsc_vfio_cq), 0);
+ if (cq == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc tx cq memory");
+ return -rte_errno;
+ }
+
+ log_cq_sz = rte_log2_u32(cqe_s);
+ pa_num = (((1 << log_cq_sz) * sizeof(struct xsc_cqe)) / XSC_PAGE_SIZE);
+
+ snprintf(name, sizeof(name), "mz_cqe_mem_tx_%u_%u", cq_params->port_id, cq_params->qp_id);
+ cq_pas = rte_memzone_reserve_aligned(name,
+ (XSC_PAGE_SIZE * pa_num),
+ SOCKET_ID_ANY,
+ 0, XSC_PAGE_SIZE);
+ if (cq_pas == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc tx cq pas memory");
+ goto error;
+ }
+
+ cq->mz = cq_pas;
+ in_len = (sizeof(struct xsc_cmd_create_cq_mbox_in) + (pa_num * sizeof(uint64_t)));
+ out_len = sizeof(struct xsc_cmd_create_cq_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+ in = (struct xsc_cmd_create_cq_mbox_in *)malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc tx cq exec cmd memory");
+ goto error;
+ }
+ memset(in, 0, cmd_len);
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_CREATE_CQ);
+ in->ctx.eqn = 0;
+ in->ctx.pa_num = rte_cpu_to_be_16(pa_num);
+ in->ctx.glb_func_id = rte_cpu_to_be_16((uint16_t)xdev->hwinfo.func_id);
+ in->ctx.log_cq_sz = rte_log2_u32(cqe_s);
+ in->ctx.cq_type = XSC_CQ_TYPE_NORMAL;
+ iova = cq->mz->iova;
+ for (i = 0; i < pa_num; i++)
+ in->pas[i] = rte_cpu_to_be_64(iova + i * XSC_PAGE_SIZE);
+
+ out = (struct xsc_cmd_create_cq_mbox_out *)in;
+ ret = xsc_vfio_mbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to create tx cq, port id=%u, err=%d, out.status=%u",
+ cq_params->port_id, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ goto error;
+ }
+
+ cq->cqn = rte_be_to_cpu_32(out->cqn);
+ cq->xdev = xdev;
+
+ cq_info->cq = cq;
+ cqes = (struct xsc_cqe *)((uint8_t *)cq->mz->addr);
+ if (xsc_dev_is_vf(xdev))
+ cq_info->cq_db = (uint32_t *)((uint8_t *)xdev->bar_addr + XSC_VF_CQ_DB_ADDR);
+ else
+ cq_info->cq_db = (uint32_t *)((uint8_t *)xdev->bar_addr + XSC_PF_CQ_DB_ADDR);
+ cq_info->cqn = cq->cqn;
+ cq_info->cqe_s = cqe_s;
+ cq_info->cqe_n = log_cq_sz;
+
+ for (i = 0; i < cq_info->cqe_s; i++)
+ ((volatile struct xsc_cqe *)(cqes + i))->owner = 1;
+ cq_info->cqes = cqes;
+
+ free(in);
+ return 0;
+
+error:
+ free(in);
+ rte_memzone_free(cq_pas);
+ rte_free(cq);
+ return -rte_errno;
+}
+
+static int
+xsc_vfio_tx_qp_create(struct xsc_dev *xdev, struct xsc_tx_qp_params *qp_params,
+ struct xsc_tx_qp_info *qp_info)
+{
+ struct xsc_cmd_create_qp_mbox_in *in = NULL;
+ struct xsc_cmd_create_qp_mbox_out *out = NULL;
+ const struct rte_memzone *qp_pas = NULL;
+ struct xsc_vfio_cq *cq = (struct xsc_vfio_cq *)qp_params->cq;
+ struct xsc_vfio_qp *qp = NULL;
+ int in_len, out_len, cmd_len;
+ int ret = 0;
+ uint32_t send_ds_num = xdev->hwinfo.send_seg_num;
+ int wqe_s = 1 << qp_params->elts_n;
+ uint16_t pa_num;
+ uint8_t log_ele = 0;
+ uint32_t log_rq_sz = 0;
+ uint32_t log_sq_sz = 0;
+ int i;
+ uint64_t iova;
+ char name[RTE_ETH_NAME_MAX_LEN] = {0};
+
+ qp = rte_zmalloc(NULL, sizeof(struct xsc_vfio_qp), 0);
+ if (qp == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc tx qp memory");
+ return -rte_errno;
+ }
+
+ log_sq_sz = rte_log2_u32(wqe_s * send_ds_num);
+ log_ele = rte_log2_u32(sizeof(struct xsc_wqe_data_seg));
+ pa_num = ((1 << (log_rq_sz + log_sq_sz + log_ele))) / XSC_PAGE_SIZE;
+
+ snprintf(name, sizeof(name), "mz_wqe_mem_tx_%u_%u", qp_params->port_id, qp_params->qp_id);
+ qp_pas = rte_memzone_reserve_aligned(name,
+ (XSC_PAGE_SIZE * pa_num),
+ SOCKET_ID_ANY,
+ 0, XSC_PAGE_SIZE);
+ if (qp_pas == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc tx qp pas memory");
+ goto error;
+ }
+ qp->mz = qp_pas;
+
+ in_len = (sizeof(struct xsc_cmd_create_qp_mbox_in) + (pa_num * sizeof(uint64_t)));
+ out_len = sizeof(struct xsc_cmd_create_qp_mbox_out);
+ cmd_len = RTE_MAX(in_len, out_len);
+ in = (struct xsc_cmd_create_qp_mbox_in *)malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc tx qp exec cmd memory");
+ goto error;
+ }
+ memset(in, 0, cmd_len);
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_CREATE_QP);
+ in->req.input_qpn = 0;
+ in->req.pa_num = rte_cpu_to_be_16(pa_num);
+ in->req.qp_type = XSC_QUEUE_TYPE_RAW_TX;
+ in->req.log_sq_sz = log_sq_sz;
+ in->req.log_rq_sz = log_rq_sz;
+ in->req.dma_direct = 0;
+ in->req.pdn = 0;
+ in->req.cqn_send = rte_cpu_to_be_16((uint16_t)cq->cqn);
+ in->req.cqn_recv = 0;
+ in->req.glb_funcid = rte_cpu_to_be_16((uint16_t)xdev->hwinfo.func_id);
+ iova = qp->mz->iova;
+ for (i = 0; i < pa_num; i++)
+ in->req.pas[i] = rte_cpu_to_be_64(iova + i * XSC_PAGE_SIZE);
+
+ out = (struct xsc_cmd_create_qp_mbox_out *)in;
+ ret = xsc_vfio_mbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR, "Failed to create tx qp, port id=%u, err=%d, out.status=%u",
+ qp_params->port_id, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ goto error;
+ }
+
+ qp->qpn = rte_be_to_cpu_32(out->qpn);
+ qp->xdev = xdev;
+
+ qp_info->qp = qp;
+ qp_info->qpn = qp->qpn;
+ qp_info->wqes = (struct xsc_wqe *)qp->mz->addr;
+ qp_info->wqe_n = rte_log2_u32(wqe_s);
+
+ if (xsc_dev_is_vf(xdev))
+ qp_info->qp_db = (uint32_t *)((uint8_t *)xdev->bar_addr + XSC_VF_TX_DB_ADDR);
+ else
+ qp_info->qp_db = (uint32_t *)((uint8_t *)xdev->bar_addr + XSC_PF_TX_DB_ADDR);
+
+ free(in);
+ return 0;
+
+error:
+ free(in);
+ rte_memzone_free(qp_pas);
+ rte_free(qp);
+ return -rte_errno;
+}
+
+static int
+xsc_vfio_dev_init(struct xsc_dev *xdev)
+{
+ int ret;
+
+ ret = xsc_vfio_dev_open(xdev);
+ if (ret != 0)
+ goto open_fail;
+
+ ret = xsc_vfio_bar_init(xdev);
+ if (ret != 0)
+ goto init_fail;
+
+ if (xsc_vfio_mbox_init(xdev) != 0)
+ goto init_fail;
+
+ ret = xsc_vfio_hwinfo_init(xdev);
+ if (ret != 0)
+ goto init_fail;
+
+ return 0;
+
+init_fail:
+ xsc_vfio_dev_close(xdev);
+
+open_fail:
+ return -1;
+}
+
+static struct xsc_dev_ops *xsc_vfio_ops = &(struct xsc_dev_ops) {
+ .kdrv = RTE_PCI_KDRV_VFIO,
+ .dev_init = xsc_vfio_dev_init,
+ .dev_close = xsc_vfio_dev_close,
+ .set_mtu = xsc_vfio_set_mtu,
+ .get_mac = xsc_vfio_get_mac,
+ .destroy_qp = xsc_vfio_destroy_qp,
+ .destroy_cq = xsc_vfio_destroy_cq,
+ .modify_qp_status = xsc_vfio_modify_qp_status,
+ .modify_qp_qostree = xsc_vfio_modify_qp_qostree,
+ .rx_cq_create = xsc_vfio_rx_cq_create,
+ .tx_cq_create = xsc_vfio_tx_cq_create,
+ .tx_qp_create = xsc_vfio_tx_qp_create,
+ .mailbox_exec = xsc_vfio_mbox_exec,
+};
+
+RTE_INIT(xsc_vfio_ops_reg)
+{
+ xsc_dev_ops_register(xsc_vfio_ops);
+}
--
2.25.1
* [PATCH v4 05/15] net/xsc: add PCT interfaces
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (3 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 06/15] net/xsc: initialize xsc representors WanRenyong
` (9 subsequent siblings)
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
PCT is the abbreviation of packet classifier table. It is built
into the NP to define the behavior of various packets.
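Below is a minimal sketch of the intended PCT entry lifecycle, assuming an
xsc_dev whose representor ports have already been probed. The repr_id,
logical port and dst_info values are illustrative only.

#include "xsc_dev.h"
#include "xsc_np.h"

static int
pct_lifecycle_sketch(struct xsc_dev *xdev)
{
    int ret;

    ret = xsc_dev_pct_init(); /* set up the per-board PCT index bitmap */
    if (ret != 0)
        return ret;

    /* Allocate an index and steer logical port 1081 to dst_info 1081 */
    ret = xsc_dev_create_pct(xdev, 0, 1081, 1081);
    if (ret != 0)
        goto out;

    /* ... datapath runs ... */

    xsc_dev_clear_pct(xdev, 0); /* drop every entry owned by repr 0 */
out:
    xsc_dev_pct_uninit();
    return ret;
}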
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
drivers/net/xsc/meson.build | 1 +
drivers/net/xsc/xsc_defs.h | 29 +++
drivers/net/xsc/xsc_dev.c | 19 +-
drivers/net/xsc/xsc_dev.h | 3 +
drivers/net/xsc/xsc_np.c | 492 ++++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_np.h | 154 +++++++++++
6 files changed, 697 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/xsc/xsc_np.c
create mode 100644 drivers/net/xsc/xsc_np.h
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
index 4e20b30438..5ee03ea835 100644
--- a/drivers/net/xsc/meson.build
+++ b/drivers/net/xsc/meson.build
@@ -11,4 +11,5 @@ sources = files(
'xsc_dev.c',
'xsc_vfio_mbox.c',
'xsc_vfio.c',
+ 'xsc_np.c',
)
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index 8fd59133bc..b1e37a5870 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -16,6 +16,26 @@
#define XSC_VFREP_BASE_LOGICAL_PORT 1081
+#define XSC_RSS_HASH_KEY_LEN 52
+#define XSC_RSS_HASH_BIT_IPV4_SIP (1ULL << 0)
+#define XSC_RSS_HASH_BIT_IPV4_DIP (1ULL << 1)
+#define XSC_RSS_HASH_BIT_IPV6_SIP (1ULL << 2)
+#define XSC_RSS_HASH_BIT_IPV6_DIP (1ULL << 3)
+#define XSC_RSS_HASH_BIT_IPV4_SPORT (1ULL << 4)
+#define XSC_RSS_HASH_BIT_IPV4_DPORT (1ULL << 5)
+#define XSC_RSS_HASH_BIT_IPV6_SPORT (1ULL << 6)
+#define XSC_RSS_HASH_BIT_IPV6_DPORT (1ULL << 7)
+#define XSC_RSS_HASH_BIT_TNL_ID (1ULL << 8)
+#define XSC_RSS_HASH_BIT_NXT_PRO (1ULL << 9)
+
+#define XSC_EPAT_VLD_FLAG (1ULL)
+#define XSC_EPAT_RX_QP_ID_OFST_FLAG (1ULL << 2)
+#define XSC_EPAT_QP_NUM_FLAG (1ULL << 3)
+#define XSC_EPAT_RSS_EN_FLAG (1ULL << 4)
+#define XSC_EPAT_RSS_HASH_TEMPLATE_FLAG (1ULL << 5)
+#define XSC_EPAT_RSS_HASH_FUNC_FLAG (1ULL << 6)
+#define XSC_EPAT_HAS_PPH_FLAG (1ULL << 9)
+
#define XSC_PF_TX_DB_ADDR 0x4802000
#define XSC_PF_RX_DB_ADDR 0x4804000
#define XSC_PF_CQ_DB_ADDR 0x2120000
@@ -38,4 +58,13 @@ enum xsc_pph_type {
XSC_UPLINK_PPH = 0x8,
};
+enum xsc_port_type {
+ XSC_PORT_TYPE_NONE = 0,
+ XSC_PORT_TYPE_UPLINK,
+ XSC_PORT_TYPE_UPLINK_BOND,
+ XSC_PORT_TYPE_PFVF,
+ XSC_PORT_TYPE_PFHPF,
+ XSC_PORT_TYPE_UNKNOWN,
+};
+
#endif /* XSC_DEFS_H_ */
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
index 1b8a84baa6..02c6346b45 100644
--- a/drivers/net/xsc/xsc_dev.c
+++ b/drivers/net/xsc/xsc_dev.c
@@ -54,8 +54,17 @@ xsc_dev_ops_register(struct xsc_dev_ops *new_ops)
}
int
-xsc_dev_close(struct xsc_dev *xdev, int __rte_unused repr_id)
+xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
+ int in_len, void *data_out, int out_len)
{
+ return xdev->dev_ops->mailbox_exec(xdev, data_in, in_len,
+ data_out, out_len);
+}
+
+int
+xsc_dev_close(struct xsc_dev *xdev, int repr_id)
+{
+ xsc_dev_clear_pct(xdev, repr_id);
return xdev->dev_ops->dev_close(xdev);
}
@@ -121,6 +130,7 @@ void
xsc_dev_uninit(struct xsc_dev *xdev)
{
PMD_INIT_FUNC_TRACE();
+ xsc_dev_pct_uninit();
xsc_dev_close(xdev, XSC_DEV_REPR_ID_INVALID);
rte_free(xdev);
}
@@ -159,6 +169,13 @@ xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **xdev)
goto hwinfo_init_fail;
}
+ ret = xsc_dev_pct_init();
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Failed to init xsc pct");
+ ret = -EINVAL;
+ goto hwinfo_init_fail;
+ }
+
*xdev = d;
return 0;
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index deeddeb7f1..60762c84de 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -15,6 +15,7 @@
#include "xsc_defs.h"
#include "xsc_log.h"
#include "xsc_rxtx.h"
+#include "xsc_np.h"
#define XSC_PPH_MODE_ARG "pph_mode"
#define XSC_NIC_MODE_ARG "nic_mode"
@@ -154,6 +155,8 @@ struct xsc_dev_ops {
int in_len, void *data_out, int out_len);
};
+int xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
+ int in_len, void *data_out, int out_len);
void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
int xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **dev);
void xsc_dev_uninit(struct xsc_dev *xdev);
diff --git a/drivers/net/xsc/xsc_np.c b/drivers/net/xsc/xsc_np.c
new file mode 100644
index 0000000000..d4eb833bf6
--- /dev/null
+++ b/drivers/net/xsc/xsc_np.c
@@ -0,0 +1,492 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#include <rte_bitmap.h>
+#include <rte_malloc.h>
+
+#include "xsc_log.h"
+#include "xsc_defs.h"
+#include "xsc_np.h"
+#include "xsc_cmd.h"
+#include "xsc_dev.h"
+
+#define XSC_RSS_HASH_FUNC_TOPELIZ 0x1
+#define XSC_LOGIC_PORT_MASK 0x07FF
+
+#define XSC_DEV_DEF_PCT_IDX_MIN 128
+#define XSC_DEV_DEF_PCT_IDX_MAX 138
+
+/* Each board has a PCT manager */
+static struct xsc_dev_pct_mgr xsc_pct_mgr;
+
+enum xsc_np_type {
+ XSC_NP_IPAT = 0,
+ XSC_NP_PCT_V4 = 4,
+ XSC_NP_EPAT = 19,
+ XSC_NP_VFOS = 31,
+ XSC_NP_PG_QP_SET_ID = 41,
+ XSC_NP_MAX
+};
+
+enum xsc_np_opcode {
+ XSC_NP_OP_ADD,
+ XSC_NP_OP_DEL,
+ XSC_NP_OP_GET,
+ XSC_NP_OP_CLR,
+ XSC_NP_OP_MOD,
+ XSC_NP_OP_MAX
+};
+
+struct xsc_np_mbox_in {
+ struct xsc_cmd_inbox_hdr hdr;
+ rte_be16_t len;
+ rte_be16_t rsvd;
+ uint8_t data[];
+};
+
+struct xsc_np_mbox_out {
+ struct xsc_cmd_outbox_hdr hdr;
+ rte_be32_t error;
+ rte_be16_t len;
+ rte_be16_t rsvd;
+ uint8_t data[];
+};
+
+struct xsc_np_data_tl {
+ uint16_t table;
+ uint16_t opmod;
+ uint16_t length;
+ uint16_t rsvd;
+};
+
+enum xsc_hash_tmpl {
+ XSC_HASH_TMPL_IDX_IP_PORTS_IP6_PORTS = 0,
+ XSC_HASH_TMPL_IDX_IP_IP6,
+ XSC_HASH_TMPL_IDX_IP_PORTS_IP6,
+ XSC_HASH_TMPL_IDX_IP_IP6_PORTS,
+ XSC_HASH_TMPL_IDX_MAX,
+};
+
+static const int xsc_rss_hash_template[XSC_HASH_TMPL_IDX_MAX] = {
+ XSC_RSS_HASH_BIT_IPV4_SIP | XSC_RSS_HASH_BIT_IPV4_DIP |
+ XSC_RSS_HASH_BIT_IPV6_SIP | XSC_RSS_HASH_BIT_IPV6_DIP |
+ XSC_RSS_HASH_BIT_IPV4_SPORT | XSC_RSS_HASH_BIT_IPV4_DPORT |
+ XSC_RSS_HASH_BIT_IPV6_SPORT | XSC_RSS_HASH_BIT_IPV6_DPORT,
+
+ XSC_RSS_HASH_BIT_IPV4_SIP | XSC_RSS_HASH_BIT_IPV4_DIP |
+ XSC_RSS_HASH_BIT_IPV6_SIP | XSC_RSS_HASH_BIT_IPV6_DIP,
+
+ XSC_RSS_HASH_BIT_IPV4_SIP | XSC_RSS_HASH_BIT_IPV4_DIP |
+ XSC_RSS_HASH_BIT_IPV6_SIP | XSC_RSS_HASH_BIT_IPV6_DIP |
+ XSC_RSS_HASH_BIT_IPV4_SPORT | XSC_RSS_HASH_BIT_IPV4_DPORT,
+
+ XSC_RSS_HASH_BIT_IPV4_SIP | XSC_RSS_HASH_BIT_IPV4_DIP |
+ XSC_RSS_HASH_BIT_IPV6_SIP | XSC_RSS_HASH_BIT_IPV6_DIP |
+ XSC_RSS_HASH_BIT_IPV6_SPORT | XSC_RSS_HASH_BIT_IPV6_DPORT,
+};
+
+static uint8_t
+xsc_rss_hash_template_get(struct rte_eth_rss_conf *rss_conf)
+{
+ int rss_hf = 0;
+ int i = 0;
+ uint8_t idx = 0;
+ uint8_t outer = 1;
+
+ if (rss_conf->rss_hf & RTE_ETH_RSS_IP) {
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_SIP;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_DIP;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_SIP;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_DIP;
+ }
+
+ if ((rss_conf->rss_hf & RTE_ETH_RSS_UDP) ||
+ (rss_conf->rss_hf & RTE_ETH_RSS_TCP)) {
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_SPORT;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_DPORT;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_SPORT;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_DPORT;
+ }
+
+ if (rss_conf->rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) {
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_SIP;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_SIP;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV4_DIP;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV6_DIP;
+ }
+
+ if (rss_conf->rss_hf & RTE_ETH_RSS_L3_DST_ONLY) {
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_DIP;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_DIP;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV4_SIP;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV6_SIP;
+ }
+
+ if (rss_conf->rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) {
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_SPORT;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_SPORT;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV4_DPORT;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV6_DPORT;
+ }
+
+ if (rss_conf->rss_hf & RTE_ETH_RSS_L4_DST_ONLY) {
+ rss_hf |= XSC_RSS_HASH_BIT_IPV4_DPORT;
+ rss_hf |= XSC_RSS_HASH_BIT_IPV6_DPORT;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV4_SPORT;
+ rss_hf &= ~XSC_RSS_HASH_BIT_IPV6_SPORT;
+ }
+
+ if ((rss_conf->rss_hf & RTE_ETH_RSS_LEVEL_PMD_DEFAULT) ||
+ (rss_conf->rss_hf & RTE_ETH_RSS_LEVEL_OUTERMOST))
+ outer = 1;
+
+ if (rss_conf->rss_hf & RTE_ETH_RSS_LEVEL_INNERMOST)
+ outer = 0;
+
+ for (i = 0; i < XSC_HASH_TMPL_IDX_MAX; i++) {
+ if (xsc_rss_hash_template[i] == rss_hf) {
+ idx = i;
+ break;
+ }
+ }
+
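+ /* Bit 0 selects outer/inner hashing; the upper bits select the template */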
+ idx = (idx << 1) | outer;
+ return idx;
+}
+
+static int
+xsc_dev_np_exec(struct xsc_dev *xdev, void *cmd, int len, int table, int opmod)
+{
+ struct xsc_np_data_tl *tl;
+ struct xsc_np_mbox_in *in;
+ struct xsc_np_mbox_out *out;
+ int in_len;
+ int out_len;
+ int data_len;
+ int cmd_len;
+ int ret;
+
+ data_len = sizeof(struct xsc_np_data_tl) + len;
+ in_len = sizeof(struct xsc_np_mbox_in) + data_len;
+ out_len = sizeof(struct xsc_np_mbox_out) + data_len;
+ cmd_len = RTE_MAX(in_len, out_len);
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Failed to alloc np cmd memory");
+ return -rte_errno;
+ }
+ memset(in, 0, cmd_len);
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_EXEC_NP);
+ in->len = rte_cpu_to_be_16(data_len);
+
+ tl = (struct xsc_np_data_tl *)in->data;
+ tl->length = len;
+ tl->table = table;
+ tl->opmod = opmod;
+ if (cmd && len)
+ memcpy(tl + 1, cmd, len);
+
+ out = (struct xsc_np_mbox_out *)in;
+ ret = xsc_dev_mailbox_exec(xdev, in, in_len, out, out_len);
+
+ free(in);
+ return ret;
+}
+
+int
+xsc_dev_create_pct(struct xsc_dev *xdev, int repr_id,
+ uint16_t logical_in_port, uint16_t dst_info)
+{
+ int ret;
+ struct xsc_np_pct_v4_add add;
+ struct xsc_repr_port *repr = &xdev->repr_ports[repr_id];
+ struct xsc_dev_pct_list *pct_list = &repr->def_pct_list;
+
+ memset(&add, 0, sizeof(add));
+ add.key.logical_in_port = logical_in_port & XSC_LOGIC_PORT_MASK;
+ add.mask.logical_in_port = XSC_LOGIC_PORT_MASK;
+ add.action.dst_info = dst_info;
+ add.pct_idx = xsc_dev_pct_idx_alloc();
+ if (add.pct_idx == XSC_DEV_PCT_IDX_INVALID)
+ return -1;
+
+ ret = xsc_dev_np_exec(xdev, &add, sizeof(add), XSC_NP_PCT_V4, XSC_NP_OP_ADD);
+ if (unlikely(ret != 0)) {
+ xsc_dev_pct_idx_free(add.pct_idx);
+ return -1;
+ }
+
+ xsc_dev_pct_entry_insert(pct_list, add.key.logical_in_port, add.pct_idx);
+ return 0;
+}
+
+int
+xsc_dev_destroy_pct(struct xsc_dev *xdev, uint16_t logical_in_port, uint32_t pct_idx)
+{
+ struct xsc_np_pct_v4_del del;
+
+ memset(&del, 0, sizeof(del));
+ del.key.logical_in_port = logical_in_port & XSC_LOGIC_PORT_MASK;
+ del.mask.logical_in_port = XSC_LOGIC_PORT_MASK;
+ del.pct_idx = pct_idx;
+ return xsc_dev_np_exec(xdev, &del, sizeof(del), XSC_NP_PCT_V4, XSC_NP_OP_DEL);
+}
+
+void
+xsc_dev_clear_pct(struct xsc_dev *xdev, int repr_id)
+{
+ struct xsc_repr_port *repr;
+ struct xsc_dev_pct_entry *pct_entry;
+ struct xsc_dev_pct_list *pct_list;
+
+ if (repr_id == XSC_DEV_REPR_ID_INVALID)
+ return;
+
+ repr = &xdev->repr_ports[repr_id];
+ pct_list = &repr->def_pct_list;
+
+ while ((pct_entry = xsc_dev_pct_first_get(pct_list)) != NULL) {
+ xsc_dev_destroy_pct(xdev, pct_entry->logic_port, pct_entry->pct_idx);
+ xsc_dev_pct_entry_remove(pct_entry);
+ }
+}
+
+int
+xsc_dev_create_ipat(struct xsc_dev *xdev, uint16_t logic_in_port, uint16_t dst_info)
+{
+ struct xsc_np_ipat add;
+
+ memset(&add, 0, sizeof(add));
+ add.key.logical_in_port = logic_in_port;
+ add.action.dst_info = dst_info;
+ add.action.vld = 1;
+ return xsc_dev_np_exec(xdev, &add, sizeof(add), XSC_NP_IPAT, XSC_NP_OP_ADD);
+}
+
+int
+xsc_dev_get_ipat_vld(struct xsc_dev *xdev, uint16_t logic_in_port)
+{
+ int ret;
+ struct xsc_np_ipat get;
+
+ memset(&get, 0, sizeof(get));
+ get.key.logical_in_port = logic_in_port;
+
+ ret = xsc_dev_np_exec(xdev, &get, sizeof(get), XSC_NP_IPAT, XSC_NP_OP_GET);
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Get ipat vld failed, logic in port=%u", logic_in_port);
+
+ return get.action.vld;
+}
+
+int
+xsc_dev_destroy_ipat(struct xsc_dev *xdev, uint16_t logic_in_port)
+{
+ struct xsc_ipat_key del;
+
+ memset(&del, 0, sizeof(del));
+ del.logical_in_port = logic_in_port;
+ return xsc_dev_np_exec(xdev, &del, sizeof(del), XSC_NP_IPAT, XSC_NP_OP_DEL);
+}
+
+int
+xsc_dev_create_epat(struct xsc_dev *xdev, uint16_t dst_info, uint8_t dst_port,
+ uint16_t qpn_ofst, uint8_t qp_num, struct rte_eth_rss_conf *rss_conf)
+{
+ struct xsc_np_epat_add add;
+
+ memset(&add, 0, sizeof(add));
+ add.key.dst_info = dst_info;
+ add.action.dst_port = dst_port;
+ add.action.vld = 1;
+ add.action.rx_qp_id_ofst = qpn_ofst;
+ add.action.qp_num = qp_num - 1;
+ add.action.rss_en = 1;
+ add.action.rss_hash_func = XSC_RSS_HASH_FUNC_TOPELIZ;
+ add.action.rss_hash_template = xsc_rss_hash_template_get(rss_conf);
+
+ return xsc_dev_np_exec(xdev, &add, sizeof(add), XSC_NP_EPAT, XSC_NP_OP_ADD);
+}
+
+int
+xsc_dev_vf_modify_epat(struct xsc_dev *xdev, uint16_t dst_info, uint16_t qpn_ofst,
+ uint8_t qp_num, struct rte_eth_rss_conf *rss_conf)
+{
+ struct xsc_np_epat_mod mod;
+
+ memset(&mod, 0, sizeof(mod));
+ mod.flags = XSC_EPAT_VLD_FLAG | XSC_EPAT_RX_QP_ID_OFST_FLAG |
+ XSC_EPAT_QP_NUM_FLAG | XSC_EPAT_HAS_PPH_FLAG |
+ XSC_EPAT_RSS_EN_FLAG | XSC_EPAT_RSS_HASH_TEMPLATE_FLAG |
+ XSC_EPAT_RSS_HASH_FUNC_FLAG;
+
+ mod.key.dst_info = dst_info;
+ mod.action.vld = 1;
+ mod.action.rx_qp_id_ofst = qpn_ofst;
+ mod.action.qp_num = qp_num - 1;
+ mod.action.rss_en = 1;
+ mod.action.rss_hash_func = XSC_RSS_HASH_FUNC_TOPELIZ;
+ mod.action.rss_hash_template = xsc_rss_hash_template_get(rss_conf);
+
+ return xsc_dev_np_exec(xdev, &mod, sizeof(mod), XSC_NP_EPAT, XSC_NP_OP_MOD);
+}
+
+int
+xsc_dev_set_qpsetid(struct xsc_dev *xdev, uint32_t txqpn, uint16_t qp_set_id)
+{
+ int ret;
+ struct xsc_pg_set_id add;
+ uint16_t qp_id_base = xdev->hwinfo.raw_qp_id_base;
+
+ memset(&add, 0, sizeof(add));
+ add.key.qp_id = txqpn - qp_id_base;
+ add.action.qp_set_id = qp_set_id;
+
+ ret = xsc_dev_np_exec(xdev, &add, sizeof(add), XSC_NP_PG_QP_SET_ID, XSC_NP_OP_ADD);
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to set qp %u setid %u", txqpn, qp_set_id);
+
+ return ret;
+}
+
+int
+xsc_dev_destroy_epat(struct xsc_dev *xdev, uint16_t dst_info)
+{
+ struct xsc_epat_key del;
+
+ memset(&del, 0, sizeof(del));
+
+ del.dst_info = dst_info;
+ return xsc_dev_np_exec(xdev, &del, sizeof(del), XSC_NP_EPAT, XSC_NP_OP_DEL);
+}
+
+int
+xsc_dev_create_vfos_baselp(struct xsc_dev *xdev)
+{
+ int ret;
+ struct xsc_np_vfso add;
+
+ memset(&add, 0, sizeof(add));
+ add.key.src_port = xdev->vfrep_offset;
+ add.action.ofst = xdev->vfos_logical_in_port;
+
+ ret = xsc_dev_np_exec(xdev, &add, sizeof(add), XSC_NP_VFOS, XSC_NP_OP_ADD);
+ if (ret != 0)
+ PMD_DRV_LOG(ERR, "Failed to set vfos, port=%u, offset=%u",
+ add.key.src_port, add.action.ofst);
+
+ return ret;
+}
+
+void
+xsc_dev_pct_uninit(void)
+{
+ rte_bitmap_free(xsc_pct_mgr.bmp_pct);
+ rte_free(xsc_pct_mgr.bmp_mem);
+}
+
+int
+xsc_dev_pct_init(void)
+{
+ int ret;
+ uint8_t *bmp_mem;
+ uint32_t pos, pct_sz, bmp_sz;
+
+ if (xsc_pct_mgr.bmp_mem != NULL)
+ return 0;
+
+ pct_sz = XSC_DEV_DEF_PCT_IDX_MAX - XSC_DEV_DEF_PCT_IDX_MIN + 1;
+ bmp_sz = rte_bitmap_get_memory_footprint(pct_sz);
+ bmp_mem = rte_zmalloc(NULL, bmp_sz, RTE_CACHE_LINE_SIZE);
+ if (bmp_mem == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc pct bitmap memory");
+ ret = -ENOMEM;
+ goto pct_init_fail;
+ }
+
+ xsc_pct_mgr.bmp_mem = bmp_mem;
+ xsc_pct_mgr.bmp_pct = rte_bitmap_init(pct_sz, bmp_mem, bmp_sz);
+ if (xsc_pct_mgr.bmp_pct == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to init pct bitmap");
+ ret = -EINVAL;
+ goto pct_init_fail;
+ }
+
+ /* Mark all PCT bitmap slots as available */
+ for (pos = 0; pos < pct_sz; pos++)
+ rte_bitmap_set(xsc_pct_mgr.bmp_pct, pos);
+
+ return 0;
+
+pct_init_fail:
+ xsc_dev_pct_uninit();
+ return ret;
+}
+
+uint32_t
+xsc_dev_pct_idx_alloc(void)
+{
+ int ret;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+
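+ /* rte_bitmap_scan() returns nonzero when it finds a set (free) slot */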
+ ret = rte_bitmap_scan(xsc_pct_mgr.bmp_pct, &pos, &slab);
+ if (ret != 0) {
+ pos += rte_bsf64(slab);
+ rte_bitmap_clear(xsc_pct_mgr.bmp_pct, pos);
+ return (pos + XSC_DEV_DEF_PCT_IDX_MIN);
+ }
+
+ PMD_DRV_LOG(ERR, "Failed to alloc xsc pct idx");
+ return XSC_DEV_PCT_IDX_INVALID;
+}
+
+void
+xsc_dev_pct_idx_free(uint32_t pct_idx)
+{
+ rte_bitmap_set(xsc_pct_mgr.bmp_pct, pct_idx - XSC_DEV_DEF_PCT_IDX_MIN);
+}
+
+int
+xsc_dev_pct_entry_insert(struct xsc_dev_pct_list *pct_list,
+ uint32_t logic_port, uint32_t pct_idx)
+{
+ struct xsc_dev_pct_entry *pct_entry;
+
+ pct_entry = rte_zmalloc(NULL, sizeof(struct xsc_dev_pct_entry), RTE_CACHE_LINE_SIZE);
+ if (pct_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc pct entry memory");
+ return -ENOMEM;
+ }
+
+ pct_entry->logic_port = logic_port;
+ pct_entry->pct_idx = pct_idx;
+ LIST_INSERT_HEAD(pct_list, pct_entry, next);
+
+ return 0;
+}
+
+struct xsc_dev_pct_entry *
+xsc_dev_pct_first_get(struct xsc_dev_pct_list *pct_list)
+{
+ struct xsc_dev_pct_entry *pct_entry;
+
+ pct_entry = LIST_FIRST(pct_list);
+ return pct_entry;
+}
+
+int
+xsc_dev_pct_entry_remove(struct xsc_dev_pct_entry *pct_entry)
+{
+ if (pct_entry == NULL)
+ return -1;
+
+ xsc_dev_pct_idx_free(pct_entry->pct_idx);
+ LIST_REMOVE(pct_entry, next);
+ rte_free(pct_entry);
+
+ return 0;
+}
diff --git a/drivers/net/xsc/xsc_np.h b/drivers/net/xsc/xsc_np.h
new file mode 100644
index 0000000000..3ceaf93ae4
--- /dev/null
+++ b/drivers/net/xsc/xsc_np.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_NP_H_
+#define _XSC_NP_H_
+
+#include <rte_byteorder.h>
+#include <rte_ethdev.h>
+
+struct xsc_dev;
+
+struct xsc_ipat_key {
+ uint16_t logical_in_port:11;
+ uint16_t rsv:5;
+} __rte_packed;
+
+struct xsc_ipat_action {
+ uint64_t rsv0;
+ uint64_t rsv1:9;
+ uint64_t dst_info:11;
+ uint64_t rsv2:34;
+ uint64_t vld:1;
+ uint64_t rsv:1;
+} __rte_packed;
+
+struct xsc_np_ipat {
+ struct xsc_ipat_key key;
+ struct xsc_ipat_action action;
+};
+
+struct xsc_epat_key {
+ uint16_t dst_info:11;
+ uint16_t rsv:5;
+} __rte_packed;
+
+struct xsc_epat_action {
+ uint8_t rsv0[14];
+ uint8_t rsv1:4;
+ uint8_t dst_port:4;
+ uint8_t rss_hash_func:2;
+ uint8_t rss_hash_template:5;
+ uint8_t rss_en:1;
+ uint8_t qp_num;
+ uint16_t rx_qp_id_ofst:12;
+ uint16_t rsv3:4;
+ uint8_t rsv4:7;
+ uint8_t vld:1;
+} __rte_packed;
+
+struct xsc_np_epat_add {
+ struct xsc_epat_key key;
+ struct xsc_epat_action action;
+};
+
+struct xsc_np_epat_mod {
+ uint64_t flags;
+ struct xsc_epat_key key;
+ struct xsc_epat_action action;
+};
+
+struct xsc_pct_v4_key {
+ uint16_t rsv0[20];
+ uint32_t rsv1:13;
+ uint32_t logical_in_port:11;
+ uint32_t rsv2:8;
+} __rte_packed;
+
+struct xsc_pct_action {
+ uint64_t rsv0:29;
+ uint64_t dst_info:11;
+ uint64_t rsv1:8;
+} __rte_packed;
+
+struct xsc_np_pct_v4_add {
+ struct xsc_pct_v4_key key;
+ struct xsc_pct_v4_key mask;
+ struct xsc_pct_action action;
+ uint32_t pct_idx;
+};
+
+struct xsc_np_pct_v4_del {
+ struct xsc_pct_v4_key key;
+ struct xsc_pct_v4_key mask;
+ uint32_t pct_idx;
+};
+
+struct xsc_pg_qp_set_id_key {
+ uint16_t qp_id:13;
+ uint16_t rsv:3;
+} __rte_packed;
+
+struct xsc_pg_qp_set_id_action {
+ uint16_t qp_set_id:9;
+ uint16_t rsv:7;
+} __rte_packed;
+
+struct xsc_pg_set_id {
+ struct xsc_pg_qp_set_id_key key;
+ struct xsc_pg_qp_set_id_action action;
+};
+
+struct xsc_vfos_key {
+ uint16_t src_port:11;
+ uint16_t rsv:5;
+} __rte_packed;
+
+struct xsc_vfos_start_ofst_action {
+ uint16_t ofst:11;
+ uint16_t rsv:5;
+} __rte_packed;
+
+struct xsc_np_vfso {
+ struct xsc_vfos_key key;
+ struct xsc_vfos_start_ofst_action action;
+};
+
+struct xsc_dev_pct_mgr {
+ uint8_t *bmp_mem;
+ struct rte_bitmap *bmp_pct;
+};
+
+struct xsc_dev_pct_entry {
+ LIST_ENTRY(xsc_dev_pct_entry) next;
+ uint32_t logic_port;
+ uint32_t pct_idx;
+};
+
+LIST_HEAD(xsc_dev_pct_list, xsc_dev_pct_entry);
+
+int xsc_dev_create_pct(struct xsc_dev *xdev, int repr_id,
+ uint16_t logical_in_port, uint16_t dst_info);
+int xsc_dev_destroy_pct(struct xsc_dev *xdev, uint16_t logical_in_port, uint32_t pct_idx);
+void xsc_dev_clear_pct(struct xsc_dev *xdev, int repr_id);
+int xsc_dev_create_ipat(struct xsc_dev *xdev, uint16_t logic_in_port, uint16_t dst_info);
+int xsc_dev_get_ipat_vld(struct xsc_dev *xdev, uint16_t logic_in_port);
+int xsc_dev_destroy_ipat(struct xsc_dev *xdev, uint16_t logic_in_port);
+int xsc_dev_create_epat(struct xsc_dev *xdev, uint16_t dst_info, uint8_t dst_port,
+ uint16_t qpn_ofst, uint8_t qp_num, struct rte_eth_rss_conf *rss_conf);
+int xsc_dev_vf_modify_epat(struct xsc_dev *xdev, uint16_t dst_info, uint16_t qpn_ofst,
+ uint8_t qp_num, struct rte_eth_rss_conf *rss_conf);
+int xsc_dev_destroy_epat(struct xsc_dev *xdev, uint16_t dst_info);
+int xsc_dev_set_qpsetid(struct xsc_dev *xdev, uint32_t txqpn, uint16_t qp_set_id);
+int xsc_dev_create_vfos_baselp(struct xsc_dev *xdev);
+void xsc_dev_pct_uninit(void);
+int xsc_dev_pct_init(void);
+uint32_t xsc_dev_pct_idx_alloc(void);
+void xsc_dev_pct_idx_free(uint32_t pct_idx);
+int xsc_dev_pct_entry_insert(struct xsc_dev_pct_list *pct_list,
+ uint32_t logic_port, uint32_t pct_idx);
+struct xsc_dev_pct_entry *xsc_dev_pct_first_get(struct xsc_dev_pct_list *pct_list);
+int xsc_dev_pct_entry_remove(struct xsc_dev_pct_entry *pct_entry);
+
+#endif /* _XSC_NP_H_ */
--
2.25.1
* [PATCH v4 06/15] net/xsc: initialize xsc representors
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (4 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 05/15] net/xsc: add PCT interfaces WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 07/15] net/xsc: add ethdev configure and RSS ops WanRenyong
` (8 subsequent siblings)
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
In the design of the xsc PMD, each ethdev corresponds to a representor.
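As a minimal sketch of what this mapping means for an application, the
representor metadata can be recovered from any port probed by this PMD;
the direct rte_eth_devices access below is for illustration only.

#include <stdio.h>
#include <rte_ethdev.h>

#include "xsc_ethdev.h"

static void
xsc_show_repr(uint16_t port_id)
{
    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
    struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);

    printf("port %u: repr_id=%d logical_port=%d is_representor=%d\n",
           port_id, priv->representor_id,
           (int)priv->repr_port->info.logical_port,
           (int)priv->is_representor);
}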
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
drivers/net/xsc/xsc_defs.h | 11 +++
drivers/net/xsc/xsc_dev.c | 95 ++++++++++++++++++++
drivers/net/xsc/xsc_dev.h | 3 +
drivers/net/xsc/xsc_ethdev.c | 170 +++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_ethdev.h | 19 ++++
5 files changed, 298 insertions(+)
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index b1e37a5870..111776f37e 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -6,6 +6,7 @@
#define XSC_DEFS_H_
#define XSC_PAGE_SIZE 4096
+#define XSC_PHY_PORT_NUM 1
#define XSC_PCI_VENDOR_ID 0x1f67
#define XSC_PCI_DEV_ID_MS 0x1111
@@ -15,6 +16,7 @@
#define XSC_PCI_DEV_ID_MVS 0x1153
#define XSC_VFREP_BASE_LOGICAL_PORT 1081
+#define XSC_MAX_MAC_ADDRESSES 3
#define XSC_RSS_HASH_KEY_LEN 52
#define XSC_RSS_HASH_BIT_IPV4_SIP (1ULL << 0)
@@ -58,6 +60,15 @@ enum xsc_pph_type {
XSC_UPLINK_PPH = 0x8,
};
+enum xsc_funcid_type {
+ XSC_FUNCID_TYPE_INVAL = 0x0,
+ XSC_EMU_FUNCID = 0x1,
+ XSC_PHYPORT_MAC_FUNCID = 0x2,
+ XSC_VF_IOCTL_FUNCID = 0x3,
+ XSC_PHYPORT_LAG_FUNCID = 0x4,
+ XSC_FUNCID_TYPE_UNKNOWN = 0x5,
+};
+
enum xsc_port_type {
XSC_PORT_TYPE_NONE = 0,
XSC_PORT_TYPE_UPLINK,
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
index 02c6346b45..71d7ab7ea4 100644
--- a/drivers/net/xsc/xsc_dev.c
+++ b/drivers/net/xsc/xsc_dev.c
@@ -61,6 +61,12 @@ xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
data_out, out_len);
}
+int
+xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac)
+{
+ return xdev->dev_ops->get_mac(xdev, mac);
+}
+
int
xsc_dev_close(struct xsc_dev *xdev, int repr_id)
{
@@ -126,6 +132,95 @@ xsc_dev_args_parse(struct xsc_dev *xdev, struct rte_devargs *devargs)
rte_kvargs_free(kvlist);
}
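+/* VFs always use qp_set_id 0; the PF maps repr_id into the range 1..511 */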
+int
+xsc_dev_qp_set_id_get(struct xsc_dev *xdev, int repr_id)
+{
+ if (xsc_dev_is_vf(xdev))
+ return 0;
+
+ return (repr_id % 511 + 1);
+}
+
+static void
+xsc_repr_info_init(struct xsc_dev *xdev, struct xsc_repr_info *info,
+ enum xsc_port_type port_type,
+ enum xsc_funcid_type funcid_type, int32_t repr_id)
+{
+ int qp_set_id, logical_port;
+ struct xsc_hwinfo *hwinfo = &xdev->hwinfo;
+
+ info->repr_id = repr_id;
+ info->port_type = port_type;
+ if (port_type == XSC_PORT_TYPE_UPLINK_BOND) {
+ info->pf_bond = 1;
+ info->funcid = XSC_PHYPORT_LAG_FUNCID << 14;
+ } else if (port_type == XSC_PORT_TYPE_UPLINK) {
+ info->pf_bond = -1;
+ info->funcid = funcid_type << 14;
+ } else if (port_type == XSC_PORT_TYPE_PFVF) {
+ info->funcid = funcid_type << 14;
+ }
+
+ qp_set_id = xsc_dev_qp_set_id_get(xdev, repr_id);
+ if (xsc_dev_is_vf(xdev))
+ logical_port = xdev->hwinfo.func_id +
+ xdev->hwinfo.funcid_to_logic_port_off;
+ else
+ logical_port = xdev->vfos_logical_in_port + qp_set_id - 1;
+
+ info->logical_port = logical_port;
+ info->local_dstinfo = logical_port;
+ info->peer_logical_port = hwinfo->mac_phy_port;
+ info->peer_dstinfo = hwinfo->mac_phy_port;
+}
+
+int
+xsc_dev_repr_ports_probe(struct xsc_dev *xdev, int nb_repr_ports, int max_eth_ports)
+{
+ int funcid_type;
+ struct xsc_repr_port *repr_port;
+ int i;
+
+ PMD_INIT_FUNC_TRACE();
+
+ xdev->num_repr_ports = nb_repr_ports + XSC_PHY_PORT_NUM;
+ if (xdev->num_repr_ports > max_eth_ports) {
+ PMD_DRV_LOG(ERR, "Repr ports num %u, should be less than max %u",
+ xdev->num_repr_ports, max_eth_ports);
+ return -EINVAL;
+ }
+
+ xdev->repr_ports = rte_zmalloc(NULL,
+ sizeof(struct xsc_repr_port) * xdev->num_repr_ports,
+ RTE_CACHE_LINE_SIZE);
+ if (xdev->repr_ports == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate memory for repr ports");
+ return -ENOMEM;
+ }
+
+ funcid_type = (xdev->devargs.nic_mode == XSC_NIC_MODE_SWITCHDEV) ?
+ XSC_VF_IOCTL_FUNCID : XSC_PHYPORT_MAC_FUNCID;
+
+ /* The PF representor uses the last repr_ports slot */
+ repr_port = &xdev->repr_ports[xdev->num_repr_ports - 1];
+ xsc_repr_info_init(xdev, &repr_port->info, XSC_PORT_TYPE_UPLINK,
+ XSC_PHYPORT_MAC_FUNCID, xdev->num_repr_ports - 1);
+ repr_port->info.ifindex = xdev->ifindex;
+ repr_port->xdev = xdev;
+ LIST_INIT(&repr_port->def_pct_list);
+
+ /* VF representors start from index 0 */
+ for (i = 0; i < nb_repr_ports; i++) {
+ repr_port = &xdev->repr_ports[i];
+ xsc_repr_info_init(xdev, &repr_port->info,
+ XSC_PORT_TYPE_PFVF, funcid_type, i);
+ repr_port->xdev = xdev;
+ LIST_INIT(&repr_port->def_pct_list);
+ }
+
+ return 0;
+}
+
void
xsc_dev_uninit(struct xsc_dev *xdev)
{
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index 60762c84de..9010225a20 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -161,6 +161,9 @@ void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
int xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **dev);
void xsc_dev_uninit(struct xsc_dev *xdev);
int xsc_dev_close(struct xsc_dev *xdev, int repr_id);
+int xsc_dev_repr_ports_probe(struct xsc_dev *xdev, int nb_repr_ports, int max_eth_ports);
bool xsc_dev_is_vf(struct xsc_dev *xdev);
+int xsc_dev_qp_set_id_get(struct xsc_dev *xdev, int repr_id);
+int xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac);
#endif /* _XSC_DEV_H_ */
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 4bdc70507f..9fc5464754 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -9,6 +9,166 @@
#include "xsc_defs.h"
#include "xsc_ethdev.h"
+static int
+xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uint32_t index)
+{
+ int i;
+
+ rte_errno = EINVAL;
+ if (index > XSC_MAX_MAC_ADDRESSES)
+ return -rte_errno;
+
+ if (rte_is_zero_ether_addr(mac))
+ return -rte_errno;
+
+ for (i = 0; i != XSC_MAX_MAC_ADDRESSES; ++i) {
+ if (i == (int)index)
+ continue;
+ if (memcmp(&dev->data->mac_addrs[i], mac, sizeof(*mac)))
+ continue;
+ /* Address already configured elsewhere, return with error */
+ rte_errno = EADDRINUSE;
+ return -rte_errno;
+ }
+
+ dev->data->mac_addrs[index] = *mac;
+ return 0;
+}
+
+static int
+xsc_ethdev_init_one_representor(struct rte_eth_dev *eth_dev, void *init_params)
+{
+ int ret;
+ struct xsc_repr_port *repr_port = (struct xsc_repr_port *)init_params;
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(eth_dev);
+ struct xsc_dev_config *config = &priv->config;
+ struct rte_ether_addr mac;
+
+ priv->repr_port = repr_port;
+ repr_port->drv_data = eth_dev;
+ priv->xdev = repr_port->xdev;
+ priv->mtu = RTE_ETHER_MTU;
+ priv->funcid_type = (repr_port->info.funcid & XSC_FUNCID_TYPE_MASK) >> 14;
+ priv->funcid = repr_port->info.funcid & XSC_FUNCID_MASK;
+ if (repr_port->info.port_type == XSC_PORT_TYPE_UPLINK ||
+ repr_port->info.port_type == XSC_PORT_TYPE_UPLINK_BOND)
+ priv->eth_type = RTE_ETH_REPRESENTOR_PF;
+ else
+ priv->eth_type = RTE_ETH_REPRESENTOR_VF;
+ priv->representor_id = repr_port->info.repr_id;
+ priv->dev_data = eth_dev->data;
+ priv->ifindex = repr_port->info.ifindex;
+
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+ eth_dev->data->mac_addrs = priv->mac;
+ if (rte_is_zero_ether_addr(eth_dev->data->mac_addrs)) {
+ ret = xsc_dev_get_mac(priv->xdev, mac.addr_bytes);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Port %u cannot get MAC address",
+ eth_dev->data->port_id);
+ return -ENODEV;
+ }
+ /* Install the queried address; 'mac' is only valid inside this branch */
+ xsc_ethdev_mac_addr_add(eth_dev, &mac, 0);
+ }
+
+ config->hw_csum = 1;
+ config->pph_flag = priv->xdev->devargs.pph_mode;
+ if ((config->pph_flag & XSC_TX_PPH) != 0) {
+ config->tso = 0;
+ } else {
+ config->tso = 1;
+ if (config->tso)
+ config->tso_max_payload_sz = 1500;
+ }
+
+ priv->is_representor = (priv->eth_type == RTE_ETH_REPRESENTOR_NONE) ? 0 : 1;
+ if (priv->is_representor) {
+ eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+ eth_dev->data->representor_id = priv->representor_id;
+ eth_dev->data->backer_port_id = eth_dev->data->port_id;
+ }
+
+ eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+
+ rte_eth_dev_probing_finish(eth_dev);
+
+ return 0;
+}
+
+static int
+xsc_ethdev_init_representors(struct rte_eth_dev *eth_dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(eth_dev);
+ struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+ struct rte_device *dev;
+ struct xsc_dev *xdev;
+ struct xsc_repr_port *repr_port;
+ char name[RTE_ETH_NAME_MAX_LEN];
+ int i;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev = &priv->pci_dev->device;
+ if (dev->devargs != NULL) {
+ ret = rte_eth_devargs_parse(dev->devargs->args, ð_da, 1);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Failed to parse device arguments: %s",
+ dev->devargs->args);
+ return -EINVAL;
+ }
+ }
+
+ xdev = priv->xdev;
+ ret = xsc_dev_repr_ports_probe(xdev, eth_da.nb_representor_ports, RTE_MAX_ETHPORTS);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to probe %d xsc device representors",
+ eth_da.nb_representor_ports);
+ return ret;
+ }
+
+ /* PF rep init */
+ repr_port = &xdev->repr_ports[xdev->num_repr_ports - 1];
+ ret = xsc_ethdev_init_one_representor(eth_dev, repr_port);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to init backing representor");
+ return ret;
+ }
+
+ /* VF rep init */
+ for (i = 0; i < eth_da.nb_representor_ports; i++) {
+ repr_port = &xdev->repr_ports[i];
+ snprintf(name, sizeof(name), "%s_rep_%d",
+ xdev->name, repr_port->info.repr_id);
+ ret = rte_eth_dev_create(dev,
+ name,
+ sizeof(struct xsc_ethdev_priv),
+ NULL, NULL,
+ xsc_ethdev_init_one_representor,
+ repr_port);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to create representor: %d", i);
+ goto destroy_reprs;
+ }
+ }
+
+ return 0;
+
+destroy_reprs:
+ /* Destroy vf reprs */
+ while ((i--) > 0) {
+ repr_port = &xdev->repr_ports[i];
+ rte_eth_dev_destroy((struct rte_eth_dev *)repr_port->drv_data, NULL);
+ }
+
+ /* Destroy pf repr */
+ repr_port = &xdev->repr_ports[xdev->num_repr_ports - 1];
+ rte_eth_dev_destroy((struct rte_eth_dev *)repr_port->drv_data, NULL);
+ return ret;
+}
+
static int
xsc_ethdev_init(struct rte_eth_dev *eth_dev)
{
@@ -27,7 +187,17 @@ xsc_ethdev_init(struct rte_eth_dev *eth_dev)
}
priv->xdev->port_id = eth_dev->data->port_id;
+ ret = xsc_ethdev_init_representors(eth_dev);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to initialize representors");
+ goto uninit_xsc_dev;
+ }
+
return 0;
+
+uninit_xsc_dev:
+ xsc_dev_uninit(priv->xdev);
+ return ret;
}
static int
diff --git a/drivers/net/xsc/xsc_ethdev.h b/drivers/net/xsc/xsc_ethdev.h
index 05040f8865..7d161bd22e 100644
--- a/drivers/net/xsc/xsc_ethdev.h
+++ b/drivers/net/xsc/xsc_ethdev.h
@@ -11,6 +11,25 @@ struct xsc_ethdev_priv {
struct rte_eth_dev *eth_dev;
struct rte_pci_device *pci_dev;
struct xsc_dev *xdev;
+ struct xsc_repr_port *repr_port;
+ struct xsc_dev_config config;
+ struct rte_eth_dev_data *dev_data;
+ struct rte_ether_addr mac[XSC_MAX_MAC_ADDRESSES];
+ struct rte_eth_rss_conf rss_conf;
+
+ int representor_id;
+ uint32_t ifindex;
+ uint16_t mtu;
+ uint8_t isolated;
+ uint8_t is_representor;
+
+ uint32_t mode:7;
+ uint32_t member_bitmap:8;
+ uint32_t funcid_type:3;
+ uint32_t funcid:14;
+
+ uint16_t eth_type;
+ uint16_t qp_set_id;
};
#define TO_XSC_ETHDEV_PRIV(dev) ((struct xsc_ethdev_priv *)(dev)->data->dev_private)
--
2.25.1
* [PATCH v4 07/15] net/xsc: add ethdev configure and RSS ops
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (5 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 06/15] net/xsc: initialize xsc representors WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 19:14 ` Stephen Hemminger
2025-01-03 15:04 ` [PATCH v4 08/15] net/xsc: add Rx and Tx queue setup WanRenyong
` (7 subsequent siblings)
14 siblings, 1 reply; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev configure and RSS hash functions.
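As a usage sketch (illustrative, not part of the patch): an application
drives these ops through the generic ethdev API. The 40-byte key and the
hash-type mask below are assumptions for the example; the driver rejects
keys longer than XSC_RSS_HASH_KEY_LEN.

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    demo_rss_key_update(uint16_t port_id)
    {
            uint8_t key[40];
            struct rte_eth_rss_conf conf = {
                    .rss_key = key,
                    .rss_key_len = sizeof(key),
                    .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
            };

            memset(key, 0x6d, sizeof(key)); /* demo pattern only */
            /* Dispatches to .rss_hash_update, i.e. xsc_ethdev_rss_hash_update() */
            return rte_eth_dev_rss_hash_update(port_id, &conf);
    }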
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
doc/guides/nics/features/xsc.ini | 3 +
drivers/net/xsc/xsc_defs.h | 15 +++++
drivers/net/xsc/xsc_dev.c | 26 ++++++++
drivers/net/xsc/xsc_dev.h | 1 +
drivers/net/xsc/xsc_ethdev.c | 106 +++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_ethdev.h | 7 ++
6 files changed, 158 insertions(+)
diff --git a/doc/guides/nics/features/xsc.ini b/doc/guides/nics/features/xsc.ini
index b5c44ce535..bdeb7a984b 100644
--- a/doc/guides/nics/features/xsc.ini
+++ b/doc/guides/nics/features/xsc.ini
@@ -4,6 +4,9 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+RSS hash = Y
+RSS key update = Y
+RSS reta update = Y
Linux = Y
ARMv8 = Y
x86-64 = Y
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index 111776f37e..c445eadca1 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -78,4 +78,19 @@ enum xsc_port_type {
XSC_PORT_TYPE_UNKNOWN,
};
+enum xsc_tbm_cap {
+ XSC_TBM_CAP_HASH_PPH = 0,
+ XSC_TBM_CAP_RSS,
+ XSC_TBM_CAP_PP_BYPASS,
+ XSC_TBM_CAP_PCT_DROP_CONFIG,
+};
+
+enum xsc_rss_hf {
+ XSC_RSS_HASH_KEY_UPDATE = 0,
+ XSC_RSS_HASH_TEMP_UPDATE,
+ XSC_RSS_HASH_FUNC_UPDATE,
+ XSC_RSS_RXQ_UPDATE,
+ XSC_RSS_RXQ_DROP,
+};
+
#endif /* XSC_DEFS_H_ */
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
index 71d7ab7ea4..84bab2bb93 100644
--- a/drivers/net/xsc/xsc_dev.c
+++ b/drivers/net/xsc/xsc_dev.c
@@ -19,6 +19,7 @@
#include "xsc_log.h"
#include "xsc_defs.h"
#include "xsc_dev.h"
+#include "xsc_cmd.h"
#define XSC_DEV_DEF_FLOW_MODE 7
@@ -74,6 +75,31 @@ xsc_dev_close(struct xsc_dev *xdev, int repr_id)
return xdev->dev_ops->dev_close(xdev);
}
+int
+xsc_dev_rss_key_modify(struct xsc_dev *xdev, uint8_t *rss_key, uint8_t rss_key_len)
+{
+ struct xsc_cmd_modify_nic_hca_mbox_in in = {};
+ struct xsc_cmd_modify_nic_hca_mbox_out out = {};
+ uint8_t rss_caps_mask = 0;
+ int ret, key_len = 0;
+
+ in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_MODIFY_NIC_HCA);
+
+ key_len = RTE_MIN(rss_key_len, XSC_RSS_HASH_KEY_LEN);
+ rte_memcpy(in.rss.hash_key, rss_key, key_len);
+ rss_caps_mask |= RTE_BIT32(XSC_RSS_HASH_KEY_UPDATE);
+
+ in.rss.caps_mask = rss_caps_mask;
+ in.rss.rss_en = 1;
+ in.nic.caps_mask = rte_cpu_to_be_16(RTE_BIT32(XSC_TBM_CAP_RSS));
+ in.nic.caps = in.nic.caps_mask;
+
+ ret = xsc_dev_mailbox_exec(xdev, &in, sizeof(in), &out, sizeof(out));
+ if (ret != 0 || out.hdr.status != 0)
+ return -1;
+ return 0;
+}
+
static int
xsc_dev_alloc_vfos_info(struct xsc_dev *xdev)
{
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index 9010225a20..3bebb18b98 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -162,6 +162,7 @@ int xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **dev);
void xsc_dev_uninit(struct xsc_dev *xdev);
int xsc_dev_close(struct xsc_dev *xdev, int repr_id);
int xsc_dev_repr_ports_probe(struct xsc_dev *xdev, int nb_repr_ports, int max_eth_ports);
+int xsc_dev_rss_key_modify(struct xsc_dev *xdev, uint8_t *rss_key, uint8_t rss_key_len);
bool xsc_dev_is_vf(struct xsc_dev *xdev);
int xsc_dev_qp_set_id_get(struct xsc_dev *xdev, int repr_id);
int xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac);
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 9fc5464754..81ac062862 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -9,6 +9,105 @@
#include "xsc_defs.h"
#include "xsc_ethdev.h"
+static int
+xsc_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+
+ if (!rss_conf) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ if (rss_conf->rss_key != NULL && rss_conf->rss_key_len >= priv->rss_conf.rss_key_len)
+ memcpy(rss_conf->rss_key, priv->rss_conf.rss_key, priv->rss_conf.rss_key_len);
+
+ rss_conf->rss_key_len = priv->rss_conf.rss_key_len;
+ rss_conf->rss_hf = priv->rss_conf.rss_hf;
+ return 0;
+}
+
+static int
+xsc_ethdev_rss_hash_update(struct rte_eth_dev *dev,
+ struct rte_eth_rss_conf *rss_conf)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ int ret = 0;
+
+ if (rss_conf->rss_key_len > XSC_RSS_HASH_KEY_LEN || rss_conf->rss_key == NULL) {
+ PMD_DRV_LOG(ERR, "Xsc pmd key len is %d bigger than %d",
+ rss_conf->rss_key_len, XSC_RSS_HASH_KEY_LEN);
+ return -EINVAL;
+ }
+
+ ret = xsc_dev_rss_key_modify(priv->xdev, rss_conf->rss_key, rss_conf->rss_key_len);
+ if (ret == 0) {
+ rte_memcpy(priv->rss_conf.rss_key, rss_conf->rss_key,
+ priv->rss_conf.rss_key_len);
+ priv->rss_conf.rss_key_len = rss_conf->rss_key_len;
+ priv->rss_conf.rss_hf = rss_conf->rss_hf;
+ }
+
+ return ret;
+}
+
+static int
+xsc_ethdev_configure(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+ int ret;
+ struct rte_eth_rss_conf *rss_conf;
+
+ priv->num_sq = dev->data->nb_tx_queues;
+ priv->num_rq = dev->data->nb_rx_queues;
+
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ if (priv->rss_conf.rss_key == NULL) {
+ priv->rss_conf.rss_key = rte_zmalloc(NULL, XSC_RSS_HASH_KEY_LEN,
+ RTE_CACHE_LINE_SIZE);
+ if (priv->rss_conf.rss_key == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to alloc rss key");
+ rte_errno = ENOMEM;
+ ret = -rte_errno;
+ goto error;
+ }
+ priv->rss_conf.rss_key_len = XSC_RSS_HASH_KEY_LEN;
+ }
+
+ if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
+ rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+ ret = xsc_ethdev_rss_hash_update(dev, rss_conf);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Xsc pmd set rss key error!");
+ rte_errno = -ENOEXEC;
+ goto error;
+ }
+ }
+
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+ PMD_DRV_LOG(ERR, "Xsc pmd do not support vlan filter now!");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+ PMD_DRV_LOG(ERR, "Xsc pmd do not support vlan strip now!");
+ rte_errno = EINVAL;
+ goto error;
+ }
+
+ priv->txqs = (void *)dev->data->tx_queues;
+ priv->rxqs = (void *)dev->data->rx_queues;
+ return 0;
+
+error:
+ return -rte_errno;
+}
+
static int
xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uint32_t index)
{
@@ -35,6 +134,12 @@ xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uin
return 0;
}
+const struct eth_dev_ops xsc_eth_dev_ops = {
+ .dev_configure = xsc_ethdev_configure,
+ .rss_hash_update = xsc_ethdev_rss_hash_update,
+ .rss_hash_conf_get = xsc_ethdev_rss_hash_conf_get,
+};
+
static int
xsc_ethdev_init_one_representor(struct rte_eth_dev *eth_dev, void *init_params)
{
@@ -89,6 +194,7 @@ xsc_ethdev_init_one_representor(struct rte_eth_dev *eth_dev, void *init_params)
eth_dev->data->backer_port_id = eth_dev->data->port_id;
}
+ eth_dev->dev_ops = &xsc_eth_dev_ops;
eth_dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
eth_dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
diff --git a/drivers/net/xsc/xsc_ethdev.h b/drivers/net/xsc/xsc_ethdev.h
index 7d161bd22e..bc0fc54d50 100644
--- a/drivers/net/xsc/xsc_ethdev.h
+++ b/drivers/net/xsc/xsc_ethdev.h
@@ -30,6 +30,13 @@ struct xsc_ethdev_priv {
uint16_t eth_type;
uint16_t qp_set_id;
+
+ uint16_t num_sq;
+ uint16_t num_rq;
+
+ uint16_t flags;
+ struct xsc_txq_data *(*txqs)[];
+ struct xsc_rxq_data *(*rxqs)[];
};
#define TO_XSC_ETHDEV_PRIV(dev) ((struct xsc_ethdev_priv *)(dev)->data->dev_private)
--
2.25.1
* [PATCH v4 08/15] net/xsc: add Rx and Tx queue setup
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (6 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 07/15] net/xsc: add ethdev configure and RSS ops WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 09/15] net/xsc: add ethdev start WanRenyong
` (6 subsequent siblings)
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev Rx and Tx queue setup functions.
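Both setup paths normalize the requested descriptor count the same way:
clamp to XSC_MAX_DESC_NUMBER, then round up to a power of two so that the
ring masks (wqe_m, elts_m = desc_n - 1) allow index wrap-around with a
single AND. A standalone sketch of that step (the helper name is made up
for illustration):

    #include <stdint.h>
    #include <rte_common.h>

    #define XSC_MAX_DESC_NUMBER 1024 /* mirrors xsc_defs.h */

    static inline uint16_t
    demo_normalize_desc(uint16_t desc)
    {
            uint16_t desc_n;

            desc = RTE_MIN(desc, (uint16_t)XSC_MAX_DESC_NUMBER);
            desc_n = desc;
            if (!rte_is_power_of_2(desc))
                    desc_n = 1 << rte_log2_u32(desc); /* round up to 2^n */
            return desc_n; /* e.g. 600 -> 1024, 512 -> 512 */
    }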
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
Signed-off-by: Rong Qian <qianr@yunsilicon.com>
---
drivers/net/xsc/xsc_defs.h | 4 ++
drivers/net/xsc/xsc_ethdev.c | 83 ++++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_rx.h | 59 +++++++++++++++++++++++++
drivers/net/xsc/xsc_rxtx.h | 49 +++++++++++++++++++++
drivers/net/xsc/xsc_tx.h | 55 ++++++++++++++++++++++++
5 files changed, 250 insertions(+)
create mode 100644 drivers/net/xsc/xsc_rx.h
create mode 100644 drivers/net/xsc/xsc_tx.h
diff --git a/drivers/net/xsc/xsc_defs.h b/drivers/net/xsc/xsc_defs.h
index c445eadca1..6497b53e1e 100644
--- a/drivers/net/xsc/xsc_defs.h
+++ b/drivers/net/xsc/xsc_defs.h
@@ -38,6 +38,10 @@
#define XSC_EPAT_RSS_HASH_FUNC_FLAG (1ULL << 6)
#define XSC_EPAT_HAS_PPH_FLAG (1ULL << 9)
+#define XSC_MAX_DESC_NUMBER 1024
+#define XSC_SEND_WQE_DS 3
+#define XSC_ESEG_EXTRA_DATA_SIZE 48u
+
#define XSC_PF_TX_DB_ADDR 0x4802000
#define XSC_PF_RX_DB_ADDR 0x4804000
#define XSC_PF_CQ_DB_ADDR 0x2120000
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 81ac062862..0a16b4338c 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -8,6 +8,8 @@
#include "xsc_log.h"
#include "xsc_defs.h"
#include "xsc_ethdev.h"
+#include "xsc_rx.h"
+#include "xsc_tx.h"
static int
xsc_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
@@ -108,6 +110,85 @@ xsc_ethdev_configure(struct rte_eth_dev *dev)
return -rte_errno;
}
+static int
+xsc_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+ uint32_t socket, const struct rte_eth_rxconf *conf,
+ struct rte_mempool *mp)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_rxq_data *rxq_data = NULL;
+ uint16_t desc_n;
+ uint16_t rx_free_thresh;
+ uint64_t offloads = conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+ desc = (desc > XSC_MAX_DESC_NUMBER) ? XSC_MAX_DESC_NUMBER : desc;
+ desc_n = desc;
+
+ if (!rte_is_power_of_2(desc))
+ desc_n = 1 << rte_log2_u32(desc);
+
+ rxq_data = rte_malloc_socket(NULL, sizeof(*rxq_data) + desc_n * sizeof(struct rte_mbuf *),
+ RTE_CACHE_LINE_SIZE, socket);
+ if (rxq_data == NULL) {
+ PMD_DRV_LOG(ERR, "Port %u create rxq idx %d failure",
+ dev->data->port_id, idx);
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+ rxq_data->idx = idx;
+ rxq_data->priv = priv;
+ (*priv->rxqs)[idx] = rxq_data;
+
+ rx_free_thresh = (conf->rx_free_thresh) ? conf->rx_free_thresh : XSC_RX_FREE_THRESH;
+ rxq_data->rx_free_thresh = rx_free_thresh;
+
+ rxq_data->elts = (struct rte_mbuf *(*)[desc_n])(rxq_data + 1);
+ rxq_data->mp = mp;
+ rxq_data->socket = socket;
+
+ rxq_data->csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
+ rxq_data->hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
+ rxq_data->crc_present = 0;
+
+ rxq_data->wqe_n = rte_log2_u32(desc_n);
+ rxq_data->wqe_s = desc_n;
+ rxq_data->wqe_m = desc_n - 1;
+
+ rxq_data->port_id = dev->data->port_id;
+ dev->data->rx_queues[idx] = rxq_data;
+ return 0;
+}
+
+static int
+xsc_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+ uint32_t socket, const struct rte_eth_txconf *conf)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_txq_data *txq;
+ uint16_t desc_n;
+
+ desc = (desc > XSC_MAX_DESC_NUMBER) ? XSC_MAX_DESC_NUMBER : desc;
+ desc_n = desc;
+
+ if (!rte_is_power_of_2(desc))
+ desc_n = 1 << rte_log2_u32(desc);
+
+ txq = rte_malloc_socket(NULL, sizeof(*txq) + desc_n * sizeof(struct rte_mbuf *),
+ RTE_CACHE_LINE_SIZE, socket);
+ if (txq == NULL) {
+ PMD_DRV_LOG(ERR, "Port %u failed to create txq %u",
+ dev->data->port_id, idx);
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+ txq->offloads = conf->offloads | dev->data->dev_conf.txmode.offloads;
+ txq->priv = priv;
+ txq->socket = socket;
+
+ txq->elts_n = rte_log2_u32(desc_n);
+ txq->elts_s = desc_n;
+ txq->elts_m = desc_n - 1;
+ txq->port_id = dev->data->port_id;
+ txq->idx = idx;
+
+ (*priv->txqs)[idx] = txq;
+ dev->data->tx_queues[idx] = txq;
+ return 0;
+}
+
static int
xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uint32_t index)
{
@@ -136,6 +217,8 @@ xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uin
const struct eth_dev_ops xsc_eth_dev_ops = {
.dev_configure = xsc_ethdev_configure,
+ .rx_queue_setup = xsc_ethdev_rx_queue_setup,
+ .tx_queue_setup = xsc_ethdev_tx_queue_setup,
.rss_hash_update = xsc_ethdev_rss_hash_update,
.rss_hash_conf_get = xsc_ethdev_rss_hash_conf_get,
};
diff --git a/drivers/net/xsc/xsc_rx.h b/drivers/net/xsc/xsc_rx.h
new file mode 100644
index 0000000000..3653c0e335
--- /dev/null
+++ b/drivers/net/xsc/xsc_rx.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_RX_H_
+#define _XSC_RX_H_
+
+#define XSC_RX_FREE_THRESH 32
+
+struct xsc_rxq_stats {
+ uint64_t rx_pkts; /* Total number of rx packets */
+ uint64_t rx_bytes; /* Total number of rx bytes */
+ uint64_t rx_errors; /* Total number of rx error packets */
+ uint64_t rx_nombuf; /* Total number of rx mbuf alloc failed */
+};
+
+struct __rte_cache_aligned xsc_rxq_data {
+ uint16_t idx; /* QP idx */
+ uint16_t port_id;
+ void *cq; /* CQ pointer */
+ void *qp; /* QP pointer */
+ uint32_t cqn; /* CQ serial number */
+ uint32_t qpn; /* QP serial number */
+ uint16_t wqe_s; /* Number of WQE */
+ uint16_t wqe_m; /* Mask of WQE number */
+ uint16_t cqe_s; /* Number of CQE */
+ uint16_t cqe_m; /* Mask of CQE number */
+ uint16_t wqe_n:4; /* Log 2 of WQE number */
+ uint16_t sge_n:4; /* Log 2 of each WQE DS number */
+ uint16_t cqe_n:4; /* Log 2 of CQE number */
+ uint16_t rsv0:4;
+ volatile uint32_t *rq_db;
+ volatile uint32_t *cq_db;
+ uint32_t rq_ci;
+ uint32_t rq_pi;
+ uint16_t cq_ci;
+ uint16_t rx_free_thresh;
+ uint16_t nb_rx_hold;
+ volatile void *wqes;
+ union {
+ volatile struct xsc_cqe(*cqes)[];
+ volatile struct xsc_cqe_u64(*cqes_u64)[];
+ };
+ struct rte_mbuf *(*elts)[]; /* Record the mbuf of wqe addr */
+ struct rte_mempool *mp;
+ const struct rte_memzone *rq_pas; /* Palist memory */
+ uint32_t socket;
+ struct xsc_ethdev_priv *priv;
+ struct xsc_rxq_stats stats;
+ /* attr */
+ uint16_t csum:1; /* Checksum offloading enable */
+ uint16_t hw_timestamp:1;
+ uint16_t vlan_strip:1;
+ uint16_t crc_present:1; /* CRC flag */
+ uint16_t rss_hash:1; /* RSS hash enabled */
+ uint16_t rsv1:11;
+};
+
+#endif /* _XSC_RX_H_ */
diff --git a/drivers/net/xsc/xsc_rxtx.h b/drivers/net/xsc/xsc_rxtx.h
index 725a5f18d1..6311ed12d2 100644
--- a/drivers/net/xsc/xsc_rxtx.h
+++ b/drivers/net/xsc/xsc_rxtx.h
@@ -7,6 +7,39 @@
#include <rte_byteorder.h>
+#define XSC_CQE_OWNER_MASK 0x1
+#define XSC_CQE_OWNER_HW 0x2
+#define XSC_CQE_OWNER_SW 0x4
+#define XSC_CQE_OWNER_ERR 0x8
+#define XSC_OPCODE_RAW 0x7
+
+struct xsc_send_wqe_ctrl_seg {
+ rte_le32_t msg_opcode:8;
+ rte_le32_t with_immdt:1;
+ rte_le32_t csum_en:2;
+ rte_le32_t ds_data_num:5;
+ rte_le32_t wqe_id:16;
+ rte_le32_t msg_len;
+ union {
+ rte_le32_t opcode_data;
+ struct {
+ rte_le16_t has_pph:1;
+ rte_le16_t so_type:1;
+ rte_le16_t so_data_size:14;
+ rte_le16_t rsv1:8;
+ rte_le16_t so_hdr_len:8;
+ };
+ struct {
+ rte_le16_t desc_id;
+ rte_le16_t is_last_wqe:1;
+ rte_le16_t dst_qp_id:15;
+ };
+ };
+ rte_le32_t se:1;
+ rte_le32_t ce:1;
+ rte_le32_t rsv2:30;
+} __rte_packed;
+
struct xsc_wqe_data_seg {
union {
struct {
@@ -27,6 +60,17 @@ struct xsc_wqe_data_seg {
};
} __rte_packed;
+struct xsc_wqe {
+ union {
+ struct xsc_send_wqe_ctrl_seg cseg;
+ uint32_t ctrl[4];
+ };
+ union {
+ struct xsc_wqe_data_seg dseg[XSC_SEND_WQE_DS];
+ uint8_t data[XSC_ESEG_EXTRA_DATA_SIZE];
+ };
+} __rte_packed;
+
struct xsc_cqe {
union {
uint8_t msg_opcode;
@@ -53,6 +97,11 @@ struct xsc_cqe {
rte_le16_t owner:1;
} __rte_packed;
+struct xsc_cqe_u64 {
+ struct xsc_cqe cqe0;
+ struct xsc_cqe cqe1;
+};
+
struct xsc_tx_cq_params {
uint16_t port_id;
uint16_t qp_id;
diff --git a/drivers/net/xsc/xsc_tx.h b/drivers/net/xsc/xsc_tx.h
new file mode 100644
index 0000000000..11e249a4e3
--- /dev/null
+++ b/drivers/net/xsc/xsc_tx.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#ifndef _XSC_TX_H_
+#define _XSC_TX_H_
+
+#define XSC_TX_COMP_CQE_HANDLE_MAX 2
+
+struct xsc_txq_stats {
+ uint64_t tx_pkts; /* Total number of tx packets */
+ uint64_t tx_bytes; /* Total number of tx bytes */
+ uint64_t tx_errors; /* Total number of tx error packets */
+};
+
+struct __rte_cache_aligned xsc_txq_data {
+ uint16_t idx; /* QP idx */
+ uint16_t port_id;
+ void *cq; /* CQ pointer */
+ void *qp; /* QP pointer */
+ uint32_t cqn; /* CQ serial number */
+ uint32_t qpn; /* QP serial number */
+ uint16_t elts_head; /* Current pos in (*elts)[] */
+ uint16_t elts_tail; /* Counter of first element awaiting completion */
+ uint16_t elts_comp; /* Elts index since last completion request */
+ uint16_t elts_s; /* Number of (*elts)[] */
+ uint16_t elts_m; /* Mask of (*elts)[] number */
+ uint16_t wqe_ci; /* Consumer index for TXQ */
+ uint16_t wqe_pi; /* Producer index for TXQ */
+ uint16_t wqe_s; /* Number of WQE */
+ uint16_t wqe_m; /* Mask of WQE number */
+ uint16_t wqe_comp; /* WQE index since last completion request */
+ uint16_t cq_ci; /* Consumer index for CQ */
+ uint16_t cq_pi; /* Production index for CQ */
+ uint16_t cqe_s; /* Number of CQE */
+ uint16_t cqe_m; /* Mask of CQE number */
+ uint16_t elts_n:4; /* Log 2 of (*elts)[] number */
+ uint16_t cqe_n:4; /* Log 2 of CQE number */
+ uint16_t wqe_n:4; /* Log 2 of WQE number */
+ uint16_t wqe_ds_n:4; /* Log 2 of each WQE DS number */
+ uint64_t offloads; /* TXQ offloads */
+ struct xsc_wqe *wqes;
+ volatile struct xsc_cqe *cqes;
+ volatile uint32_t *qp_db;
+ volatile uint32_t *cq_db;
+ struct xsc_ethdev_priv *priv;
+ struct xsc_txq_stats stats;
+ uint32_t socket;
+ uint8_t tso_en:1; /* TSO enable 0-off 1-on */
+ uint8_t rsv:7;
+ uint16_t *fcqs; /* Free completion queue. */
+ struct rte_mbuf *elts[]; /* Storage for queued packets, for free */
+};
+
+#endif /* _XSC_TX_H_ */
--
2.25.1
* [PATCH v4 09/15] net/xsc: add ethdev start
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (7 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 08/15] net/xsc: add Rx and Tx queue setup WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 19:17 ` Stephen Hemminger
2025-01-03 15:04 ` [PATCH v4 10/15] net/xsc: add ethdev stop and close WanRenyong
` (5 subsequent siblings)
14 siblings, 1 reply; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev start function.
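For reference, a hedged sketch of the application-side bring-up this
patch completes (error handling omitted; port_id and the mbuf pool mp
are assumed to already exist):

    struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
    };
    uint16_t q, nb_rxq = 4, nb_txq = 4;

    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    for (q = 0; q < nb_rxq; q++)
            rte_eth_rx_queue_setup(port_id, q, 512, SOCKET_ID_ANY, NULL, mp);
    for (q = 0; q < nb_txq; q++)
            rte_eth_tx_queue_setup(port_id, q, 512, SOCKET_ID_ANY, NULL);
    /* dev_start: xsc_txq_start() + xsc_rxq_start(), then xsc_ethdev_enable() */
    rte_eth_dev_start(port_id);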
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
Signed-off-by: Rong Qian <qianr@yunsilicon.com>
---
drivers/net/xsc/meson.build | 2 +
drivers/net/xsc/xsc_dev.c | 33 ++++
drivers/net/xsc/xsc_dev.h | 8 +
drivers/net/xsc/xsc_ethdev.c | 174 +++++++++++++++++++++
drivers/net/xsc/xsc_ethdev.h | 19 +++
drivers/net/xsc/xsc_rx.c | 291 +++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_rx.h | 3 +
drivers/net/xsc/xsc_rxtx.h | 27 ++++
drivers/net/xsc/xsc_tx.c | 93 +++++++++++
drivers/net/xsc/xsc_tx.h | 4 +
10 files changed, 654 insertions(+)
create mode 100644 drivers/net/xsc/xsc_rx.c
create mode 100644 drivers/net/xsc/xsc_tx.c
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
index 5ee03ea835..79664374e3 100644
--- a/drivers/net/xsc/meson.build
+++ b/drivers/net/xsc/meson.build
@@ -12,4 +12,6 @@ sources = files(
'xsc_vfio_mbox.c',
'xsc_vfio.c',
'xsc_np.c',
+ 'xsc_rx.c',
+ 'xsc_tx.c',
)
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
index 84bab2bb93..45e93e1a85 100644
--- a/drivers/net/xsc/xsc_dev.c
+++ b/drivers/net/xsc/xsc_dev.c
@@ -68,6 +68,39 @@ xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac)
return xdev->dev_ops->get_mac(xdev, mac);
}
+int
+xsc_dev_modify_qp_status(struct xsc_dev *xdev, uint32_t qpn, int num, int opcode)
+{
+ return xdev->dev_ops->modify_qp_status(xdev, qpn, num, opcode);
+}
+
+int
+xsc_dev_modify_qp_qostree(struct xsc_dev *xdev, uint16_t qpn)
+{
+ return xdev->dev_ops->modify_qp_qostree(xdev, qpn);
+}
+
+int
+xsc_dev_rx_cq_create(struct xsc_dev *xdev, struct xsc_rx_cq_params *cq_params,
+ struct xsc_rx_cq_info *cq_info)
+{
+ return xdev->dev_ops->rx_cq_create(xdev, cq_params, cq_info);
+}
+
+int
+xsc_dev_tx_cq_create(struct xsc_dev *xdev, struct xsc_tx_cq_params *cq_params,
+ struct xsc_tx_cq_info *cq_info)
+{
+ return xdev->dev_ops->tx_cq_create(xdev, cq_params, cq_info);
+}
+
+int
+xsc_dev_tx_qp_create(struct xsc_dev *xdev, struct xsc_tx_qp_params *qp_params,
+ struct xsc_tx_qp_info *qp_info)
+{
+ return xdev->dev_ops->tx_qp_create(xdev, qp_params, qp_info);
+}
+
int
xsc_dev_close(struct xsc_dev *xdev, int repr_id)
{
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index 3bebb18b98..5aa1d8704e 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -158,6 +158,14 @@ struct xsc_dev_ops {
int xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
int in_len, void *data_out, int out_len);
void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
+int xsc_dev_modify_qp_status(struct xsc_dev *xdev, uint32_t qpn, int num, int opcode);
+int xsc_dev_modify_qp_qostree(struct xsc_dev *xdev, uint16_t qpn);
+int xsc_dev_rx_cq_create(struct xsc_dev *xdev, struct xsc_rx_cq_params *cq_params,
+ struct xsc_rx_cq_info *cq_info);
+int xsc_dev_tx_cq_create(struct xsc_dev *xdev, struct xsc_tx_cq_params *cq_params,
+ struct xsc_tx_cq_info *cq_info);
+int xsc_dev_tx_qp_create(struct xsc_dev *xdev, struct xsc_tx_qp_params *qp_params,
+ struct xsc_tx_qp_info *qp_info);
int xsc_dev_init(struct rte_pci_device *pci_dev, struct xsc_dev **dev);
void xsc_dev_uninit(struct xsc_dev *xdev);
int xsc_dev_close(struct xsc_dev *xdev, int repr_id);
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 0a16b4338c..0443460cf7 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -10,6 +10,8 @@
#include "xsc_ethdev.h"
#include "xsc_rx.h"
#include "xsc_tx.h"
+#include "xsc_dev.h"
+#include "xsc_cmd.h"
static int
xsc_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
@@ -110,6 +112,177 @@ xsc_ethdev_configure(struct rte_eth_dev *dev)
return -rte_errno;
}
+static int
+xsc_ethdev_enable(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_hwinfo *hwinfo;
+ int peer_dstinfo = 0;
+ int peer_logicalport = 0;
+ int logical_port = 0;
+ int local_dstinfo = 0;
+ int pcie_logic_port = 0;
+ int qp_set_id;
+ int repr_id;
+ struct xsc_rxq_data *rxq = xsc_rxq_get(priv, 0);
+ uint16_t rx_qpn = (uint16_t)rxq->qpn;
+ int i, vld;
+ struct xsc_txq_data *txq;
+ struct xsc_repr_port *repr;
+ struct xsc_repr_info *repr_info;
+
+ if (priv->funcid_type != XSC_PHYPORT_MAC_FUNCID)
+ return -ENODEV;
+
+ hwinfo = &priv->xdev->hwinfo;
+ repr_id = priv->representor_id;
+ repr = &priv->xdev->repr_ports[repr_id];
+ repr_info = &repr->info;
+
+ qp_set_id = xsc_dev_qp_set_id_get(priv->xdev, repr_id);
+ logical_port = repr_info->logical_port;
+ local_dstinfo = repr_info->local_dstinfo;
+ peer_logicalport = repr_info->peer_logical_port;
+ peer_dstinfo = repr_info->peer_dstinfo;
+
+ pcie_logic_port = hwinfo->pcie_no + 8;
+
+ for (i = 0; i < priv->num_sq; i++) {
+ txq = xsc_txq_get(priv, i);
+ xsc_dev_modify_qp_status(priv->xdev, txq->qpn, 1, XSC_CMD_OP_RTR2RTS_QP);
+ xsc_dev_modify_qp_qostree(priv->xdev, txq->qpn);
+ xsc_dev_set_qpsetid(priv->xdev, txq->qpn, qp_set_id);
+ }
+
+ if (!xsc_dev_is_vf(priv->xdev)) {
+ xsc_dev_create_ipat(priv->xdev, logical_port, peer_dstinfo);
+ xsc_dev_create_vfos_baselp(priv->xdev);
+ xsc_dev_create_epat(priv->xdev, local_dstinfo, pcie_logic_port,
+ rx_qpn - hwinfo->raw_rss_qp_id_base,
+ priv->num_rq, &priv->rss_conf);
+ xsc_dev_create_pct(priv->xdev, repr_id, logical_port, peer_dstinfo);
+ xsc_dev_create_pct(priv->xdev, repr_id, peer_logicalport, local_dstinfo);
+ } else {
+ vld = xsc_dev_get_ipat_vld(priv->xdev, logical_port);
+ if (vld == 0)
+ xsc_dev_create_ipat(priv->xdev, logical_port, peer_dstinfo);
+ xsc_dev_vf_modify_epat(priv->xdev, local_dstinfo,
+ rx_qpn - hwinfo->raw_rss_qp_id_base,
+ priv->num_rq, &priv->rss_conf);
+ }
+
+ return 0;
+}
+
+static int
+xsc_txq_start(struct xsc_ethdev_priv *priv)
+{
+ struct xsc_txq_data *txq_data;
+ struct rte_eth_dev *dev = priv->eth_dev;
+ uint64_t offloads = dev->data->dev_conf.txmode.offloads;
+ uint16_t i;
+ int ret;
+ size_t size;
+
+ if (priv->flags & XSC_FLAG_TX_QUEUE_INIT) {
+ for (i = 0; i != priv->num_sq; ++i)
+ dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+ }
+
+ for (i = 0; i != priv->num_sq; ++i) {
+ txq_data = xsc_txq_get(priv, i);
+ xsc_txq_elts_alloc(txq_data);
+ ret = xsc_txq_obj_new(priv->xdev, txq_data, offloads, i);
+ if (ret < 0)
+ goto error;
+ dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ PMD_DRV_LOG(INFO, "Port %u create tx success", dev->data->port_id);
+
+ size = txq_data->cqe_s * sizeof(*txq_data->fcqs);
+ txq_data->fcqs = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
+ if (!txq_data->fcqs) {
+ PMD_DRV_LOG(ERR, "Port %u txq %u alloc fcqs memory failed",
+ dev->data->port_id, i);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+ }
+
+ priv->flags |= XSC_FLAG_TX_QUEUE_INIT;
+ return 0;
+
+error:
+ /* Queue resources are released by xsc_ethdev_start calling the stop interface */
+ return -rte_errno;
+}
+
+static int
+xsc_rxq_start(struct xsc_ethdev_priv *priv)
+{
+ struct xsc_rxq_data *rxq_data;
+ struct rte_eth_dev *dev = priv->eth_dev;
+ uint16_t i;
+ int ret;
+
+ if (priv->flags & XSC_FLAG_RX_QUEUE_INIT) {
+ for (i = 0; i != priv->num_rq; ++i)
+ dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ return 0;
+ }
+
+ for (i = 0; i != priv->num_rq; ++i) {
+ rxq_data = xsc_rxq_get(priv, i);
+ if (dev->data->rx_queue_state[i] != RTE_ETH_QUEUE_STATE_STARTED) {
+ ret = xsc_rxq_elts_alloc(rxq_data);
+ if (ret != 0)
+ goto error;
+ }
+ }
+
+ ret = xsc_rxq_rss_obj_new(priv, priv->dev_data->port_id);
+ if (ret != 0)
+ goto error;
+
+ priv->flags |= XSC_FLAG_RX_QUEUE_INIT;
+ return 0;
+error:
+ /* Queue resources are released by xsc_ethdev_start calling the stop interface */
+ return -rte_errno;
+}
+
+static int
+xsc_ethdev_start(struct rte_eth_dev *dev)
+{
+ int ret;
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+
+ ret = xsc_txq_start(priv);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Port %u txq start failed: %s",
+ dev->data->port_id, strerror(rte_errno));
+ goto error;
+ }
+
+ ret = xsc_rxq_start(priv);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Port %u Rx queue start failed: %s",
+ dev->data->port_id, strerror(rte_errno));
+ goto error;
+ }
+
+ dev->data->dev_started = 1;
+
+ rte_wmb();
+ ret = xsc_ethdev_enable(dev);
+ if (ret != 0)
+ PMD_DRV_LOG(WARNING, "Port %u failed to enable datapath",
+ dev->data->port_id);
+
+ return 0;
+
+error:
+ dev->data->dev_started = 0;
+ return -rte_errno;
+}
+
static int
xsc_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint32_t socket, const struct rte_eth_rxconf *conf,
@@ -217,6 +390,7 @@ xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uin
const struct eth_dev_ops xsc_eth_dev_ops = {
.dev_configure = xsc_ethdev_configure,
+ .dev_start = xsc_ethdev_start,
.rx_queue_setup = xsc_ethdev_rx_queue_setup,
.tx_queue_setup = xsc_ethdev_tx_queue_setup,
.rss_hash_update = xsc_ethdev_rss_hash_update,
diff --git a/drivers/net/xsc/xsc_ethdev.h b/drivers/net/xsc/xsc_ethdev.h
index bc0fc54d50..0b307c2828 100644
--- a/drivers/net/xsc/xsc_ethdev.h
+++ b/drivers/net/xsc/xsc_ethdev.h
@@ -7,6 +7,9 @@
#include "xsc_dev.h"
+#define XSC_FLAG_RX_QUEUE_INIT 0x1
+#define XSC_FLAG_TX_QUEUE_INIT 0x2
+
struct xsc_ethdev_priv {
struct rte_eth_dev *eth_dev;
struct rte_pci_device *pci_dev;
@@ -41,4 +44,20 @@ struct xsc_ethdev_priv {
#define TO_XSC_ETHDEV_PRIV(dev) ((struct xsc_ethdev_priv *)(dev)->data->dev_private)
+static __rte_always_inline struct xsc_txq_data *
+xsc_txq_get(struct xsc_ethdev_priv *priv, uint16_t idx)
+{
+ if (priv->txqs != NULL && (*priv->txqs)[idx] != NULL)
+ return (*priv->txqs)[idx];
+ return NULL;
+}
+
+static __rte_always_inline struct xsc_rxq_data *
+xsc_rxq_get(struct xsc_ethdev_priv *priv, uint16_t idx)
+{
+ if (priv->rxqs != NULL && (*priv->rxqs)[idx] != NULL)
+ return (*priv->rxqs)[idx];
+ return NULL;
+}
+
#endif /* _XSC_ETHDEV_H_ */
diff --git a/drivers/net/xsc/xsc_rx.c b/drivers/net/xsc/xsc_rx.c
new file mode 100644
index 0000000000..f3667313be
--- /dev/null
+++ b/drivers/net/xsc/xsc_rx.c
@@ -0,0 +1,291 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#include <rte_io.h>
+
+#include "xsc_log.h"
+#include "xsc_defs.h"
+#include "xsc_dev.h"
+#include "xsc_ethdev.h"
+#include "xsc_cmd.h"
+#include "xsc_rx.h"
+
+#define XSC_MAX_RECV_LEN 9800
+
+static void
+xsc_rxq_initialize(struct xsc_dev *xdev, struct xsc_rxq_data *rxq_data)
+{
+ const uint32_t wqe_n = rxq_data->wqe_s;
+ uint32_t i;
+ uint32_t seg_len = 0;
+ struct xsc_hwinfo *hwinfo = &xdev->hwinfo;
+ uint32_t rx_ds_num = hwinfo->recv_seg_num;
+ uint32_t log2ds = rte_log2_u32(rx_ds_num);
+ uintptr_t addr;
+ struct rte_mbuf *mbuf;
+ void *jumbo_buffer_pa = xdev->jumbo_buffer_pa;
+ void *jumbo_buffer_va = xdev->jumbo_buffer_va;
+ volatile struct xsc_wqe_data_seg *seg;
+ volatile struct xsc_wqe_data_seg *seg_next;
+
+ for (i = 0; (i != wqe_n); ++i) {
+ mbuf = (*rxq_data->elts)[i];
+ seg = &((volatile struct xsc_wqe_data_seg *)rxq_data->wqes)[i * rx_ds_num];
+ addr = (uintptr_t)rte_pktmbuf_iova(mbuf);
+ if (rx_ds_num == 1)
+ seg_len = XSC_MAX_RECV_LEN;
+ else
+ seg_len = rte_pktmbuf_data_len(mbuf);
+ *seg = (struct xsc_wqe_data_seg){
+ .va = rte_cpu_to_le_64(addr),
+ .seg_len = rte_cpu_to_le_32(seg_len),
+ .lkey = 0,
+ };
+
+ if (rx_ds_num != 1) {
+ seg_next = seg + 1;
+ if (jumbo_buffer_va == NULL) {
+ jumbo_buffer_pa = rte_malloc(NULL, XSC_MAX_RECV_LEN, 0);
+ if (jumbo_buffer_pa == NULL) {
+ /* Fall back to a single segment; MTU bounds the packet */
+ seg->seg_len = XSC_MAX_RECV_LEN;
+ PMD_DRV_LOG(ERR, "Failed to malloc jumbo buffer");
+ continue;
+ } else {
+ /* Despite the name, jumbo_buffer_va holds the IOVA for the WQE */
+ jumbo_buffer_va =
+ (void *)rte_malloc_virt2iova(jumbo_buffer_pa);
+ if ((rte_iova_t)jumbo_buffer_va == RTE_BAD_IOVA) {
+ seg->seg_len = XSC_MAX_RECV_LEN;
+ PMD_DRV_LOG(ERR, "Failed to get jumbo buffer IOVA");
+ continue;
+ }
+ }
+ xdev->jumbo_buffer_pa = jumbo_buffer_pa;
+ xdev->jumbo_buffer_va = jumbo_buffer_va;
+ }
+ *seg_next = (struct xsc_wqe_data_seg){
+ .va = rte_cpu_to_le_64((uint64_t)jumbo_buffer_va),
+ .seg_len = rte_cpu_to_le_32(XSC_MAX_RECV_LEN - seg_len),
+ .lkey = 0,
+ };
+ }
+ }
+
+ rxq_data->rq_ci = wqe_n;
+ rxq_data->sge_n = log2ds;
+
+ union xsc_recv_doorbell recv_db = {
+ .recv_data = 0
+ };
+
+ recv_db.next_pid = wqe_n << log2ds;
+ recv_db.qp_num = rxq_data->qpn;
+ rte_write32(rte_cpu_to_le_32(recv_db.recv_data), rxq_data->rq_db);
+}
+
+static int
+xsc_rss_qp_create(struct xsc_ethdev_priv *priv, int port_id)
+{
+ struct xsc_cmd_create_multiqp_mbox_in *in;
+ struct xsc_cmd_create_qp_request *req;
+ struct xsc_cmd_create_multiqp_mbox_out *out;
+ uint8_t log_ele;
+ uint64_t iova;
+ int wqe_n;
+ int in_len, out_len, cmd_len;
+ int entry_total_len, entry_len;
+ uint8_t log_rq_sz, log_sq_sz = 0;
+ uint32_t wqe_total_len;
+ int j, ret;
+ uint16_t i, pa_num;
+ int rqn_base;
+ struct xsc_rxq_data *rxq_data;
+ struct xsc_dev *xdev = priv->xdev;
+ struct xsc_hwinfo *hwinfo = &xdev->hwinfo;
+ char name[RTE_ETH_NAME_MAX_LEN] = { 0 };
+
+ rxq_data = xsc_rxq_get(priv, 0);
+ log_ele = rte_log2_u32(sizeof(struct xsc_wqe_data_seg));
+ wqe_n = rxq_data->wqe_s;
+ log_rq_sz = rte_log2_u32(wqe_n * hwinfo->recv_seg_num);
+ wqe_total_len = 1 << (log_rq_sz + log_sq_sz + log_ele);
+
+ pa_num = (wqe_total_len + XSC_PAGE_SIZE - 1) / XSC_PAGE_SIZE;
+ entry_len = sizeof(struct xsc_cmd_create_qp_request) + sizeof(uint64_t) * pa_num;
+ entry_total_len = entry_len * priv->num_rq;
+
+ in_len = sizeof(struct xsc_cmd_create_multiqp_mbox_in) + entry_total_len;
+ out_len = sizeof(struct xsc_cmd_create_multiqp_mbox_out) + entry_total_len;
+ cmd_len = RTE_MAX(in_len, out_len);
+ in = malloc(cmd_len);
+ if (in == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Alloc rss qp create cmd memory failed");
+ goto error;
+ }
+ memset(in, 0, cmd_len);
+
+ in->qp_num = rte_cpu_to_be_16((uint16_t)priv->num_rq);
+ in->qp_type = XSC_QUEUE_TYPE_RAW;
+ in->req_len = rte_cpu_to_be_32(cmd_len);
+
+ for (i = 0; i < priv->num_rq; i++) {
+ rxq_data = (*priv->rxqs)[i];
+ req = (struct xsc_cmd_create_qp_request *)(&in->data[0] + entry_len * i);
+ req->input_qpn = rte_cpu_to_be_16(0); /* useless for eth */
+ req->pa_num = rte_cpu_to_be_16(pa_num);
+ req->qp_type = XSC_QUEUE_TYPE_RAW;
+ req->log_rq_sz = log_rq_sz;
+ req->cqn_recv = rte_cpu_to_be_16((uint16_t)rxq_data->cqn);
+ req->cqn_send = req->cqn_recv;
+ req->glb_funcid = rte_cpu_to_be_16((uint16_t)hwinfo->func_id);
+ /* Alloc pas addr */
+ snprintf(name, sizeof(name), "wqe_mem_rx_%d_%d", port_id, i);
+ rxq_data->rq_pas = rte_memzone_reserve_aligned(name,
+ (XSC_PAGE_SIZE * pa_num),
+ SOCKET_ID_ANY,
+ 0, XSC_PAGE_SIZE);
+ if (rxq_data->rq_pas == NULL) {
+ rte_errno = ENOMEM;
+ PMD_DRV_LOG(ERR, "Alloc rxq pas memory failed");
+ goto error;
+ }
+
+ iova = rxq_data->rq_pas->iova;
+ for (j = 0; j < pa_num; j++)
+ req->pas[j] = rte_cpu_to_be_64(iova + j * XSC_PAGE_SIZE);
+ }
+
+ in->hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_CREATE_MULTI_QP);
+ out = (struct xsc_cmd_create_multiqp_mbox_out *)in;
+ ret = xsc_dev_mailbox_exec(xdev, in, in_len, out, out_len);
+ if (ret != 0 || out->hdr.status != 0) {
+ PMD_DRV_LOG(ERR,
+ "Create rss rq failed, port id=%d, qp_num=%d, ret=%d, out.status=%u",
+ port_id, priv->num_rq, ret, out->hdr.status);
+ rte_errno = ENOEXEC;
+ goto error;
+ }
+ rqn_base = rte_be_to_cpu_32(out->qpn_base) & 0xffffff;
+
+ for (i = 0; i < priv->num_rq; i++) {
+ rxq_data = xsc_rxq_get(priv, i);
+ rxq_data->wqes = rxq_data->rq_pas->addr;
+ if (!xsc_dev_is_vf(xdev))
+ rxq_data->rq_db = (uint32_t *)((uint8_t *)xdev->bar_addr +
+ XSC_PF_RX_DB_ADDR);
+ else
+ rxq_data->rq_db = (uint32_t *)((uint8_t *)xdev->bar_addr +
+ XSC_VF_RX_DB_ADDR);
+
+ rxq_data->qpn = rqn_base + i;
+ xsc_dev_modify_qp_status(xdev, rxq_data->qpn, 1, XSC_CMD_OP_RTR2RTS_QP);
+ xsc_rxq_initialize(xdev, rxq_data);
+ rxq_data->cq_ci = 0;
+ priv->dev_data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+ PMD_DRV_LOG(INFO, "Port %u create rx qp, wqe_s:%d, wqe_n:%d, qp_db=%p, qpn:%d",
+ port_id,
+ rxq_data->wqe_s, rxq_data->wqe_n,
+ rxq_data->rq_db, rxq_data->qpn);
+ }
+
+ free(in);
+ return 0;
+
+error:
+ free(in);
+ return -rte_errno;
+}
+
+int
+xsc_rxq_rss_obj_new(struct xsc_ethdev_priv *priv, uint16_t port_id)
+{
+ int ret;
+ uint32_t i;
+ struct xsc_dev *xdev = priv->xdev;
+ struct xsc_rxq_data *rxq_data;
+ struct xsc_rx_cq_params cq_params = {0};
+ struct xsc_rx_cq_info cq_info = {0};
+
+ /* Create CQ */
+ for (i = 0; i < priv->num_rq; ++i) {
+ rxq_data = xsc_rxq_get(priv, i);
+
+ memset(&cq_params, 0, sizeof(cq_params));
+ memset(&cq_info, 0, sizeof(cq_info));
+ cq_params.port_id = rxq_data->port_id;
+ cq_params.qp_id = rxq_data->idx;
+ cq_params.wqe_s = rxq_data->wqe_s;
+
+ ret = xsc_dev_rx_cq_create(xdev, &cq_params, &cq_info);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Port %u rxq %u create cq fail", port_id, i);
+ rte_errno = errno;
+ goto error;
+ }
+
+ rxq_data->cq = cq_info.cq;
+ rxq_data->cqe_n = cq_info.cqe_n;
+ rxq_data->cqe_s = 1 << rxq_data->cqe_n;
+ rxq_data->cqe_m = rxq_data->cqe_s - 1;
+ rxq_data->cqes = cq_info.cqes;
+ rxq_data->cq_db = cq_info.cq_db;
+ rxq_data->cqn = cq_info.cqn;
+
+ PMD_DRV_LOG(INFO, "Port %u create rx cq, cqe_s:%d, cqe_n:%d, cq_db=%p, cqn:%d",
+ port_id,
+ rxq_data->cqe_s, rxq_data->cqe_n,
+ rxq_data->cq_db, rxq_data->cqn);
+ }
+
+ ret = xsc_rss_qp_create(priv, port_id);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Port %u rss rxq create fail", port_id);
+ goto error;
+ }
+ return 0;
+
+error:
+ return -rte_errno;
+}
+
+int
+xsc_rxq_elts_alloc(struct xsc_rxq_data *rxq_data)
+{
+ uint32_t elts_s = rxq_data->wqe_s;
+ struct rte_mbuf *mbuf;
+ uint32_t i;
+
+ for (i = 0; (i != elts_s); ++i) {
+ mbuf = rte_pktmbuf_alloc(rxq_data->mp);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(ERR, "Port %u rxq %u empty mbuf pool",
+ rxq_data->port_id, rxq_data->idx);
+ rte_errno = ENOMEM;
+ goto error;
+ }
+
+ mbuf->port = rxq_data->port_id;
+ mbuf->nb_segs = 1;
+ rte_pktmbuf_data_len(mbuf) = rte_pktmbuf_data_room_size(rxq_data->mp)
+ - mbuf->data_off;
+ rte_pktmbuf_pkt_len(mbuf) = rte_pktmbuf_data_room_size(rxq_data->mp)
+ - mbuf->data_off;
+ (*rxq_data->elts)[i] = mbuf;
+ }
+
+ return 0;
+error:
+ elts_s = i;
+ for (i = 0; (i != elts_s); ++i) {
+ if ((*rxq_data->elts)[i] != NULL)
+ rte_pktmbuf_free_seg((*rxq_data->elts)[i]);
+ (*rxq_data->elts)[i] = NULL;
+ }
+
+ PMD_DRV_LOG(ERR, "Port %u rxq %u start failed, free elts",
+ rxq_data->port_id, rxq_data->idx);
+
+ return -rte_errno;
+}
diff --git a/drivers/net/xsc/xsc_rx.h b/drivers/net/xsc/xsc_rx.h
index 3653c0e335..5a2c4839ce 100644
--- a/drivers/net/xsc/xsc_rx.h
+++ b/drivers/net/xsc/xsc_rx.h
@@ -56,4 +56,7 @@ struct __rte_cache_aligned xsc_rxq_data {
uint16_t rsv1:11;
};
+int xsc_rxq_elts_alloc(struct xsc_rxq_data *rxq_data);
+int xsc_rxq_rss_obj_new(struct xsc_ethdev_priv *priv, uint16_t port_id);
+
#endif /* _XSC_RX_H_ */
diff --git a/drivers/net/xsc/xsc_rxtx.h b/drivers/net/xsc/xsc_rxtx.h
index 6311ed12d2..2771efafc6 100644
--- a/drivers/net/xsc/xsc_rxtx.h
+++ b/drivers/net/xsc/xsc_rxtx.h
@@ -102,6 +102,24 @@ struct xsc_cqe_u64 {
struct xsc_cqe cqe1;
};
+union xsc_cq_doorbell {
+ struct {
+ uint32_t next_cid:16;
+ uint32_t cq_num:15;
+ uint32_t cq_sta:1;
+ };
+ uint32_t cq_data;
+};
+
+union xsc_send_doorbell {
+ struct {
+ uint32_t next_pid:16;
+ uint32_t qp_num:15;
+ uint32_t rsv:1;
+ };
+ uint32_t send_data;
+};
+
struct xsc_tx_cq_params {
uint16_t port_id;
uint16_t qp_id;
@@ -134,6 +152,15 @@ struct xsc_tx_qp_info {
uint16_t wqe_n;
};
+union xsc_recv_doorbell {
+ struct {
+ uint32_t next_pid:13;
+ uint32_t qp_num:15;
+ uint32_t rsv:4;
+ };
+ uint32_t recv_data;
+};
+
struct xsc_rx_cq_params {
uint16_t port_id;
uint16_t qp_id;
diff --git a/drivers/net/xsc/xsc_tx.c b/drivers/net/xsc/xsc_tx.c
new file mode 100644
index 0000000000..ba80488010
--- /dev/null
+++ b/drivers/net/xsc/xsc_tx.c
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2025 Yunsilicon Technology Co., Ltd.
+ */
+
+#include <rte_io.h>
+
+#include "xsc_log.h"
+#include "xsc_defs.h"
+#include "xsc_dev.h"
+#include "xsc_ethdev.h"
+#include "xsc_cmd.h"
+#include "xsc_tx.h"
+#include "xsc_np.h"
+
+void
+xsc_txq_elts_alloc(struct xsc_txq_data *txq_data)
+{
+ const uint32_t elts_s = 1 << txq_data->elts_n;
+ uint32_t i;
+
+ for (i = 0; i < elts_s; ++i)
+ txq_data->elts[i] = NULL;
+ txq_data->elts_head = 0;
+ txq_data->elts_tail = 0;
+ txq_data->elts_comp = 0;
+}
+
+int
+xsc_txq_obj_new(struct xsc_dev *xdev, struct xsc_txq_data *txq_data,
+ uint64_t offloads, uint16_t idx)
+{
+ int ret = 0;
+ struct xsc_tx_cq_params cq_params = {0};
+ struct xsc_tx_cq_info cq_info = {0};
+ struct xsc_tx_qp_params qp_params = {0};
+ struct xsc_tx_qp_info qp_info = {0};
+
+ cq_params.port_id = txq_data->port_id;
+ cq_params.qp_id = txq_data->idx;
+ cq_params.elts_n = txq_data->elts_n;
+ ret = xsc_dev_tx_cq_create(xdev, &cq_params, &cq_info);
+ if (ret) {
+ rte_errno = errno;
+ goto error;
+ }
+
+ txq_data->cq = cq_info.cq;
+ txq_data->cqe_n = cq_info.cqe_n;
+ txq_data->cqe_s = cq_info.cqe_s;
+ txq_data->cq_db = cq_info.cq_db;
+ txq_data->cqn = cq_info.cqn;
+ txq_data->cqes = cq_info.cqes;
+ txq_data->cqe_m = txq_data->cqe_s - 1;
+
+ PMD_DRV_LOG(INFO, "Create tx cq, cqe_s:%d, cqe_n:%d, cq_db=%p, cqn:%d",
+ txq_data->cqe_s, txq_data->cqe_n,
+ txq_data->cq_db, txq_data->cqn);
+
+ qp_params.cq = txq_data->cq;
+ qp_params.tx_offloads = offloads;
+ qp_params.port_id = txq_data->port_id;
+ qp_params.qp_id = idx;
+ qp_params.elts_n = txq_data->elts_n;
+ ret = xsc_dev_tx_qp_create(xdev, &qp_params, &qp_info);
+
+ if (ret != 0) {
+ rte_errno = errno;
+ goto error;
+ }
+
+ txq_data->qp = qp_info.qp;
+ txq_data->qpn = qp_info.qpn;
+ txq_data->wqes = qp_info.wqes;
+ txq_data->wqe_n = qp_info.wqe_n;
+ txq_data->wqe_s = 1 << txq_data->wqe_n;
+ txq_data->wqe_m = txq_data->wqe_s - 1;
+ txq_data->wqe_ds_n = rte_log2_u32(xdev->hwinfo.send_seg_num);
+ txq_data->qp_db = qp_info.qp_db;
+
+ txq_data->cq_ci = 0;
+ txq_data->cq_pi = 0;
+ txq_data->wqe_ci = 0;
+ txq_data->wqe_pi = 0;
+ txq_data->wqe_comp = 0;
+
+ PMD_DRV_LOG(INFO, "Create tx qp, wqe_s:%d, wqe_n:%d, qp_db=%p, qpn:%d",
+ txq_data->wqe_s, txq_data->wqe_n,
+ txq_data->qp_db, txq_data->qpn);
+ return 0;
+
+error:
+ return -rte_errno;
+}
diff --git a/drivers/net/xsc/xsc_tx.h b/drivers/net/xsc/xsc_tx.h
index 11e249a4e3..674b65a555 100644
--- a/drivers/net/xsc/xsc_tx.h
+++ b/drivers/net/xsc/xsc_tx.h
@@ -52,4 +52,8 @@ struct __rte_cache_aligned xsc_txq_data {
struct rte_mbuf *elts[]; /* Storage for queued packets, for free */
};
+int xsc_txq_obj_new(struct xsc_dev *xdev, struct xsc_txq_data *txq_data,
+ uint64_t offloads, uint16_t idx);
+void xsc_txq_elts_alloc(struct xsc_txq_data *txq_data);
+
#endif /* _XSC_TX_H_ */
--
2.25.1
* [PATCH v4 10/15] net/xsc: add ethdev stop and close
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (8 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 09/15] net/xsc: add ethdev start WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 11/15] net/xsc: add ethdev Rx burst WanRenyong
` (4 subsequent siblings)
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev close and stop functions.
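The application-side teardown mirrors bring-up (a sketch; port_id is
assumed): stop quiesces the burst functions and marks the queues
stopped, close then releases the queue objects, the RSS key and the
underlying xsc device.

    rte_eth_dev_stop(port_id);  /* -> xsc_ethdev_stop() */
    rte_eth_dev_close(port_id); /* -> xsc_ethdev_close() */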
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
drivers/net/xsc/xsc_dev.c | 12 ++++
drivers/net/xsc/xsc_dev.h | 2 +
drivers/net/xsc/xsc_ethdev.c | 108 +++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_rx.c | 47 +++++++++++++++
drivers/net/xsc/xsc_rx.h | 2 +
drivers/net/xsc/xsc_tx.c | 33 +++++++++++
drivers/net/xsc/xsc_tx.h | 2 +
7 files changed, 206 insertions(+)
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
index 45e93e1a85..80813c2c82 100644
--- a/drivers/net/xsc/xsc_dev.c
+++ b/drivers/net/xsc/xsc_dev.c
@@ -68,6 +68,18 @@ xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac)
return xdev->dev_ops->get_mac(xdev, mac);
}
+int
+xsc_dev_destroy_qp(struct xsc_dev *xdev, void *qp)
+{
+ return xdev->dev_ops->destroy_qp(qp);
+}
+
+int
+xsc_dev_destroy_cq(struct xsc_dev *xdev, void *cq)
+{
+ return xdev->dev_ops->destroy_cq(cq);
+}
+
int
xsc_dev_modify_qp_status(struct xsc_dev *xdev, uint32_t qpn, int num, int opcode)
{
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index 5aa1d8704e..4b5ab1d692 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -158,6 +158,8 @@ struct xsc_dev_ops {
int xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
int in_len, void *data_out, int out_len);
void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
+int xsc_dev_destroy_qp(struct xsc_dev *xdev, void *qp);
+int xsc_dev_destroy_cq(struct xsc_dev *xdev, void *cq);
int xsc_dev_modify_qp_status(struct xsc_dev *xdev, uint32_t qpn, int num, int opcode);
int xsc_dev_modify_qp_qostree(struct xsc_dev *xdev, uint16_t qpn);
int xsc_dev_rx_cq_create(struct xsc_dev *xdev, struct xsc_rx_cq_params *cq_params,
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 0443460cf7..c5ee079d4a 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -112,6 +112,44 @@ xsc_ethdev_configure(struct rte_eth_dev *dev)
return -rte_errno;
}
+static void
+xsc_ethdev_txq_release(struct rte_eth_dev *dev, uint16_t idx)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_txq_data *txq_data = xsc_txq_get(priv, idx);
+
+ if (txq_data == NULL)
+ return;
+
+ xsc_dev_set_qpsetid(priv->xdev, txq_data->qpn, 0);
+ xsc_txq_obj_release(priv->xdev, txq_data);
+ rte_free(txq_data->fcqs);
+ txq_data->fcqs = NULL;
+ xsc_txq_elts_free(txq_data);
+ rte_free(txq_data);
+ (*priv->txqs)[idx] = NULL;
+
+ dev->data->tx_queues[idx] = NULL;
+ dev->data->tx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
+static void
+xsc_ethdev_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_rxq_data *rxq_data = xsc_rxq_get(priv, idx);
+
+ if (rxq_data == NULL)
+ return;
+ xsc_rxq_rss_obj_release(priv->xdev, rxq_data);
+ xsc_rxq_elts_free(rxq_data);
+ rte_free(rxq_data);
+ (*priv->rxqs)[idx] = NULL;
+
+ dev->data->rx_queues[idx] = NULL;
+ dev->data->rx_queue_state[idx] = RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
static int
xsc_ethdev_enable(struct rte_eth_dev *dev)
{
@@ -174,6 +212,30 @@ xsc_ethdev_enable(struct rte_eth_dev *dev)
return 0;
}
+static void
+xsc_rxq_stop(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ uint16_t i;
+
+ for (i = 0; i != priv->num_rq; ++i)
+ xsc_ethdev_rxq_release(dev, i);
+ priv->rxqs = NULL;
+ priv->flags &= ~XSC_FLAG_RX_QUEUE_INIT;
+}
+
+static void
+xsc_txq_stop(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ uint16_t i;
+
+ for (i = 0; i != priv->num_sq; ++i)
+ xsc_ethdev_txq_release(dev, i);
+ priv->txqs = NULL;
+ priv->flags &= ~XSC_FLAG_TX_QUEUE_INIT;
+}
+
static int
xsc_txq_start(struct xsc_ethdev_priv *priv)
{
@@ -280,9 +342,51 @@ xsc_ethdev_start(struct rte_eth_dev *dev)
error:
dev->data->dev_started = 0;
+ xsc_txq_stop(dev);
+ xsc_rxq_stop(dev);
return -rte_errno;
}
+static int
+xsc_ethdev_stop(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ uint16_t i;
+
+ PMD_DRV_LOG(DEBUG, "Port %u stopping", dev->data->port_id);
+ dev->data->dev_started = 0;
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ rte_wmb();
+
+ rte_delay_us_sleep(1000 * priv->num_rq);
+ for (i = 0; i < priv->num_rq; ++i)
+ dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ for (i = 0; i < priv->num_sq; ++i)
+ dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+static int
+xsc_ethdev_close(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+
+ PMD_DRV_LOG(DEBUG, "Port %u closing", dev->data->port_id);
+ dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
+ dev->tx_pkt_burst = rte_eth_pkt_burst_dummy;
+ rte_wmb();
+
+ xsc_txq_stop(dev);
+ xsc_rxq_stop(dev);
+
+ rte_free(priv->rss_conf.rss_key);
+ xsc_dev_close(priv->xdev, priv->representor_id);
+ dev->data->mac_addrs = NULL;
+ return 0;
+}
+
static int
xsc_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint32_t socket, const struct rte_eth_rxconf *conf,
@@ -391,8 +495,12 @@ xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uin
const struct eth_dev_ops xsc_eth_dev_ops = {
.dev_configure = xsc_ethdev_configure,
.dev_start = xsc_ethdev_start,
+ .dev_stop = xsc_ethdev_stop,
+ .dev_close = xsc_ethdev_close,
.rx_queue_setup = xsc_ethdev_rx_queue_setup,
.tx_queue_setup = xsc_ethdev_tx_queue_setup,
+ .rx_queue_release = xsc_ethdev_rxq_release,
+ .tx_queue_release = xsc_ethdev_txq_release,
.rss_hash_update = xsc_ethdev_rss_hash_update,
.rss_hash_conf_get = xsc_ethdev_rss_hash_conf_get,
};
diff --git a/drivers/net/xsc/xsc_rx.c b/drivers/net/xsc/xsc_rx.c
index f3667313be..2081f3b619 100644
--- a/drivers/net/xsc/xsc_rx.c
+++ b/drivers/net/xsc/xsc_rx.c
@@ -289,3 +289,50 @@ xsc_rxq_elts_alloc(struct xsc_rxq_data *rxq_data)
return -rte_errno;
}
+
+void
+xsc_rxq_elts_free(struct xsc_rxq_data *rxq_data)
+{
+ uint16_t i;
+
+ if (rxq_data->elts == NULL)
+ return;
+ for (i = 0; i != rxq_data->wqe_s; ++i) {
+ if ((*rxq_data->elts)[i] != NULL)
+ rte_pktmbuf_free_seg((*rxq_data->elts)[i]);
+ (*rxq_data->elts)[i] = NULL;
+ }
+
+ PMD_DRV_LOG(DEBUG, "Port %u rxq %u free elts", rxq_data->port_id, rxq_data->idx);
+}
+
+void
+xsc_rxq_rss_obj_release(struct xsc_dev *xdev, struct xsc_rxq_data *rxq_data)
+{
+ struct xsc_cmd_destroy_qp_mbox_in in = { .hdr = { 0 } };
+ struct xsc_cmd_destroy_qp_mbox_out out = { .hdr = { 0 } };
+ int ret, in_len, out_len;
+ uint32_t qpn = rxq_data->qpn;
+
+ xsc_dev_modify_qp_status(xdev, qpn, 1, XSC_CMD_OP_QP_2RST);
+
+ in_len = sizeof(struct xsc_cmd_destroy_qp_mbox_in);
+ out_len = sizeof(struct xsc_cmd_destroy_qp_mbox_out);
+ in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_DESTROY_QP);
+ in.qpn = rte_cpu_to_be_32(qpn);
+
+ ret = xsc_dev_mailbox_exec(xdev, &in, in_len, &out, out_len);
+ if (ret != 0 || out.hdr.status != 0) {
+ PMD_DRV_LOG(ERR,
+ "Release rss rq failed, port id=%d, qid=%d, err=%d, out.status=%u",
+ rxq_data->port_id, rxq_data->idx, ret, out.hdr.status);
+ rte_errno = ENOEXEC;
+ return;
+ }
+
+ rte_memzone_free(rxq_data->rq_pas);
+
+ if (rxq_data->cq != NULL)
+ xsc_dev_destroy_cq(xdev, rxq_data->cq);
+ rxq_data->cq = NULL;
+}
diff --git a/drivers/net/xsc/xsc_rx.h b/drivers/net/xsc/xsc_rx.h
index 5a2c4839ce..e24b1a8829 100644
--- a/drivers/net/xsc/xsc_rx.h
+++ b/drivers/net/xsc/xsc_rx.h
@@ -58,5 +58,7 @@ struct __rte_cache_aligned xsc_rxq_data {
int xsc_rxq_elts_alloc(struct xsc_rxq_data *rxq_data);
int xsc_rxq_rss_obj_new(struct xsc_ethdev_priv *priv, uint16_t port_id);
+void xsc_rxq_rss_obj_release(struct xsc_dev *xdev, struct xsc_rxq_data *rxq_data);
+void xsc_rxq_elts_free(struct xsc_rxq_data *rxq_data);
#endif /* _XSC_RX_H_ */
diff --git a/drivers/net/xsc/xsc_tx.c b/drivers/net/xsc/xsc_tx.c
index ba80488010..56daf6b4c6 100644
--- a/drivers/net/xsc/xsc_tx.c
+++ b/drivers/net/xsc/xsc_tx.c
@@ -91,3 +91,36 @@ xsc_txq_obj_new(struct xsc_dev *xdev, struct xsc_txq_data *txq_data,
error:
return -rte_errno;
}
+
+void
+xsc_txq_obj_release(struct xsc_dev *xdev, struct xsc_txq_data *txq_data)
+{
+ PMD_DRV_LOG(DEBUG, "Destroy tx queue %u, portid %u",
+ txq_data->idx, txq_data->port_id);
+ if (txq_data->qp != NULL)
+ xsc_dev_destroy_qp(xdev, txq_data->qp);
+ if (txq_data->cq != NULL)
+ xsc_dev_destroy_cq(xdev, txq_data->cq);
+}
+
+void
+xsc_txq_elts_free(struct xsc_txq_data *txq_data)
+{
+ const uint16_t elts_n = 1 << txq_data->elts_n;
+ const uint16_t elts_m = elts_n - 1;
+ uint16_t elts_head = txq_data->elts_head;
+ uint16_t elts_tail = txq_data->elts_tail;
+ struct rte_mbuf *(*elts)[elts_n] = &txq_data->elts;
+
+ txq_data->elts_head = 0;
+ txq_data->elts_tail = 0;
+ txq_data->elts_comp = 0;
+
+ while (elts_tail != elts_head) {
+ struct rte_mbuf *elt = (*elts)[elts_tail & elts_m];
+
+ rte_pktmbuf_free_seg(elt);
+ ++elts_tail;
+ }
+ PMD_DRV_LOG(DEBUG, "Port %u txq %u free elts", txq_data->port_id, txq_data->idx);
+}
diff --git a/drivers/net/xsc/xsc_tx.h b/drivers/net/xsc/xsc_tx.h
index 674b65a555..208f1c8490 100644
--- a/drivers/net/xsc/xsc_tx.h
+++ b/drivers/net/xsc/xsc_tx.h
@@ -55,5 +55,7 @@ struct __rte_cache_aligned xsc_txq_data {
int xsc_txq_obj_new(struct xsc_dev *xdev, struct xsc_txq_data *txq_data,
uint64_t offloads, uint16_t idx);
void xsc_txq_elts_alloc(struct xsc_txq_data *txq_data);
+void xsc_txq_obj_release(struct xsc_dev *xdev, struct xsc_txq_data *txq_data);
+void xsc_txq_elts_free(struct xsc_txq_data *txq_data);
#endif /* _XSC_TX_H_ */
--
2.25.1
* [PATCH v4 11/15] net/xsc: add ethdev Rx burst
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (9 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 10/15] net/xsc: add ethdev stop and close WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 12/15] net/xsc: add ethdev Tx burst WanRenyong
` (3 subsequent siblings)
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev Rx burst function.
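A minimal polling loop exercising the new burst path (illustrative only;
queue 0, bursts of up to 32, port_id assumed already started):

    struct rte_mbuf *pkts[32];
    uint16_t nb, i;

    /* rte_eth_rx_burst() dispatches to xsc_rx_burst() after dev_start */
    nb = rte_eth_rx_burst(port_id, 0, pkts, RTE_DIM(pkts));
    for (i = 0; i < nb; i++) {
            /* pkts[i]->hash.rss is valid when RTE_MBUF_F_RX_RSS_HASH is set */
            rte_pktmbuf_free(pkts[i]);
    }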
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
Signed-off-by: Xiaoxiong Zhang <zhangxx@yunsilicon.com>
---
drivers/net/xsc/xsc_ethdev.c | 2 +
drivers/net/xsc/xsc_rx.c | 174 +++++++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_rx.h | 1 +
drivers/net/xsc/xsc_rxtx.h | 13 +++
4 files changed, 190 insertions(+)
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index c5ee079d4a..00bd617c3e 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -336,6 +336,8 @@ xsc_ethdev_start(struct rte_eth_dev *dev)
dev->data->dev_started = 1;
rte_wmb();
+ dev->rx_pkt_burst = xsc_rx_burst;
+
ret = xsc_ethdev_enable(dev);
return 0;
diff --git a/drivers/net/xsc/xsc_rx.c b/drivers/net/xsc/xsc_rx.c
index 2081f3b619..58a9cc2f26 100644
--- a/drivers/net/xsc/xsc_rx.c
+++ b/drivers/net/xsc/xsc_rx.c
@@ -13,6 +13,180 @@
#define XSC_MAX_RECV_LEN 9800
+static inline void
+xsc_cq_to_mbuf(struct xsc_rxq_data *rxq, struct rte_mbuf *pkt,
+ volatile struct xsc_cqe *cqe)
+{
+ uint32_t rss_hash_res = 0;
+
+ pkt->port = rxq->port_id;
+ if (rxq->rss_hash) {
+ rss_hash_res = rte_be_to_cpu_32(cqe->vni);
+ if (rss_hash_res) {
+ pkt->hash.rss = rss_hash_res;
+ pkt->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
+ }
+ }
+}
+
+static inline int
+xsc_rx_poll_len(struct xsc_rxq_data *rxq, volatile struct xsc_cqe *cqe)
+{
+ int ret;
+
+ ret = xsc_check_cqe_own(cqe, rxq->cqe_n, rxq->cq_ci);
+ if (unlikely(ret != XSC_CQE_OWNER_SW)) {
+ if (likely(ret != XSC_CQE_OWNER_ERR))
+ return 0;
+ /* Error CQE: count it, then consume it like a normal one */
+ ++rxq->stats.rx_errors;
+ }
+
+ rxq->cq_ci += 1;
+ return rte_le_to_cpu_32(cqe->msg_len);
+}
+
+static __rte_always_inline void
+xsc_pkt_info_sync(struct rte_mbuf *rep, struct rte_mbuf *seg)
+{
+ if (rep != NULL && seg != NULL) {
+ rep->data_len = seg->data_len;
+ rep->pkt_len = seg->pkt_len;
+ rep->data_off = seg->data_off;
+ rep->port = seg->port;
+ }
+}
+
+uint16_t
+xsc_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+{
+ struct xsc_rxq_data *rxq = dpdk_rxq;
+ const uint32_t wqe_m = rxq->wqe_m;
+ const uint32_t cqe_m = rxq->cqe_m;
+ const uint32_t sge_n = rxq->sge_n;
+ struct rte_mbuf *pkt = NULL;
+ struct rte_mbuf *seg = NULL;
+ volatile struct xsc_cqe *cqe = &(*rxq->cqes)[rxq->cq_ci & cqe_m];
+ uint32_t nb_pkts = 0;
+ uint64_t nb_bytes = 0;
+ uint32_t rq_ci = rxq->rq_ci;
+ int len = 0;
+ uint32_t cq_ci_two = 0;
+ int valid_cqe_num = 0;
+ int cqe_msg_len = 0;
+ volatile struct xsc_cqe_u64 *cqe_u64 = NULL;
+ struct rte_mbuf *rep;
+
+ while (pkts_n) {
+ uint32_t idx = rq_ci & wqe_m;
+ volatile struct xsc_wqe_data_seg *wqe =
+ &((volatile struct xsc_wqe_data_seg *)rxq->wqes)[idx << sge_n];
+
+ seg = (*rxq->elts)[idx];
+ rte_prefetch0(cqe);
+ rte_prefetch0(wqe);
+
+ rep = rte_mbuf_raw_alloc(seg->pool);
+ if (unlikely(rep == NULL)) {
+ ++rxq->stats.rx_nombuf;
+ break;
+ }
+
+ if (!pkt) {
+ if (valid_cqe_num) {
+ cqe = cqe + 1;
+ len = cqe_msg_len;
+ valid_cqe_num = 0;
+ } else if ((rxq->cq_ci % 2 == 0) && (pkts_n > 1)) {
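+ /* Read two CQEs as one 64-bit unit and cache the second one's length */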
+ cq_ci_two = (rxq->cq_ci & rxq->cqe_m) / 2;
+ cqe_u64 = &(*rxq->cqes_u64)[cq_ci_two];
+ cqe = (volatile struct xsc_cqe *)cqe_u64;
+ len = xsc_rx_poll_len(rxq, cqe);
+ if (len > 0) {
+ cqe_msg_len = xsc_rx_poll_len(rxq, cqe + 1);
+ if (cqe_msg_len > 0)
+ valid_cqe_num = 1;
+ }
+ } else {
+ cqe = &(*rxq->cqes)[rxq->cq_ci & rxq->cqe_m];
+ len = xsc_rx_poll_len(rxq, cqe);
+ }
+
+ if (!len) {
+ rte_mbuf_raw_free(rep);
+ break;
+ }
+
+ if (len > rte_pktmbuf_data_len(seg)) {
+ rte_mbuf_raw_free(rep);
+ pkt = NULL;
+ ++rq_ci;
+ continue;
+ }
+
+ pkt = seg;
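+ /* Reset ol_flags, keeping only RTE_MBUF_F_EXTERNAL */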
+ pkt->ol_flags &= RTE_MBUF_F_EXTERNAL;
+ xsc_cq_to_mbuf(rxq, pkt, cqe);
+
+ if (rxq->crc_present)
+ len -= RTE_ETHER_CRC_LEN;
+ rte_pktmbuf_pkt_len(pkt) = len;
+ }
+
+ xsc_pkt_info_sync(rep, seg);
+ (*rxq->elts)[idx] = rep;
+
+ /* Fill wqe */
+ wqe->va = rte_cpu_to_le_64(rte_pktmbuf_iova(rep));
+ rte_pktmbuf_data_len(seg) = len;
+ nb_bytes += rte_pktmbuf_pkt_len(pkt);
+
+ *(pkts++) = pkt;
+ pkt = NULL;
+ --pkts_n;
+ ++nb_pkts;
+ ++rq_ci;
+ }
+
+ if (unlikely(nb_pkts == 0 && rq_ci == rxq->rq_ci))
+ return 0;
+
+ rxq->rq_ci = rq_ci;
+ rxq->nb_rx_hold += nb_pkts;
+
+ if (rxq->nb_rx_hold >= rxq->rx_free_thresh) {
+ union xsc_cq_doorbell cq_db = {
+ .cq_data = 0
+ };
+ cq_db.next_cid = rxq->cq_ci;
+ cq_db.cq_num = rxq->cqn;
+
+ union xsc_recv_doorbell rq_db = {
+ .recv_data = 0
+ };
+ rq_db.next_pid = (rxq->rq_ci << sge_n);
+ rq_db.qp_num = rxq->qpn;
+
+ rte_write32(rte_cpu_to_le_32(cq_db.cq_data), rxq->cq_db);
+ rte_write32(rte_cpu_to_le_32(rq_db.recv_data), rxq->rq_db);
+ rxq->nb_rx_hold = 0;
+ }
+
+ rxq->stats.rx_pkts += nb_pkts;
+ rxq->stats.rx_bytes += nb_bytes;
+
+ return nb_pkts;
+}
+
static void
xsc_rxq_initialize(struct xsc_dev *xdev, struct xsc_rxq_data *rxq_data)
{
diff --git a/drivers/net/xsc/xsc_rx.h b/drivers/net/xsc/xsc_rx.h
index e24b1a8829..90fbb89197 100644
--- a/drivers/net/xsc/xsc_rx.h
+++ b/drivers/net/xsc/xsc_rx.h
@@ -56,6 +56,7 @@ struct __rte_cache_aligned xsc_rxq_data {
uint16_t rsv1:11;
};
+uint16_t xsc_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n);
int xsc_rxq_elts_alloc(struct xsc_rxq_data *rxq_data);
int xsc_rxq_rss_obj_new(struct xsc_ethdev_priv *priv, uint16_t port_id);
void xsc_rxq_rss_obj_release(struct xsc_dev *xdev, struct xsc_rxq_data *rxq_data);
diff --git a/drivers/net/xsc/xsc_rxtx.h b/drivers/net/xsc/xsc_rxtx.h
index 2771efafc6..fa068f8b29 100644
--- a/drivers/net/xsc/xsc_rxtx.h
+++ b/drivers/net/xsc/xsc_rxtx.h
@@ -175,4 +175,17 @@ struct xsc_rx_cq_info {
uint16_t cqe_n;
};
+static __rte_always_inline int
+xsc_check_cqe_own(volatile struct xsc_cqe *cqe, const uint16_t cqe_n, const uint16_t ci)
+{
+ if (unlikely(((cqe->owner & XSC_CQE_OWNER_MASK) != ((ci >> cqe_n) & XSC_CQE_OWNER_MASK))))
+ return XSC_CQE_OWNER_HW;
+
+ rte_io_rmb();
+ if (cqe->msg_len <= 0 && cqe->is_error)
+ return XSC_CQE_OWNER_ERR;
+
+ return XSC_CQE_OWNER_SW;
+}
+
#endif /* _XSC_RXTX_H_ */
--
2.25.1
* [PATCH v4 12/15] net/xsc: add ethdev Tx burst
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (10 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 11/15] net/xsc: add ethdev Rx burst WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 13/15] net/xsc: add basic stats ops WanRenyong
` (2 subsequent siblings)
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev Tx burst function.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
Signed-off-by: Dongwei Xu <xudw@yunsilicon.com>
---
doc/guides/nics/features/xsc.ini | 4 +
drivers/net/xsc/xsc_ethdev.c | 1 +
drivers/net/xsc/xsc_tx.c | 228 +++++++++++++++++++++++++++++++
drivers/net/xsc/xsc_tx.h | 1 +
4 files changed, 234 insertions(+)
diff --git a/doc/guides/nics/features/xsc.ini b/doc/guides/nics/features/xsc.ini
index bdeb7a984b..772c6418c4 100644
--- a/doc/guides/nics/features/xsc.ini
+++ b/doc/guides/nics/features/xsc.ini
@@ -7,6 +7,10 @@
RSS hash = Y
RSS key update = Y
RSS reta update = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
+Inner L3 checksum = Y
+Inner L4 checksum = Y
Linux = Y
ARMv8 = Y
x86-64 = Y
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 00bd617c3e..0c49170313 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -337,6 +337,7 @@ xsc_ethdev_start(struct rte_eth_dev *dev)
rte_wmb();
dev->rx_pkt_burst = xsc_rx_burst;
+ dev->tx_pkt_burst = xsc_tx_burst;
ret = xsc_ethdev_enable(dev);
diff --git a/drivers/net/xsc/xsc_tx.c b/drivers/net/xsc/xsc_tx.c
index 56daf6b4c6..406fa95381 100644
--- a/drivers/net/xsc/xsc_tx.c
+++ b/drivers/net/xsc/xsc_tx.c
@@ -124,3 +124,231 @@ xsc_txq_elts_free(struct xsc_txq_data *txq_data)
}
PMD_DRV_LOG(DEBUG, "Port %u txq %u free elts", txq_data->port_id, txq_data->idx);
}
+
+static __rte_always_inline void
+xsc_tx_elts_flush(struct xsc_txq_data *__rte_restrict txq, uint16_t tail)
+{
+ uint16_t elts_n = tail - txq->elts_tail;
+ uint32_t free_n;
+
+ do {
+ free_n = txq->elts_s - (txq->elts_tail & txq->elts_m);
+ free_n = RTE_MIN(free_n, elts_n);
+ rte_pktmbuf_free_bulk(&txq->elts[txq->elts_tail & txq->elts_m], free_n);
+ txq->elts_tail += free_n;
+ elts_n -= free_n;
+ } while (elts_n > 0);
+}
+
+static void
+xsc_tx_cqes_handle(struct xsc_txq_data *__rte_restrict txq)
+{
+ uint32_t count = XSC_TX_COMP_CQE_HANDLE_MAX;
+ volatile struct xsc_cqe *last_cqe = NULL;
+ volatile struct xsc_cqe *cqe;
+ bool doorbell = false;
+ int ret;
+ uint16_t tail;
+
+ do {
+ cqe = &txq->cqes[txq->cq_ci & txq->cqe_m];
+ ret = xsc_check_cqe_own(cqe, txq->cqe_n, txq->cq_ci);
+ if (unlikely(ret != XSC_CQE_OWNER_SW)) {
+ if (likely(ret != XSC_CQE_OWNER_ERR))
+ /* No new CQEs in completion queue. */
+ break;
+ doorbell = true;
+ ++txq->cq_ci;
+ txq->cq_pi = txq->cq_ci;
+ last_cqe = NULL;
+ ++txq->stats.tx_errors;
+ continue;
+ }
+
+ doorbell = true;
+ ++txq->cq_ci;
+ last_cqe = cqe;
+ } while (--count > 0);
+
+ if (likely(doorbell)) {
+ union xsc_cq_doorbell cq_db = {
+ .cq_data = 0
+ };
+ cq_db.next_cid = txq->cq_ci;
+ cq_db.cq_num = txq->cqn;
+
+ /* Ring doorbell */
+ rte_write32(rte_cpu_to_le_32(cq_db.cq_data), txq->cq_db);
+
+ /* Release completed elts */
+ if (likely(last_cqe != NULL)) {
+ txq->wqe_pi = rte_le_to_cpu_16(last_cqe->wqe_id) >> txq->wqe_ds_n;
+ tail = txq->fcqs[(txq->cq_ci - 1) & txq->cqe_m];
+ if (likely(tail != txq->elts_tail))
+ xsc_tx_elts_flush(txq, tail);
+ }
+ }
+}
+
+static __rte_always_inline void
+xsc_tx_wqe_ctrl_seg_init(struct xsc_txq_data *__rte_restrict txq,
+ struct rte_mbuf *__rte_restrict mbuf,
+ struct xsc_wqe *__rte_restrict wqe)
+{
+ struct xsc_send_wqe_ctrl_seg *cs = &wqe->cseg;
+ int i = 0;
+ int ds_max = (1 << txq->wqe_ds_n) - 1;
+
+ cs->msg_opcode = XSC_OPCODE_RAW;
+ cs->wqe_id = rte_cpu_to_le_16(txq->wqe_ci << txq->wqe_ds_n);
+ cs->has_pph = 0;
+ /* Clear stale seg_len values left in this WQE slot by its previous use */
+ if (cs->ds_data_num > 1 && cs->ds_data_num <= ds_max) {
+ for (i = 1; i < cs->ds_data_num; i++)
+ wqe->dseg[i].seg_len = 0;
+ }
+
+ cs->ds_data_num = mbuf->nb_segs;
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+ cs->csum_en = 0x2;
+ else
+ cs->csum_en = 0;
+
+ if (txq->tso_en == 1 && (mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ cs->has_pph = 0;
+ cs->so_type = 1;
+ cs->so_hdr_len = mbuf->l2_len + mbuf->l3_len + mbuf->l4_len;
+ cs->so_data_size = rte_cpu_to_le_16(mbuf->tso_segsz);
+ }
+
+ cs->msg_len = rte_cpu_to_le_32(rte_pktmbuf_pkt_len(mbuf));
+ if (unlikely(cs->msg_len == 0))
+ cs->msg_len = rte_cpu_to_le_32(rte_pktmbuf_data_len(mbuf));
+
+ /* Do not generate a CQE for every packet */
+ cs->ce = 0;
+}
+
+static __rte_always_inline void
+xsc_tx_wqe_data_seg_init(struct rte_mbuf *mbuf, struct xsc_wqe *wqe)
+{
+ uint16_t i, nb_segs = mbuf->nb_segs;
+ uint32_t data_len;
+ rte_iova_t iova;
+ struct xsc_wqe_data_seg *dseg;
+
+ for (i = 0; i < nb_segs; ++i) {
+ dseg = &wqe->dseg[i];
+ iova = rte_pktmbuf_iova(mbuf);
+ data_len = rte_pktmbuf_data_len(mbuf);
+
+ dseg->in_line = 0;
+ dseg->seg_len = rte_cpu_to_le_32(data_len);
+ dseg->lkey = 0;
+ dseg->va = rte_cpu_to_le_64(iova);
+ mbuf = mbuf->next;
+ }
+}
+
+static __rte_always_inline struct xsc_wqe *
+xsc_tx_wqes_fill(struct xsc_txq_data *__rte_restrict txq,
+ struct rte_mbuf **__rte_restrict pkts,
+ uint32_t pkts_n)
+{
+ uint32_t i;
+ struct xsc_wqe *wqe = NULL;
+ struct rte_mbuf *mbuf;
+
+ for (i = 0; i < pkts_n; i++) {
+ rte_prefetch0(pkts[i]);
+ mbuf = pkts[i];
+ wqe = (struct xsc_wqe *)((struct xsc_send_wqe_ctrl_seg *)txq->wqes +
+ (txq->wqe_ci & txq->wqe_m) * (1 << txq->wqe_ds_n));
+
+ /* Init wqe ctrl seg */
+ xsc_tx_wqe_ctrl_seg_init(txq, mbuf, wqe);
+ /* Init wqe data segs */
+ xsc_tx_wqe_data_seg_init(mbuf, wqe);
+ ++txq->wqe_ci;
+ txq->stats.tx_bytes += rte_pktmbuf_pkt_len(mbuf);
+ }
+
+ return wqe;
+}
+
+static __rte_always_inline void
+xsc_tx_doorbell_ring(volatile uint32_t *db, uint32_t index,
+ uint32_t qpn, uint16_t ds_n)
+{
+ union xsc_send_doorbell tx_db;
+
+ tx_db.next_pid = index << ds_n;
+ tx_db.qp_num = qpn;
+
+ rte_write32(rte_cpu_to_le_32(tx_db.send_data), db);
+}
+
+static __rte_always_inline void
+xsc_tx_elts_store(struct xsc_txq_data *__rte_restrict txq,
+ struct rte_mbuf **__rte_restrict pkts,
+ uint32_t pkts_n)
+{
+ uint32_t part;
+ struct rte_mbuf **elts = (struct rte_mbuf **)txq->elts;
+
+ part = txq->elts_s - (txq->elts_head & txq->elts_m);
+ rte_memcpy((void *)(elts + (txq->elts_head & txq->elts_m)),
+ (void *)pkts,
+ RTE_MIN(part, pkts_n) * sizeof(struct rte_mbuf *));
+
+ if (unlikely(part < pkts_n))
+ rte_memcpy((void *)elts, (void *)(pkts + part),
+ (pkts_n - part) * sizeof(struct rte_mbuf *));
+}
+
+uint16_t
+xsc_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n)
+{
+ struct xsc_txq_data *txq = dpdk_txq;
+ uint32_t tx_n, remain_n = pkts_n;
+ uint16_t idx, elts_free, wqe_free;
+ uint16_t elts_head;
+ struct xsc_wqe *last_wqe;
+
+ if (unlikely(!pkts_n))
+ return 0;
+
+ do {
+ xsc_tx_cqes_handle(txq);
+
+ elts_free = txq->elts_s - (uint16_t)(txq->elts_head - txq->elts_tail);
+ wqe_free = txq->wqe_s - ((uint16_t)((txq->wqe_ci << txq->wqe_ds_n) -
+ (txq->wqe_pi << txq->wqe_ds_n)) >> txq->wqe_ds_n);
+ if (unlikely(elts_free == 0 || wqe_free == 0))
+ break;
+
+ /* Fill in WQEs */
+ tx_n = RTE_MIN(remain_n, wqe_free);
+ idx = pkts_n - remain_n;
+ last_wqe = xsc_tx_wqes_fill(txq, &pkts[idx], tx_n);
+ remain_n -= tx_n;
+ last_wqe->cseg.ce = 1;
+
+ /* Update free-cqs, elts_comp */
+ elts_head = txq->elts_head;
+ elts_head += tx_n;
+ if ((uint16_t)(elts_head - txq->elts_comp) > 0) {
+ txq->elts_comp = elts_head;
+ txq->fcqs[txq->cq_pi++ & txq->cqe_m] = elts_head;
+ }
+
+ /* Ring tx doorbell */
+ xsc_tx_doorbell_ring(txq->qp_db, txq->wqe_ci, txq->qpn, txq->wqe_ds_n);
+
+ xsc_tx_elts_store(txq, &pkts[idx], tx_n);
+ txq->elts_head += tx_n;
+ } while (remain_n > 0);
+
+ txq->stats.tx_pkts += (pkts_n - remain_n);
+ return pkts_n - remain_n;
+}
diff --git a/drivers/net/xsc/xsc_tx.h b/drivers/net/xsc/xsc_tx.h
index 208f1c8490..88419dd3a0 100644
--- a/drivers/net/xsc/xsc_tx.h
+++ b/drivers/net/xsc/xsc_tx.h
@@ -52,6 +52,7 @@ struct __rte_cache_aligned xsc_txq_data {
struct rte_mbuf *elts[]; /* Storage for queued packets, for free */
};
+uint16_t xsc_tx_burst(void *dpdk_txq, struct rte_mbuf **pkts, uint16_t pkts_n);
int xsc_txq_obj_new(struct xsc_dev *xdev, struct xsc_txq_data *txq_data,
uint64_t offloads, uint16_t idx);
void xsc_txq_elts_alloc(struct xsc_txq_data *txq_data);
--
2.25.1
* [PATCH v4 13/15] net/xsc: add basic stats ops
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (11 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 12/15] net/xsc: add ethdev Tx burst WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 15:04 ` [PATCH v4 14/15] net/xsc: add ethdev infos get WanRenyong
2025-01-03 15:04 ` [PATCH v4 15/15] net/xsc: add ethdev link and MTU ops WanRenyong
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev basic statistics ops.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
doc/guides/nics/features/xsc.ini | 1 +
drivers/net/xsc/xsc_ethdev.c | 75 ++++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+)
diff --git a/doc/guides/nics/features/xsc.ini b/doc/guides/nics/features/xsc.ini
index 772c6418c4..eb88517104 100644
--- a/doc/guides/nics/features/xsc.ini
+++ b/doc/guides/nics/features/xsc.ini
@@ -11,6 +11,7 @@ L3 checksum offload = Y
L4 checksum offload = Y
Inner L3 checksum = Y
Inner L4 checksum = Y
+Basic stats = Y
Linux = Y
ARMv8 = Y
x86-64 = Y
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index 0c49170313..e44792e374 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -469,6 +469,79 @@ xsc_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return 0;
}
+static int
+xsc_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ uint32_t rxqs_n = priv->num_rq;
+ uint32_t txqs_n = priv->num_sq;
+ uint32_t i, idx;
+ struct xsc_rxq_data *rxq;
+ struct xsc_txq_data *txq;
+
+ for (i = 0; i < rxqs_n; ++i) {
+ rxq = xsc_rxq_get(priv, i);
+ if (unlikely(rxq == NULL))
+ continue;
+
+ idx = rxq->idx;
+ if (idx < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_ipackets[idx] += rxq->stats.rx_pkts;
+ stats->q_ibytes[idx] += rxq->stats.rx_bytes;
+ stats->q_errors[idx] += (rxq->stats.rx_errors +
+ rxq->stats.rx_nombuf);
+ }
+ stats->ipackets += rxq->stats.rx_pkts;
+ stats->ibytes += rxq->stats.rx_bytes;
+ stats->ierrors += rxq->stats.rx_errors;
+ stats->rx_nombuf += rxq->stats.rx_nombuf;
+ }
+
+ for (i = 0; i < txqs_n; ++i) {
+ txq = xsc_txq_get(priv, i);
+ if (unlikely(txq == NULL))
+ continue;
+
+ idx = txq->idx;
+ if (idx < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_opackets[idx] += txq->stats.tx_pkts;
+ stats->q_obytes[idx] += txq->stats.tx_bytes;
+ stats->q_errors[idx] += txq->stats.tx_errors;
+ }
+ stats->opackets += txq->stats.tx_pkts;
+ stats->obytes += txq->stats.tx_bytes;
+ stats->oerrors += txq->stats.tx_errors;
+ }
+
+ return 0;
+}
+
+static int
+xsc_ethdev_stats_reset(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ uint32_t rxqs_n = priv->num_rq;
+ uint32_t txqs_n = priv->num_sq;
+ uint32_t i;
+ struct xsc_rxq_data *rxq;
+ struct xsc_txq_data *txq;
+
+ for (i = 0; i < rxqs_n; ++i) {
+ rxq = xsc_rxq_get(priv, i);
+ if (unlikely(rxq == NULL))
+ continue;
+ memset(&rxq->stats, 0, sizeof(struct xsc_rxq_stats));
+ }
+ for (i = 0; i < txqs_n; ++i) {
+ txq = xsc_txq_get(priv, i);
+ if (unlikely(txq == NULL))
+ continue;
+ memset(&txq->stats, 0, sizeof(struct xsc_txq_stats));
+ }
+
+ return 0;
+}
+
static int
xsc_ethdev_mac_addr_add(struct rte_eth_dev *dev, struct rte_ether_addr *mac, uint32_t index)
{
@@ -500,6 +573,8 @@ const struct eth_dev_ops xsc_eth_dev_ops = {
.dev_start = xsc_ethdev_start,
.dev_stop = xsc_ethdev_stop,
.dev_close = xsc_ethdev_close,
+ .stats_get = xsc_ethdev_stats_get,
+ .stats_reset = xsc_ethdev_stats_reset,
.rx_queue_setup = xsc_ethdev_rx_queue_setup,
.tx_queue_setup = xsc_ethdev_tx_queue_setup,
.rx_queue_release = xsc_ethdev_rxq_release,
--
2.25.1
* [PATCH v4 14/15] net/xsc: add ethdev infos get
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (12 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 13/15] net/xsc: add basic stats ops WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
2025-01-03 19:22 ` Stephen Hemminger
2025-01-03 15:04 ` [PATCH v4 15/15] net/xsc: add ethdev link and MTU ops WanRenyong
14 siblings, 1 reply; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev information get ops.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
drivers/net/xsc/xsc_ethdev.c | 61 ++++++++++++++++++++++++++++++++++++
1 file changed, 61 insertions(+)
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index e44792e374..f4c127d7d4 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -390,6 +390,66 @@ xsc_ethdev_close(struct rte_eth_dev *dev)
return 0;
}
+static uint64_t
+xsc_get_rx_queue_offloads(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_dev_config *config = &priv->config;
+ uint64_t offloads = 0;
+
+ if (config->hw_csum)
+ offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+
+ return offloads;
+}
+
+static uint64_t
+xsc_get_tx_port_offloads(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ uint64_t offloads = 0;
+ struct xsc_dev_config *config = &priv->config;
+
+ if (config->hw_csum)
+ offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
+ if (config->tso)
+ offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
+ return offloads;
+}
+
+static int
+xsc_ethdev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+
+ info->min_rx_bufsize = 64;
+ info->max_rx_pktlen = 65536;
+ info->max_lro_pkt_size = 0;
+ info->max_rx_queues = 256;
+ info->max_tx_queues = 1024;
+ info->rx_desc_lim.nb_max = 4096;
+ info->rx_desc_lim.nb_min = 16;
+ info->tx_desc_lim.nb_max = 8192;
+ info->tx_desc_lim.nb_min = 128;
+
+ info->rx_queue_offload_capa = xsc_get_rx_queue_offloads(dev);
+ info->rx_offload_capa = info->rx_queue_offload_capa;
+ info->tx_offload_capa = xsc_get_tx_port_offloads(dev);
+
+ info->if_index = priv->ifindex;
+ info->speed_capa = priv->xdev->link_speed_capa;
+ info->hash_key_size = XSC_RSS_HASH_KEY_LEN;
+ info->tx_desc_lim.nb_seg_max = 8;
+ info->tx_desc_lim.nb_mtu_seg_max = 8;
+ info->switch_info.name = dev->data->name;
+ info->switch_info.port_id = priv->representor_id;
+ return 0;
+}
+
static int
xsc_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
uint32_t socket, const struct rte_eth_rxconf *conf,
@@ -575,6 +635,7 @@ const struct eth_dev_ops xsc_eth_dev_ops = {
.dev_close = xsc_ethdev_close,
.stats_get = xsc_ethdev_stats_get,
.stats_reset = xsc_ethdev_stats_reset,
+ .dev_infos_get = xsc_ethdev_infos_get,
.rx_queue_setup = xsc_ethdev_rx_queue_setup,
.tx_queue_setup = xsc_ethdev_tx_queue_setup,
.rx_queue_release = xsc_ethdev_rxq_release,
--
2.25.1
* [PATCH v4 15/15] net/xsc: add ethdev link and MTU ops
2025-01-03 15:04 [PATCH v4 00/15] XSC PMD for Yunsilicon NICs WanRenyong
` (13 preceding siblings ...)
2025-01-03 15:04 ` [PATCH v4 14/15] net/xsc: add ethdev infos get WanRenyong
@ 2025-01-03 15:04 ` WanRenyong
14 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
Implement xsc ethdev link and MTU ops.
Signed-off-by: WanRenyong <wanry@yunsilicon.com>
---
doc/guides/nics/features/xsc.ini | 1 +
drivers/net/xsc/xsc_dev.c | 33 ++++++++++++++++++
drivers/net/xsc/xsc_dev.h | 4 +++
drivers/net/xsc/xsc_ethdev.c | 60 ++++++++++++++++++++++++++++++++
4 files changed, 98 insertions(+)
diff --git a/doc/guides/nics/features/xsc.ini b/doc/guides/nics/features/xsc.ini
index eb88517104..d73cf9d136 100644
--- a/doc/guides/nics/features/xsc.ini
+++ b/doc/guides/nics/features/xsc.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+MTU update = Y
RSS hash = Y
RSS key update = Y
RSS reta update = Y
diff --git a/drivers/net/xsc/xsc_dev.c b/drivers/net/xsc/xsc_dev.c
index 80813c2c82..b285a1e950 100644
--- a/drivers/net/xsc/xsc_dev.c
+++ b/drivers/net/xsc/xsc_dev.c
@@ -62,6 +62,39 @@ xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
data_out, out_len);
}
+int
+xsc_dev_set_link_up(struct xsc_dev *xdev)
+{
+ if (xdev->dev_ops->set_link_up == NULL)
+ return -ENOTSUP;
+
+ return xdev->dev_ops->set_link_up(xdev);
+}
+
+int
+xsc_dev_set_link_down(struct xsc_dev *xdev)
+{
+ if (xdev->dev_ops->set_link_down == NULL)
+ return -ENOTSUP;
+
+ return xdev->dev_ops->set_link_down(xdev);
+}
+
+int
+xsc_dev_link_update(struct xsc_dev *xdev, uint8_t funcid_type, int wait_to_complete)
+{
+ if (xdev->dev_ops->link_update == NULL)
+ return -ENOTSUP;
+
+ return xdev->dev_ops->link_update(xdev, funcid_type, wait_to_complete);
+}
+
+int
+xsc_dev_set_mtu(struct xsc_dev *xdev, uint16_t mtu)
+{
+ return xdev->dev_ops->set_mtu(xdev, mtu);
+}
+
int
xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac)
{
diff --git a/drivers/net/xsc/xsc_dev.h b/drivers/net/xsc/xsc_dev.h
index 4b5ab1d692..ef0933ab06 100644
--- a/drivers/net/xsc/xsc_dev.h
+++ b/drivers/net/xsc/xsc_dev.h
@@ -158,6 +158,9 @@ struct xsc_dev_ops {
int xsc_dev_mailbox_exec(struct xsc_dev *xdev, void *data_in,
int in_len, void *data_out, int out_len);
void xsc_dev_ops_register(struct xsc_dev_ops *new_ops);
+int xsc_dev_set_link_up(struct xsc_dev *xdev);
+int xsc_dev_set_link_down(struct xsc_dev *xde);
+int xsc_dev_link_update(struct xsc_dev *xdev, uint8_t funcid_type, int wait_to_complete);
int xsc_dev_destroy_qp(struct xsc_dev *xdev, void *qp);
int xsc_dev_destroy_cq(struct xsc_dev *xdev, void *cq);
int xsc_dev_modify_qp_status(struct xsc_dev *xdev, uint32_t qpn, int num, int opcode);
@@ -175,6 +178,7 @@ int xsc_dev_repr_ports_probe(struct xsc_dev *xdev, int nb_repr_ports, int max_et
int xsc_dev_rss_key_modify(struct xsc_dev *xdev, uint8_t *rss_key, uint8_t rss_key_len);
bool xsc_dev_is_vf(struct xsc_dev *xdev);
int xsc_dev_qp_set_id_get(struct xsc_dev *xdev, int repr_id);
+int xsc_dev_set_mtu(struct xsc_dev *xdev, uint16_t mtu);
int xsc_dev_get_mac(struct xsc_dev *xdev, uint8_t *mac);
#endif /* _XSC_DEV_H_ */
diff --git a/drivers/net/xsc/xsc_ethdev.c b/drivers/net/xsc/xsc_ethdev.c
index f4c127d7d4..9e60ea8b61 100644
--- a/drivers/net/xsc/xsc_ethdev.c
+++ b/drivers/net/xsc/xsc_ethdev.c
@@ -390,6 +390,41 @@ xsc_ethdev_close(struct rte_eth_dev *dev)
return 0;
}
+static int
+xsc_ethdev_set_link_up(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_dev *xdev = priv->xdev;
+
+ return xsc_dev_set_link_up(xdev);
+}
+
+static int
+xsc_ethdev_set_link_down(struct rte_eth_dev *dev)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_dev *xdev = priv->xdev;
+
+ return xsc_dev_set_link_down(xdev);
+}
+
+static int
+xsc_ethdev_link_update(struct rte_eth_dev *dev,
+ int wait_to_complete)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ struct xsc_dev *xdev = priv->xdev;
+ int ret = 0;
+
+ ret = xsc_dev_link_update(xdev, priv->funcid_type, wait_to_complete);
+ if (ret == 0) {
+ dev->data->dev_link = xdev->pf_dev_link;
+ dev->data->dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+ }
+ return ret;
+}
+
static uint64_t
xsc_get_rx_queue_offloads(struct rte_eth_dev *dev)
{
@@ -529,6 +564,27 @@ xsc_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
return 0;
}
+static int
+xsc_ethdev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
+{
+ struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
+ int ret = 0;
+
+ if (priv->eth_type != RTE_ETH_REPRESENTOR_PF) {
+ priv->mtu = mtu;
+ return 0;
+ }
+
+ ret = xsc_dev_set_mtu(priv->xdev, mtu);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "Mtu set to %u failure", mtu);
+ return -EAGAIN;
+ }
+
+ priv->mtu = mtu;
+ return 0;
+}
+
static int
xsc_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
@@ -632,7 +688,10 @@ const struct eth_dev_ops xsc_eth_dev_ops = {
.dev_configure = xsc_ethdev_configure,
.dev_start = xsc_ethdev_start,
.dev_stop = xsc_ethdev_stop,
+ .dev_set_link_up = xsc_ethdev_set_link_up,
+ .dev_set_link_down = xsc_ethdev_set_link_down,
.dev_close = xsc_ethdev_close,
+ .link_update = xsc_ethdev_link_update,
.stats_get = xsc_ethdev_stats_get,
.stats_reset = xsc_ethdev_stats_reset,
.dev_infos_get = xsc_ethdev_infos_get,
@@ -640,6 +699,7 @@ const struct eth_dev_ops xsc_eth_dev_ops = {
.tx_queue_setup = xsc_ethdev_tx_queue_setup,
.rx_queue_release = xsc_ethdev_rxq_release,
.tx_queue_release = xsc_ethdev_txq_release,
+ .mtu_set = xsc_ethdev_set_mtu,
.rss_hash_update = xsc_ethdev_rss_hash_update,
.rss_hash_conf_get = xsc_ethdev_rss_hash_conf_get,
};
--
2.25.1
* [PATCH v4 00/15] XSC PMD for Yunsilicon NICs
@ 2025-01-03 15:04 WanRenyong
2025-01-03 15:04 ` [PATCH v4 01/15] net/xsc: add xsc PMD framework WanRenyong
` (14 more replies)
0 siblings, 15 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-03 15:04 UTC (permalink / raw)
To: dev
Cc: ferruh.yigit, thomas, andrew.rybchenko, qianr, nana, zhangxx,
zhangxx, xudw, jacky, weihg
This xsc PMD (**librte_net_xsc**) provides a poll mode driver for
Yunsilicon metaScale series NICs.
Features:
---------
- MTU update
- TSO
- RSS hash
- RSS key update
- RSS reta update
- L3 checksum offload
- L4 checksum offload
- Inner L3 checksum
- Inner L4 checksum
- Basic stats
Supported NICs:
---------------
- metaScale-200S Single QSFP56 Port 200GE SmartNIC
- metaScale-200 Quad QSFP28 Ports 100GE SmartNIC
- metaScale-50 Dual QSFP28 Port 25GE SmartNIC
- metaScale-100Q Quad QSFP28 Port 25GE SmartNIC
---
v4:
* Based on the review comments on previous versions, reconstruct the xsc PMD to
eliminate the dependency on the rdma-core library and the proprietary kernel
driver, while adding support for the vfio kernel driver.
v3:
* fix compilation errors
v2:
* fix checkpatch warnings and errors
---
WanRenyong (15):
net/xsc: add xsc PMD framework
net/xsc: add xsc device initialization
net/xsc: add xsc mailbox
net/xsc: add xsc dev ops to support VFIO driver
net/xsc: add PCT interfaces
net/xsc: initialize xsc representors
net/xsc: add ethdev configure and RSS ops
net/xsc: add Rx and Tx queue setup
net/xsc: add ethdev start
net/xsc: add ethdev stop and close
net/xsc: add ethdev Rx burst
net/xsc: add ethdev Tx burst
net/xsc: add basic stats ops
net/xsc: add ethdev infos get
net/xsc: add ethdev link and MTU ops
.mailmap | 5 +
MAINTAINERS | 10 +
doc/guides/nics/features/xsc.ini | 18 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/xsc.rst | 31 +
doc/guides/rel_notes/release_25_03.rst | 4 +
drivers/net/meson.build | 1 +
drivers/net/xsc/meson.build | 17 +
drivers/net/xsc/xsc_cmd.h | 387 ++++++++++
drivers/net/xsc/xsc_defs.h | 100 +++
drivers/net/xsc/xsc_dev.c | 397 +++++++++++
drivers/net/xsc/xsc_dev.h | 184 +++++
drivers/net/xsc/xsc_ethdev.c | 943 +++++++++++++++++++++++++
drivers/net/xsc/xsc_ethdev.h | 63 ++
drivers/net/xsc/xsc_log.h | 24 +
drivers/net/xsc/xsc_np.c | 492 +++++++++++++
drivers/net/xsc/xsc_np.h | 154 ++++
drivers/net/xsc/xsc_rx.c | 512 ++++++++++++++
drivers/net/xsc/xsc_rx.h | 65 ++
drivers/net/xsc/xsc_rxtx.h | 191 +++++
drivers/net/xsc/xsc_tx.c | 354 ++++++++++
drivers/net/xsc/xsc_tx.h | 62 ++
drivers/net/xsc/xsc_vfio.c | 750 ++++++++++++++++++++
drivers/net/xsc/xsc_vfio_mbox.c | 691 ++++++++++++++++++
drivers/net/xsc/xsc_vfio_mbox.h | 142 ++++
25 files changed, 5598 insertions(+)
create mode 100644 doc/guides/nics/features/xsc.ini
create mode 100644 doc/guides/nics/xsc.rst
create mode 100644 drivers/net/xsc/meson.build
create mode 100644 drivers/net/xsc/xsc_cmd.h
create mode 100644 drivers/net/xsc/xsc_defs.h
create mode 100644 drivers/net/xsc/xsc_dev.c
create mode 100644 drivers/net/xsc/xsc_dev.h
create mode 100644 drivers/net/xsc/xsc_ethdev.c
create mode 100644 drivers/net/xsc/xsc_ethdev.h
create mode 100644 drivers/net/xsc/xsc_log.h
create mode 100644 drivers/net/xsc/xsc_np.c
create mode 100644 drivers/net/xsc/xsc_np.h
create mode 100644 drivers/net/xsc/xsc_rx.c
create mode 100644 drivers/net/xsc/xsc_rx.h
create mode 100644 drivers/net/xsc/xsc_rxtx.h
create mode 100644 drivers/net/xsc/xsc_tx.c
create mode 100644 drivers/net/xsc/xsc_tx.h
create mode 100644 drivers/net/xsc/xsc_vfio.c
create mode 100644 drivers/net/xsc/xsc_vfio_mbox.c
create mode 100644 drivers/net/xsc/xsc_vfio_mbox.h
--
2.25.1
* Re: [PATCH v4 02/15] net/xsc: add xsc device initialization
2025-01-03 15:04 ` [PATCH v4 02/15] net/xsc: add xsc device initialization WanRenyong
@ 2025-01-03 18:58 ` Stephen Hemminger
2025-01-06 3:29 ` WanRenyong
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 18:58 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:08 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +struct xsc_hwinfo {
> + uint8_t valid; /* 1: current phy info is valid, 0 : invalid */
> + uint32_t pcie_no; /* pcie number , 0 or 1 */
> + uint32_t func_id; /* pf glb func id */
> + uint32_t pcie_host; /* host pcie number */
> + uint32_t mac_phy_port; /* mac port */
> + uint32_t funcid_to_logic_port_off; /* port func id offset */
> + uint16_t lag_id;
> + uint16_t raw_qp_id_base;
> + uint16_t raw_rss_qp_id_base;
> + uint16_t pf0_vf_funcid_base;
> + uint16_t pf0_vf_funcid_top;
> + uint16_t pf1_vf_funcid_base;
> + uint16_t pf1_vf_funcid_top;
> + uint16_t pcie0_pf_funcid_base;
> + uint16_t pcie0_pf_funcid_top;
> + uint16_t pcie1_pf_funcid_base;
> + uint16_t pcie1_pf_funcid_top;
> + uint16_t lag_port_start;
> + uint16_t raw_tpe_qp_num;
> + int send_seg_num;
> + int recv_seg_num;
> + uint8_t on_chip_tbl_vld;
> + uint8_t dma_rw_tbl_vld;
> + uint8_t pct_compress_vld;
> + uint32_t chip_version;
> + uint32_t hca_core_clock;
> + uint8_t mac_bit;
> + uint8_t esw_mode;
> +};
Can you rearrange the elements in this structure so there are fewer holes?
Or is it shared with the hardware?
Unless you need a negative value as a sentinel, avoid use of int where
unsigned could be used for seg_num.
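As an illustration only (a trimmed, hypothetical subset of the fields,
sorted by descending size so that same-size members pack together without
padding):

	#include <stdint.h>

	struct xsc_hwinfo {
		uint32_t pcie_no;        /* 4-byte members first */
		uint32_t func_id;
		uint32_t chip_version;
		uint32_t hca_core_clock;
		uint32_t send_seg_num;   /* unsigned: no sentinel needed */
		uint32_t recv_seg_num;
		uint16_t lag_id;         /* 2-byte members next */
		uint16_t raw_qp_id_base;
		uint8_t valid;           /* 1-byte members last */
		uint8_t mac_bit;
		uint8_t esw_mode;
	};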
* Re: [PATCH v4 01/15] net/xsc: add xsc PMD framework
2025-01-03 15:04 ` [PATCH v4 01/15] net/xsc: add xsc PMD framework WanRenyong
@ 2025-01-03 19:00 ` Stephen Hemminger
2025-01-06 1:36 ` WanRenyong
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:00 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:06 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +XSC Poll Mode Driver
> +======================
> +
> +The xsc PMD (**librte_net_xsc**) provides poll mode driver support for
> +10/25/50/100/200 Gbps Yunsilicon metaScale Series Network Adapters.
> +
> +Supported NICs
> +--------------
> +
> +The following Yunsilicon device models are supported by the same xsc driver:
> +
> + - metaScale-200S
> + - metaScale-200
> + - metaScale-100Q
> + - metaScale-50
> +
> +Prerequisites
> +--------------
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> +
> +- Learning about Yunsilicon metaScale Series NICs using
> + `<https://www.yunsilicon.com/#/productInformation>`_.
> +
> +Limitations or Known issues
> +---------------------------
> +32bit ARCHs are not supported.
> +Windows and BSD are not supported yet.
What kernel components does this driver expect? Are they all available in current kernels?
* Re: [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
2025-01-03 15:04 ` [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver WanRenyong
@ 2025-01-03 19:02 ` Stephen Hemminger
2025-01-06 1:53 ` WanRenyong
2025-01-03 19:04 ` Stephen Hemminger
2025-01-03 19:06 ` Stephen Hemminger
2 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:02 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:13 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +static int
> +xsc_vfio_destroy_qp(void *qp)
> +{
> + int ret;
> + int in_len, out_len, cmd_len;
> + struct xsc_cmd_destroy_qp_mbox_in *in;
> + struct xsc_cmd_destroy_qp_mbox_out *out;
> + struct xsc_vfio_qp *data = (struct xsc_vfio_qp *)qp;
> +
> + in_len = sizeof(struct xsc_cmd_destroy_qp_mbox_in);
> + out_len = sizeof(struct xsc_cmd_destroy_qp_mbox_out);
> + cmd_len = RTE_MAX(in_len, out_len);
> +
> + in = malloc(cmd_len);
> + if (in == NULL) {
> + rte_errno = ENOMEM;
> + PMD_DRV_LOG(ERR, "Failed to alloc qp destroy cmd memory");
> + return -rte_errno;
> + }
> + memset(in, 0, cmd_len);
If this data structure needs to be shared between primary and secondary
processes, then it needs to be allocated with rte_malloc(). If it does not
need to be shared, then it can come from the heap with malloc().
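A rough sketch of the distinction (the names and sizes here are
illustrative, not from the patch):

	#include <stdlib.h>
	#include <rte_malloc.h>

	size_t cmd_len = 256, shared_len = 4096;    /* example sizes */

	/* Temporary command buffer, private to the calling process:
	 * plain heap is fine. */
	void *in = malloc(cmd_len);

	/* State that a secondary process must also see has to come from
	 * the DPDK shared hugepage heap instead. */
	void *shared = rte_malloc("xsc_shared", shared_len, RTE_CACHE_LINE_SIZE);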
* Re: [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
2025-01-03 15:04 ` [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver WanRenyong
2025-01-03 19:02 ` Stephen Hemminger
@ 2025-01-03 19:04 ` Stephen Hemminger
2025-01-06 2:01 ` WanRenyong
2025-01-03 19:06 ` Stephen Hemminger
2 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:04 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:13 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +static int
> +xsc_vfio_set_mtu(struct xsc_dev *xdev, uint16_t mtu)
> +{
> + struct xsc_cmd_set_mtu_mbox_in in;
> + struct xsc_cmd_set_mtu_mbox_out out;
> + int ret;
> +
> + memset(&in, 0, sizeof(in));
> + memset(&out, 0, sizeof(out));
Optionally, you can initialize on-stack variables with:
struct xsc_cmd_set_mtu_mbox_in in = { };
Either way is ok, it is up to you.
* Re: [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
2025-01-03 15:04 ` [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver WanRenyong
2025-01-03 19:02 ` Stephen Hemminger
2025-01-03 19:04 ` Stephen Hemminger
@ 2025-01-03 19:06 ` Stephen Hemminger
2025-01-06 2:02 ` WanRenyong
2 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:06 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:13 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +static int
> +xsc_vfio_get_mac(struct xsc_dev *xdev, uint8_t *mac)
> +{
> + struct xsc_cmd_query_eth_mac_mbox_in in;
> + struct xsc_cmd_query_eth_mac_mbox_out out;
> + int ret;
> +
> + memset(&in, 0, sizeof(in));
> + memset(&out, 0, sizeof(out));
> + in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_QUERY_ETH_MAC);
> + ret = xsc_vfio_mbox_exec(xdev, &in, sizeof(in), &out, sizeof(out));
> + if (ret != 0 || out.hdr.status != 0) {
> + PMD_DRV_LOG(ERR, "Failed to get mtu, port=%d, err=%d, out.status=%u",
> + xdev->port_id, ret, out.hdr.status);
> + rte_errno = ENOEXEC;
> + return -rte_errno;
> + }
> +
> + memcpy(mac, out.mac, 6);
Prefer to use RTE_ETHER_ADDR_LEN rather than 6.
Or use rte_ether_addr_copy().
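i.e. something like (untested; RTE_ETHER_ADDR_LEN and rte_ether_addr_copy()
come from rte_ether.h):

	memcpy(mac, out.mac, RTE_ETHER_ADDR_LEN);

or, when both ends are struct rte_ether_addr:

	struct rte_ether_addr src, dst;    /* hypothetical variables */
	rte_ether_addr_copy(&src, &dst);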
* Re: [PATCH v4 07/15] net/xsc: add ethdev configure and RSS ops
2025-01-03 15:04 ` [PATCH v4 07/15] net/xsc: add ethdev configure and RSS ops WanRenyong
@ 2025-01-03 19:14 ` Stephen Hemminger
2025-01-06 2:20 ` WanRenyong
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:14 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:19 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +static int
> +xsc_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
> +
> + if (!rss_conf) {
> + rte_errno = EINVAL;
> + return -rte_errno;
> + }
The parameter rss_conf is already checked for null in rte_eth_dev_rss_hash_conf_get().
> +static int
> +xsc_ethdev_rss_hash_update(struct rte_eth_dev *dev,
> + struct rte_eth_rss_conf *rss_conf)
> +{
> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
> + int ret = 0;
> +
> + if (rss_conf->rss_key_len > XSC_RSS_HASH_KEY_LEN || rss_conf->rss_key == NULL) {
> + PMD_DRV_LOG(ERR, "Xsc pmd key len is %d bigger than %d",
> + rss_conf->rss_key_len, XSC_RSS_HASH_KEY_LEN);
> + return -EINVAL;
> + }
Key length is already validated against the value returned in
dev_info.hash_key_size before this is called by rte_eth_dev_rss_hash_update().
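Roughly, the ethdev layer already does something like this before calling
into the driver (paraphrased, not the exact lib/ethdev code; port_id and
rss_conf are the caller's values):

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (rss_conf->rss_key != NULL &&
	    rss_conf->rss_key_len != dev_info.hash_key_size)
		return -EINVAL;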
> +static int
> +xsc_ethdev_configure(struct rte_eth_dev *dev)
> +{
> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
> + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> + int ret;
> + struct rte_eth_rss_conf *rss_conf;
> +
> + priv->num_sq = dev->data->nb_tx_queues;
> + priv->num_rq = dev->data->nb_rx_queues;
> +
> + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
> + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
> +
> + if (priv->rss_conf.rss_key == NULL) {
> + priv->rss_conf.rss_key = rte_zmalloc(NULL, XSC_RSS_HASH_KEY_LEN,
> + RTE_CACHE_LINE_SIZE);
> + if (priv->rss_conf.rss_key == NULL) {
> + PMD_DRV_LOG(ERR, "Failed to alloc rss key");
> + rte_errno = ENOMEM;
> + ret = -rte_errno;
> + goto error;
> + }
> + priv->rss_conf.rss_key_len = XSC_RSS_HASH_KEY_LEN;
> + }
> +
> + if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
> + rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
> + ret = xsc_ethdev_rss_hash_update(dev, rss_conf);
> + if (ret != 0) {
> + PMD_DRV_LOG(ERR, "Xsc pmd set rss key error!");
> + rte_errno = -ENOEXEC;
> + goto error;
> + }
> + }
> +
> + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
> + PMD_DRV_LOG(ERR, "Xsc pmd do not support vlan filter now!");
> + rte_errno = EINVAL;
> + goto error;
> + }
> +
> + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
> + PMD_DRV_LOG(ERR, "Xsc pmd do not support vlan strip now!");
> + rte_errno = EINVAL;
> + goto error;
> + }
These offload flags are already validated against rx_offload_capa by rte_eth_dev_configure().
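Roughly (again paraphrased, not the exact lib/ethdev code):

	/* Requested Rx offloads must be a subset of the capabilities
	 * the driver reports via dev_infos_get. */
	if ((rxmode->offloads & dev_info.rx_offload_capa) != rxmode->offloads)
		return -EINVAL;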
* Re: [PATCH v4 09/15] net/xsc: add ethdev start
2025-01-03 15:04 ` [PATCH v4 09/15] net/xsc: add ethdev start WanRenyong
@ 2025-01-03 19:17 ` Stephen Hemminger
2025-01-06 3:01 ` WanRenyong
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:17 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:23 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +static int
> +xsc_ethdev_start(struct rte_eth_dev *dev)
> +{
> + int ret;
> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
> +
> + ret = xsc_txq_start(priv);
> + if (ret) {
> + PMD_DRV_LOG(ERR, "Port %u txq start failed: %s",
> + dev->data->port_id, strerror(rte_errno));
> + goto error;
> + }
> +
> + ret = xsc_rxq_start(priv);
> + if (ret) {
> + PMD_DRV_LOG(ERR, "Port %u Rx queue start failed: %s",
> + dev->data->port_id, strerror(rte_errno));
> + goto error;
> + }
> +
> + dev->data->dev_started = 1;
> +
> + rte_wmb();
In general, it is preferred that DPDK drivers use rte_atomic to get
finer-grained control over shared variables, rather than using volatile
and barriers. This is not an absolute requirement, but it is preferred
and improves performance on weakly ordered platforms.
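For example, a minimal sketch with rte_stdatomic.h (the flag here is
hypothetical, not the actual ethdev field):

	#include <rte_stdatomic.h>

	static RTE_ATOMIC(uint16_t) dev_started;

	static void writer(void)
	{
		/* The release store publishes all prior queue setup. */
		rte_atomic_store_explicit(&dev_started, 1, rte_memory_order_release);
	}

	static uint16_t reader(void)
	{
		/* The acquire load pairs with the release store. */
		return rte_atomic_load_explicit(&dev_started, rte_memory_order_acquire);
	}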
* Re: [PATCH v4 14/15] net/xsc: add ethdev infos get
2025-01-03 15:04 ` [PATCH v4 14/15] net/xsc: add ethdev infos get WanRenyong
@ 2025-01-03 19:22 ` Stephen Hemminger
2025-01-06 4:03 ` WanRenyong
0 siblings, 1 reply; 32+ messages in thread
From: Stephen Hemminger @ 2025-01-03 19:22 UTC (permalink / raw)
To: WanRenyong
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On Fri, 03 Jan 2025 23:04:35 +0800
"WanRenyong" <wanry@yunsilicon.com> wrote:
> +
> +static int
> +xsc_ethdev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
> +{
> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
> +
> + info->min_rx_bufsize = 64;
> + info->max_rx_pktlen = 65536;
> + info->max_lro_pkt_size = 0;
> + info->max_rx_queues = 256;
> + info->max_tx_queues = 1024;
> + info->rx_desc_lim.nb_max = 4096;
> + info->rx_desc_lim.nb_min = 16;
> + info->tx_desc_lim.nb_max = 8192;
> + info->tx_desc_lim.nb_min = 128;
> +
> + info->rx_queue_offload_capa = xsc_get_rx_queue_offloads(dev);
> + info->rx_offload_capa = info->rx_queue_offload_capa;
> + info->tx_offload_capa = xsc_get_tx_port_offloads(dev);
> +
> + info->if_index = priv->ifindex;
> + info->speed_capa = priv->xdev->link_speed_capa;
> + info->hash_key_size = XSC_RSS_HASH_KEY_LEN;
> + info->tx_desc_lim.nb_seg_max = 8;
> + info->tx_desc_lim.nb_mtu_seg_max = 8;
> + info->switch_info.name = dev->data->name;
> + info->switch_info.port_id = priv->representor_id;
> + return 0;
> +}
> +
Note: that driver probably won't be at all functional without info_get
but as long as each patch builds, it doesn't matter to me what order the
patchset is in. Too hard to get a working driver at each step.
* Re: [PATCH v4 01/15] net/xsc: add xsc PMD framework
2025-01-03 19:00 ` Stephen Hemminger
@ 2025-01-06 1:36 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 1:36 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, qianr, nana, zhangxx, xudw, jacky, weihg
On 2025/1/4 3:00, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:06 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +XSC Poll Mode Driver
>> +======================
>> +
>> +The xsc PMD (**librte_net_xsc**) provides poll mode driver support for
>> +10/25/50/100/200 Gbps Yunsilicon metaScale Series Network Adapters.
>> +
>> +Supported NICs
>> +--------------
>> +
>> +The following Yunsilicon device models are supported by the same xsc driver:
>> +
>> + - metaScale-200S
>> + - metaScale-200
>> + - metaScale-100Q
>> + - metaScale-50
>> +
>> +Prerequisites
>> +--------------
>> +
>> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
>> +
>> +- Learning about Yunsilicon metaScale Series NICs using
>> + `<https://www.yunsilicon.com/#/productInformation>`_.
>> +
>> +Limitations or Known issues
>> +---------------------------
>> +32bit ARCHs are not supported.
>> +Windows and BSD are not supported yet.
> What kernel components does this driver expect? Are they all available in current kernels?
Currently this driver expects only the vfio-pci kernel module, which is
the one recommended by DPDK.
--
Thanks,
WanRenyong
* Re: [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
2025-01-03 19:02 ` Stephen Hemminger
@ 2025-01-06 1:53 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 1:53 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, qianr, nana, zhangxx, xudw, jacky, weihg
On 2025/1/4 3:02, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:13 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +static int
>> +xsc_vfio_destroy_qp(void *qp)
>> +{
>> + int ret;
>> + int in_len, out_len, cmd_len;
>> + struct xsc_cmd_destroy_qp_mbox_in *in;
>> + struct xsc_cmd_destroy_qp_mbox_out *out;
>> + struct xsc_vfio_qp *data = (struct xsc_vfio_qp *)qp;
>> +
>> + in_len = sizeof(struct xsc_cmd_destroy_qp_mbox_in);
>> + out_len = sizeof(struct xsc_cmd_destroy_qp_mbox_out);
>> + cmd_len = RTE_MAX(in_len, out_len);
>> +
>> + in = malloc(cmd_len);
>> + if (in == NULL) {
>> + rte_errno = ENOMEM;
>> + PMD_DRV_LOG(ERR, "Failed to alloc qp destroy cmd memory");
>> + return -rte_errno;
>> + }
>> + memset(in, 0, cmd_len);
> If this data structure needs to be shared between primary and secondary process,
> then it needs to be allocated with rte_malloc(). If it does not need to be
> shared, then it can come from heap with malloc().
No, this data structure is not shared between primary and secondary
processes; it is temporary memory used to initialize the mailbox data
structure.
--
Thanks,
WanRenyong
* Re: [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
2025-01-03 19:04 ` Stephen Hemminger
@ 2025-01-06 2:01 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 2:01 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, qianr, nana, zhangxx, xudw, jacky, weihg
On 2025/1/4 3:04, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:13 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +static int
>> +xsc_vfio_set_mtu(struct xsc_dev *xdev, uint16_t mtu)
>> +{
>> + struct xsc_cmd_set_mtu_mbox_in in;
>> + struct xsc_cmd_set_mtu_mbox_out out;
>> + int ret;
>> +
>> + memset(&in, 0, sizeof(in));
>> + memset(&out, 0, sizeof(out));
> Optionally, you can initalize on stack variables with:
> struct xsc_cmd_set_mtu_mbox_in in = { };
>
> Either way is ok, it is up to you.
Got it.
--
Thanks,
WanRenyong
* Re: [PATCH v4 04/15] net/xsc: add xsc dev ops to support VFIO driver
2025-01-03 19:06 ` Stephen Hemminger
@ 2025-01-06 2:02 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 2:02 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On 2025/1/4 3:06, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:13 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +static int
>> +xsc_vfio_get_mac(struct xsc_dev *xdev, uint8_t *mac)
>> +{
>> + struct xsc_cmd_query_eth_mac_mbox_in in;
>> + struct xsc_cmd_query_eth_mac_mbox_out out;
>> + int ret;
>> +
>> + memset(&in, 0, sizeof(in));
>> + memset(&out, 0, sizeof(out));
>> + in.hdr.opcode = rte_cpu_to_be_16(XSC_CMD_OP_QUERY_ETH_MAC);
>> + ret = xsc_vfio_mbox_exec(xdev, &in, sizeof(in), &out, sizeof(out));
>> + if (ret != 0 || out.hdr.status != 0) {
>> + PMD_DRV_LOG(ERR, "Failed to get mtu, port=%d, err=%d, out.status=%u",
>> + xdev->port_id, ret, out.hdr.status);
>> + rte_errno = ENOEXEC;
>> + return -rte_errno;
>> + }
>> +
>> + memcpy(mac, out.mac, 6);
> Prefer to use RTE_ETHER_ADDR_LEN rather than 6.
> Or use rte_ether_addr_copy
will fix it in the next version.
--
Thanks,
WanRenyong
* Re: [PATCH v4 07/15] net/xsc: add ethdev configure and RSS ops
2025-01-03 19:14 ` Stephen Hemminger
@ 2025-01-06 2:20 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 2:20 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, qianr, nana, zhangxx, xudw, jacky, weihg
On 2025/1/4 3:14, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:19 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +static int
>> +xsc_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
>> + struct rte_eth_rss_conf *rss_conf)
>> +{
>> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
>> +
>> + if (!rss_conf) {
>> + rte_errno = EINVAL;
>> + return -rte_errno;
>> + }
> The parameter rss_conf is already checked for null in rte_eth_dev_rss_hash_conf_get().
will remove rss_conf check in the next version.
>
>> +static int
>> +xsc_ethdev_rss_hash_update(struct rte_eth_dev *dev,
>> + struct rte_eth_rss_conf *rss_conf)
>> +{
>> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
>> + int ret = 0;
>> +
>> + if (rss_conf->rss_key_len > XSC_RSS_HASH_KEY_LEN || rss_conf->rss_key == NULL) {
>> + PMD_DRV_LOG(ERR, "Xsc pmd key len is %d bigger than %d",
>> + rss_conf->rss_key_len, XSC_RSS_HASH_KEY_LEN);
>> + return -EINVAL;
>> + }
> Key length is already validated against value returned from dev_info.hash_key_size before
> this is called by rte_eth_dev_rss_hash_update().
will remove the key length validation in the next version.
>> +static int
>> +xsc_ethdev_configure(struct rte_eth_dev *dev)
>> +{
>> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
>> + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
>> + int ret;
>> + struct rte_eth_rss_conf *rss_conf;
>> +
>> + priv->num_sq = dev->data->nb_tx_queues;
>> + priv->num_rq = dev->data->nb_rx_queues;
>> +
>> + if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
>> + dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
>> +
>> + if (priv->rss_conf.rss_key == NULL) {
>> + priv->rss_conf.rss_key = rte_zmalloc(NULL, XSC_RSS_HASH_KEY_LEN,
>> + RTE_CACHE_LINE_SIZE);
>> + if (priv->rss_conf.rss_key == NULL) {
>> + PMD_DRV_LOG(ERR, "Failed to alloc rss key");
>> + rte_errno = ENOMEM;
>> + ret = -rte_errno;
>> + goto error;
>> + }
>> + priv->rss_conf.rss_key_len = XSC_RSS_HASH_KEY_LEN;
>> + }
>> +
>> + if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
>> + rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
>> + ret = xsc_ethdev_rss_hash_update(dev, rss_conf);
>> + if (ret != 0) {
>> + PMD_DRV_LOG(ERR, "Xsc pmd set rss key error!");
>> + rte_errno = -ENOEXEC;
>> + goto error;
>> + }
>> + }
>> +
>> + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
>> + PMD_DRV_LOG(ERR, "Xsc pmd do not support vlan filter now!");
>> + rte_errno = EINVAL;
>> + goto error;
>> + }
>> +
>> + if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
>> + PMD_DRV_LOG(ERR, "Xsc pmd do not support vlan strip now!");
>> + rte_errno = EINVAL;
>> + goto error;
>> + }
> These offload flags are already validated against rx_offload_capa by rte_eth_dev_configure().
will remove these offload flags validation in the next version.
>
>
--
Thanks,
WanRenyong
* Re: [PATCH v4 09/15] net/xsc: add ethdev start
2025-01-03 19:17 ` Stephen Hemminger
@ 2025-01-06 3:01 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 3:01 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, qianr, nana, zhangxx, xudw, jacky, weihg
On 2025/1/4 3:17, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:23 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +static int
>> +xsc_ethdev_start(struct rte_eth_dev *dev)
>> +{
>> + int ret;
>> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
>> +
>> + ret = xsc_txq_start(priv);
>> + if (ret) {
>> + PMD_DRV_LOG(ERR, "Port %u txq start failed: %s",
>> + dev->data->port_id, strerror(rte_errno));
>> + goto error;
>> + }
>> +
>> + ret = xsc_rxq_start(priv);
>> + if (ret) {
>> + PMD_DRV_LOG(ERR, "Port %u Rx queue start failed: %s",
>> + dev->data->port_id, strerror(rte_errno));
>> + goto error;
>> + }
>> +
>> + dev->data->dev_started = 1;
>> +
>> + rte_wmb();
> In general, it is preferred that DPDK drivers use rte_atomic to get
> finer grain control over shared variables. Rather than using volatile
> and barriers. This is not an absolute requirement, but something
> that is preferred and improves performance on weakly ordered platforms.
Understood, maybe rte_wmb is not necessary here; will remove it in the
next version.
--
Thanks,
WanRenyong
* Re: [PATCH v4 02/15] net/xsc: add xsc device initialization
2025-01-03 18:58 ` Stephen Hemminger
@ 2025-01-06 3:29 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 3:29 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, qianr, nana, zhangxx, xudw, jacky, weihg
On 2025/1/4 2:58, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:08 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +struct xsc_hwinfo {
>> + uint8_t valid; /* 1: current phy info is valid, 0 : invalid */
>> + uint32_t pcie_no; /* pcie number , 0 or 1 */
>> + uint32_t func_id; /* pf glb func id */
>> + uint32_t pcie_host; /* host pcie number */
>> + uint32_t mac_phy_port; /* mac port */
>> + uint32_t funcid_to_logic_port_off; /* port func id offset */
>> + uint16_t lag_id;
>> + uint16_t raw_qp_id_base;
>> + uint16_t raw_rss_qp_id_base;
>> + uint16_t pf0_vf_funcid_base;
>> + uint16_t pf0_vf_funcid_top;
>> + uint16_t pf1_vf_funcid_base;
>> + uint16_t pf1_vf_funcid_top;
>> + uint16_t pcie0_pf_funcid_base;
>> + uint16_t pcie0_pf_funcid_top;
>> + uint16_t pcie1_pf_funcid_base;
>> + uint16_t pcie1_pf_funcid_top;
>> + uint16_t lag_port_start;
>> + uint16_t raw_tpe_qp_num;
>> + int send_seg_num;
>> + int recv_seg_num;
>> + uint8_t on_chip_tbl_vld;
>> + uint8_t dma_rw_tbl_vld;
>> + uint8_t pct_compress_vld;
>> + uint32_t chip_version;
>> + uint32_t hca_core_clock;
>> + uint8_t mac_bit;
>> + uint8_t esw_mode;
>> +};
> Can you rearrange elements in this structure so there are less holes?
> Or is it shared with the hardware.
OK, I will try to rearrange the elements in the next version.
It's not shared with hardware; it is just used to store hardware info.
>
> Unless you need negative value as a sentinel, avoid use of int where unsigned could be used for
> seg_num.
Got it, will fix it in the next version.
--
Thanks,
WanRenyong
* Re: [PATCH v4 14/15] net/xsc: add ethdev infos get
2025-01-03 19:22 ` Stephen Hemminger
@ 2025-01-06 4:03 ` WanRenyong
0 siblings, 0 replies; 32+ messages in thread
From: WanRenyong @ 2025-01-06 4:03 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, ferruh.yigit, thomas, andrew.rybchenko, qianr, nana,
zhangxx, xudw, jacky, weihg
On 2025/1/4 3:22, Stephen Hemminger wrote:
> On Fri, 03 Jan 2025 23:04:35 +0800
> "WanRenyong" <wanry@yunsilicon.com> wrote:
>
>> +
>> +static int
>> +xsc_ethdev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
>> +{
>> + struct xsc_ethdev_priv *priv = TO_XSC_ETHDEV_PRIV(dev);
>> +
>> + info->min_rx_bufsize = 64;
>> + info->max_rx_pktlen = 65536;
>> + info->max_lro_pkt_size = 0;
>> + info->max_rx_queues = 256;
>> + info->max_tx_queues = 1024;
>> + info->rx_desc_lim.nb_max = 4096;
>> + info->rx_desc_lim.nb_min = 16;
>> + info->tx_desc_lim.nb_max = 8192;
>> + info->tx_desc_lim.nb_min = 128;
>> +
>> + info->rx_queue_offload_capa = xsc_get_rx_queue_offloads(dev);
>> + info->rx_offload_capa = info->rx_queue_offload_capa;
>> + info->tx_offload_capa = xsc_get_tx_port_offloads(dev);
>> +
>> + info->if_index = priv->ifindex;
>> + info->speed_capa = priv->xdev->link_speed_capa;
>> + info->hash_key_size = XSC_RSS_HASH_KEY_LEN;
>> + info->tx_desc_lim.nb_seg_max = 8;
>> + info->tx_desc_lim.nb_mtu_seg_max = 8;
>> + info->switch_info.name = dev->data->name;
>> + info->switch_info.port_id = priv->representor_id;
>> + return 0;
>> +}
>> +
> Note: that driver probably won't be at all functional without info_get
> but as long as each patch builds, it doesn't matter to me what order the
> patchset is in. Too hard to get a working driver at each step.
Agree with you. When I made the patches, I referred to the suggestion on
adding a new driver in the DPDK Contributor's Guidelines, where the device
info get is recommended to be placed at a later position.
Altering the order of the patches is a bit of a hassle, so if it's
acceptable, I am going to leave it as is.
--
Thanks,
WanRenyong