* [PATCH 0/8] Add ODM DMA device
@ 2024-04-15 15:31 Anoob Joseph
2024-04-15 15:31 ` [PATCH 1/8] usertools/devbind: add " Anoob Joseph
` (8 more replies)
0 siblings, 9 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add the Odyssey ODM DMA device. This PMD abstracts the ODM hardware unit
on the Odyssey SoC, which performs memory-to-memory copies.
The hardware unit supports up to 32 queues (vchans) and 16 VFs. It also
supports 'fill' operations with specific values, as well as SG (scatter-gather)
transfers with up to 4 source pointers and 4 destination pointers.
The PMD is tested with both unit tests and performance applications.
Anoob Joseph (3):
usertools/devbind: add ODM DMA device
dma/odm: add framework for ODM DMA device
dma/odm: add hardware defines
Gowrishankar Muthukrishnan (3):
dma/odm: add dev init and fini
dma/odm: add device ops
dma/odm: add stats
Vidya Sagar Velumuri (2):
dma/odm: add copy and copy sg ops
dma/odm: add remaining ops
MAINTAINERS | 7 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +
drivers/dma/odm/odm.c | 237 ++++++++++++
drivers/dma/odm/odm.h | 217 +++++++++++
drivers/dma/odm/odm_dmadev.c | 710 +++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 +++
usertools/dpdk-devbind.py | 5 +-
10 files changed, 1332 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/dmadevs/odm.rst
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.c
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
create mode 100644 drivers/dma/odm/odm_priv.h
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH 1/8] usertools/devbind: add ODM DMA device
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add support for the ODM DMA device in devbind.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
usertools/dpdk-devbind.py | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index a278f5e7f3..6493877caa 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -48,6 +48,8 @@
hisilicon_dma = {'Class': '08', 'Vendor': '19e5', 'Device': 'a122',
'SVendor': None, 'SDevice': None}
+odm_dma = {'Class': '08', 'Vendor': '177d', 'Device': 'a08c',
+ 'SVendor': None, 'SDevice': None}
intel_dlb = {'Class': '0b', 'Vendor': '8086', 'Device': '270b,2710,2714',
'SVendor': None, 'SDevice': None}
@@ -82,7 +84,8 @@
baseband_devices = [acceleration_class]
crypto_devices = [encryption_class, intel_processor_class]
dma_devices = [cnxk_dma, hisilicon_dma,
- intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx]
+ intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx,
+ odm_dma]
eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, cnxk_sso]
mempool_devices = [cavium_fpa, cnxk_npa]
compress_devices = [cavium_zip]
--
2.25.1
* [PATCH 2/8] dma/odm: add framework for ODM DMA device
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add framework for Odyssey ODM DMA device.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 6 +++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +++++++
drivers/dma/odm/odm.h | 29 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 74 ++++++++++++++++++++++++++++++++++++
5 files changed, 124 insertions(+)
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7abb3aee49..b8d2f7b3d8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1268,6 +1268,12 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
+Marvell Odyssey ODM DMA
+M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/dma/odm/
+
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index 582654ea1b..358132759a 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -8,6 +8,7 @@ drivers = [
'hisilicon',
'idxd',
'ioat',
+ 'odm',
'skeleton',
]
std_deps = ['dmadev']
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
new file mode 100644
index 0000000000..227b10c890
--- /dev/null
+++ b/drivers/dma/odm/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2024 Marvell.
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
+
+sources = files('odm_dmadev.c')
+
+pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
new file mode 100644
index 0000000000..aeeb6f9e9a
--- /dev/null
+++ b/drivers/dma/odm/odm.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_H_
+#define _ODM_H_
+
+#include <rte_log.h>
+
+extern int odm_logtype;
+
+#define odm_err(...) \
+ rte_log(RTE_LOG_ERR, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_info(...) \
+ rte_log(RTE_LOG_INFO, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+
+struct __rte_cache_aligned odm_dev {
+ struct rte_pci_device *pci_dev;
+ uint8_t *rbase;
+ uint16_t vfid;
+ uint8_t max_qs;
+ uint8_t num_qs;
+};
+
+#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
new file mode 100644
index 0000000000..cc3342cf7b
--- /dev/null
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <string.h>
+
+#include <bus_pci_driver.h>
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_dmadev.h>
+#include <rte_dmadev_pmd.h>
+#include <rte_pci.h>
+
+#include "odm.h"
+
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
+#define PCI_DRIVER_NAME dma_odm
+
+static int
+odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+ struct odm_dev *odm = NULL;
+ struct rte_dma_dev *dmadev;
+
+ if (!pci_dev->mem_resource[0].addr)
+ return -ENODEV;
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*odm));
+ if (dmadev == NULL) {
+ odm_err("DMA device allocation failed for %s", name);
+ return -ENOMEM;
+ }
+
+ odm_info("DMA device %s probed", name);
+
+ return 0;
+}
+
+static int
+odm_dmadev_remove(struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ return rte_dma_pmd_release(name);
+}
+
+static const struct rte_pci_id odm_dma_pci_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_ODYSSEY_ODM_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver odm_dmadev = {
+ .id_table = odm_dma_pci_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = odm_dmadev_probe,
+ .remove = odm_dmadev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(PCI_DRIVER_NAME, odm_dmadev);
+RTE_PMD_REGISTER_PCI_TABLE(PCI_DRIVER_NAME, odm_dma_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(PCI_DRIVER_NAME, "vfio-pci");
+RTE_LOG_REGISTER_DEFAULT(odm_logtype, NOTICE);
--
2.25.1
* [PATCH 3/8] dma/odm: add hardware defines
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add ODM registers and structures, including the mailbox structs.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.h | 116 +++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 ++++++++++++++++
2 files changed, 165 insertions(+)
create mode 100644 drivers/dma/odm/odm_priv.h
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index aeeb6f9e9a..7564ffbed4 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,6 +9,47 @@
extern int odm_logtype;
+/* ODM VF register offsets from VF_BAR0 */
+#define ODM_VDMA_EN(x) (0x00 | ((x) << 3))
+#define ODM_VDMA_REQQ_CTL(x) (0x80 | ((x) << 3))
+#define ODM_VDMA_DBELL(x) (0x100 | ((x) << 3))
+#define ODM_VDMA_RING_CFG(x) (0x180 | ((x) << 3))
+#define ODM_VDMA_IRING_BADDR(x) (0x200 | ((x) << 3))
+#define ODM_VDMA_CRING_BADDR(x) (0x280 | ((x) << 3))
+#define ODM_VDMA_COUNTS(x) (0x300 | ((x) << 3))
+#define ODM_VDMA_IRING_NADDR(x) (0x380 | ((x) << 3))
+#define ODM_VDMA_CRING_NADDR(x) (0x400 | ((x) << 3))
+#define ODM_VDMA_IRING_DBG(x) (0x480 | ((x) << 3))
+#define ODM_VDMA_CNT(x) (0x580 | ((x) << 3))
+#define ODM_VF_INT (0x1000)
+#define ODM_VF_INT_W1S (0x1008)
+#define ODM_VF_INT_ENA_W1C (0x1010)
+#define ODM_VF_INT_ENA_W1S (0x1018)
+#define ODM_MBOX_VF_PF_DATA(i) (0x2000 | ((i) << 3))
+
+#define ODM_MBOX_RETRY_CNT (0xfffffff)
+#define ODM_MBOX_ERR_CODE_MAX (0xf)
+#define ODM_IRING_IDLE_WAIT_CNT (0xfffffff)
+
+/**
+ * Enumeration odm_hdr_xtype_e
+ *
+ * ODM Transfer Type Enumeration
+ * Enumerates the pointer type in ODM_DMA_INSTR_HDR_S[XTYPE]
+ */
+#define ODM_XTYPE_INTERNAL 2
+#define ODM_XTYPE_FILL0 4
+#define ODM_XTYPE_FILL1 5
+
+/**
+ * ODM Header completion type enumeration
+ * Enumerates the completion type in ODM_DMA_INSTR_HDR_S[CT]
+ */
+#define ODM_HDR_CT_CW_CA 0x0
+#define ODM_HDR_CT_CW_NC 0x1
+
+#define ODM_MAX_QUEUES_PER_DEV 16
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -18,6 +59,81 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+/**
+ * Structure odm_instr_hdr_s for ODM
+ *
+ * ODM DMA Instruction Header Format
+ */
+union odm_instr_hdr_s {
+ uint64_t u;
+ struct odm_instr_hdr {
+ uint64_t nfst : 3;
+ uint64_t reserved_3 : 1;
+ uint64_t nlst : 3;
+ uint64_t reserved_7_9 : 3;
+ uint64_t ct : 2;
+ uint64_t stse : 1;
+ uint64_t reserved_13_28 : 16;
+ uint64_t sts : 1;
+ uint64_t reserved_30_49 : 20;
+ uint64_t xtype : 3;
+ uint64_t reserved_53_63 : 11;
+ } s;
+};
+
+/**
+ * ODM Completion Entry Structure
+ *
+ */
+union odm_cmpl_ent_s {
+ uint32_t u;
+ struct odm_cmpl_ent {
+ uint32_t cmp_code : 8;
+ uint32_t rsvd : 23;
+ uint32_t valid : 1;
+ } s;
+};
+
+/**
+ * ODM DMA Ring Configuration Register
+ */
+union odm_vdma_ring_cfg_s {
+ uint64_t u;
+ struct {
+ uint64_t isize : 8;
+ uint64_t rsvd_8_15 : 8;
+ uint64_t csize : 8;
+ uint64_t rsvd_24_63 : 40;
+ } s;
+};
+
+/**
+ * ODM DMA Instruction Ring DBG
+ */
+union odm_vdma_iring_dbg_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell_cnt : 32;
+ uint64_t offset : 16;
+ uint64_t rsvd_48_62 : 15;
+ uint64_t iwbusy : 1;
+ } s;
+};
+
+/**
+ * ODM DMA Counts
+ */
+union odm_vdma_counts_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell : 32;
+ uint64_t buf_used_cnt : 9;
+ uint64_t rsvd_41_43 : 3;
+ uint64_t rsvd_buf_used_cnt : 3;
+ uint64_t rsvd_47_63 : 17;
+ } s;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
uint8_t *rbase;
diff --git a/drivers/dma/odm/odm_priv.h b/drivers/dma/odm/odm_priv.h
new file mode 100644
index 0000000000..1878f4d9a6
--- /dev/null
+++ b/drivers/dma/odm/odm_priv.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_PRIV_H_
+#define _ODM_PRIV_H_
+
+#define ODM_MAX_VFS 16
+#define ODM_MAX_QUEUES 32
+
+#define ODM_CMD_QUEUE_SIZE 4096
+
+#define ODM_DEV_INIT 0x1
+#define ODM_DEV_CLOSE 0x2
+#define ODM_QUEUE_OPEN 0x3
+#define ODM_QUEUE_CLOSE 0x4
+#define ODM_REG_DUMP 0x5
+
+struct odm_mbox_dev_msg {
+ /* Response code */
+ uint64_t rsp : 8;
+ /* Number of VFs */
+ uint64_t nvfs : 2;
+ /* Error code */
+ uint64_t err : 6;
+ /* Reserved */
+ uint64_t rsvd_16_63 : 48;
+};
+
+struct odm_mbox_queue_msg {
+ /* Command code */
+ uint64_t cmd : 8;
+ /* VF ID to configure */
+ uint64_t vfid : 8;
+ /* Queue index in the VF */
+ uint64_t qidx : 8;
+ /* Reserved */
+ uint64_t rsvd_24_63 : 40;
+};
+
+union odm_mbox_msg {
+ uint64_t u[2];
+ struct {
+ struct odm_mbox_dev_msg d;
+ struct odm_mbox_queue_msg q;
+ };
+};
+
+#endif /* _ODM_PRIV_H_ */
--
2.25.1
* [PATCH 4/8] dma/odm: add dev init and fini
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add ODM device init and fini.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/meson.build | 2 +-
drivers/dma/odm/odm.c | 97 ++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm.h | 10 ++++
drivers/dma/odm/odm_dmadev.c | 13 +++++
4 files changed, 121 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma/odm/odm.c
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
index 227b10c890..d597762d37 100644
--- a/drivers/dma/odm/meson.build
+++ b/drivers/dma/odm/meson.build
@@ -9,6 +9,6 @@ endif
deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
-sources = files('odm_dmadev.c')
+sources = files('odm_dmadev.c', 'odm.c')
pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
new file mode 100644
index 0000000000..c0963da451
--- /dev/null
+++ b/drivers/dma/odm/odm.c
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <stdint.h>
+
+#include <bus_pci_driver.h>
+
+#include <rte_io.h>
+
+#include "odm.h"
+#include "odm_priv.h"
+
+static void
+odm_vchan_resc_free(struct odm_dev *odm, int qno)
+{
+ RTE_SET_USED(odm);
+ RTE_SET_USED(qno);
+}
+
+static int
+send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg *rsp)
+{
+ int retry_cnt = ODM_MBOX_RETRY_CNT;
+ union odm_mbox_msg pf_msg;
+
+ msg->d.err = ODM_MBOX_ERR_CODE_MAX;
+ odm_write64(msg->u[0], odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ odm_write64(msg->u[1], odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ pf_msg.u[0] = 0;
+ pf_msg.u[1] = 0;
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+
+ while (pf_msg.d.rsp == 0 && retry_cnt > 0) {
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ --retry_cnt;
+ }
+
+ if (retry_cnt <= 0)
+ return -EBADE;
+
+ pf_msg.u[1] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ if (rsp) {
+ rsp->u[0] = pf_msg.u[0];
+ rsp->u[1] = pf_msg.u[1];
+ }
+
+ if (pf_msg.d.rsp == msg->d.err && pf_msg.d.err != 0)
+ return -EBADE;
+
+ return 0;
+}
+
+int
+odm_dev_init(struct odm_dev *odm)
+{
+ struct rte_pci_device *pci_dev = odm->pci_dev;
+ union odm_mbox_msg mbox_msg;
+ uint16_t vfid;
+ int rc;
+
+ odm->rbase = pci_dev->mem_resource[0].addr;
+ vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7);
+ vfid -= 1;
+ odm->vfid = vfid;
+ odm->num_qs = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_INIT;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (!rc)
+ odm->max_qs = 1 << (4 - mbox_msg.d.nvfs);
+
+ return rc;
+}
+
+int
+odm_dev_fini(struct odm_dev *odm)
+{
+ union odm_mbox_msg mbox_msg;
+ int qno, rc = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_CLOSE;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+
+ for (qno = 0; qno < odm->num_qs; qno++)
+ odm_vchan_resc_free(odm, qno);
+
+ return rc;
+}
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 7564ffbed4..9fd3e30ad8 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -5,6 +5,10 @@
#ifndef _ODM_H_
#define _ODM_H_
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_compat.h>
#include <rte_log.h>
extern int odm_logtype;
@@ -50,6 +54,9 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
+#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -142,4 +149,7 @@ struct __rte_cache_aligned odm_dev {
uint8_t num_qs;
};
+int odm_dev_init(struct odm_dev *odm);
+int odm_dev_fini(struct odm_dev *odm);
+
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index cc3342cf7b..bef335c10c 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -23,6 +23,7 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
char name[RTE_DEV_NAME_MAX_LEN];
struct odm_dev *odm = NULL;
struct rte_dma_dev *dmadev;
+ int rc;
if (!pci_dev->mem_resource[0].addr)
return -ENODEV;
@@ -37,8 +38,20 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
}
odm_info("DMA device %s probed", name);
+ odm = dmadev->data->dev_private;
+
+ odm->pci_dev = pci_dev;
+
+ rc = odm_dev_init(odm);
+ if (rc < 0)
+ goto dma_pmd_release;
return 0;
+
+dma_pmd_release:
+ rte_dma_pmd_release(name);
+
+ return rc;
}
static int
--
2.25.1
* [PATCH 5/8] dma/odm: add device ops
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA device control ops.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.c | 144 ++++++++++++++++++++++++++++++++++-
drivers/dma/odm/odm.h | 58 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 85 +++++++++++++++++++++
3 files changed, 285 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
index c0963da451..6094ace9fd 100644
--- a/drivers/dma/odm/odm.c
+++ b/drivers/dma/odm/odm.c
@@ -7,6 +7,7 @@
#include <bus_pci_driver.h>
#include <rte_io.h>
+#include <rte_malloc.h>
#include "odm.h"
#include "odm_priv.h"
@@ -14,8 +15,15 @@
static void
odm_vchan_resc_free(struct odm_dev *odm, int qno)
{
- RTE_SET_USED(odm);
- RTE_SET_USED(qno);
+ struct odm_queue *vq = &odm->vq[qno];
+
+ rte_memzone_free(vq->iring_mz);
+ rte_memzone_free(vq->cring_mz);
+ rte_free(vq->extra_ins_sz);
+
+ vq->iring_mz = NULL;
+ vq->cring_mz = NULL;
+ vq->extra_ins_sz = NULL;
}
static int
@@ -53,6 +61,138 @@ send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg
return 0;
}
+static int
+odm_queue_ring_config(struct odm_dev *odm, int vchan, int isize, int csize)
+{
+ union odm_vdma_ring_cfg_s ring_cfg = {0};
+ struct odm_queue *vq = &odm->vq[vchan];
+
+ if (vq->iring_mz == NULL || vq->cring_mz == NULL)
+ return -EINVAL;
+
+ ring_cfg.s.isize = (isize / 1024) - 1;
+ ring_cfg.s.csize = (csize / 1024) - 1;
+
+ odm_write64(ring_cfg.u, odm->rbase + ODM_VDMA_RING_CFG(vchan));
+ odm_write64(vq->iring_mz->iova, odm->rbase + ODM_VDMA_IRING_BADDR(vchan));
+ odm_write64(vq->cring_mz->iova, odm->rbase + ODM_VDMA_CRING_BADDR(vchan));
+
+ return 0;
+}
+
+int
+odm_enable(struct odm_dev *odm)
+{
+ struct odm_queue *vq;
+ int qno, rc = 0;
+
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ vq = &odm->vq[qno];
+
+ vq->desc_idx = vq->stats.completed_offset;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ vq->iring_head = 0;
+ vq->cring_head = 0;
+ vq->ins_ring_head = 0;
+ vq->iring_sz_available = vq->iring_max_words;
+
+ rc = odm_queue_ring_config(odm, qno, vq->iring_max_words * 8,
+ vq->cring_max_entry * 4);
+ if (rc < 0)
+ break;
+
+ odm_write64(0x1, odm->rbase + ODM_VDMA_EN(qno));
+ }
+
+ return rc;
+}
+
+int
+odm_disable(struct odm_dev *odm)
+{
+ int qno, wait_cnt = ODM_IRING_IDLE_WAIT_CNT;
+ uint64_t val;
+
+ /* Disable the queue and wait for the queue to become idle */
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ odm_write64(0x0, odm->rbase + ODM_VDMA_EN(qno));
+ do {
+ val = odm_read64(odm->rbase + ODM_VDMA_IRING_BADDR(qno));
+ } while ((!(val & 1ULL << 63)) && (--wait_cnt > 0));
+ }
+
+ return 0;
+}
+
+int
+odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc)
+{
+ struct odm_queue *vq = &odm->vq[vchan];
+ int isize, csize, max_nb_desc, rc = 0;
+ union odm_mbox_msg mbox_msg;
+ const struct rte_memzone *mz;
+ char name[32];
+
+ if (vq->iring_mz != NULL)
+ odm_vchan_resc_free(odm, vchan);
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+
+ /* The ODM PF driver expects vfid to start from index 0 */
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_QUEUE_OPEN;
+ mbox_msg.q.qidx = vchan;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (rc < 0)
+ return rc;
+
+ /* Determine instruction & completion ring sizes. */
+
+ /* Create iring that can support nb_desc. Round up to a multiple of 1024. */
+ isize = RTE_ALIGN_CEIL(nb_desc * ODM_IRING_ENTRY_SIZE_MAX * 8, 1024);
+ isize = RTE_MIN(isize, ODM_IRING_MAX_SIZE);
+ snprintf(name, sizeof(name), "vq%d_iring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, isize, 0, ODM_MEMZONE_FLAGS, 1024);
+ if (mz == NULL)
+ return -ENOMEM;
+ vq->iring_mz = mz;
+ vq->iring_max_words = isize / 8;
+
+ /* Create cring that can support the max instructions that can be in flight in hw. */
+ max_nb_desc = (isize / (ODM_IRING_ENTRY_SIZE_MIN * 8));
+ csize = RTE_ALIGN_CEIL(max_nb_desc * sizeof(union odm_cmpl_ent_s), 1024);
+ snprintf(name, sizeof(name), "vq%d_cring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, csize, 0, ODM_MEMZONE_FLAGS, 1024);
+ if (mz == NULL) {
+ rc = -ENOMEM;
+ goto iring_free;
+ }
+ vq->cring_mz = mz;
+ vq->cring_max_entry = csize / 4;
+
+ /* Allocate memory to track the size of each instruction. */
+ snprintf(name, sizeof(name), "vq%d_extra%d", odm->vfid, vchan);
+ vq->extra_ins_sz = rte_zmalloc(name, vq->cring_max_entry, 0);
+ if (vq->extra_ins_sz == NULL) {
+ rc = -ENOMEM;
+ goto cring_free;
+ }
+
+ vq->stats = (struct vq_stats){0};
+ return rc;
+
+cring_free:
+ rte_memzone_free(odm->vq[vchan].cring_mz);
+ vq->cring_mz = NULL;
+iring_free:
+ rte_memzone_free(odm->vq[vchan].iring_mz);
+ vq->iring_mz = NULL;
+
+ return rc;
+}
+
int
odm_dev_init(struct odm_dev *odm)
{
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 9fd3e30ad8..e1373e0c7f 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,7 +9,9 @@
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_io.h>
#include <rte_log.h>
+#include <rte_memzone.h>
extern int odm_logtype;
@@ -54,6 +56,14 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define ODM_IRING_MAX_SIZE (256 * 1024)
+#define ODM_IRING_ENTRY_SIZE_MIN 4
+#define ODM_IRING_ENTRY_SIZE_MAX 13
+#define ODM_IRING_MAX_WORDS (ODM_IRING_MAX_SIZE / 8)
+#define ODM_IRING_MAX_ENTRY (ODM_IRING_MAX_WORDS / ODM_IRING_ENTRY_SIZE_MIN)
+
+#define ODM_MAX_POINTER 4
+
#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
@@ -66,6 +76,10 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define ODM_MEMZONE_FLAGS \
+ (RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
+ RTE_MEMZONE_512MB | RTE_MEMZONE_4GB | RTE_MEMZONE_SIZE_HINT_ONLY)
+
/**
* Structure odm_instr_hdr_s for ODM
*
@@ -141,8 +155,48 @@ union odm_vdma_counts_s {
} s;
};
+struct vq_stats {
+ uint64_t submitted;
+ uint64_t completed;
+ uint64_t errors;
+ /*
+ * Since stats.completed is used to derive the completion index, account for any
+ * operations completed before the stats were reset.
+ */
+ uint64_t completed_offset;
+};
+
+struct odm_queue {
+ struct odm_dev *dev;
+ /* Instructions that are prepared on the iring but not yet pushed to hw. */
+ uint16_t pending_submit_cnt;
+ /* Length (in words) of instructions that are not yet pushed to hw. */
+ uint16_t pending_submit_len;
+ uint16_t desc_idx;
+ /* Instruction ring head. Used for enqueue. */
+ uint16_t iring_head;
+ /* Completion ring head. Used for dequeue. */
+ uint16_t cring_head;
+ /* Extra instruction size ring head. Used in enqueue-dequeue.*/
+ uint16_t ins_ring_head;
+ /* Extra instruction size ring tail. Used in enqueue-dequeue.*/
+ uint16_t ins_ring_tail;
+ /* Instruction size available.*/
+ uint16_t iring_sz_available;
+ /* Number of 8-byte words in iring.*/
+ uint16_t iring_max_words;
+ /* Number of 4-byte completion entries in cring.*/
+ uint16_t cring_max_entry;
+ /* Extra instruction size used per inflight instruction.*/
+ uint8_t *extra_ins_sz;
+ struct vq_stats stats;
+ const struct rte_memzone *iring_mz;
+ const struct rte_memzone *cring_mz;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
+ struct odm_queue vq[ODM_MAX_QUEUES_PER_DEV];
uint8_t *rbase;
uint16_t vfid;
uint8_t max_qs;
@@ -151,5 +205,9 @@ struct __rte_cache_aligned odm_dev {
int odm_dev_init(struct odm_dev *odm);
int odm_dev_fini(struct odm_dev *odm);
+int odm_configure(struct odm_dev *odm);
+int odm_enable(struct odm_dev *odm);
+int odm_disable(struct odm_dev *odm);
+int odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc);
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index bef335c10c..8c705978fe 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -17,6 +17,87 @@
#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
#define PCI_DRIVER_NAME dma_odm
+static int
+odm_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(size);
+
+ odm = dev->fp_obj->dev_private;
+
+ dev_info->max_vchans = odm->max_qs;
+ dev_info->nb_vchans = odm->num_qs;
+ dev_info->dev_capa =
+ (RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG);
+ dev_info->max_desc = ODM_IRING_MAX_ENTRY;
+ dev_info->min_desc = 1;
+ dev_info->max_sges = ODM_MAX_POINTER;
+
+ return 0;
+}
+
+static int
+odm_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(conf_sz);
+
+ odm = dev->fp_obj->dev_private;
+ odm->num_qs = conf->nb_vchans;
+
+ return 0;
+}
+
+static int
+odm_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
+ const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ RTE_SET_USED(conf_sz);
+ return odm_vchan_setup(odm, vchan, conf->nb_desc);
+}
+
+static int
+odm_dmadev_start(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_enable(odm);
+}
+
+static int
+odm_dmadev_stop(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_disable(odm);
+}
+
+static int
+odm_dmadev_close(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ odm_disable(odm);
+ odm_dev_fini(odm);
+
+ return 0;
+}
+
+static const struct rte_dma_dev_ops odm_dmadev_ops = {
+ .dev_close = odm_dmadev_close,
+ .dev_configure = odm_dmadev_configure,
+ .dev_info_get = odm_dmadev_info_get,
+ .dev_start = odm_dmadev_start,
+ .dev_stop = odm_dmadev_stop,
+ .stats_get = NULL,
+ .stats_reset = NULL,
+ .vchan_setup = odm_dmadev_vchan_setup,
+};
+
static int
odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
{
@@ -40,6 +121,10 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
odm_info("DMA device %s probed", name);
odm = dmadev->data->dev_private;
+ dmadev->device = &pci_dev->device;
+ dmadev->fp_obj->dev_private = odm;
+ dmadev->dev_ops = &odm_dmadev_ops;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.25.1
* [PATCH 6/8] dma/odm: add stats
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA dev stats.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 63 ++++++++++++++++++++++++++++++++++--
1 file changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 8c705978fe..13b2588246 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -87,14 +87,73 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
+ uint32_t size)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+	if (size < sizeof(*rte_stats))
+ return -EINVAL;
+ if (rte_stats == NULL)
+ return -EINVAL;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[vchan].stats;
+
+ *rte_stats = *stats;
+ } else {
+ int i;
+
+ for (i = 0; i < odm->num_qs; i++) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[i].stats;
+
+ rte_stats->submitted += stats->submitted;
+ rte_stats->completed += stats->completed;
+ rte_stats->errors += stats->errors;
+ }
+ }
+
+ return 0;
+}
+
+static void
+odm_vq_stats_reset(struct vq_stats *vq_stats)
+{
+ vq_stats->completed_offset += vq_stats->completed;
+ vq_stats->completed = 0;
+ vq_stats->errors = 0;
+ vq_stats->submitted = 0;
+}
+
+static int
+odm_stats_reset(struct rte_dma_dev *dev, uint16_t vchan)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+ struct vq_stats *vq_stats;
+ int i;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ vq_stats = &odm->vq[vchan].stats;
+ odm_vq_stats_reset(vq_stats);
+ } else {
+ for (i = 0; i < odm->num_qs; i++) {
+ vq_stats = &odm->vq[i].stats;
+ odm_vq_stats_reset(vq_stats);
+ }
+ }
+
+ return 0;
+}
+
static const struct rte_dma_dev_ops odm_dmadev_ops = {
.dev_close = odm_dmadev_close,
.dev_configure = odm_dmadev_configure,
.dev_info_get = odm_dmadev_info_get,
.dev_start = odm_dmadev_start,
.dev_stop = odm_dmadev_stop,
- .stats_get = NULL,
- .stats_reset = NULL,
+ .stats_get = odm_stats_get,
+ .stats_reset = odm_stats_reset,
.vchan_setup = odm_dmadev_vchan_setup,
};
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
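[Editor's sketch] The `completed_offset` bookkeeping in `odm_vq_stats_reset` above can be modeled with a small Python class (illustrative only; the names mirror the driver's `vq_stats` fields, and the 16-bit index computation matches the `last_idx` arithmetic used later in the series). The point of the offset is that resetting stats must not disturb the running completion index reported to the application:

```python
class VqStats:
    """Hypothetical model of the per-vchan stats kept by the ODM PMD."""

    def __init__(self):
        self.submitted = 0
        self.completed = 0
        self.errors = 0
        self.completed_offset = 0

    def reset(self):
        # Fold the completed count into the offset before zeroing, so the
        # externally visible descriptor index keeps advancing monotonically.
        self.completed_offset += self.completed
        self.completed = 0
        self.submitted = 0
        self.errors = 0

    def last_idx(self):
        # Matches the driver's 16-bit ring-index computation.
        return (self.completed_offset + self.completed - 1) & 0xFFFF
```

With this model, `last_idx()` returns the same value immediately before and after a `reset()`, which is the invariant the driver relies on.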
* [PATCH 7/8] dma/odm: add copy and copy sg ops
2024-04-15 15:31 [PATCH 0/8] Add ODM DMA device Anoob Joseph
` (5 preceding siblings ...)
2024-04-15 15:31 ` [PATCH 6/8] dma/odm: add stats Anoob Joseph
@ 2024-04-15 15:31 ` Anoob Joseph
2024-04-15 15:31 ` [PATCH 8/8] dma/odm: add remaining ops Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
8 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add ODM copy and copy SG ops.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 233 +++++++++++++++++++++++++++++++++++
1 file changed, 233 insertions(+)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 13b2588246..327692426f 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -9,6 +9,7 @@
#include <rte_common.h>
#include <rte_dmadev.h>
#include <rte_dmadev_pmd.h>
+#include <rte_memcpy.h>
#include <rte_pci.h>
#include "odm.h"
@@ -87,6 +88,235 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length,
+ uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ const union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ .s.nfst = 1,
+ .s.nlst = 1,
+ };
+
+ vq = &odm->vq[vchan];
+
+ h = length;
+ h |= ((uint64_t)length << 32);
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = src;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head++] = hdr.u;
+ iring_head_ptr[iring_head++] = h;
+ iring_head_ptr[iring_head++] = src;
+ iring_head_ptr[iring_head++] = dst;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
+static inline void
+odm_dmadev_fill_sg(uint64_t *cmd, const struct rte_dma_sge *src, const struct rte_dma_sge *dst,
+ uint16_t nb_src, uint16_t nb_dst, union odm_instr_hdr_s *hdr)
+{
+ int i = 0, j = 0;
+ uint64_t h = 0;
+
+ cmd[j++] = hdr->u;
+ /* When nb_src is even */
+ if (!(nb_src & 0x1)) {
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Fill the iring with dst pointers */
+ for (i = 1; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is odd */
+ if (nb_dst & 0x1) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ } else {
+ /* When nb_src is odd */
+
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Handle the last src pointer */
+ h = ((uint64_t)dst[0].length << 32) | src[nb_src - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[nb_src - 1].addr;
+ cmd[j++] = dst[0].addr;
+
+ /* Fill the iring with dst pointers */
+ for (i = 2; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is even */
+ if (!(nb_dst & 0x1)) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ }
+}
+
+static int
+odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src,
+ const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_head, ins_ring_head;
+ uint64_t cmd[ODM_IRING_ENTRY_SIZE_MAX];
+ struct odm_dev *odm = dev_private;
+ uint32_t s_sz = 0, d_sz = 0;
+ uint16_t iring_sz_available;
+ uint64_t *iring_head_ptr;
+ int i, nb, num_words;
+ struct odm_queue *vq;
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ };
+
+ vq = &odm->vq[vchan];
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+ iring_sz_available = vq->iring_sz_available;
+ ins_ring_head = vq->ins_ring_head;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+
+ if (unlikely(nb_src > 4 || nb_dst > 4))
+ return -EINVAL;
+
+ for (i = 0; i < nb_src; i++)
+ s_sz += src[i].length;
+
+ for (i = 0; i < nb_dst; i++)
+ d_sz += dst[i].length;
+
+ if (s_sz != d_sz)
+ return -EINVAL;
+
+ nb = nb_src + nb_dst;
+ hdr.s.nfst = nb_src;
+ hdr.s.nlst = nb_dst;
+ num_words = 1 + 3 * (nb / 2 + (nb & 0x1));
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+ int words_avail = max_iring_words - iring_head;
+
+ odm_dmadev_fill_sg(cmd, src, dst, nb_src, nb_dst, &hdr);
+ rte_memcpy((void *)&iring_head_ptr[iring_head], (void *)cmd, words_avail * 8);
+ iring_head = num_words - words_avail;
+ rte_memcpy((void *)iring_head_ptr, (void *)&cmd[words_avail], iring_head * 8);
+ } else {
+ odm_dmadev_fill_sg(&iring_head_ptr[iring_head], src, dst, nb_src, nb_dst, &hdr);
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* Save extra space used for the instruction. */
+ vq->extra_ins_sz[ins_ring_head] = num_words - 4;
+
+ vq->ins_ring_head = (ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -184,6 +414,9 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->dev_private = odm;
dmadev->dev_ops = &odm_dmadev_ops;
+ dmadev->fp_obj->copy = odm_dmadev_copy;
+ dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
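[Editor's sketch] The 4-word descriptor enqueue in `odm_dmadev_copy` and the SG word-count formula from `odm_dmadev_copy_sg` above can be modeled in Python. This is a hypothetical simplification — the real driver writes 64-bit words into a memzone-backed ring and rings a doorbell register — but the wrap arithmetic and the `num_words` computation follow the patch:

```python
ODM_IRING_ENTRY_SIZE_MIN = 4  # hdr word, length word, src addr, dst addr


def enqueue_copy(ring, head, max_words, avail, hdr, src, dst, length):
    """Model of the single-copy descriptor write; returns (new_head, new_avail)."""
    if avail < ODM_IRING_ENTRY_SIZE_MIN:
        raise MemoryError("ENOSPC: instruction ring full")
    # The length word packs the same 32-bit length twice (first/last sizes).
    h = (length << 32) | length
    for word in (hdr, h, src, dst):
        ring[head] = word
        head = (head + 1) % max_words  # wrap at the ring boundary
    return head, avail - ODM_IRING_ENTRY_SIZE_MIN


def sg_num_words(nb_src, nb_dst):
    """Words consumed by an SG descriptor: one header plus a 3-word group
    (length pair, addr, addr) per pair of pointers; an odd pointer count
    still consumes a full group."""
    nb = nb_src + nb_dst
    return 1 + 3 * (nb // 2 + (nb & 1))
```

For a 1-src/1-dst transfer `sg_num_words` gives 4, matching `ODM_IRING_ENTRY_SIZE_MIN`, and the copy path's modulo-per-word wrap corresponds to the `(iring_head + num_words) >= max_iring_words` branch in the patch.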
* [PATCH 8/8] dma/odm: add remaining ops
2024-04-15 15:31 [PATCH 0/8] Add ODM DMA device Anoob Joseph
` (6 preceding siblings ...)
2024-04-15 15:31 ` [PATCH 7/8] dma/odm: add copy and copy sg ops Anoob Joseph
@ 2024-04-15 15:31 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
8 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-15 15:31 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add all remaining ops, such as fill, burst_capacity, etc. Also update
the documentation.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++++++++++
drivers/dma/odm/odm.h | 4 +
drivers/dma/odm/odm_dmadev.c | 246 +++++++++++++++++++++++++++++++++++
5 files changed, 344 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index b8d2f7b3d8..38293008aa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1273,6 +1273,7 @@ M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/odm/
+F: doc/guides/dmadevs/odm.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst
index 5bd25b32b9..ce9f6eb260 100644
--- a/doc/guides/dmadevs/index.rst
+++ b/doc/guides/dmadevs/index.rst
@@ -17,3 +17,4 @@ an application through DMA API.
hisilicon
idxd
ioat
+ odm
diff --git a/doc/guides/dmadevs/odm.rst b/doc/guides/dmadevs/odm.rst
new file mode 100644
index 0000000000..a2eaab59a0
--- /dev/null
+++ b/doc/guides/dmadevs/odm.rst
@@ -0,0 +1,92 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 Marvell.
+
+Odyssey ODM DMA Device Driver
+=============================
+
+The ``odm`` DMA device driver provides a poll-mode driver (PMD) for the DMA
+hardware accelerator block on the Marvell Odyssey SoC. The block supports only
+mem-to-mem DMA transfers.
+
+The ODM DMA device supports up to 32 queues and 16 VFs.
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+Device Setup
+-------------
+
+The ODM DMA device is initialized by the kernel PF driver, which is part of
+the Marvell software packages for Odyssey.
+
+The kernel module can be inserted as in the below example::
+
+ $ sudo insmod odyssey_odm.ko
+
+The ODM DMA device can support up to 16 VFs::
+
+   $ echo 16 | sudo tee /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
+
+The above command creates 16 VFs with 2 queues each.
+
+The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the
+presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma``
+will show all the Odyssey ODM DMA devices.
+
+Devices using VFIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HW devices to be used will need to be bound to a user-space IO driver.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
+For example::
+
+ $ dpdk-devbind.py -b vfio-pci 0000:08:00.1
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The devices are accessed from an application using the dmadev API.
+
+Once configured with ``rte_dma_configure()`` and ``rte_dma_vchan_setup()``,
+the device can be made ready for use by calling the ``rte_dma_start()`` API.
+
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~
+
+Refer to the :ref:`Enqueue / Dequeue APIs <dmadev_enqueue_dequeue>` section
+of the dmadev library documentation for details on operation enqueue and
+submission API usage.
+
+Performance Tuning Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To achieve higher performance, the DMA device needs to be tuned using PF
+kernel driver module parameters.
+
+The following options are exposed by the kernel PF driver via the devlink
+interface for performance tuning.
+
+``eng_sel``
+
+  The ODM DMA device has two engines internally. The engine-to-queue mapping
+  is decided by a hardware register, which can be configured as below::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name eng_sel value 3435973836 cmode runtime
+
+ Each bit in the register corresponds to one queue. Each queue would be
+ associated with one engine. If the value of the bit corresponding to the queue
+ is 0, then engine 0 would be picked. If it is 1, then engine 1 would be
+ picked.
+
+ In the above command, the register value is set as
+ ``1100 1100 1100 1100 1100 1100 1100 1100`` which allows for alternate engines
+ to be used with alternate VFs (assuming the system has 16 VFs with 2 queues
+ each).
+
+``max_load_request``
+
+  Specifies the maximum outstanding load requests on the internal bus. Values
+  can range from 1 to 512. Set to 512 for the maximum number of requests in
+  flight::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name max_load_request value 512 cmode runtime
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index e1373e0c7f..1d60d2d11a 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -75,6 +75,10 @@ extern int odm_logtype;
rte_log(RTE_LOG_INFO, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_debug(...) \
+ rte_log(RTE_LOG_DEBUG, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
#define ODM_MEMZONE_FLAGS \
(RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 327692426f..04286e3bf7 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -317,6 +317,247 @@ odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *
return vq->desc_idx++;
}
+static int
+odm_dmadev_fill(void *dev_private, uint16_t vchan, uint64_t pattern, rte_iova_t dst,
+ uint32_t length, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ vq = &odm->vq[vchan];
+
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.nfst = 0,
+ .s.nlst = 1,
+ };
+
+ h = (uint64_t)length;
+
+ switch (pattern) {
+ case 0:
+ hdr.s.xtype = ODM_XTYPE_FILL0;
+ break;
+ case 0xffffffffffffffff:
+ hdr.s.xtype = ODM_XTYPE_FILL1;
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = 0;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head_ptr[iring_head + 1] = h;
+ iring_head_ptr[iring_head + 2] = dst;
+ iring_head_ptr[iring_head + 3] = 0;
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
+static uint16_t
+odm_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx,
+ bool *has_error)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint64_t nb_err = 0;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (unlikely(vq->stats.submitted == vq->stats.completed)) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit(cmpl_ptr, rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ if (cmpl.s.cmp_code)
+ nb_err++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit(cmpl_ptr, cmpl_zero.u, rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->stats.errors += nb_err;
+
+ if (unlikely(has_error != NULL && nb_err))
+ *has_error = true;
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
+
+static uint16_t
+odm_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls,
+ uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (vq->stats.submitted == vq->stats.completed) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+#ifdef ODM_DEBUG
+ odm_debug("cring_head: 0x%" PRIx16, cring_head);
+ odm_debug("Submitted: 0x%" PRIx64, vq->stats.submitted);
+ odm_debug("Completed: 0x%" PRIx64, vq->stats.completed);
+ odm_debug("Hardware count: 0x%" PRIx64, odm_read64(odm->rbase + ODM_VDMA_CNT(vchan)));
+#endif
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit(cmpl_ptr, rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ status[cnt] = cmpl.s.cmp_code;
+
+ if (cmpl.s.cmp_code)
+ vq->stats.errors++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit(cmpl_ptr, cmpl_zero.u, rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
+
+static int
+odm_dmadev_submit(void *dev_private, uint16_t vchan)
+{
+ struct odm_dev *odm = dev_private;
+ uint16_t pending_submit_len;
+ struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ pending_submit_len = vq->pending_submit_len;
+
+ if (pending_submit_len == 0)
+ return 0;
+
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->pending_submit_len = 0;
+ vq->stats.submitted += vq->pending_submit_cnt;
+ vq->pending_submit_cnt = 0;
+
+ return 0;
+}
+
+static uint16_t
+odm_dmadev_burst_capacity(const void *dev_private, uint16_t vchan)
+{
+ const struct odm_dev *odm = dev_private;
+ const struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ return (vq->iring_sz_available / ODM_IRING_ENTRY_SIZE_MIN);
+}
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -416,6 +657,11 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->copy = odm_dmadev_copy;
dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+ dmadev->fp_obj->fill = odm_dmadev_fill;
+ dmadev->fp_obj->submit = odm_dmadev_submit;
+ dmadev->fp_obj->completed = odm_dmadev_completed;
+ dmadev->fp_obj->completed_status = odm_dmadev_completed_status;
+ dmadev->fp_obj->burst_capacity = odm_dmadev_burst_capacity;
odm->pci_dev = pci_dev;
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
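[Editor's sketch] The ``eng_sel`` value quoted in the documentation patch above can be sanity-checked: each register bit selects engine 0 or 1 for one queue, and the documented decimal value 3435973836 is exactly the 32-bit pattern ``1100 1100 ... 1100`` (0xCCCCCCCC). A quick, illustrative check:

```python
# Reproduce the eng_sel devlink value from the ODM guide.
pattern = "1100" * 8  # alternating engine pairs across the 32 queue bits
value = int(pattern, 2)
print(hex(value), value)  # 0xcccccccc 3435973836
```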
* [PATCH v2 0/7] Add ODM DMA device
2024-04-15 15:31 [PATCH 0/8] Add ODM DMA device Anoob Joseph
` (7 preceding siblings ...)
2024-04-15 15:31 ` [PATCH 8/8] dma/odm: add remaining ops Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 1/7] dma/odm: add framework for " Anoob Joseph
` (7 more replies)
8 siblings, 8 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add Odyssey ODM DMA device. This PMD abstracts the ODM hardware unit on
the Odyssey SoC, which can perform mem-to-mem copies.
The hardware unit can support up to 32 queues (vchan) and 16 VFs. It
supports the 'fill' operation with specific values. It also supports
SG mode of operation with up to 4 source pointers and 4 destination
pointers.
The PMD is tested with both unit tests and performance applications.
Changes in v2
- Addressed build failure with CI
- Moved update to usertools as separate patch
Anoob Joseph (2):
dma/odm: add framework for ODM DMA device
dma/odm: add hardware defines
Gowrishankar Muthukrishnan (3):
dma/odm: add dev init and fini
dma/odm: add device ops
dma/odm: add stats
Vidya Sagar Velumuri (2):
dma/odm: add copy and copy sg ops
dma/odm: add remaining ops
MAINTAINERS | 7 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +
drivers/dma/odm/odm.c | 237 ++++++++++++
drivers/dma/odm/odm.h | 217 +++++++++++
drivers/dma/odm/odm_dmadev.c | 713 +++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 +++
9 files changed, 1331 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.c
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
create mode 100644 drivers/dma/odm/odm_priv.h
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v2 1/7] dma/odm: add framework for ODM DMA device
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 2/7] dma/odm: add hardware defines Anoob Joseph
` (6 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add framework for Odyssey ODM DMA device.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 6 +++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +++++++
drivers/dma/odm/odm.h | 29 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 74 ++++++++++++++++++++++++++++++++++++
5 files changed, 124 insertions(+)
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7abb3aee49..b8d2f7b3d8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1268,6 +1268,12 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
+Marvell Odyssey ODM DMA
+M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/dma/odm/
+
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index 582654ea1b..358132759a 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -8,6 +8,7 @@ drivers = [
'hisilicon',
'idxd',
'ioat',
+ 'odm',
'skeleton',
]
std_deps = ['dmadev']
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
new file mode 100644
index 0000000000..227b10c890
--- /dev/null
+++ b/drivers/dma/odm/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2024 Marvell.
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
+
+sources = files('odm_dmadev.c')
+
+pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
new file mode 100644
index 0000000000..aeeb6f9e9a
--- /dev/null
+++ b/drivers/dma/odm/odm.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_H_
+#define _ODM_H_
+
+#include <rte_log.h>
+
+extern int odm_logtype;
+
+#define odm_err(...) \
+ rte_log(RTE_LOG_ERR, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_info(...) \
+ rte_log(RTE_LOG_INFO, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+
+struct __rte_cache_aligned odm_dev {
+ struct rte_pci_device *pci_dev;
+ uint8_t *rbase;
+ uint16_t vfid;
+ uint8_t max_qs;
+ uint8_t num_qs;
+};
+
+#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
new file mode 100644
index 0000000000..cc3342cf7b
--- /dev/null
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <string.h>
+
+#include <bus_pci_driver.h>
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_dmadev.h>
+#include <rte_dmadev_pmd.h>
+#include <rte_pci.h>
+
+#include "odm.h"
+
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
+#define PCI_DRIVER_NAME dma_odm
+
+static int
+odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+ struct odm_dev *odm = NULL;
+ struct rte_dma_dev *dmadev;
+
+ if (!pci_dev->mem_resource[0].addr)
+ return -ENODEV;
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*odm));
+ if (dmadev == NULL) {
+ odm_err("DMA device allocation failed for %s", name);
+ return -ENOMEM;
+ }
+
+ odm_info("DMA device %s probed", name);
+
+ return 0;
+}
+
+static int
+odm_dmadev_remove(struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ return rte_dma_pmd_release(name);
+}
+
+static const struct rte_pci_id odm_dma_pci_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_ODYSSEY_ODM_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver odm_dmadev = {
+ .id_table = odm_dma_pci_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = odm_dmadev_probe,
+ .remove = odm_dmadev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(PCI_DRIVER_NAME, odm_dmadev);
+RTE_PMD_REGISTER_PCI_TABLE(PCI_DRIVER_NAME, odm_dma_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(PCI_DRIVER_NAME, "vfio-pci");
+RTE_LOG_REGISTER_DEFAULT(odm_logtype, NOTICE);
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v2 2/7] dma/odm: add hardware defines
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 1/7] dma/odm: add framework for " Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 3/7] dma/odm: add dev init and fini Anoob Joseph
` (5 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add ODM registers and structures. Add mailbox structs as well.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.h | 116 +++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 ++++++++++++++++
2 files changed, 165 insertions(+)
create mode 100644 drivers/dma/odm/odm_priv.h
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index aeeb6f9e9a..7564ffbed4 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,6 +9,47 @@
extern int odm_logtype;
+/* ODM VF register offsets from VF_BAR0 */
+#define ODM_VDMA_EN(x) (0x00 | (x << 3))
+#define ODM_VDMA_REQQ_CTL(x) (0x80 | (x << 3))
+#define ODM_VDMA_DBELL(x) (0x100 | (x << 3))
+#define ODM_VDMA_RING_CFG(x) (0x180 | (x << 3))
+#define ODM_VDMA_IRING_BADDR(x) (0x200 | (x << 3))
+#define ODM_VDMA_CRING_BADDR(x) (0x280 | (x << 3))
+#define ODM_VDMA_COUNTS(x) (0x300 | (x << 3))
+#define ODM_VDMA_IRING_NADDR(x) (0x380 | (x << 3))
+#define ODM_VDMA_CRING_NADDR(x) (0x400 | (x << 3))
+#define ODM_VDMA_IRING_DBG(x) (0x480 | (x << 3))
+#define ODM_VDMA_CNT(x) (0x580 | (x << 3))
+#define ODM_VF_INT (0x1000)
+#define ODM_VF_INT_W1S (0x1008)
+#define ODM_VF_INT_ENA_W1C (0x1010)
+#define ODM_VF_INT_ENA_W1S (0x1018)
+#define ODM_MBOX_VF_PF_DATA(i) (0x2000 | (i << 3))
+
+#define ODM_MBOX_RETRY_CNT (0xfffffff)
+#define ODM_MBOX_ERR_CODE_MAX (0xf)
+#define ODM_IRING_IDLE_WAIT_CNT (0xfffffff)
+
+/**
+ * Enumeration odm_hdr_xtype_e
+ *
+ * ODM Transfer Type Enumeration
+ * Enumerates the pointer type in ODM_DMA_INSTR_HDR_S[XTYPE]
+ */
+#define ODM_XTYPE_INTERNAL 2
+#define ODM_XTYPE_FILL0 4
+#define ODM_XTYPE_FILL1 5
+
+/**
+ * ODM Header completion type enumeration
+ * Enumerates the completion type in ODM_DMA_INSTR_HDR_S[CT]
+ */
+#define ODM_HDR_CT_CW_CA 0x0
+#define ODM_HDR_CT_CW_NC 0x1
+
+#define ODM_MAX_QUEUES_PER_DEV 16
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -18,6 +59,81 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+/**
+ * Structure odm_instr_hdr_s for ODM
+ *
+ * ODM DMA Instruction Header Format
+ */
+union odm_instr_hdr_s {
+ uint64_t u;
+ struct odm_instr_hdr {
+ uint64_t nfst : 3;
+ uint64_t reserved_3 : 1;
+ uint64_t nlst : 3;
+ uint64_t reserved_7_9 : 3;
+ uint64_t ct : 2;
+ uint64_t stse : 1;
+ uint64_t reserved_13_28 : 16;
+ uint64_t sts : 1;
+ uint64_t reserved_30_49 : 20;
+ uint64_t xtype : 3;
+ uint64_t reserved_53_63 : 11;
+ } s;
+};
+
+/**
+ * ODM Completion Entry Structure
+ *
+ */
+union odm_cmpl_ent_s {
+ uint32_t u;
+ struct odm_cmpl_ent {
+ uint32_t cmp_code : 8;
+ uint32_t rsvd : 23;
+ uint32_t valid : 1;
+ } s;
+};
+
+/**
+ * ODM DMA Ring Configuration Register
+ */
+union odm_vdma_ring_cfg_s {
+ uint64_t u;
+ struct {
+ uint64_t isize : 8;
+ uint64_t rsvd_8_15 : 8;
+ uint64_t csize : 8;
+ uint64_t rsvd_24_63 : 40;
+ } s;
+};
+
+/**
+ * ODM DMA Instruction Ring DBG
+ */
+union odm_vdma_iring_dbg_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell_cnt : 32;
+ uint64_t offset : 16;
+ uint64_t rsvd_48_62 : 15;
+ uint64_t iwbusy : 1;
+ } s;
+};
+
+/**
+ * ODM DMA Counts
+ */
+union odm_vdma_counts_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell : 32;
+ uint64_t buf_used_cnt : 9;
+ uint64_t rsvd_41_43 : 3;
+ uint64_t rsvd_buf_used_cnt : 3;
+ uint64_t rsvd_47_63 : 17;
+ } s;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
uint8_t *rbase;
diff --git a/drivers/dma/odm/odm_priv.h b/drivers/dma/odm/odm_priv.h
new file mode 100644
index 0000000000..1878f4d9a6
--- /dev/null
+++ b/drivers/dma/odm/odm_priv.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_PRIV_H_
+#define _ODM_PRIV_H_
+
+#define ODM_MAX_VFS 16
+#define ODM_MAX_QUEUES 32
+
+#define ODM_CMD_QUEUE_SIZE 4096
+
+#define ODM_DEV_INIT 0x1
+#define ODM_DEV_CLOSE 0x2
+#define ODM_QUEUE_OPEN 0x3
+#define ODM_QUEUE_CLOSE 0x4
+#define ODM_REG_DUMP 0x5
+
+struct odm_mbox_dev_msg {
+ /* Response code */
+ uint64_t rsp : 8;
+ /* Number of VFs */
+ uint64_t nvfs : 2;
+ /* Error code */
+ uint64_t err : 6;
+ /* Reserved */
+ uint64_t rsvd_16_63 : 48;
+};
+
+struct odm_mbox_queue_msg {
+ /* Command code */
+ uint64_t cmd : 8;
+ /* VF ID to configure */
+ uint64_t vfid : 8;
+ /* Queue index in the VF */
+ uint64_t qidx : 8;
+ /* Reserved */
+ uint64_t rsvd_24_63 : 40;
+};
+
+union odm_mbox_msg {
+ uint64_t u[2];
+ struct {
+ struct odm_mbox_dev_msg d;
+ struct odm_mbox_queue_msg q;
+ };
+};
+
+#endif /* _ODM_PRIV_H_ */
--
2.25.1
* [PATCH v2 3/7] dma/odm: add dev init and fini
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 1/7] dma/odm: add framework for " Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 2/7] dma/odm: add hardware defines Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 4/7] dma/odm: add device ops Anoob Joseph
` (4 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add ODM device init and fini.
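Two small computations done in `odm_dev_init()` can be sketched in isolation (helper names here are illustrative, not part of the driver): deriving the zero-based VF id from the PCI device/function numbers, and deriving the per-VF queue count from the `nvfs` value the PF returns in the `ODM_DEV_INIT` mailbox response:

```c
#include <assert.h>
#include <stdint.h>

/* 5 bits of PCI device id and 3 bits of function form the raw VF id;
 * the PF numbers VFs from 1, so subtract 1 for a zero-based index. */
static uint16_t derive_vfid(uint16_t devid, uint16_t function)
{
	return (uint16_t)((((devid & 0x1F) << 3) | (function & 0x7)) - 1);
}

/* Per-VF queue count: 16 queues with nvfs == 0, halving as the encoded
 * VF count grows (mirrors odm->max_qs = 1 << (4 - nvfs)). */
static uint8_t max_queues_for(uint8_t nvfs_encoded)
{
	return (uint8_t)(1 << (4 - nvfs_encoded));
}
```

The first VF (device 0, function 1) thus gets vfid 0, and the queue budget shrinks as more VFs share the 32 hardware queues.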
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/meson.build | 2 +-
drivers/dma/odm/odm.c | 97 ++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm.h | 10 ++++
drivers/dma/odm/odm_dmadev.c | 13 +++++
4 files changed, 121 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma/odm/odm.c
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
index 227b10c890..d597762d37 100644
--- a/drivers/dma/odm/meson.build
+++ b/drivers/dma/odm/meson.build
@@ -9,6 +9,6 @@ endif
deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
-sources = files('odm_dmadev.c')
+sources = files('odm_dmadev.c', 'odm.c')
pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
new file mode 100644
index 0000000000..c0963da451
--- /dev/null
+++ b/drivers/dma/odm/odm.c
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <stdint.h>
+
+#include <bus_pci_driver.h>
+
+#include <rte_io.h>
+
+#include "odm.h"
+#include "odm_priv.h"
+
+static void
+odm_vchan_resc_free(struct odm_dev *odm, int qno)
+{
+ RTE_SET_USED(odm);
+ RTE_SET_USED(qno);
+}
+
+static int
+send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg *rsp)
+{
+ int retry_cnt = ODM_MBOX_RETRY_CNT;
+ union odm_mbox_msg pf_msg;
+
+ msg->d.err = ODM_MBOX_ERR_CODE_MAX;
+ odm_write64(msg->u[0], odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ odm_write64(msg->u[1], odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ pf_msg.u[0] = 0;
+ pf_msg.u[1] = 0;
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+
+ while (pf_msg.d.rsp == 0 && retry_cnt > 0) {
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ --retry_cnt;
+ }
+
+ if (retry_cnt <= 0)
+ return -EBADE;
+
+ pf_msg.u[1] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ if (rsp) {
+ rsp->u[0] = pf_msg.u[0];
+ rsp->u[1] = pf_msg.u[1];
+ }
+
+ if (pf_msg.d.rsp == msg->d.err && pf_msg.d.err != 0)
+ return -EBADE;
+
+ return 0;
+}
+
+int
+odm_dev_init(struct odm_dev *odm)
+{
+ struct rte_pci_device *pci_dev = odm->pci_dev;
+ union odm_mbox_msg mbox_msg;
+ uint16_t vfid;
+ int rc;
+
+ odm->rbase = pci_dev->mem_resource[0].addr;
+ vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7);
+ vfid -= 1;
+ odm->vfid = vfid;
+ odm->num_qs = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_INIT;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (!rc)
+ odm->max_qs = 1 << (4 - mbox_msg.d.nvfs);
+
+ return rc;
+}
+
+int
+odm_dev_fini(struct odm_dev *odm)
+{
+ union odm_mbox_msg mbox_msg;
+ int qno, rc = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_CLOSE;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+
+ for (qno = 0; qno < odm->num_qs; qno++)
+ odm_vchan_resc_free(odm, qno);
+
+ return rc;
+}
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 7564ffbed4..9fd3e30ad8 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -5,6 +5,10 @@
#ifndef _ODM_H_
#define _ODM_H_
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_compat.h>
#include <rte_log.h>
extern int odm_logtype;
@@ -50,6 +54,9 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
+#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -142,4 +149,7 @@ struct __rte_cache_aligned odm_dev {
uint8_t num_qs;
};
+int odm_dev_init(struct odm_dev *odm);
+int odm_dev_fini(struct odm_dev *odm);
+
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index cc3342cf7b..bef335c10c 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -23,6 +23,7 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
char name[RTE_DEV_NAME_MAX_LEN];
struct odm_dev *odm = NULL;
struct rte_dma_dev *dmadev;
+ int rc;
if (!pci_dev->mem_resource[0].addr)
return -ENODEV;
@@ -37,8 +38,20 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
}
odm_info("DMA device %s probed", name);
+ odm = dmadev->data->dev_private;
+
+ odm->pci_dev = pci_dev;
+
+ rc = odm_dev_init(odm);
+ if (rc < 0)
+ goto dma_pmd_release;
return 0;
+
+dma_pmd_release:
+ rte_dma_pmd_release(name);
+
+ return rc;
}
static int
--
2.25.1
* [PATCH v2 4/7] dma/odm: add device ops
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
` (2 preceding siblings ...)
2024-04-17 7:27 ` [PATCH v2 3/7] dma/odm: add dev init and fini Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 5/7] dma/odm: add stats Anoob Joseph
` (3 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA device control ops.
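The vchan setup op in this patch sizes two rings from `nb_desc`. The arithmetic can be sketched standalone (macro values copied from odm.h; `align_up` stands in for `RTE_ALIGN_CEIL`):

```c
#include <assert.h>
#include <stdint.h>

#define IRING_MAX_SIZE		(256 * 1024)
#define IRING_ENTRY_SZ_MAX	13	/* 8-byte words, largest instruction */
#define IRING_ENTRY_SZ_MIN	4	/* 8-byte words, smallest instruction */

/* Same idea as RTE_ALIGN_CEIL: round v up to a multiple of a. */
static uint32_t align_up(uint32_t v, uint32_t a)
{
	return (v + a - 1) / a * a;
}

/* Instruction ring size in bytes for nb_desc descriptors: room for
 * worst-case (13-word) instructions, rounded up to 1 KB, capped at 256 KB. */
static uint32_t iring_size(uint32_t nb_desc)
{
	uint32_t isize = align_up(nb_desc * IRING_ENTRY_SZ_MAX * 8, 1024);

	return isize < IRING_MAX_SIZE ? isize : IRING_MAX_SIZE;
}

/* Completion ring sized for the most instructions the iring can hold,
 * i.e. all-minimum-size (4-word) instructions, one 4-byte entry each. */
static uint32_t cring_size(uint32_t isize)
{
	uint32_t max_nb_desc = isize / (IRING_ENTRY_SZ_MIN * 8);

	return align_up(max_nb_desc * 4, 1024);
}
```

Both results also feed `ODM_VDMA_RING_CFG`, which encodes each size as `(size / 1024) - 1`.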
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.c | 144 ++++++++++++++++++++++++++++++++++-
drivers/dma/odm/odm.h | 58 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 85 +++++++++++++++++++++
3 files changed, 285 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
index c0963da451..6094ace9fd 100644
--- a/drivers/dma/odm/odm.c
+++ b/drivers/dma/odm/odm.c
@@ -7,6 +7,7 @@
#include <bus_pci_driver.h>
#include <rte_io.h>
+#include <rte_malloc.h>
#include "odm.h"
#include "odm_priv.h"
@@ -14,8 +15,15 @@
static void
odm_vchan_resc_free(struct odm_dev *odm, int qno)
{
- RTE_SET_USED(odm);
- RTE_SET_USED(qno);
+ struct odm_queue *vq = &odm->vq[qno];
+
+ rte_memzone_free(vq->iring_mz);
+ rte_memzone_free(vq->cring_mz);
+ rte_free(vq->extra_ins_sz);
+
+ vq->iring_mz = NULL;
+ vq->cring_mz = NULL;
+ vq->extra_ins_sz = NULL;
}
static int
@@ -53,6 +61,138 @@ send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg
return 0;
}
+static int
+odm_queue_ring_config(struct odm_dev *odm, int vchan, int isize, int csize)
+{
+ union odm_vdma_ring_cfg_s ring_cfg = {0};
+ struct odm_queue *vq = &odm->vq[vchan];
+
+ if (vq->iring_mz == NULL || vq->cring_mz == NULL)
+ return -EINVAL;
+
+ ring_cfg.s.isize = (isize / 1024) - 1;
+ ring_cfg.s.csize = (csize / 1024) - 1;
+
+ odm_write64(ring_cfg.u, odm->rbase + ODM_VDMA_RING_CFG(vchan));
+ odm_write64(vq->iring_mz->iova, odm->rbase + ODM_VDMA_IRING_BADDR(vchan));
+ odm_write64(vq->cring_mz->iova, odm->rbase + ODM_VDMA_CRING_BADDR(vchan));
+
+ return 0;
+}
+
+int
+odm_enable(struct odm_dev *odm)
+{
+ struct odm_queue *vq;
+ int qno, rc = 0;
+
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ vq = &odm->vq[qno];
+
+ vq->desc_idx = vq->stats.completed_offset;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ vq->iring_head = 0;
+ vq->cring_head = 0;
+ vq->ins_ring_head = 0;
+ vq->iring_sz_available = vq->iring_max_words;
+
+ rc = odm_queue_ring_config(odm, qno, vq->iring_max_words * 8,
+ vq->cring_max_entry * 4);
+ if (rc < 0)
+ break;
+
+ odm_write64(0x1, odm->rbase + ODM_VDMA_EN(qno));
+ }
+
+ return rc;
+}
+
+int
+odm_disable(struct odm_dev *odm)
+{
+ int qno, wait_cnt = ODM_IRING_IDLE_WAIT_CNT;
+ uint64_t val;
+
+ /* Disable the queue and wait for it to become idle */
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ odm_write64(0x0, odm->rbase + ODM_VDMA_EN(qno));
+ do {
+ val = odm_read64(odm->rbase + ODM_VDMA_IRING_BADDR(qno));
+ } while ((!(val & 1ULL << 63)) && (--wait_cnt > 0));
+ }
+
+ return 0;
+}
+
+int
+odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc)
+{
+ struct odm_queue *vq = &odm->vq[vchan];
+ int isize, csize, max_nb_desc, rc = 0;
+ union odm_mbox_msg mbox_msg;
+ const struct rte_memzone *mz;
+ char name[32];
+
+ if (vq->iring_mz != NULL)
+ odm_vchan_resc_free(odm, vchan);
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+
+ /* ODM PF driver expects vfid to start from index 0 */
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_QUEUE_OPEN;
+ mbox_msg.q.qidx = vchan;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (rc < 0)
+ return rc;
+
+ /* Determine instruction & completion ring sizes. */
+
+ /* Create iring that can support nb_desc. Round up to a multiple of 1024. */
+ isize = RTE_ALIGN_CEIL(nb_desc * ODM_IRING_ENTRY_SIZE_MAX * 8, 1024);
+ isize = RTE_MIN(isize, ODM_IRING_MAX_SIZE);
+ snprintf(name, sizeof(name), "vq%d_iring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, isize, 0, ODM_MEMZONE_FLAGS, 1024);
+ if (mz == NULL)
+ return -ENOMEM;
+ vq->iring_mz = mz;
+ vq->iring_max_words = isize / 8;
+
+ /* Create cring that can support max instructions that can be inflight in hw. */
+ max_nb_desc = (isize / (ODM_IRING_ENTRY_SIZE_MIN * 8));
+ csize = RTE_ALIGN_CEIL(max_nb_desc * sizeof(union odm_cmpl_ent_s), 1024);
+ snprintf(name, sizeof(name), "vq%d_cring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, csize, 0, ODM_MEMZONE_FLAGS, 1024);
+ if (mz == NULL) {
+ rc = -ENOMEM;
+ goto iring_free;
+ }
+ vq->cring_mz = mz;
+ vq->cring_max_entry = csize / 4;
+
+ /* Allocate memory to track the size of each instruction. */
+ snprintf(name, sizeof(name), "vq%d_extra%d", odm->vfid, vchan);
+ vq->extra_ins_sz = rte_zmalloc(name, vq->cring_max_entry, 0);
+ if (vq->extra_ins_sz == NULL) {
+ rc = -ENOMEM;
+ goto cring_free;
+ }
+
+ vq->stats = (struct vq_stats){0};
+ return rc;
+
+cring_free:
+ rte_memzone_free(odm->vq[vchan].cring_mz);
+ vq->cring_mz = NULL;
+iring_free:
+ rte_memzone_free(odm->vq[vchan].iring_mz);
+ vq->iring_mz = NULL;
+
+ return rc;
+}
+
int
odm_dev_init(struct odm_dev *odm)
{
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 9fd3e30ad8..e1373e0c7f 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,7 +9,9 @@
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_io.h>
#include <rte_log.h>
+#include <rte_memzone.h>
extern int odm_logtype;
@@ -54,6 +56,14 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define ODM_IRING_MAX_SIZE (256 * 1024)
+#define ODM_IRING_ENTRY_SIZE_MIN 4
+#define ODM_IRING_ENTRY_SIZE_MAX 13
+#define ODM_IRING_MAX_WORDS (ODM_IRING_MAX_SIZE / 8)
+#define ODM_IRING_MAX_ENTRY (ODM_IRING_MAX_WORDS / ODM_IRING_ENTRY_SIZE_MIN)
+
+#define ODM_MAX_POINTER 4
+
#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
@@ -66,6 +76,10 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define ODM_MEMZONE_FLAGS \
+ (RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
+ RTE_MEMZONE_512MB | RTE_MEMZONE_4GB | RTE_MEMZONE_SIZE_HINT_ONLY)
+
/**
* Structure odm_instr_hdr_s for ODM
*
@@ -141,8 +155,48 @@ union odm_vdma_counts_s {
} s;
};
+struct vq_stats {
+ uint64_t submitted;
+ uint64_t completed;
+ uint64_t errors;
+ /*
+ * Since stats.completed is used to return the completion index, account
+ * for any completions recorded before the stats were reset.
+ */
+ uint64_t completed_offset;
+};
+
+struct odm_queue {
+ struct odm_dev *dev;
+ /* Instructions that are prepared on the iring, but are not yet pushed to hw. */
+ uint16_t pending_submit_cnt;
+ /* Length (in words) of instructions that are not yet pushed to hw. */
+ uint16_t pending_submit_len;
+ uint16_t desc_idx;
+ /* Instruction ring head. Used for enqueue. */
+ uint16_t iring_head;
+ /* Completion ring head. Used for dequeue. */
+ uint16_t cring_head;
+ /* Extra instruction size ring head. Used in enqueue-dequeue.*/
+ uint16_t ins_ring_head;
+ /* Extra instruction size ring tail. Used in enqueue-dequeue.*/
+ uint16_t ins_ring_tail;
+ /* Instruction size available.*/
+ uint16_t iring_sz_available;
+ /* Number of 8-byte words in iring.*/
+ uint16_t iring_max_words;
+ /* Number of words in cring.*/
+ uint16_t cring_max_entry;
+ /* Extra instruction size used per inflight instruction.*/
+ uint8_t *extra_ins_sz;
+ struct vq_stats stats;
+ const struct rte_memzone *iring_mz;
+ const struct rte_memzone *cring_mz;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
+ struct odm_queue vq[ODM_MAX_QUEUES_PER_DEV];
uint8_t *rbase;
uint16_t vfid;
uint8_t max_qs;
@@ -151,5 +205,9 @@ struct __rte_cache_aligned odm_dev {
int odm_dev_init(struct odm_dev *odm);
int odm_dev_fini(struct odm_dev *odm);
+int odm_configure(struct odm_dev *odm);
+int odm_enable(struct odm_dev *odm);
+int odm_disable(struct odm_dev *odm);
+int odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc);
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index bef335c10c..8c705978fe 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -17,6 +17,87 @@
#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
#define PCI_DRIVER_NAME dma_odm
+static int
+odm_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(size);
+
+ odm = dev->fp_obj->dev_private;
+
+ dev_info->max_vchans = odm->max_qs;
+ dev_info->nb_vchans = odm->num_qs;
+ dev_info->dev_capa =
+ (RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG);
+ dev_info->max_desc = ODM_IRING_MAX_ENTRY;
+ dev_info->min_desc = 1;
+ dev_info->max_sges = ODM_MAX_POINTER;
+
+ return 0;
+}
+
+static int
+odm_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(conf_sz);
+
+ odm = dev->fp_obj->dev_private;
+ odm->num_qs = conf->nb_vchans;
+
+ return 0;
+}
+
+static int
+odm_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
+ const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ RTE_SET_USED(conf_sz);
+ return odm_vchan_setup(odm, vchan, conf->nb_desc);
+}
+
+static int
+odm_dmadev_start(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_enable(odm);
+}
+
+static int
+odm_dmadev_stop(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_disable(odm);
+}
+
+static int
+odm_dmadev_close(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ odm_disable(odm);
+ odm_dev_fini(odm);
+
+ return 0;
+}
+
+static const struct rte_dma_dev_ops odm_dmadev_ops = {
+ .dev_close = odm_dmadev_close,
+ .dev_configure = odm_dmadev_configure,
+ .dev_info_get = odm_dmadev_info_get,
+ .dev_start = odm_dmadev_start,
+ .dev_stop = odm_dmadev_stop,
+ .stats_get = NULL,
+ .stats_reset = NULL,
+ .vchan_setup = odm_dmadev_vchan_setup,
+};
+
static int
odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
{
@@ -40,6 +121,10 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
odm_info("DMA device %s probed", name);
odm = dmadev->data->dev_private;
+ dmadev->device = &pci_dev->device;
+ dmadev->fp_obj->dev_private = odm;
+ dmadev->dev_ops = &odm_dmadev_ops;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.25.1
* [PATCH v2 5/7] dma/odm: add stats
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
` (3 preceding siblings ...)
2024-04-17 7:27 ` [PATCH v2 4/7] dma/odm: add device ops Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
` (2 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA dev stats.
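The reset scheme in this patch can be sketched in isolation: completions already reported are folded into `completed_offset` so the completion index returned to the application stays monotonic across a stats reset (struct and helper mirror the patch; names are otherwise illustrative):

```c
#include <stdint.h>

/* Mirrors struct vq_stats from the driver. */
struct vq_stats {
	uint64_t submitted;
	uint64_t completed;
	uint64_t errors;
	/* Completions accumulated across resets, so the completion
	 * index handed back to the application never goes backwards. */
	uint64_t completed_offset;
};

static void vq_stats_reset(struct vq_stats *s)
{
	s->completed_offset += s->completed;
	s->submitted = 0;
	s->completed = 0;
	s->errors = 0;
}
```

Successive resets keep accumulating into `completed_offset`, which is why `odm_enable()` reloads `desc_idx` from it on restart.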
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 63 ++++++++++++++++++++++++++++++++++--
1 file changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 8c705978fe..13b2588246 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -87,14 +87,73 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
+ uint32_t size)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ if (size < sizeof(*rte_stats))
+ return -EINVAL;
+ if (rte_stats == NULL)
+ return -EINVAL;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[vchan].stats;
+
+ *rte_stats = *stats;
+ } else {
+ int i;
+
+ for (i = 0; i < odm->num_qs; i++) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[i].stats;
+
+ rte_stats->submitted += stats->submitted;
+ rte_stats->completed += stats->completed;
+ rte_stats->errors += stats->errors;
+ }
+ }
+
+ return 0;
+}
+
+static void
+odm_vq_stats_reset(struct vq_stats *vq_stats)
+{
+ vq_stats->completed_offset += vq_stats->completed;
+ vq_stats->completed = 0;
+ vq_stats->errors = 0;
+ vq_stats->submitted = 0;
+}
+
+static int
+odm_stats_reset(struct rte_dma_dev *dev, uint16_t vchan)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+ struct vq_stats *vq_stats;
+ int i;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ vq_stats = &odm->vq[vchan].stats;
+ odm_vq_stats_reset(vq_stats);
+ } else {
+ for (i = 0; i < odm->num_qs; i++) {
+ vq_stats = &odm->vq[i].stats;
+ odm_vq_stats_reset(vq_stats);
+ }
+ }
+
+ return 0;
+}
+
static const struct rte_dma_dev_ops odm_dmadev_ops = {
.dev_close = odm_dmadev_close,
.dev_configure = odm_dmadev_configure,
.dev_info_get = odm_dmadev_info_get,
.dev_start = odm_dmadev_start,
.dev_stop = odm_dmadev_stop,
- .stats_get = NULL,
- .stats_reset = NULL,
+ .stats_get = odm_stats_get,
+ .stats_reset = odm_stats_reset,
.vchan_setup = odm_dmadev_vchan_setup,
};
--
2.25.1
* [PATCH v2 6/7] dma/odm: add copy and copy sg ops
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
` (4 preceding siblings ...)
2024-04-17 7:27 ` [PATCH v2 5/7] dma/odm: add stats Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 7/7] dma/odm: add remaining ops Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add ODM copy and copy SG ops.
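The copy-SG path computes its instruction ring footprint as one header word plus a three-word group (length word + two address words) per pair of pointers, with an odd trailing pointer still consuming a full zero-padded group. As a standalone sketch (helper name is illustrative):

```c
#include <assert.h>

/* Iring footprint in 8-byte words for an SG instruction:
 * 1 header word + 3 words per pointer pair (odd count rounds up). */
static unsigned int sg_num_words(unsigned int nb_src, unsigned int nb_dst)
{
	unsigned int nb = nb_src + nb_dst;

	return 1 + 3 * (nb / 2 + (nb & 0x1));
}
```

This reproduces the bounds used elsewhere in the series: a plain 1+1 copy takes 4 words (`ODM_IRING_ENTRY_SIZE_MIN`) and a maximal 4+4 SG transfer takes 13 (`ODM_IRING_ENTRY_SIZE_MAX`).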
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 236 +++++++++++++++++++++++++++++++++++
1 file changed, 236 insertions(+)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 13b2588246..b21be83a89 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -9,6 +9,7 @@
#include <rte_common.h>
#include <rte_dmadev.h>
#include <rte_dmadev_pmd.h>
+#include <rte_memcpy.h>
#include <rte_pci.h>
#include "odm.h"
@@ -87,6 +88,238 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length,
+ uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ const union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ .s.nfst = 1,
+ .s.nlst = 1,
+ };
+
+ vq = &odm->vq[vchan];
+
+ h = length;
+ h |= ((uint64_t)length << 32);
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = src;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head++] = hdr.u;
+ iring_head_ptr[iring_head++] = h;
+ iring_head_ptr[iring_head++] = src;
+ iring_head_ptr[iring_head++] = dst;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
+static inline void
+odm_dmadev_fill_sg(uint64_t *cmd, const struct rte_dma_sge *src, const struct rte_dma_sge *dst,
+ uint16_t nb_src, uint16_t nb_dst, union odm_instr_hdr_s *hdr)
+{
+ int i = 0, j = 0;
+ uint64_t h = 0;
+
+ cmd[j++] = hdr->u;
+ /* When nb_src is even */
+ if (!(nb_src & 0x1)) {
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Fill the iring with dst pointers */
+ for (i = 1; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is odd */
+ if (nb_dst & 0x1) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ } else {
+ /* When nb_src is odd */
+
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Handle the last src pointer */
+ h = ((uint64_t)dst[0].length << 32) | src[nb_src - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[nb_src - 1].addr;
+ cmd[j++] = dst[0].addr;
+
+ /* Fill the iring with dst pointers */
+ for (i = 2; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is even */
+ if (!(nb_dst & 0x1)) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ }
+}
+
+static int
+odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src,
+ const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_head, ins_ring_head;
+ uint16_t iring_sz_available, i, nb, num_words;
+ uint64_t cmd[ODM_IRING_ENTRY_SIZE_MAX];
+ struct odm_dev *odm = dev_private;
+ uint32_t s_sz = 0, d_sz = 0;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ };
+
+ vq = &odm->vq[vchan];
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+ iring_sz_available = vq->iring_sz_available;
+ ins_ring_head = vq->ins_ring_head;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+
+ if (unlikely(nb_src > 4 || nb_dst > 4))
+ return -EINVAL;
+
+ for (i = 0; i < nb_src; i++)
+ s_sz += src[i].length;
+
+ for (i = 0; i < nb_dst; i++)
+ d_sz += dst[i].length;
+
+ if (s_sz != d_sz)
+ return -EINVAL;
+
+ nb = nb_src + nb_dst;
+ hdr.s.nfst = nb_src;
+ hdr.s.nlst = nb_dst;
+ num_words = 1 + 3 * (nb / 2 + (nb & 0x1));
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+ uint16_t words_avail = max_iring_words - iring_head;
+ uint16_t words_pend = num_words - words_avail;
+
+ if (unlikely(words_avail + words_pend > ODM_IRING_ENTRY_SIZE_MAX))
+ return -ENOSPC;
+
+ odm_dmadev_fill_sg(cmd, src, dst, nb_src, nb_dst, &hdr);
+ rte_memcpy((void *)&iring_head_ptr[iring_head], (void *)cmd, words_avail * 8);
+ rte_memcpy((void *)iring_head_ptr, (void *)&cmd[words_avail], words_pend * 8);
+ iring_head = words_pend;
+ } else {
+ odm_dmadev_fill_sg(&iring_head_ptr[iring_head], src, dst, nb_src, nb_dst, &hdr);
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* Save extra space used for the instruction. */
+ vq->extra_ins_sz[ins_ring_head] = num_words - 4;
+
+ vq->ins_ring_head = (ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -184,6 +417,9 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->dev_private = odm;
dmadev->dev_ops = &odm_dmadev_ops;
+ dmadev->fp_obj->copy = odm_dmadev_copy;
+ dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.25.1
* [PATCH v2 7/7] dma/odm: add remaining ops
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
` (5 preceding siblings ...)
2024-04-17 7:27 ` [PATCH v2 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
@ 2024-04-17 7:27 ` Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-17 7:27 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add all remaining ops, such as fill and burst_capacity. Also update the
documentation.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++++++++++
drivers/dma/odm/odm.h | 4 +
drivers/dma/odm/odm_dmadev.c | 246 +++++++++++++++++++++++++++++++++++
5 files changed, 344 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index b8d2f7b3d8..38293008aa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1273,6 +1273,7 @@ M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/odm/
+F: doc/guides/dmadevs/odm.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst
index 5bd25b32b9..ce9f6eb260 100644
--- a/doc/guides/dmadevs/index.rst
+++ b/doc/guides/dmadevs/index.rst
@@ -17,3 +17,4 @@ an application through DMA API.
hisilicon
idxd
ioat
+ odm
diff --git a/doc/guides/dmadevs/odm.rst b/doc/guides/dmadevs/odm.rst
new file mode 100644
index 0000000000..a2eaab59a0
--- /dev/null
+++ b/doc/guides/dmadevs/odm.rst
@@ -0,0 +1,92 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 Marvell.
+
+Odyssey ODM DMA Device Driver
+=============================
+
+The ``odm`` DMA device driver provides a poll-mode driver (PMD) for the Marvell
+Odyssey DMA hardware accelerator block found in the Odyssey SoC. The block
+supports only memory-to-memory DMA transfers.
+
+The ODM DMA device can support up to 32 queues and 16 VFs.
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+Device Setup
+-------------
+
+The ODM DMA device is initialized by the kernel PF driver. The PF kernel
+driver is part of the Marvell software packages for Odyssey.
+
+The kernel module can be loaded as in the example below::
+
+ $ sudo insmod odyssey_odm.ko
+
+The ODM DMA device can support up to 16 VFs::
+
+ $ echo 16 | sudo tee /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
+
+The above command creates 16 VFs with 2 queues each.
+
+The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the
+presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma``
+will show all the Odyssey ODM DMA devices.
+
+Devices using VFIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HW devices to be used will need to be bound to a user-space IO driver.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
+For example::
+
+ $ dpdk-devbind.py -b vfio-pci 0000:08:00.1
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the devices from an application, the dmadev API can be used.
+
+Once configured, the device can then be made ready for use
+by calling the ``rte_dma_start()`` API.
+
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~
+
+Refer to the :ref:`Enqueue / Dequeue APIs <dmadev_enqueue_dequeue>` section
+of the dmadev library documentation for details on operation enqueue and
+submission API usage.
+
+Performance Tuning Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To achieve higher performance, the DMA device needs to be tuned using the PF
+kernel driver module parameters.
+
+The following options are exposed by the kernel PF driver via the devlink
+interface for performance tuning.
+
+``eng_sel``
+
+ The ODM DMA device has 2 engines internally. The engine-to-queue mapping is
+ decided by a hardware register, which can be configured as below::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name eng_sel value 3435973836 cmode runtime
+
+ Each bit in the register corresponds to one queue. Each queue would be
+ associated with one engine. If the value of the bit corresponding to the queue
+ is 0, then engine 0 would be picked. If it is 1, then engine 1 would be
+ picked.
+
+ In the above command, the register value is set as
+ ``1100 1100 1100 1100 1100 1100 1100 1100`` which allows for alternate engines
+ to be used with alternate VFs (assuming the system has 16 VFs with 2 queues
+ each).
+
+``max_load_request``
+
+ Specifies the maximum number of outstanding load requests on the internal bus.
+ Values can range from 1 to 512. Set to 512 for the maximum number of requests
+ in flight::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name max_load_request value 512 cmode runtime
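As an aside, the ``eng_sel`` value ``3435973836`` used in the example above can be derived programmatically. The sketch below (a standalone illustration, not part of the patch; the function name is invented) builds the per-queue engine-select mask, assuming 16 VFs with 2 queues each as described:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Build the 32-bit eng_sel value: one bit per queue, where 0 picks
 * engine 0 and 1 picks engine 1. With 16 VFs of 2 queues each, mapping
 * both queues of every odd VF to engine 1 produces the alternating
 * 1100 1100 ... pattern (0xCCCCCCCC) used in the example above.
 */
static uint32_t
eng_sel_alternate_vfs(void)
{
	uint32_t val = 0;
	int vf;

	for (vf = 0; vf < 16; vf++) {
		if (vf & 1)
			val |= 0x3u << (vf * 2); /* both queues of this VF */
	}
	return val;
}
```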
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index e1373e0c7f..1d60d2d11a 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -75,6 +75,10 @@ extern int odm_logtype;
rte_log(RTE_LOG_INFO, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_debug(...) \
+ rte_log(RTE_LOG_DEBUG, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
#define ODM_MEMZONE_FLAGS \
(RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index b21be83a89..6e9ef90494 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -320,6 +320,247 @@ odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *
return vq->desc_idx++;
}
+static int
+odm_dmadev_fill(void *dev_private, uint16_t vchan, uint64_t pattern, rte_iova_t dst,
+ uint32_t length, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ vq = &odm->vq[vchan];
+
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.nfst = 0,
+ .s.nlst = 1,
+ };
+
+ h = (uint64_t)length;
+
+ switch (pattern) {
+ case 0:
+ hdr.s.xtype = ODM_XTYPE_FILL0;
+ break;
+ case 0xffffffffffffffff:
+ hdr.s.xtype = ODM_XTYPE_FILL1;
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = 0;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head_ptr[iring_head + 1] = h;
+ iring_head_ptr[iring_head + 2] = dst;
+ iring_head_ptr[iring_head + 3] = 0;
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
+static uint16_t
+odm_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx,
+ bool *has_error)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint64_t nb_err = 0;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (unlikely(vq->stats.submitted == vq->stats.completed)) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit(cmpl_ptr, rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ if (cmpl.s.cmp_code)
+ nb_err++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit(cmpl_ptr, cmpl_zero.u, rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->stats.errors += nb_err;
+
+ if (unlikely(has_error != NULL && nb_err))
+ *has_error = true;
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
+
+static uint16_t
+odm_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls,
+ uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (vq->stats.submitted == vq->stats.completed) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+#ifdef ODM_DEBUG
+ odm_debug("cring_head: 0x%" PRIx16, cring_head);
+ odm_debug("Submitted: 0x%" PRIx64, vq->stats.submitted);
+ odm_debug("Completed: 0x%" PRIx64, vq->stats.completed);
+ odm_debug("Hardware count: 0x%" PRIx64, odm_read64(odm->rbase + ODM_VDMA_CNT(vchan)));
+#endif
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit(cmpl_ptr, rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ status[cnt] = cmpl.s.cmp_code;
+
+ if (cmpl.s.cmp_code)
+ vq->stats.errors++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit(cmpl_ptr, cmpl_zero.u, rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
+
+static int
+odm_dmadev_submit(void *dev_private, uint16_t vchan)
+{
+ struct odm_dev *odm = dev_private;
+ uint16_t pending_submit_len;
+ struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ pending_submit_len = vq->pending_submit_len;
+
+ if (pending_submit_len == 0)
+ return 0;
+
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->pending_submit_len = 0;
+ vq->stats.submitted += vq->pending_submit_cnt;
+ vq->pending_submit_cnt = 0;
+
+ return 0;
+}
+
+static uint16_t
+odm_dmadev_burst_capacity(const void *dev_private, uint16_t vchan)
+{
+ const struct odm_dev *odm = dev_private;
+ const struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ return (vq->iring_sz_available / ODM_IRING_ENTRY_SIZE_MIN);
+}
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -419,6 +660,11 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->copy = odm_dmadev_copy;
dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+ dmadev->fp_obj->fill = odm_dmadev_fill;
+ dmadev->fp_obj->submit = odm_dmadev_submit;
+ dmadev->fp_obj->completed = odm_dmadev_completed;
+ dmadev->fp_obj->completed_status = odm_dmadev_completed_status;
+ dmadev->fp_obj->burst_capacity = odm_dmadev_burst_capacity;
odm->pci_dev = pci_dev;
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
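The completion-side pattern in `odm_dmadev_completed()` above -- poll until an entry without the valid bit is found, count error completion codes, clear consumed entries, and advance the head with wraparound -- can be sketched on its own. This is an assumption-laden illustration (tiny ring size, invented function name, bit masks mirroring the `odm_cmpl_ent_s` layout), not the driver's code:

```c
#include <assert.h>
#include <stdint.h>

#define CRING_ENTRIES 4             /* hypothetical completion-ring size */
#define CMPL_VALID    (1u << 31)    /* 'valid' bit of odm_cmpl_ent_s */
#define CMPL_CODE(e)  ((e) & 0xffu) /* 'cmp_code' field */

/*
 * Consume up to nb_cpls completion entries starting at *head: stop at the
 * first entry without the valid bit, count non-zero completion codes as
 * errors, clear each consumed entry so the slot can be reused, and advance
 * the head with wraparound. Returns the number of entries consumed.
 */
static int
poll_completions(uint32_t *cring, uint16_t *head, int nb_cpls, int *nb_err)
{
	int cnt;

	*nb_err = 0;
	for (cnt = 0; cnt < nb_cpls; cnt++) {
		uint32_t ent = cring[*head];

		if (!(ent & CMPL_VALID))
			break;
		if (CMPL_CODE(ent))
			(*nb_err)++;
		cring[*head] = 0;
		*head = (*head + 1) % CRING_ENTRIES;
	}
	return cnt;
}
```

In the PMD, consuming an entry additionally returns instruction-ring space (`4 + extra_ins_sz[head]` words) to `iring_sz_available`, which is omitted here.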
* [PATCH v3 0/7] Add ODM DMA device
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
` (6 preceding siblings ...)
2024-04-17 7:27 ` [PATCH v2 7/7] dma/odm: add remaining ops Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 1/7] dma/odm: add framework for " Anoob Joseph
` (7 more replies)
7 siblings, 8 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add Odyssey ODM DMA device. This PMD abstracts the ODM hardware unit on
Odyssey SoC, which can perform mem-to-mem copies.
The hardware unit can support up to 32 queues (vchan) and 16 VFs. It
supports 'fill' operation with specific values. It also supports
SG mode of operation with up to 4 source pointers and 4 destination
pointers.
The PMD is tested with both unit tests and performance applications.
Changes in v3
- Addressed build failure with stdatomic stage in CI
Changes in v2
- Addressed build failure in CI
- Moved update to usertools as separate patch
Anoob Joseph (2):
dma/odm: add framework for ODM DMA device
dma/odm: add hardware defines
Gowrishankar Muthukrishnan (3):
dma/odm: add dev init and fini
dma/odm: add device ops
dma/odm: add stats
Vidya Sagar Velumuri (2):
dma/odm: add copy and copy sg ops
dma/odm: add remaining ops
MAINTAINERS | 7 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +
drivers/dma/odm/odm.c | 237 ++++++++++++
drivers/dma/odm/odm.h | 217 +++++++++++
drivers/dma/odm/odm_dmadev.c | 717 +++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 +++
9 files changed, 1335 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.c
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
create mode 100644 drivers/dma/odm/odm_priv.h
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v3 1/7] dma/odm: add framework for ODM DMA device
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-05-24 13:26 ` Jerin Jacob
2024-04-19 6:43 ` [PATCH v3 2/7] dma/odm: add hardware defines Anoob Joseph
` (6 subsequent siblings)
7 siblings, 1 reply; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add framework for Odyssey ODM DMA device.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 6 +++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +++++++
drivers/dma/odm/odm.h | 29 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 74 ++++++++++++++++++++++++++++++++++++
5 files changed, 124 insertions(+)
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7abb3aee49..b8d2f7b3d8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1268,6 +1268,12 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
+Marvell Odyssey ODM DMA
+M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/dma/odm/
+
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index 582654ea1b..358132759a 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -8,6 +8,7 @@ drivers = [
'hisilicon',
'idxd',
'ioat',
+ 'odm',
'skeleton',
]
std_deps = ['dmadev']
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
new file mode 100644
index 0000000000..227b10c890
--- /dev/null
+++ b/drivers/dma/odm/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2024 Marvell.
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
+
+sources = files('odm_dmadev.c')
+
+pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
new file mode 100644
index 0000000000..aeeb6f9e9a
--- /dev/null
+++ b/drivers/dma/odm/odm.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_H_
+#define _ODM_H_
+
+#include <rte_log.h>
+
+extern int odm_logtype;
+
+#define odm_err(...) \
+ rte_log(RTE_LOG_ERR, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_info(...) \
+ rte_log(RTE_LOG_INFO, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+
+struct __rte_cache_aligned odm_dev {
+ struct rte_pci_device *pci_dev;
+ uint8_t *rbase;
+ uint16_t vfid;
+ uint8_t max_qs;
+ uint8_t num_qs;
+};
+
+#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
new file mode 100644
index 0000000000..cc3342cf7b
--- /dev/null
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <string.h>
+
+#include <bus_pci_driver.h>
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_dmadev.h>
+#include <rte_dmadev_pmd.h>
+#include <rte_pci.h>
+
+#include "odm.h"
+
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
+#define PCI_DRIVER_NAME dma_odm
+
+static int
+odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+ struct odm_dev *odm = NULL;
+ struct rte_dma_dev *dmadev;
+
+ if (!pci_dev->mem_resource[0].addr)
+ return -ENODEV;
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*odm));
+ if (dmadev == NULL) {
+ odm_err("DMA device allocation failed for %s", name);
+ return -ENOMEM;
+ }
+
+ odm_info("DMA device %s probed", name);
+
+ return 0;
+}
+
+static int
+odm_dmadev_remove(struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ return rte_dma_pmd_release(name);
+}
+
+static const struct rte_pci_id odm_dma_pci_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_ODYSSEY_ODM_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver odm_dmadev = {
+ .id_table = odm_dma_pci_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = odm_dmadev_probe,
+ .remove = odm_dmadev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(PCI_DRIVER_NAME, odm_dmadev);
+RTE_PMD_REGISTER_PCI_TABLE(PCI_DRIVER_NAME, odm_dma_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(PCI_DRIVER_NAME, "vfio-pci");
+RTE_LOG_REGISTER_DEFAULT(odm_logtype, NOTICE);
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v3 2/7] dma/odm: add hardware defines
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 1/7] dma/odm: add framework for " Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-05-24 13:29 ` Jerin Jacob
2024-04-19 6:43 ` [PATCH v3 3/7] dma/odm: add dev init and fini Anoob Joseph
` (5 subsequent siblings)
7 siblings, 1 reply; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add ODM registers and structures. Add mailbox structs as well.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.h | 116 +++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 ++++++++++++++++
2 files changed, 165 insertions(+)
create mode 100644 drivers/dma/odm/odm_priv.h
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index aeeb6f9e9a..7564ffbed4 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,6 +9,47 @@
extern int odm_logtype;
+/* ODM VF register offsets from VF_BAR0 */
+#define ODM_VDMA_EN(x) (0x00 | (x << 3))
+#define ODM_VDMA_REQQ_CTL(x) (0x80 | (x << 3))
+#define ODM_VDMA_DBELL(x) (0x100 | (x << 3))
+#define ODM_VDMA_RING_CFG(x) (0x180 | (x << 3))
+#define ODM_VDMA_IRING_BADDR(x) (0x200 | (x << 3))
+#define ODM_VDMA_CRING_BADDR(x) (0x280 | (x << 3))
+#define ODM_VDMA_COUNTS(x) (0x300 | (x << 3))
+#define ODM_VDMA_IRING_NADDR(x) (0x380 | (x << 3))
+#define ODM_VDMA_CRING_NADDR(x) (0x400 | (x << 3))
+#define ODM_VDMA_IRING_DBG(x) (0x480 | (x << 3))
+#define ODM_VDMA_CNT(x) (0x580 | (x << 3))
+#define ODM_VF_INT (0x1000)
+#define ODM_VF_INT_W1S (0x1008)
+#define ODM_VF_INT_ENA_W1C (0x1010)
+#define ODM_VF_INT_ENA_W1S (0x1018)
+#define ODM_MBOX_VF_PF_DATA(i) (0x2000 | (i << 3))
+
+#define ODM_MBOX_RETRY_CNT (0xfffffff)
+#define ODM_MBOX_ERR_CODE_MAX (0xf)
+#define ODM_IRING_IDLE_WAIT_CNT (0xfffffff)
+
+/**
+ * Enumeration odm_hdr_xtype_e
+ *
+ * ODM Transfer Type Enumeration
+ * Enumerates the pointer type in ODM_DMA_INSTR_HDR_S[XTYPE]
+ */
+#define ODM_XTYPE_INTERNAL 2
+#define ODM_XTYPE_FILL0 4
+#define ODM_XTYPE_FILL1 5
+
+/**
+ * ODM Header completion type enumeration
+ * Enumerates the completion type in ODM_DMA_INSTR_HDR_S[CT]
+ */
+#define ODM_HDR_CT_CW_CA 0x0
+#define ODM_HDR_CT_CW_NC 0x1
+
+#define ODM_MAX_QUEUES_PER_DEV 16
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -18,6 +59,81 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+/**
+ * Structure odm_instr_hdr_s for ODM
+ *
+ * ODM DMA Instruction Header Format
+ */
+union odm_instr_hdr_s {
+ uint64_t u;
+ struct odm_instr_hdr {
+ uint64_t nfst : 3;
+ uint64_t reserved_3 : 1;
+ uint64_t nlst : 3;
+ uint64_t reserved_7_9 : 3;
+ uint64_t ct : 2;
+ uint64_t stse : 1;
+ uint64_t reserved_13_28 : 16;
+ uint64_t sts : 1;
+ uint64_t reserved_30_49 : 20;
+ uint64_t xtype : 3;
+ uint64_t reserved_53_63 : 11;
+ } s;
+};
+
+/**
+ * ODM Completion Entry Structure
+ *
+ */
+union odm_cmpl_ent_s {
+ uint32_t u;
+ struct odm_cmpl_ent {
+ uint32_t cmp_code : 8;
+ uint32_t rsvd : 23;
+ uint32_t valid : 1;
+ } s;
+};
+
+/**
+ * ODM DMA Ring Configuration Register
+ */
+union odm_vdma_ring_cfg_s {
+ uint64_t u;
+ struct {
+ uint64_t isize : 8;
+ uint64_t rsvd_8_15 : 8;
+ uint64_t csize : 8;
+ uint64_t rsvd_24_63 : 40;
+ } s;
+};
+
+/**
+ * ODM DMA Instruction Ring DBG
+ */
+union odm_vdma_iring_dbg_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell_cnt : 32;
+ uint64_t offset : 16;
+ uint64_t rsvd_48_62 : 15;
+ uint64_t iwbusy : 1;
+ } s;
+};
+
+/**
+ * ODM DMA Counts
+ */
+union odm_vdma_counts_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell : 32;
+ uint64_t buf_used_cnt : 9;
+ uint64_t rsvd_41_43 : 3;
+ uint64_t rsvd_buf_used_cnt : 3;
+ uint64_t rsvd_47_63 : 17;
+ } s;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
uint8_t *rbase;
diff --git a/drivers/dma/odm/odm_priv.h b/drivers/dma/odm/odm_priv.h
new file mode 100644
index 0000000000..1878f4d9a6
--- /dev/null
+++ b/drivers/dma/odm/odm_priv.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_PRIV_H_
+#define _ODM_PRIV_H_
+
+#define ODM_MAX_VFS 16
+#define ODM_MAX_QUEUES 32
+
+#define ODM_CMD_QUEUE_SIZE 4096
+
+#define ODM_DEV_INIT 0x1
+#define ODM_DEV_CLOSE 0x2
+#define ODM_QUEUE_OPEN 0x3
+#define ODM_QUEUE_CLOSE 0x4
+#define ODM_REG_DUMP 0x5
+
+struct odm_mbox_dev_msg {
+ /* Response code */
+ uint64_t rsp : 8;
+ /* Number of VFs */
+ uint64_t nvfs : 2;
+ /* Error code */
+ uint64_t err : 6;
+ /* Reserved */
+ uint64_t rsvd_16_63 : 48;
+};
+
+struct odm_mbox_queue_msg {
+ /* Command code */
+ uint64_t cmd : 8;
+ /* VF ID to configure */
+ uint64_t vfid : 8;
+ /* Queue index in the VF */
+ uint64_t qidx : 8;
+ /* Reserved */
+ uint64_t rsvd_24_63 : 40;
+};
+
+union odm_mbox_msg {
+ uint64_t u[2];
+ struct {
+ struct odm_mbox_dev_msg d;
+ struct odm_mbox_queue_msg q;
+ };
+};
+
+#endif /* _ODM_PRIV_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
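The bit-field unions in the hardware-defines patch above, such as `odm_instr_hdr_s`, rely on GCC's little-endian bit allocation (first field in the least significant bits). A shift-based equivalent makes the layout explicit; the sketch below is an illustration with an invented helper name, packing only the fields the data-path uses (nfst at bit 0, nlst at bit 4, ct at bits 10-11, xtype at bits 50-52):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pack an ODM instruction header word following the odm_instr_hdr_s
 * bit allocation: nfst occupies bits 0-2, nlst bits 4-6, ct bits 10-11
 * and xtype bits 50-52, with the reserved fields left as zero.
 */
static uint64_t
odm_pack_hdr(uint64_t nfst, uint64_t nlst, uint64_t ct, uint64_t xtype)
{
	return (nfst & 0x7) | ((nlst & 0x7) << 4) |
	       ((ct & 0x3) << 10) | ((xtype & 0x7) << 50);
}
```

For example, a 'fill' instruction header (nfst = 0, nlst = 1, ct = ODM_HDR_CT_CW_NC = 1, xtype = ODM_XTYPE_FILL0 = 4) packs to 0x0010000000000410.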
* [PATCH v3 3/7] dma/odm: add dev init and fini
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 1/7] dma/odm: add framework for " Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 2/7] dma/odm: add hardware defines Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 4/7] dma/odm: add device ops Anoob Joseph
` (4 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add ODM device init and fini.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/meson.build | 2 +-
drivers/dma/odm/odm.c | 97 ++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm.h | 10 ++++
drivers/dma/odm/odm_dmadev.c | 13 +++++
4 files changed, 121 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma/odm/odm.c
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
index 227b10c890..d597762d37 100644
--- a/drivers/dma/odm/meson.build
+++ b/drivers/dma/odm/meson.build
@@ -9,6 +9,6 @@ endif
deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
-sources = files('odm_dmadev.c')
+sources = files('odm_dmadev.c', 'odm.c')
pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
new file mode 100644
index 0000000000..c0963da451
--- /dev/null
+++ b/drivers/dma/odm/odm.c
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <stdint.h>
+
+#include <bus_pci_driver.h>
+
+#include <rte_io.h>
+
+#include "odm.h"
+#include "odm_priv.h"
+
+static void
+odm_vchan_resc_free(struct odm_dev *odm, int qno)
+{
+ RTE_SET_USED(odm);
+ RTE_SET_USED(qno);
+}
+
+static int
+send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg *rsp)
+{
+ int retry_cnt = ODM_MBOX_RETRY_CNT;
+ union odm_mbox_msg pf_msg;
+
+ msg->d.err = ODM_MBOX_ERR_CODE_MAX;
+ odm_write64(msg->u[0], odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ odm_write64(msg->u[1], odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ pf_msg.u[0] = 0;
+ pf_msg.u[1] = 0;
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+
+ while (pf_msg.d.rsp == 0 && retry_cnt > 0) {
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ --retry_cnt;
+ }
+
+ if (retry_cnt <= 0)
+ return -EBADE;
+
+ pf_msg.u[1] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ if (rsp) {
+ rsp->u[0] = pf_msg.u[0];
+ rsp->u[1] = pf_msg.u[1];
+ }
+
+ if (pf_msg.d.rsp == msg->d.err && pf_msg.d.err != 0)
+ return -EBADE;
+
+ return 0;
+}
+
+int
+odm_dev_init(struct odm_dev *odm)
+{
+ struct rte_pci_device *pci_dev = odm->pci_dev;
+ union odm_mbox_msg mbox_msg;
+ uint16_t vfid;
+ int rc;
+
+ odm->rbase = pci_dev->mem_resource[0].addr;
+ vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7);
+ vfid -= 1;
+ odm->vfid = vfid;
+ odm->num_qs = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_INIT;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (!rc)
+ odm->max_qs = 1 << (4 - mbox_msg.d.nvfs);
+
+ return rc;
+}
+
+int
+odm_dev_fini(struct odm_dev *odm)
+{
+ union odm_mbox_msg mbox_msg;
+ int qno, rc = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_CLOSE;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+
+ for (qno = 0; qno < odm->num_qs; qno++)
+ odm_vchan_resc_free(odm, qno);
+
+ return rc;
+}
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 7564ffbed4..9fd3e30ad8 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -5,6 +5,10 @@
#ifndef _ODM_H_
#define _ODM_H_
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_compat.h>
#include <rte_log.h>
extern int odm_logtype;
@@ -50,6 +54,9 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
+#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -142,4 +149,7 @@ struct __rte_cache_aligned odm_dev {
uint8_t num_qs;
};
+int odm_dev_init(struct odm_dev *odm);
+int odm_dev_fini(struct odm_dev *odm);
+
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index cc3342cf7b..bef335c10c 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -23,6 +23,7 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
char name[RTE_DEV_NAME_MAX_LEN];
struct odm_dev *odm = NULL;
struct rte_dma_dev *dmadev;
+ int rc;
if (!pci_dev->mem_resource[0].addr)
return -ENODEV;
@@ -37,8 +38,20 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
}
odm_info("DMA device %s probed", name);
+ odm = dmadev->data->dev_private;
+
+ odm->pci_dev = pci_dev;
+
+ rc = odm_dev_init(odm);
+ if (rc < 0)
+ goto dma_pmd_release;
return 0;
+
+dma_pmd_release:
+ rte_dma_pmd_release(name);
+
+ return rc;
}
static int
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
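The VF-index derivation in `odm_dev_init()` above combines the PCI device and function numbers and subtracts one, since the first VF sits at function 1 after the PF. A standalone sketch of that arithmetic (illustrative helper name, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Derive the VF index from the PCI device and function numbers the way
 * odm_dev_init() does: the 5-bit device id supplies the upper bits and
 * the 3-bit function the lower three bits of a linear index, and VF
 * numbering starts at 0 for the first VF after the PF.
 */
static uint16_t
odm_vfid_from_bdf(uint8_t devid, uint8_t function)
{
	uint16_t vfid = (uint16_t)(((devid & 0x1F) << 3) | (function & 0x7));

	return vfid - 1;
}
```

For instance, the first VF at 0000:08:00.1 maps to VF index 0, and function numbers past 7 spill into the device-number bits.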
* [PATCH v3 4/7] dma/odm: add device ops
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
` (2 preceding siblings ...)
2024-04-19 6:43 ` [PATCH v3 3/7] dma/odm: add dev init and fini Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-05-24 13:37 ` Jerin Jacob
2024-04-19 6:43 ` [PATCH v3 5/7] dma/odm: add stats Anoob Joseph
` (3 subsequent siblings)
7 siblings, 1 reply; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA device control ops.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.c | 144 ++++++++++++++++++++++++++++++++++-
drivers/dma/odm/odm.h | 58 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 85 +++++++++++++++++++++
3 files changed, 285 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
index c0963da451..6094ace9fd 100644
--- a/drivers/dma/odm/odm.c
+++ b/drivers/dma/odm/odm.c
@@ -7,6 +7,7 @@
#include <bus_pci_driver.h>
#include <rte_io.h>
+#include <rte_malloc.h>
#include "odm.h"
#include "odm_priv.h"
@@ -14,8 +15,15 @@
static void
odm_vchan_resc_free(struct odm_dev *odm, int qno)
{
- RTE_SET_USED(odm);
- RTE_SET_USED(qno);
+ struct odm_queue *vq = &odm->vq[qno];
+
+ rte_memzone_free(vq->iring_mz);
+ rte_memzone_free(vq->cring_mz);
+ rte_free(vq->extra_ins_sz);
+
+ vq->iring_mz = NULL;
+ vq->cring_mz = NULL;
+ vq->extra_ins_sz = NULL;
}
static int
@@ -53,6 +61,138 @@ send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg
return 0;
}
+static int
+odm_queue_ring_config(struct odm_dev *odm, int vchan, int isize, int csize)
+{
+ union odm_vdma_ring_cfg_s ring_cfg = {0};
+ struct odm_queue *vq = &odm->vq[vchan];
+
+ if (vq->iring_mz == NULL || vq->cring_mz == NULL)
+ return -EINVAL;
+
+ ring_cfg.s.isize = (isize / 1024) - 1;
+ ring_cfg.s.csize = (csize / 1024) - 1;
+
+ odm_write64(ring_cfg.u, odm->rbase + ODM_VDMA_RING_CFG(vchan));
+ odm_write64(vq->iring_mz->iova, odm->rbase + ODM_VDMA_IRING_BADDR(vchan));
+ odm_write64(vq->cring_mz->iova, odm->rbase + ODM_VDMA_CRING_BADDR(vchan));
+
+ return 0;
+}
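The ring-size fields above are programmed as ``(size / 1024) - 1``, i.e. the hardware counts ring sizes in 1 KB units, minus one. A standalone sketch of that encoding (the function name is illustrative, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Encode a ring size in bytes into the value the ring-config register
 * field expects: size in 1 KB units, minus one. The size must be a
 * non-zero multiple of 1024 (the driver rounds up before programming it). */
static inline uint64_t
ring_size_encode(uint32_t size_bytes)
{
	return (size_bytes / 1024) - 1;
}
```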
+
+int
+odm_enable(struct odm_dev *odm)
+{
+ struct odm_queue *vq;
+ int qno, rc = 0;
+
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ vq = &odm->vq[qno];
+
+ vq->desc_idx = vq->stats.completed_offset;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ vq->iring_head = 0;
+ vq->cring_head = 0;
+ vq->ins_ring_head = 0;
+ vq->iring_sz_available = vq->iring_max_words;
+
+ rc = odm_queue_ring_config(odm, qno, vq->iring_max_words * 8,
+ vq->cring_max_entry * 4);
+ if (rc < 0)
+ break;
+
+ odm_write64(0x1, odm->rbase + ODM_VDMA_EN(qno));
+ }
+
+ return rc;
+}
+
+int
+odm_disable(struct odm_dev *odm)
+{
+ int qno, wait_cnt = ODM_IRING_IDLE_WAIT_CNT;
+ uint64_t val;
+
+ /* Disable the queue and wait for it to become idle */
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ odm_write64(0x0, odm->rbase + ODM_VDMA_EN(qno));
+ do {
+ val = odm_read64(odm->rbase + ODM_VDMA_IRING_BADDR(qno));
+ } while ((!(val & 1ULL << 63)) && (--wait_cnt > 0));
+ }
+
+ return 0;
+}
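The disable path polls bit 63 of the IRING base-address register, which reads back as 1 once the queue has drained. A self-contained model of that bounded poll (the register callback is a stand-in for ``odm_read64()`` on the real MMIO address; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define IDLE_WAIT_CNT 4096 /* illustrative bound; the driver uses ODM_IRING_IDLE_WAIT_CNT */

/* Poll until the idle bit (bit 63) is set or the retry budget runs out.
 * Returns true if the queue went idle within the budget. */
static bool
wait_queue_idle(uint64_t (*read_reg)(void *ctx), void *ctx)
{
	int wait_cnt = IDLE_WAIT_CNT;
	uint64_t val;

	do {
		val = read_reg(ctx);
	} while (!(val & (1ULL << 63)) && --wait_cnt > 0);

	return (val & (1ULL << 63)) != 0;
}

/* A fake register that reports idle after a few reads. */
static uint64_t
fake_reg_read(void *ctx)
{
	int *reads_left = ctx;
	return (--(*reads_left) <= 0) ? (1ULL << 63) : 0;
}
```

One subtlety in the patch above: ``wait_cnt`` is initialized once outside the per-queue loop and never reset, so queues disabled later appear to get a smaller polling budget than the first one.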
+
+int
+odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc)
+{
+ struct odm_queue *vq = &odm->vq[vchan];
+ int isize, csize, max_nb_desc, rc = 0;
+ union odm_mbox_msg mbox_msg;
+ const struct rte_memzone *mz;
+ char name[32];
+
+ if (vq->iring_mz != NULL)
+ odm_vchan_resc_free(odm, vchan);
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+
+ /* The ODM PF driver expects vfid to start from index 0 */
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_QUEUE_OPEN;
+ mbox_msg.q.qidx = vchan;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (rc < 0)
+ return rc;
+
+ /* Determine instruction & completion ring sizes. */
+
+ /* Create iring that can support nb_desc. Round up to a multiple of 1024. */
+ isize = RTE_ALIGN_CEIL(nb_desc * ODM_IRING_ENTRY_SIZE_MAX * 8, 1024);
+ isize = RTE_MIN(isize, ODM_IRING_MAX_SIZE);
+ snprintf(name, sizeof(name), "vq%d_iring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, isize, 0, ODM_MEMZONE_FLAGS, 1024);
+ if (mz == NULL)
+ return -ENOMEM;
+ vq->iring_mz = mz;
+ vq->iring_max_words = isize / 8;
+
+ /* Create a cring that can support the maximum number of instructions that can be in flight in hw. */
+ max_nb_desc = (isize / (ODM_IRING_ENTRY_SIZE_MIN * 8));
+ csize = RTE_ALIGN_CEIL(max_nb_desc * sizeof(union odm_cmpl_ent_s), 1024);
+ snprintf(name, sizeof(name), "vq%d_cring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, csize, 0, ODM_MEMZONE_FLAGS, 1024);
+ if (mz == NULL) {
+ rc = -ENOMEM;
+ goto iring_free;
+ }
+ vq->cring_mz = mz;
+ vq->cring_max_entry = csize / 4;
+
+ /* Allocate memory to track the size of each instruction. */
+ snprintf(name, sizeof(name), "vq%d_extra%d", odm->vfid, vchan);
+ vq->extra_ins_sz = rte_zmalloc(name, vq->cring_max_entry, 0);
+ if (vq->extra_ins_sz == NULL) {
+ rc = -ENOMEM;
+ goto cring_free;
+ }
+
+ vq->stats = (struct vq_stats){0};
+ return rc;
+
+cring_free:
+ rte_memzone_free(odm->vq[vchan].cring_mz);
+ vq->cring_mz = NULL;
+iring_free:
+ rte_memzone_free(odm->vq[vchan].iring_mz);
+ vq->iring_mz = NULL;
+
+ return rc;
+}
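The sizing logic above dimensions the iring for the worst case (every descriptor using the largest 13-word entry) and the cring for the best case (every entry at the 4-word minimum, which maximizes the number of in-flight instructions). A self-contained sketch of that arithmetic (constants mirror the ``ODM_IRING_*`` defines; function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define ENTRY_WORDS_MIN 4        /* ODM_IRING_ENTRY_SIZE_MIN */
#define ENTRY_WORDS_MAX 13       /* ODM_IRING_ENTRY_SIZE_MAX */
#define IRING_MAX_SIZE (256 * 1024)
#define ALIGN_1K(x) (((x) + 1023u) & ~1023u)

/* Instruction ring bytes: worst case every descriptor uses the largest
 * entry (13 eight-byte words), rounded up to 1 KB and capped. */
static uint32_t
iring_bytes(uint32_t nb_desc)
{
	uint32_t isize = ALIGN_1K(nb_desc * ENTRY_WORDS_MAX * 8);
	return isize < IRING_MAX_SIZE ? isize : IRING_MAX_SIZE;
}

/* Completion ring bytes: one 4-byte entry per instruction that could be
 * in flight if every iring entry were minimum sized. */
static uint32_t
cring_bytes(uint32_t isize)
{
	uint32_t max_nb_desc = isize / (ENTRY_WORDS_MIN * 8);
	return ALIGN_1K(max_nb_desc * 4);
}
```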
+
int
odm_dev_init(struct odm_dev *odm)
{
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 9fd3e30ad8..e1373e0c7f 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,7 +9,9 @@
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_io.h>
#include <rte_log.h>
+#include <rte_memzone.h>
extern int odm_logtype;
@@ -54,6 +56,14 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define ODM_IRING_MAX_SIZE (256 * 1024)
+#define ODM_IRING_ENTRY_SIZE_MIN 4
+#define ODM_IRING_ENTRY_SIZE_MAX 13
+#define ODM_IRING_MAX_WORDS (ODM_IRING_MAX_SIZE / 8)
+#define ODM_IRING_MAX_ENTRY (ODM_IRING_MAX_WORDS / ODM_IRING_ENTRY_SIZE_MIN)
+
+#define ODM_MAX_POINTER 4
+
#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
@@ -66,6 +76,10 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define ODM_MEMZONE_FLAGS \
+ (RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
+ RTE_MEMZONE_512MB | RTE_MEMZONE_4GB | RTE_MEMZONE_SIZE_HINT_ONLY)
+
/**
* Structure odm_instr_hdr_s for ODM
*
@@ -141,8 +155,48 @@ union odm_vdma_counts_s {
} s;
};
+struct vq_stats {
+ uint64_t submitted;
+ uint64_t completed;
+ uint64_t errors;
+ /*
+ * Since stats.completed is used to return the completion index, account for any
+ * operations completed before the stats were reset.
+ */
+ uint64_t completed_offset;
+};
+
+struct odm_queue {
+ struct odm_dev *dev;
+ /* Instructions that are prepared on the iring but are not yet pushed to hw. */
+ uint16_t pending_submit_cnt;
+ /* Length (in words) of instructions that are not yet pushed to hw. */
+ uint16_t pending_submit_len;
+ uint16_t desc_idx;
+ /* Instruction ring head. Used for enqueue. */
+ uint16_t iring_head;
+ /* Completion ring head. Used for dequeue. */
+ uint16_t cring_head;
+ /* Extra instruction size ring head. Used in enqueue/dequeue. */
+ uint16_t ins_ring_head;
+ /* Extra instruction size ring tail. Used in enqueue/dequeue. */
+ uint16_t ins_ring_tail;
+ /* Instruction ring space available, in 8-byte words. */
+ uint16_t iring_sz_available;
+ /* Number of 8-byte words in iring. */
+ uint16_t iring_max_words;
+ /* Number of 4-byte entries in cring. */
+ uint16_t cring_max_entry;
+ /* Extra instruction size used per inflight instruction.*/
+ uint8_t *extra_ins_sz;
+ struct vq_stats stats;
+ const struct rte_memzone *iring_mz;
+ const struct rte_memzone *cring_mz;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
+ struct odm_queue vq[ODM_MAX_QUEUES_PER_DEV];
uint8_t *rbase;
uint16_t vfid;
uint8_t max_qs;
@@ -151,5 +205,9 @@ struct __rte_cache_aligned odm_dev {
int odm_dev_init(struct odm_dev *odm);
int odm_dev_fini(struct odm_dev *odm);
+int odm_configure(struct odm_dev *odm);
+int odm_enable(struct odm_dev *odm);
+int odm_disable(struct odm_dev *odm);
+int odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc);
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index bef335c10c..8c705978fe 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -17,6 +17,87 @@
#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
#define PCI_DRIVER_NAME dma_odm
+static int
+odm_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(size);
+
+ odm = dev->fp_obj->dev_private;
+
+ dev_info->max_vchans = odm->max_qs;
+ dev_info->nb_vchans = odm->num_qs;
+ dev_info->dev_capa =
+ (RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG);
+ dev_info->max_desc = ODM_IRING_MAX_ENTRY;
+ dev_info->min_desc = 1;
+ dev_info->max_sges = ODM_MAX_POINTER;
+
+ return 0;
+}
+
+static int
+odm_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(conf_sz);
+
+ odm = dev->fp_obj->dev_private;
+ odm->num_qs = conf->nb_vchans;
+
+ return 0;
+}
+
+static int
+odm_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
+ const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ RTE_SET_USED(conf_sz);
+ return odm_vchan_setup(odm, vchan, conf->nb_desc);
+}
+
+static int
+odm_dmadev_start(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_enable(odm);
+}
+
+static int
+odm_dmadev_stop(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_disable(odm);
+}
+
+static int
+odm_dmadev_close(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ odm_disable(odm);
+ odm_dev_fini(odm);
+
+ return 0;
+}
+
+static const struct rte_dma_dev_ops odm_dmadev_ops = {
+ .dev_close = odm_dmadev_close,
+ .dev_configure = odm_dmadev_configure,
+ .dev_info_get = odm_dmadev_info_get,
+ .dev_start = odm_dmadev_start,
+ .dev_stop = odm_dmadev_stop,
+ .stats_get = NULL,
+ .stats_reset = NULL,
+ .vchan_setup = odm_dmadev_vchan_setup,
+};
+
static int
odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
{
@@ -40,6 +121,10 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
odm_info("DMA device %s probed", name);
odm = dmadev->data->dev_private;
+ dmadev->device = &pci_dev->device;
+ dmadev->fp_obj->dev_private = odm;
+ dmadev->dev_ops = &odm_dmadev_ops;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v3 5/7] dma/odm: add stats
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
` (3 preceding siblings ...)
2024-04-19 6:43 ` [PATCH v3 4/7] dma/odm: add device ops Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
` (2 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA dev stats.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 63 ++++++++++++++++++++++++++++++++++--
1 file changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 8c705978fe..13b2588246 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -87,14 +87,73 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
+ uint32_t size)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ if (size < sizeof(struct rte_dma_stats))
+ return -EINVAL;
+ if (rte_stats == NULL)
+ return -EINVAL;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[vchan].stats;
+
+ *rte_stats = *stats;
+ } else {
+ int i;
+
+ for (i = 0; i < odm->num_qs; i++) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[i].stats;
+
+ rte_stats->submitted += stats->submitted;
+ rte_stats->completed += stats->completed;
+ rte_stats->errors += stats->errors;
+ }
+ }
+
+ return 0;
+}
+
+static void
+odm_vq_stats_reset(struct vq_stats *vq_stats)
+{
+ vq_stats->completed_offset += vq_stats->completed;
+ vq_stats->completed = 0;
+ vq_stats->errors = 0;
+ vq_stats->submitted = 0;
+}
+
+static int
+odm_stats_reset(struct rte_dma_dev *dev, uint16_t vchan)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+ struct vq_stats *vq_stats;
+ int i;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ vq_stats = &odm->vq[vchan].stats;
+ odm_vq_stats_reset(vq_stats);
+ } else {
+ for (i = 0; i < odm->num_qs; i++) {
+ vq_stats = &odm->vq[i].stats;
+ odm_vq_stats_reset(vq_stats);
+ }
+ }
+
+ return 0;
+}
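The reset above folds the completed count into ``completed_offset`` rather than zeroing state that the completion index depends on, so the index reported to applications keeps advancing monotonically across resets. A minimal sketch of that bookkeeping (struct and names are illustrative, not the driver's types):

```c
#include <assert.h>
#include <stdint.h>

struct sketch_stats {
	uint64_t submitted;
	uint64_t completed;
	uint64_t errors;
	uint64_t completed_offset;
};

/* Zero the counters but preserve the running completion total in
 * completed_offset, mirroring odm_vq_stats_reset(). */
static void
stats_reset(struct sketch_stats *s)
{
	s->completed_offset += s->completed;
	s->completed = 0;
	s->submitted = 0;
	s->errors = 0;
}
```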
+
static const struct rte_dma_dev_ops odm_dmadev_ops = {
.dev_close = odm_dmadev_close,
.dev_configure = odm_dmadev_configure,
.dev_info_get = odm_dmadev_info_get,
.dev_start = odm_dmadev_start,
.dev_stop = odm_dmadev_stop,
- .stats_get = NULL,
- .stats_reset = NULL,
+ .stats_get = odm_stats_get,
+ .stats_reset = odm_stats_reset,
.vchan_setup = odm_dmadev_vchan_setup,
};
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v3 6/7] dma/odm: add copy and copy sg ops
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
` (4 preceding siblings ...)
2024-04-19 6:43 ` [PATCH v3 5/7] dma/odm: add stats Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 7/7] dma/odm: add remaining ops Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add ODM copy and copy SG ops.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 236 +++++++++++++++++++++++++++++++++++
1 file changed, 236 insertions(+)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 13b2588246..b21be83a89 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -9,6 +9,7 @@
#include <rte_common.h>
#include <rte_dmadev.h>
#include <rte_dmadev_pmd.h>
+#include <rte_memcpy.h>
#include <rte_pci.h>
#include "odm.h"
@@ -87,6 +88,238 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length,
+ uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ const union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ .s.nfst = 1,
+ .s.nlst = 1,
+ };
+
+ vq = &odm->vq[vchan];
+
+ h = length;
+ h |= ((uint64_t)length << 32);
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = src;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head++] = hdr.u;
+ iring_head_ptr[iring_head++] = h;
+ iring_head_ptr[iring_head++] = src;
+ iring_head_ptr[iring_head++] = dst;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
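The copy op writes a fixed four-word instruction (header, packed lengths, source, destination) into the iring, advancing the head word by word with a modulo so an entry may straddle the end of the ring. A standalone model of that circular write (not the driver's actual helper):

```c
#include <assert.h>
#include <stdint.h>

/* Write an n-word instruction into a circular ring of max_words words,
 * wrapping word by word as the driver does. Returns the new head. */
static uint16_t
iring_put(uint64_t *ring, uint16_t head, uint16_t max_words,
	  const uint64_t *words, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		ring[head] = words[i];
		head = (head + 1) % max_words;
	}
	return head;
}
```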
+
+static inline void
+odm_dmadev_fill_sg(uint64_t *cmd, const struct rte_dma_sge *src, const struct rte_dma_sge *dst,
+ uint16_t nb_src, uint16_t nb_dst, union odm_instr_hdr_s *hdr)
+{
+ int i = 0, j = 0;
+ uint64_t h = 0;
+
+ cmd[j++] = hdr->u;
+ /* When nb_src is even */
+ if (!(nb_src & 0x1)) {
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Fill the iring with dst pointers */
+ for (i = 1; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is odd */
+ if (nb_dst & 0x1) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ } else {
+ /* When nb_src is odd */
+
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Handle the last src pointer */
+ h = ((uint64_t)dst[0].length << 32) | src[nb_src - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[nb_src - 1].addr;
+ cmd[j++] = dst[0].addr;
+
+ /* Fill the iring with dst pointers */
+ for (i = 2; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is even */
+ if (!(nb_dst & 0x1)) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ }
+}
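Each length word built above packs two 32-bit pointer lengths into one 64-bit word, with the earlier pointer's length in the low half, matching ``h = ((uint64_t)len_hi << 32) | len_lo``. A trivial sketch of that packing (name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Pack two 32-bit lengths into one iring length word: the earlier
 * pointer's length in the low 32 bits, the later one's in the high. */
static uint64_t
pack_lengths(uint32_t len_lo, uint32_t len_hi)
{
	return ((uint64_t)len_hi << 32) | len_lo;
}
```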
+
+static int
+odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src,
+ const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_head, ins_ring_head;
+ uint16_t iring_sz_available, i, nb, num_words;
+ uint64_t cmd[ODM_IRING_ENTRY_SIZE_MAX];
+ struct odm_dev *odm = dev_private;
+ uint32_t s_sz = 0, d_sz = 0;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ };
+
+ vq = &odm->vq[vchan];
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+ iring_sz_available = vq->iring_sz_available;
+ ins_ring_head = vq->ins_ring_head;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+
+ if (unlikely(nb_src > ODM_MAX_POINTER || nb_dst > ODM_MAX_POINTER))
+ return -EINVAL;
+
+ for (i = 0; i < nb_src; i++)
+ s_sz += src[i].length;
+
+ for (i = 0; i < nb_dst; i++)
+ d_sz += dst[i].length;
+
+ if (s_sz != d_sz)
+ return -EINVAL;
+
+ nb = nb_src + nb_dst;
+ hdr.s.nfst = nb_src;
+ hdr.s.nlst = nb_dst;
+ num_words = 1 + 3 * (nb / 2 + (nb & 0x1));
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+ uint16_t words_avail = max_iring_words - iring_head;
+ uint16_t words_pend = num_words - words_avail;
+
+ if (unlikely(words_avail + words_pend > ODM_IRING_ENTRY_SIZE_MAX))
+ return -ENOSPC;
+
+ odm_dmadev_fill_sg(cmd, src, dst, nb_src, nb_dst, &hdr);
+ rte_memcpy((void *)&iring_head_ptr[iring_head], (void *)cmd, words_avail * 8);
+ rte_memcpy((void *)iring_head_ptr, (void *)&cmd[words_avail], words_pend * 8);
+ iring_head = words_pend;
+ } else {
+ odm_dmadev_fill_sg(&iring_head_ptr[iring_head], src, dst, nb_src, nb_dst, &hdr);
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* Save extra space used for the instruction. */
+ vq->extra_ins_sz[ins_ring_head] = num_words - 4;
+
+ vq->ins_ring_head = (ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
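The ``num_words = 1 + 3 * (nb / 2 + (nb & 0x1))`` sizing above reflects the command layout: one header word plus a three-word triplet (one packed-length word and two pointer words) per pair of pointers, with an odd pointer count still consuming a full triplet via a zero pad. A standalone sketch of that formula (name is illustrative):

```c
#include <assert.h>

/* Words consumed by an SG command: 1 header word plus 3 words per
 * pointer pair; an odd pointer rounds up to a full triplet. */
static int
sg_num_words(int nb_src, int nb_dst)
{
	int nb = nb_src + nb_dst;
	return 1 + 3 * (nb / 2 + (nb & 0x1));
}
```

With the 4 + 4 pointer maximum this yields 13 words, which matches ``ODM_IRING_ENTRY_SIZE_MAX``.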
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -184,6 +417,9 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->dev_private = odm;
dmadev->dev_ops = &odm_dmadev_ops;
+ dmadev->fp_obj->copy = odm_dmadev_copy;
+ dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v3 7/7] dma/odm: add remaining ops
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
` (5 preceding siblings ...)
2024-04-19 6:43 ` [PATCH v3 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
@ 2024-04-19 6:43 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-04-19 6:43 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add all the remaining ops, such as fill and burst_capacity, and update
the documentation.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++++++++++
drivers/dma/odm/odm.h | 4 +
drivers/dma/odm/odm_dmadev.c | 250 +++++++++++++++++++++++++++++++++++
5 files changed, 348 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index b8d2f7b3d8..38293008aa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1273,6 +1273,7 @@ M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/odm/
+F: doc/guides/dmadevs/odm.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst
index 5bd25b32b9..ce9f6eb260 100644
--- a/doc/guides/dmadevs/index.rst
+++ b/doc/guides/dmadevs/index.rst
@@ -17,3 +17,4 @@ an application through DMA API.
hisilicon
idxd
ioat
+ odm
diff --git a/doc/guides/dmadevs/odm.rst b/doc/guides/dmadevs/odm.rst
new file mode 100644
index 0000000000..a2eaab59a0
--- /dev/null
+++ b/doc/guides/dmadevs/odm.rst
@@ -0,0 +1,92 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 Marvell.
+
+Odyssey ODM DMA Device Driver
+=============================
+
+The ``odm`` DMA device driver provides a poll-mode driver (PMD) for the Marvell
+Odyssey DMA hardware accelerator block found in the Odyssey SoC. The block
+supports only memory-to-memory DMA transfers.
+
+The ODM DMA device supports up to 32 queues and 16 VFs.
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+Device Setup
+-------------
+
+The ODM DMA device is initialized by the kernel PF driver. The PF kernel driver
+is part of the Marvell software packages for Odyssey.
+
+The kernel module can be inserted as in the example below::
+
+ $ sudo insmod odyssey_odm.ko
+
+The ODM DMA device can support up to 16 VFs::
+
+ $ sudo echo 16 > /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
+
+The above command creates 16 VFs with 2 queues each.
+
+The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the
+presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma``
+will show all the Odyssey ODM DMA devices.
+
+Devices using VFIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HW devices to be used will need to be bound to a user-space IO driver.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
+For example::
+
+ $ dpdk-devbind.py -b vfio-pci 0000:08:00.1
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the devices from an application, the dmadev API can be used.
+
+Once configured, the device can then be made ready for use
+by calling the ``rte_dma_start()`` API.
+
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~
+
+Refer to the :ref:`Enqueue / Dequeue APIs <dmadev_enqueue_dequeue>` section
+of the dmadev library documentation for details on operation enqueue and
+submission API usage.
+
+Performance Tuning Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To achieve higher performance, the DMA device needs to be tuned using the
+kernel PF driver.
+
+The following options are exposed by the kernel PF driver via the devlink
+interface for performance tuning.
+
+``eng_sel``
+
+ The ODM DMA device has 2 engines internally. The engine-to-queue mapping is
+ decided by a hardware register, which can be configured as below::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name eng_sel value 3435973836 cmode runtime
+
+ Each bit in the register corresponds to one queue. Each queue would be
+ associated with one engine. If the value of the bit corresponding to the queue
+ is 0, then engine 0 would be picked. If it is 1, then engine 1 would be
+ picked.
+
+ In the above command, the register value is set as
+ ``1100 1100 1100 1100 1100 1100 1100 1100`` which allows for alternate engines
+ to be used with alternate VFs (assuming the system has 16 VFs with 2 queues
+ each).
+
+``max_load_request``
+
+ Specifies the maximum number of outstanding load requests on the internal
+ bus. Values can range from 1 to 512. Set to 512 for the maximum number of
+ requests in flight::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name max_load_request value 512 cmode runtime
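The ``eng_sel`` value used in the example above, 3435973836, is the bitmap ``0xCCCCCCCC``: bit *i* selects the engine for queue *i*, and with 16 VFs of 2 queues each, giving alternate queue pairs alternate engines produces the ``1100 1100 ...`` pattern. A sketch that derives the value (function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Build the eng_sel bitmap that assigns engine 0 to even queue pairs
 * and engine 1 to odd queue pairs across all 32 queues. */
static uint32_t
eng_sel_alternate_pairs(void)
{
	uint32_t val = 0;
	int q;

	for (q = 0; q < 32; q++)
		if ((q / 2) & 1) /* odd queue pair -> engine 1 */
			val |= 1u << q;
	return val;
}
```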
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index e1373e0c7f..1d60d2d11a 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -75,6 +75,10 @@ extern int odm_logtype;
rte_log(RTE_LOG_INFO, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_debug(...) \
+ rte_log(RTE_LOG_DEBUG, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
#define ODM_MEMZONE_FLAGS \
(RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index b21be83a89..57bd6923f1 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -320,6 +320,251 @@ odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *
return vq->desc_idx++;
}
+static int
+odm_dmadev_fill(void *dev_private, uint16_t vchan, uint64_t pattern, rte_iova_t dst,
+ uint32_t length, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ vq = &odm->vq[vchan];
+
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.nfst = 0,
+ .s.nlst = 1,
+ };
+
+ h = (uint64_t)length;
+
+ switch (pattern) {
+ case 0:
+ hdr.s.xtype = ODM_XTYPE_FILL0;
+ break;
+ case 0xffffffffffffffff:
+ hdr.s.xtype = ODM_XTYPE_FILL1;
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = 0;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head_ptr[iring_head + 1] = h;
+ iring_head_ptr[iring_head + 2] = dst;
+ iring_head_ptr[iring_head + 3] = 0;
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
+static uint16_t
+odm_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx,
+ bool *has_error)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint64_t nb_err = 0;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (unlikely(vq->stats.submitted == vq->stats.completed)) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr,
+ rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ if (cmpl.s.cmp_code)
+ nb_err++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr, cmpl_zero.u,
+ rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->stats.errors += nb_err;
+
+ if (unlikely(has_error != NULL && nb_err))
+ *has_error = true;
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
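The index returned via ``*last_idx`` above is the running completion total (the offset survives stats resets) minus one, truncated to the 16-bit index space dmadev uses. A trivial sketch of that computation (name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Last completed descriptor index: (offset + completed - 1) & 0xFFFF,
 * wrapping naturally in the 16-bit dmadev index space. */
static uint16_t
completed_last_idx(uint64_t completed_offset, uint64_t completed)
{
	return (uint16_t)((completed_offset + completed - 1) & 0xFFFF);
}
```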
+
+static uint16_t
+odm_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls,
+ uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (vq->stats.submitted == vq->stats.completed) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+#ifdef ODM_DEBUG
+ odm_debug("cring_head: 0x%" PRIx16, cring_head);
+ odm_debug("Submitted: 0x%" PRIx64, vq->stats.submitted);
+ odm_debug("Completed: 0x%" PRIx64, vq->stats.completed);
+ odm_debug("Hardware count: 0x%" PRIx64, odm_read64(odm->rbase + ODM_VDMA_CNT(vchan)));
+#endif
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr,
+ rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ status[cnt] = cmpl.s.cmp_code;
+
+ if (cmpl.s.cmp_code)
+ vq->stats.errors++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr, cmpl_zero.u,
+ rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
+
+static int
+odm_dmadev_submit(void *dev_private, uint16_t vchan)
+{
+ struct odm_dev *odm = dev_private;
+ uint16_t pending_submit_len;
+ struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ pending_submit_len = vq->pending_submit_len;
+
+ if (pending_submit_len == 0)
+ return 0;
+
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->pending_submit_len = 0;
+ vq->stats.submitted += vq->pending_submit_cnt;
+ vq->pending_submit_cnt = 0;
+
+ return 0;
+}
+
+static uint16_t
+odm_dmadev_burst_capacity(const void *dev_private, uint16_t vchan)
+{
+ const struct odm_dev *odm = dev_private;
+ const struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ return (vq->iring_sz_available / ODM_IRING_ENTRY_SIZE_MIN);
+}
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -419,6 +664,11 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->copy = odm_dmadev_copy;
dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+ dmadev->fp_obj->fill = odm_dmadev_fill;
+ dmadev->fp_obj->submit = odm_dmadev_submit;
+ dmadev->fp_obj->completed = odm_dmadev_completed;
+ dmadev->fp_obj->completed_status = odm_dmadev_completed_status;
+ dmadev->fp_obj->burst_capacity = odm_dmadev_burst_capacity;
odm->pci_dev = pci_dev;
--
2.25.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* Re: [PATCH v3 1/7] dma/odm: add framework for ODM DMA device
2024-04-19 6:43 ` [PATCH v3 1/7] dma/odm: add framework for " Anoob Joseph
@ 2024-05-24 13:26 ` Jerin Jacob
0 siblings, 0 replies; 37+ messages in thread
From: Jerin Jacob @ 2024-05-24 13:26 UTC (permalink / raw)
To: Anoob Joseph
Cc: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
On Fri, Apr 19, 2024 at 12:13 PM Anoob Joseph <anoobj@marvell.com> wrote:
>
> Add framework for Odyssey ODM DMA device.
>
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> ---
> MAINTAINERS | 6 +++
> drivers/dma/meson.build | 1 +
> drivers/dma/odm/meson.build | 14 +++++++
> drivers/dma/odm/odm.h | 29 ++++++++++++++
> drivers/dma/odm/odm_dmadev.c | 74 ++++++++++++++++++++++++++++++++++++
> 5 files changed, 124 insertions(+)
> create mode 100644 drivers/dma/odm/meson.build
> create mode 100644 drivers/dma/odm/odm.h
> create mode 100644 drivers/dma/odm/odm_dmadev.c
Update the release notes in the last patch of the series.
* Re: [PATCH v3 2/7] dma/odm: add hardware defines
2024-04-19 6:43 ` [PATCH v3 2/7] dma/odm: add hardware defines Anoob Joseph
@ 2024-05-24 13:29 ` Jerin Jacob
0 siblings, 0 replies; 37+ messages in thread
From: Jerin Jacob @ 2024-05-24 13:29 UTC (permalink / raw)
To: Anoob Joseph
Cc: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
On Fri, Apr 19, 2024 at 12:22 PM Anoob Joseph <anoobj@marvell.com> wrote:
>
> Add ODM registers and structures. Add mailbox structs as well.
>
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> ---
> drivers/dma/odm/odm.h | 116 +++++++++++++++++++++++++++++++++++++
> drivers/dma/odm/odm_priv.h | 49 ++++++++++++++++
> 2 files changed, 165 insertions(+)
> create mode 100644 drivers/dma/odm/odm_priv.h
>
> diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
> index aeeb6f9e9a..7564ffbed4 100644
> --- a/drivers/dma/odm/odm.h
> +++ b/drivers/dma/odm/odm.h
> @@ -9,6 +9,47 @@
>
> extern int odm_logtype;
>
> +/* ODM VF register offsets from VF_BAR0 */
> +#define ODM_VDMA_EN(x) (0x00 | (x << 3))
> +#define ODM_VDMA_REQQ_CTL(x) (0x80 | (x << 3))
> +#define ODM_VDMA_DBELL(x) (0x100 | (x << 3))
> +#define ODM_VDMA_RING_CFG(x) (0x180 | (x << 3))
> +#define ODM_VDMA_IRING_BADDR(x) (0x200 | (x << 3))
> +#define ODM_VDMA_CRING_BADDR(x) (0x280 | (x << 3))
> +#define ODM_VDMA_COUNTS(x) (0x300 | (x << 3))
> +#define ODM_VDMA_IRING_NADDR(x) (0x380 | (x << 3))
> +#define ODM_VDMA_CRING_NADDR(x) (0x400 | (x << 3))
> +#define ODM_VDMA_IRING_DBG(x) (0x480 | (x << 3))
> +#define ODM_VDMA_CNT(x) (0x580 | (x << 3))
> +#define ODM_VF_INT (0x1000)
> +#define ODM_VF_INT_W1S (0x1008)
> +#define ODM_VF_INT_ENA_W1C (0x1010)
> +#define ODM_VF_INT_ENA_W1S (0x1018)
> +#define ODM_MBOX_VF_PF_DATA(i) (0x2000 | (i << 3))
> +
Newline may not be needed here.
> +#define ODM_MBOX_RETRY_CNT (0xfffffff)
> +#define ODM_MBOX_ERR_CODE_MAX (0xf)
> +#define ODM_IRING_IDLE_WAIT_CNT (0xfffffff)
> +
> +/**
> + * Enumeration odm_hdr_xtype_e
> + *
> + * ODM Transfer Type Enumeration
> + * Enumerates the pointer type in ODM_DMA_INSTR_HDR_S[XTYPE]
> + */
> +#define ODM_XTYPE_INTERNAL 2
> +#define ODM_XTYPE_FILL0 4
> +#define ODM_XTYPE_FILL1 5
> +
> +/**
> + * ODM Header completion type enumeration
> + * Enumerates the completion type in ODM_DMA_INSTR_HDR_S[CT]
> + */
> +#define ODM_HDR_CT_CW_CA 0x0
> +#define ODM_HDR_CT_CW_NC 0x1
> +
> +#define ODM_MAX_QUEUES_PER_DEV 16
> +
> #define odm_err(...) \
> rte_log(RTE_LOG_ERR, odm_logtype, \
> RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
> @@ -18,6 +59,81 @@ extern int odm_logtype;
> RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
> RTE_FMT_TAIL(__VA_ARGS__, )))
>
> +/**
These are non-Doxygen comments across the series; keeping plain /* */ is enough.
* Re: [PATCH v3 4/7] dma/odm: add device ops
2024-04-19 6:43 ` [PATCH v3 4/7] dma/odm: add device ops Anoob Joseph
@ 2024-05-24 13:37 ` Jerin Jacob
0 siblings, 0 replies; 37+ messages in thread
From: Jerin Jacob @ 2024-05-24 13:37 UTC (permalink / raw)
To: Anoob Joseph
Cc: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
On Fri, Apr 19, 2024 at 3:22 PM Anoob Joseph <anoobj@marvell.com> wrote:
>
> From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
>
> Add DMA device control ops.
>
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> ---
> drivers/dma/odm/odm.c | 144 ++++++++++++++++++++++++++++++++++-
> drivers/dma/odm/odm.h | 58 ++++++++++++++
> drivers/dma/odm/odm_dmadev.c | 85 +++++++++++++++++++++
> 3 files changed, 285 insertions(+), 2 deletions(-)
>
> +#define ODM_MEMZONE_FLAGS \
> + (RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \
> + RTE_MEMZONE_512MB | RTE_MEMZONE_4GB | RTE_MEMZONE_SIZE_HINT_ONLY)
> +
Any specific reason to list all page sizes with RTE_MEMZONE_SIZE_HINT_ONLY
while omitting the 2 MB page size?
Does the driver have any bearing on page size?
* [PATCH v4 0/7] Add ODM DMA device
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
` (6 preceding siblings ...)
2024-04-19 6:43 ` [PATCH v3 7/7] dma/odm: add remaining ops Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 1/7] dma/odm: add framework for " Anoob Joseph
` (7 more replies)
7 siblings, 8 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add the Odyssey ODM DMA device. This PMD abstracts the ODM hardware unit
on the Odyssey SoC, which can perform memory-to-memory copies.
The hardware unit can support up to 32 queues (vchans) and 16 VFs. It
supports the 'fill' operation with specific values, as well as
scatter-gather (SG) transfers with up to 4 source and 4 destination
pointers.
The PMD is tested with both unit tests and performance applications.
Changes in v4
- Added release notes
- Addressed review comments from Jerin
Changes in v3
- Addressed build failure with stdatomic stage in CI
Changes in v2
- Addressed build failure in CI
- Moved update to usertools as separate patch
Anoob Joseph (2):
dma/odm: add framework for ODM DMA device
dma/odm: add hardware defines
Gowrishankar Muthukrishnan (3):
dma/odm: add dev init and fini
dma/odm: add device ops
dma/odm: add stats
Vidya Sagar Velumuri (2):
dma/odm: add copy and copy sg ops
dma/odm: add remaining ops
MAINTAINERS | 7 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 ++++
doc/guides/rel_notes/release_24_07.rst | 4 +
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +
drivers/dma/odm/odm.c | 237 ++++++++
drivers/dma/odm/odm.h | 203 +++++++
drivers/dma/odm/odm_dmadev.c | 717 +++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 ++
10 files changed, 1325 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.c
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
create mode 100644 drivers/dma/odm/odm_priv.h
--
2.45.1
* [PATCH v4 1/7] dma/odm: add framework for ODM DMA device
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 2/7] dma/odm: add hardware defines Anoob Joseph
` (6 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add framework for Odyssey ODM DMA device.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 6 +++
drivers/dma/meson.build | 1 +
drivers/dma/odm/meson.build | 14 +++++++
drivers/dma/odm/odm.h | 29 ++++++++++++++
drivers/dma/odm/odm_dmadev.c | 74 ++++++++++++++++++++++++++++++++++++
5 files changed, 124 insertions(+)
create mode 100644 drivers/dma/odm/meson.build
create mode 100644 drivers/dma/odm/odm.h
create mode 100644 drivers/dma/odm/odm_dmadev.c
diff --git a/MAINTAINERS b/MAINTAINERS
index c9adff9846..b581207a9a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1269,6 +1269,12 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
+Marvell Odyssey ODM DMA
+M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/dma/odm/
+
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build
index 582654ea1b..358132759a 100644
--- a/drivers/dma/meson.build
+++ b/drivers/dma/meson.build
@@ -8,6 +8,7 @@ drivers = [
'hisilicon',
'idxd',
'ioat',
+ 'odm',
'skeleton',
]
std_deps = ['dmadev']
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
new file mode 100644
index 0000000000..227b10c890
--- /dev/null
+++ b/drivers/dma/odm/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2024 Marvell.
+
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit Linux'
+ subdir_done()
+endif
+
+deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
+
+sources = files('odm_dmadev.c')
+
+pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
new file mode 100644
index 0000000000..aeeb6f9e9a
--- /dev/null
+++ b/drivers/dma/odm/odm.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_H_
+#define _ODM_H_
+
+#include <rte_log.h>
+
+extern int odm_logtype;
+
+#define odm_err(...) \
+ rte_log(RTE_LOG_ERR, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_info(...) \
+ rte_log(RTE_LOG_INFO, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
+
+struct __rte_cache_aligned odm_dev {
+ struct rte_pci_device *pci_dev;
+ uint8_t *rbase;
+ uint16_t vfid;
+ uint8_t max_qs;
+ uint8_t num_qs;
+};
+
+#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
new file mode 100644
index 0000000000..cc3342cf7b
--- /dev/null
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <string.h>
+
+#include <bus_pci_driver.h>
+#include <rte_bus_pci.h>
+#include <rte_common.h>
+#include <rte_dmadev.h>
+#include <rte_dmadev_pmd.h>
+#include <rte_pci.h>
+
+#include "odm.h"
+
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
+#define PCI_DRIVER_NAME dma_odm
+
+static int
+odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+ struct odm_dev *odm = NULL;
+ struct rte_dma_dev *dmadev;
+
+ if (!pci_dev->mem_resource[0].addr)
+ return -ENODEV;
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*odm));
+ if (dmadev == NULL) {
+ odm_err("DMA device allocation failed for %s", name);
+ return -ENOMEM;
+ }
+
+ odm_info("DMA device %s probed", name);
+
+ return 0;
+}
+
+static int
+odm_dmadev_remove(struct rte_pci_device *pci_dev)
+{
+ char name[RTE_DEV_NAME_MAX_LEN];
+
+ memset(name, 0, sizeof(name));
+ rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+ return rte_dma_pmd_release(name);
+}
+
+static const struct rte_pci_id odm_dma_pci_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_ODYSSEY_ODM_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver odm_dmadev = {
+ .id_table = odm_dma_pci_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ .probe = odm_dmadev_probe,
+ .remove = odm_dmadev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(PCI_DRIVER_NAME, odm_dmadev);
+RTE_PMD_REGISTER_PCI_TABLE(PCI_DRIVER_NAME, odm_dma_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(PCI_DRIVER_NAME, "vfio-pci");
+RTE_LOG_REGISTER_DEFAULT(odm_logtype, NOTICE);
--
2.45.1
* [PATCH v4 2/7] dma/odm: add hardware defines
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 1/7] dma/odm: add framework for " Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 3/7] dma/odm: add dev init and fini Anoob Joseph
` (5 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
Add ODM registers and structures. Add mailbox structs as well.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.h | 106 +++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm_priv.h | 49 +++++++++++++++++
2 files changed, 155 insertions(+)
create mode 100644 drivers/dma/odm/odm_priv.h
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index aeeb6f9e9a..8cc3e0de44 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,6 +9,46 @@
extern int odm_logtype;
+/* ODM VF register offsets from VF_BAR0 */
+#define ODM_VDMA_EN(x) (0x00 | (x << 3))
+#define ODM_VDMA_REQQ_CTL(x) (0x80 | (x << 3))
+#define ODM_VDMA_DBELL(x) (0x100 | (x << 3))
+#define ODM_VDMA_RING_CFG(x) (0x180 | (x << 3))
+#define ODM_VDMA_IRING_BADDR(x) (0x200 | (x << 3))
+#define ODM_VDMA_CRING_BADDR(x) (0x280 | (x << 3))
+#define ODM_VDMA_COUNTS(x) (0x300 | (x << 3))
+#define ODM_VDMA_IRING_NADDR(x) (0x380 | (x << 3))
+#define ODM_VDMA_CRING_NADDR(x) (0x400 | (x << 3))
+#define ODM_VDMA_IRING_DBG(x) (0x480 | (x << 3))
+#define ODM_VDMA_CNT(x) (0x580 | (x << 3))
+#define ODM_VF_INT (0x1000)
+#define ODM_VF_INT_W1S (0x1008)
+#define ODM_VF_INT_ENA_W1C (0x1010)
+#define ODM_VF_INT_ENA_W1S (0x1018)
+#define ODM_MBOX_VF_PF_DATA(i) (0x2000 | (i << 3))
+#define ODM_MBOX_RETRY_CNT (0xfffffff)
+#define ODM_MBOX_ERR_CODE_MAX (0xf)
+#define ODM_IRING_IDLE_WAIT_CNT (0xfffffff)
+
+/*
+ * Enumeration odm_hdr_xtype_e
+ *
+ * ODM Transfer Type Enumeration
+ * Enumerates the pointer type in ODM_DMA_INSTR_HDR_S[XTYPE]
+ */
+#define ODM_XTYPE_INTERNAL 2
+#define ODM_XTYPE_FILL0 4
+#define ODM_XTYPE_FILL1 5
+
+/*
+ * ODM Header completion type enumeration
+ * Enumerates the completion type in ODM_DMA_INSTR_HDR_S[CT]
+ */
+#define ODM_HDR_CT_CW_CA 0x0
+#define ODM_HDR_CT_CW_NC 0x1
+
+#define ODM_MAX_QUEUES_PER_DEV 16
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -18,6 +58,72 @@ extern int odm_logtype;
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+/*
+ * Structure odm_instr_hdr_s for ODM
+ *
+ * ODM DMA Instruction Header Format
+ */
+union odm_instr_hdr_s {
+ uint64_t u;
+ struct odm_instr_hdr {
+ uint64_t nfst : 3;
+ uint64_t reserved_3 : 1;
+ uint64_t nlst : 3;
+ uint64_t reserved_7_9 : 3;
+ uint64_t ct : 2;
+ uint64_t stse : 1;
+ uint64_t reserved_13_28 : 16;
+ uint64_t sts : 1;
+ uint64_t reserved_30_49 : 20;
+ uint64_t xtype : 3;
+ uint64_t reserved_53_63 : 11;
+ } s;
+};
+
+/* ODM Completion Entry Structure */
+union odm_cmpl_ent_s {
+ uint32_t u;
+ struct odm_cmpl_ent {
+ uint32_t cmp_code : 8;
+ uint32_t rsvd : 23;
+ uint32_t valid : 1;
+ } s;
+};
+
+/* ODM DMA Ring Configuration Register */
+union odm_vdma_ring_cfg_s {
+ uint64_t u;
+ struct {
+ uint64_t isize : 8;
+ uint64_t rsvd_8_15 : 8;
+ uint64_t csize : 8;
+ uint64_t rsvd_24_63 : 40;
+ } s;
+};
+
+/* ODM DMA Instruction Ring DBG */
+union odm_vdma_iring_dbg_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell_cnt : 32;
+ uint64_t offset : 16;
+ uint64_t rsvd_48_62 : 15;
+ uint64_t iwbusy : 1;
+ } s;
+};
+
+/* ODM DMA Counts */
+union odm_vdma_counts_s {
+ uint64_t u;
+ struct {
+ uint64_t dbell : 32;
+ uint64_t buf_used_cnt : 9;
+ uint64_t rsvd_41_43 : 3;
+ uint64_t rsvd_buf_used_cnt : 3;
+ uint64_t rsvd_47_63 : 17;
+ } s;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
uint8_t *rbase;
diff --git a/drivers/dma/odm/odm_priv.h b/drivers/dma/odm/odm_priv.h
new file mode 100644
index 0000000000..1878f4d9a6
--- /dev/null
+++ b/drivers/dma/odm/odm_priv.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#ifndef _ODM_PRIV_H_
+#define _ODM_PRIV_H_
+
+#define ODM_MAX_VFS 16
+#define ODM_MAX_QUEUES 32
+
+#define ODM_CMD_QUEUE_SIZE 4096
+
+#define ODM_DEV_INIT 0x1
+#define ODM_DEV_CLOSE 0x2
+#define ODM_QUEUE_OPEN 0x3
+#define ODM_QUEUE_CLOSE 0x4
+#define ODM_REG_DUMP 0x5
+
+struct odm_mbox_dev_msg {
+ /* Response code */
+ uint64_t rsp : 8;
+ /* Number of VFs */
+ uint64_t nvfs : 2;
+ /* Error code */
+ uint64_t err : 6;
+ /* Reserved */
+ uint64_t rsvd_16_63 : 48;
+};
+
+struct odm_mbox_queue_msg {
+ /* Command code */
+ uint64_t cmd : 8;
+ /* VF ID to configure */
+ uint64_t vfid : 8;
+ /* Queue index in the VF */
+ uint64_t qidx : 8;
+ /* Reserved */
+ uint64_t rsvd_24_63 : 40;
+};
+
+union odm_mbox_msg {
+ uint64_t u[2];
+ struct {
+ struct odm_mbox_dev_msg d;
+ struct odm_mbox_queue_msg q;
+ };
+};
+
+#endif /* _ODM_PRIV_H_ */
--
2.45.1
* [PATCH v4 3/7] dma/odm: add dev init and fini
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 1/7] dma/odm: add framework for " Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 2/7] dma/odm: add hardware defines Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 4/7] dma/odm: add device ops Anoob Joseph
` (4 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add ODM device init and fini.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/meson.build | 2 +-
drivers/dma/odm/odm.c | 97 ++++++++++++++++++++++++++++++++++++
drivers/dma/odm/odm.h | 10 ++++
drivers/dma/odm/odm_dmadev.c | 13 +++++
4 files changed, 121 insertions(+), 1 deletion(-)
create mode 100644 drivers/dma/odm/odm.c
diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build
index 227b10c890..d597762d37 100644
--- a/drivers/dma/odm/meson.build
+++ b/drivers/dma/odm/meson.build
@@ -9,6 +9,6 @@ endif
deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci']
-sources = files('odm_dmadev.c')
+sources = files('odm_dmadev.c', 'odm.c')
pmd_supports_disable_iova_as_pa = true
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
new file mode 100644
index 0000000000..c0963da451
--- /dev/null
+++ b/drivers/dma/odm/odm.c
@@ -0,0 +1,97 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2024 Marvell.
+ */
+
+#include <stdint.h>
+
+#include <bus_pci_driver.h>
+
+#include <rte_io.h>
+
+#include "odm.h"
+#include "odm_priv.h"
+
+static void
+odm_vchan_resc_free(struct odm_dev *odm, int qno)
+{
+ RTE_SET_USED(odm);
+ RTE_SET_USED(qno);
+}
+
+static int
+send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg *rsp)
+{
+ int retry_cnt = ODM_MBOX_RETRY_CNT;
+ union odm_mbox_msg pf_msg;
+
+ msg->d.err = ODM_MBOX_ERR_CODE_MAX;
+ odm_write64(msg->u[0], odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ odm_write64(msg->u[1], odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ pf_msg.u[0] = 0;
+ pf_msg.u[1] = 0;
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+
+ while (pf_msg.d.rsp == 0 && retry_cnt > 0) {
+ pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0));
+ --retry_cnt;
+ }
+
+ if (retry_cnt <= 0)
+ return -EBADE;
+
+ pf_msg.u[1] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(1));
+
+ if (rsp) {
+ rsp->u[0] = pf_msg.u[0];
+ rsp->u[1] = pf_msg.u[1];
+ }
+
+ if (pf_msg.d.rsp == msg->d.err && pf_msg.d.err != 0)
+ return -EBADE;
+
+ return 0;
+}
+
+int
+odm_dev_init(struct odm_dev *odm)
+{
+ struct rte_pci_device *pci_dev = odm->pci_dev;
+ union odm_mbox_msg mbox_msg;
+ uint16_t vfid;
+ int rc;
+
+ odm->rbase = pci_dev->mem_resource[0].addr;
+ vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7);
+ vfid -= 1;
+ odm->vfid = vfid;
+ odm->num_qs = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_INIT;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (!rc)
+ odm->max_qs = 1 << (4 - mbox_msg.d.nvfs);
+
+ return rc;
+}
+
+int
+odm_dev_fini(struct odm_dev *odm)
+{
+ union odm_mbox_msg mbox_msg;
+ int qno, rc = 0;
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_DEV_CLOSE;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+
+ for (qno = 0; qno < odm->num_qs; qno++)
+ odm_vchan_resc_free(odm, qno);
+
+ return rc;
+}
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 8cc3e0de44..0bf0c6345b 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -5,6 +5,10 @@
#ifndef _ODM_H_
#define _ODM_H_
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_compat.h>
#include <rte_log.h>
extern int odm_logtype;
@@ -49,6 +53,9 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
+#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
+
#define odm_err(...) \
rte_log(RTE_LOG_ERR, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
@@ -132,4 +139,7 @@ struct __rte_cache_aligned odm_dev {
uint8_t num_qs;
};
+int odm_dev_init(struct odm_dev *odm);
+int odm_dev_fini(struct odm_dev *odm);
+
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index cc3342cf7b..bef335c10c 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -23,6 +23,7 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
char name[RTE_DEV_NAME_MAX_LEN];
struct odm_dev *odm = NULL;
struct rte_dma_dev *dmadev;
+ int rc;
if (!pci_dev->mem_resource[0].addr)
return -ENODEV;
@@ -37,8 +38,20 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
}
odm_info("DMA device %s probed", name);
+ odm = dmadev->data->dev_private;
+
+ odm->pci_dev = pci_dev;
+
+ rc = odm_dev_init(odm);
+ if (rc < 0)
+ goto dma_pmd_release;
return 0;
+
+dma_pmd_release:
+ rte_dma_pmd_release(name);
+
+ return rc;
}
static int
--
2.45.1
* [PATCH v4 4/7] dma/odm: add device ops
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
` (2 preceding siblings ...)
2024-05-27 15:16 ` [PATCH v4 3/7] dma/odm: add dev init and fini Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 5/7] dma/odm: add stats Anoob Joseph
` (3 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA device control ops.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm.c | 144 ++++++++++++++++++++++++++++++++++-
drivers/dma/odm/odm.h | 54 +++++++++++++
drivers/dma/odm/odm_dmadev.c | 85 +++++++++++++++++++++
3 files changed, 281 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c
index c0963da451..270808f4df 100644
--- a/drivers/dma/odm/odm.c
+++ b/drivers/dma/odm/odm.c
@@ -7,6 +7,7 @@
#include <bus_pci_driver.h>
#include <rte_io.h>
+#include <rte_malloc.h>
#include "odm.h"
#include "odm_priv.h"
@@ -14,8 +15,15 @@
static void
odm_vchan_resc_free(struct odm_dev *odm, int qno)
{
- RTE_SET_USED(odm);
- RTE_SET_USED(qno);
+ struct odm_queue *vq = &odm->vq[qno];
+
+ rte_memzone_free(vq->iring_mz);
+ rte_memzone_free(vq->cring_mz);
+ rte_free(vq->extra_ins_sz);
+
+ vq->iring_mz = NULL;
+ vq->cring_mz = NULL;
+ vq->extra_ins_sz = NULL;
}
static int
@@ -53,6 +61,138 @@ send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg
return 0;
}
+static int
+odm_queue_ring_config(struct odm_dev *odm, int vchan, int isize, int csize)
+{
+ union odm_vdma_ring_cfg_s ring_cfg = {0};
+ struct odm_queue *vq = &odm->vq[vchan];
+
+ if (vq->iring_mz == NULL || vq->cring_mz == NULL)
+ return -EINVAL;
+
+ ring_cfg.s.isize = (isize / 1024) - 1;
+ ring_cfg.s.csize = (csize / 1024) - 1;
+
+ odm_write64(ring_cfg.u, odm->rbase + ODM_VDMA_RING_CFG(vchan));
+ odm_write64(vq->iring_mz->iova, odm->rbase + ODM_VDMA_IRING_BADDR(vchan));
+ odm_write64(vq->cring_mz->iova, odm->rbase + ODM_VDMA_CRING_BADDR(vchan));
+
+ return 0;
+}
+
+int
+odm_enable(struct odm_dev *odm)
+{
+ struct odm_queue *vq;
+ int qno, rc = 0;
+
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ vq = &odm->vq[qno];
+
+ vq->desc_idx = vq->stats.completed_offset;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ vq->iring_head = 0;
+ vq->cring_head = 0;
+ vq->ins_ring_head = 0;
+ vq->iring_sz_available = vq->iring_max_words;
+
+ rc = odm_queue_ring_config(odm, qno, vq->iring_max_words * 8,
+ vq->cring_max_entry * 4);
+ if (rc < 0)
+ break;
+
+ odm_write64(0x1, odm->rbase + ODM_VDMA_EN(qno));
+ }
+
+ return rc;
+}
+
+int
+odm_disable(struct odm_dev *odm)
+{
+ int qno, wait_cnt = ODM_IRING_IDLE_WAIT_CNT;
+ uint64_t val;
+
+ /* Disable the queue and wait for the queue to become idle */
+ for (qno = 0; qno < odm->num_qs; qno++) {
+ odm_write64(0x0, odm->rbase + ODM_VDMA_EN(qno));
+ do {
+ val = odm_read64(odm->rbase + ODM_VDMA_IRING_BADDR(qno));
+ } while ((!(val & 1ULL << 63)) && (--wait_cnt > 0));
+ }
+
+ return 0;
+}
+
+int
+odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc)
+{
+ struct odm_queue *vq = &odm->vq[vchan];
+ int isize, csize, max_nb_desc, rc = 0;
+ union odm_mbox_msg mbox_msg;
+ const struct rte_memzone *mz;
+ char name[32];
+
+ if (vq->iring_mz != NULL)
+ odm_vchan_resc_free(odm, vchan);
+
+ mbox_msg.u[0] = 0;
+ mbox_msg.u[1] = 0;
+
+ /* ODM PF driver expects vfid starts from index 0 */
+ mbox_msg.q.vfid = odm->vfid;
+ mbox_msg.q.cmd = ODM_QUEUE_OPEN;
+ mbox_msg.q.qidx = vchan;
+ rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg);
+ if (rc < 0)
+ return rc;
+
+ /* Determine instruction & completion ring sizes. */
+
+ /* Create iring that can support nb_desc. Round up to a multiple of 1024. */
+ isize = RTE_ALIGN_CEIL(nb_desc * ODM_IRING_ENTRY_SIZE_MAX * 8, 1024);
+ isize = RTE_MIN(isize, ODM_IRING_MAX_SIZE);
+ snprintf(name, sizeof(name), "vq%d_iring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, isize, SOCKET_ID_ANY, 0, 1024);
+ if (mz == NULL)
+ return -ENOMEM;
+ vq->iring_mz = mz;
+ vq->iring_max_words = isize / 8;
+
+ /* Create cring that can support max instructions that can be inflight in hw. */
+ max_nb_desc = (isize / (ODM_IRING_ENTRY_SIZE_MIN * 8));
+ csize = RTE_ALIGN_CEIL(max_nb_desc * sizeof(union odm_cmpl_ent_s), 1024);
+ snprintf(name, sizeof(name), "vq%d_cring%d", odm->vfid, vchan);
+ mz = rte_memzone_reserve_aligned(name, csize, SOCKET_ID_ANY, 0, 1024);
+ if (mz == NULL) {
+ rc = -ENOMEM;
+ goto iring_free;
+ }
+ vq->cring_mz = mz;
+ vq->cring_max_entry = csize / 4;
+
+ /* Allocate memory to track the size of each instruction. */
+ snprintf(name, sizeof(name), "vq%d_extra%d", odm->vfid, vchan);
+ vq->extra_ins_sz = rte_zmalloc(name, vq->cring_max_entry, 0);
+ if (vq->extra_ins_sz == NULL) {
+ rc = -ENOMEM;
+ goto cring_free;
+ }
+
+ vq->stats = (struct vq_stats){0};
+ return rc;
+
+cring_free:
+ rte_memzone_free(odm->vq[vchan].cring_mz);
+ vq->cring_mz = NULL;
+iring_free:
+ rte_memzone_free(odm->vq[vchan].iring_mz);
+ vq->iring_mz = NULL;
+
+ return rc;
+}
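The ring sizing in odm_vchan_setup() is plain arithmetic over the driver's constants (13-word worst-case entries, 4-word minimum entries, 8-byte words, 1024-byte alignment). A simplified standalone sketch of that sizing, using the same constants but our own helper names:

```c
#include <stdint.h>

#define ODM_IRING_ENTRY_SIZE_MIN 4         /* words in the smallest instruction */
#define ODM_IRING_ENTRY_SIZE_MAX 13        /* words in the largest (SG) instruction */
#define ODM_IRING_MAX_SIZE (256 * 1024)    /* bytes */

/* Round x up to a multiple of align (align must be a power of two). */
static uint32_t align_ceil(uint32_t x, uint32_t align)
{
	return (x + align - 1) & ~(align - 1);
}

/* iring size in bytes: nb_desc worst-case entries, capped at the hw max. */
static uint32_t iring_size(uint32_t nb_desc)
{
	uint32_t isize = align_ceil(nb_desc * ODM_IRING_ENTRY_SIZE_MAX * 8, 1024);

	return isize < ODM_IRING_MAX_SIZE ? isize : ODM_IRING_MAX_SIZE;
}

/* Max instructions in flight: the iring filled with minimum-size entries. */
static uint32_t max_inflight(uint32_t isize)
{
	return isize / (ODM_IRING_ENTRY_SIZE_MIN * 8);
}
```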
+
int
odm_dev_init(struct odm_dev *odm)
{
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index 0bf0c6345b..f4b9e2c4a7 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -9,7 +9,9 @@
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_io.h>
#include <rte_log.h>
+#include <rte_memzone.h>
extern int odm_logtype;
@@ -53,6 +55,14 @@ extern int odm_logtype;
#define ODM_MAX_QUEUES_PER_DEV 16
+#define ODM_IRING_MAX_SIZE (256 * 1024)
+#define ODM_IRING_ENTRY_SIZE_MIN 4
+#define ODM_IRING_ENTRY_SIZE_MAX 13
+#define ODM_IRING_MAX_WORDS (ODM_IRING_MAX_SIZE / 8)
+#define ODM_IRING_MAX_ENTRY (ODM_IRING_MAX_WORDS / ODM_IRING_ENTRY_SIZE_MIN)
+
+#define ODM_MAX_POINTER 4
+
#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr))
#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr))
@@ -131,8 +141,48 @@ union odm_vdma_counts_s {
} s;
};
+struct vq_stats {
+ uint64_t submitted;
+ uint64_t completed;
+ uint64_t errors;
+ /*
+ * Since stats.completed is used to return completion index, account for any packets
+ * received before stats is reset.
+ */
+ uint64_t completed_offset;
+};
+
+struct odm_queue {
+ struct odm_dev *dev;
+ /* Instructions that are prepared on the iring, but not yet pushed to hw. */
+ uint16_t pending_submit_cnt;
+ /* Length (in words) of instructions that are not yet pushed to hw. */
+ uint16_t pending_submit_len;
+ uint16_t desc_idx;
+ /* Instruction ring head. Used for enqueue. */
+ uint16_t iring_head;
+ /* Completion ring head. Used for dequeue. */
+ uint16_t cring_head;
+ /* Extra instruction size ring head. Used in enqueue-dequeue.*/
+ uint16_t ins_ring_head;
+ /* Extra instruction size ring tail. Used in enqueue-dequeue.*/
+ uint16_t ins_ring_tail;
+ /* Instruction size available.*/
+ uint16_t iring_sz_available;
+ /* Number of 8-byte words in iring.*/
+ uint16_t iring_max_words;
+ /* Number of completion entries in cring.*/
+ uint16_t cring_max_entry;
+ /* Extra instruction size used per inflight instruction.*/
+ uint8_t *extra_ins_sz;
+ struct vq_stats stats;
+ const struct rte_memzone *iring_mz;
+ const struct rte_memzone *cring_mz;
+};
+
struct __rte_cache_aligned odm_dev {
struct rte_pci_device *pci_dev;
+ struct odm_queue vq[ODM_MAX_QUEUES_PER_DEV];
uint8_t *rbase;
uint16_t vfid;
uint8_t max_qs;
@@ -141,5 +191,9 @@ struct __rte_cache_aligned odm_dev {
int odm_dev_init(struct odm_dev *odm);
int odm_dev_fini(struct odm_dev *odm);
+int odm_configure(struct odm_dev *odm);
+int odm_enable(struct odm_dev *odm);
+int odm_disable(struct odm_dev *odm);
+int odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc);
#endif /* _ODM_H_ */
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index bef335c10c..8c705978fe 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -17,6 +17,87 @@
#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C
#define PCI_DRIVER_NAME dma_odm
+static int
+odm_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(size);
+
+ odm = dev->fp_obj->dev_private;
+
+ dev_info->max_vchans = odm->max_qs;
+ dev_info->nb_vchans = odm->num_qs;
+ dev_info->dev_capa =
+ (RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG);
+ dev_info->max_desc = ODM_IRING_MAX_ENTRY;
+ dev_info->min_desc = 1;
+ dev_info->max_sges = ODM_MAX_POINTER;
+
+ return 0;
+}
+
+static int
+odm_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = NULL;
+
+ RTE_SET_USED(conf_sz);
+
+ odm = dev->fp_obj->dev_private;
+ odm->num_qs = conf->nb_vchans;
+
+ return 0;
+}
+
+static int
+odm_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
+ const struct rte_dma_vchan_conf *conf, uint32_t conf_sz)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ RTE_SET_USED(conf_sz);
+ return odm_vchan_setup(odm, vchan, conf->nb_desc);
+}
+
+static int
+odm_dmadev_start(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_enable(odm);
+}
+
+static int
+odm_dmadev_stop(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ return odm_disable(odm);
+}
+
+static int
+odm_dmadev_close(struct rte_dma_dev *dev)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ odm_disable(odm);
+ odm_dev_fini(odm);
+
+ return 0;
+}
+
+static const struct rte_dma_dev_ops odm_dmadev_ops = {
+ .dev_close = odm_dmadev_close,
+ .dev_configure = odm_dmadev_configure,
+ .dev_info_get = odm_dmadev_info_get,
+ .dev_start = odm_dmadev_start,
+ .dev_stop = odm_dmadev_stop,
+ .stats_get = NULL,
+ .stats_reset = NULL,
+ .vchan_setup = odm_dmadev_vchan_setup,
+};
+
static int
odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev)
{
@@ -40,6 +121,10 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
odm_info("DMA device %s probed", name);
odm = dmadev->data->dev_private;
+ dmadev->device = &pci_dev->device;
+ dmadev->fp_obj->dev_private = odm;
+ dmadev->dev_ops = &odm_dmadev_ops;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.45.1
^ permalink raw reply [flat|nested] 37+ messages in thread
* [PATCH v4 5/7] dma/odm: add stats
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
` (3 preceding siblings ...)
2024-05-27 15:16 ` [PATCH v4 4/7] dma/odm: add device ops Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
` (2 subsequent siblings)
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri, dev
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Add DMA dev stats.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 63 ++++++++++++++++++++++++++++++++++--
1 file changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 8c705978fe..13b2588246 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -87,14 +87,73 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
+ uint32_t size)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+
+ if (size < sizeof(struct rte_dma_stats))
+ return -EINVAL;
+ if (rte_stats == NULL)
+ return -EINVAL;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[vchan].stats;
+
+ *rte_stats = *stats;
+ } else {
+ int i;
+
+ for (i = 0; i < odm->num_qs; i++) {
+ struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[i].stats;
+
+ rte_stats->submitted += stats->submitted;
+ rte_stats->completed += stats->completed;
+ rte_stats->errors += stats->errors;
+ }
+ }
+
+ return 0;
+}
+
+static void
+odm_vq_stats_reset(struct vq_stats *vq_stats)
+{
+ vq_stats->completed_offset += vq_stats->completed;
+ vq_stats->completed = 0;
+ vq_stats->errors = 0;
+ vq_stats->submitted = 0;
+}
+
+static int
+odm_stats_reset(struct rte_dma_dev *dev, uint16_t vchan)
+{
+ struct odm_dev *odm = dev->fp_obj->dev_private;
+ struct vq_stats *vq_stats;
+ int i;
+
+ if (vchan != RTE_DMA_ALL_VCHAN) {
+ vq_stats = &odm->vq[vchan].stats;
+ odm_vq_stats_reset(vq_stats);
+ } else {
+ for (i = 0; i < odm->num_qs; i++) {
+ vq_stats = &odm->vq[i].stats;
+ odm_vq_stats_reset(vq_stats);
+ }
+ }
+
+ return 0;
+}
+
static const struct rte_dma_dev_ops odm_dmadev_ops = {
.dev_close = odm_dmadev_close,
.dev_configure = odm_dmadev_configure,
.dev_info_get = odm_dmadev_info_get,
.dev_start = odm_dmadev_start,
.dev_stop = odm_dmadev_stop,
- .stats_get = NULL,
- .stats_reset = NULL,
+ .stats_get = odm_stats_get,
+ .stats_reset = odm_stats_reset,
.vchan_setup = odm_dmadev_vchan_setup,
};
--
2.45.1
* [PATCH v4 6/7] dma/odm: add copy and copy sg ops
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
` (4 preceding siblings ...)
2024-05-27 15:16 ` [PATCH v4 5/7] dma/odm: add stats Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 7/7] dma/odm: add remaining ops Anoob Joseph
2024-05-28 8:12 ` [PATCH v4 0/7] Add ODM DMA device Jerin Jacob
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add ODM copy and copy SG ops.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/dma/odm/odm_dmadev.c | 236 +++++++++++++++++++++++++++++++++++
1 file changed, 236 insertions(+)
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index 13b2588246..b21be83a89 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -9,6 +9,7 @@
#include <rte_common.h>
#include <rte_dmadev.h>
#include <rte_dmadev_pmd.h>
+#include <rte_memcpy.h>
#include <rte_pci.h>
#include "odm.h"
@@ -87,6 +88,238 @@ odm_dmadev_close(struct rte_dma_dev *dev)
return 0;
}
+static int
+odm_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length,
+ uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ const union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ .s.nfst = 1,
+ .s.nlst = 1,
+ };
+
+ vq = &odm->vq[vchan];
+
+ h = length;
+ h |= ((uint64_t)length << 32);
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = src;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head++] = hdr.u;
+ iring_head_ptr[iring_head++] = h;
+ iring_head_ptr[iring_head++] = src;
+ iring_head_ptr[iring_head++] = dst;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
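A plain copy instruction occupies four 8-byte words: header, packed length word, source address, destination address. The length word built above stores the 32-bit length in both halves, and the ring head wraps modulo the ring size. A small sketch of those two pieces (helper names are ours):

```c
#include <stdint.h>

/* Second instruction word of a copy: the length appears in both 32-bit
 * halves (first-pointer length and last-pointer length). */
static uint64_t copy_len_word(uint32_t length)
{
	return (uint64_t)length | ((uint64_t)length << 32);
}

/* Advance a ring head by one word with wrap-around. */
static uint16_t ring_advance(uint16_t head, uint16_t max_words)
{
	return (head + 1) % max_words;
}
```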
+
+static inline void
+odm_dmadev_fill_sg(uint64_t *cmd, const struct rte_dma_sge *src, const struct rte_dma_sge *dst,
+ uint16_t nb_src, uint16_t nb_dst, union odm_instr_hdr_s *hdr)
+{
+ int i = 0, j = 0;
+ uint64_t h = 0;
+
+ cmd[j++] = hdr->u;
+ /* When nb_src is even */
+ if (!(nb_src & 0x1)) {
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Fill the iring with dst pointers */
+ for (i = 1; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is odd */
+ if (nb_dst & 0x1) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ } else {
+ /* When nb_src is odd */
+
+ /* Fill the iring with src pointers */
+ for (i = 1; i < nb_src; i += 2) {
+ h = ((uint64_t)src[i].length << 32) | src[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[i - 1].addr;
+ cmd[j++] = src[i].addr;
+ }
+
+ /* Handle the last src pointer */
+ h = ((uint64_t)dst[0].length << 32) | src[nb_src - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = src[nb_src - 1].addr;
+ cmd[j++] = dst[0].addr;
+
+ /* Fill the iring with dst pointers */
+ for (i = 2; i < nb_dst; i += 2) {
+ h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[i - 1].addr;
+ cmd[j++] = dst[i].addr;
+ }
+
+ /* Handle the last dst pointer when nb_dst is even */
+ if (!(nb_dst & 0x1)) {
+ h = dst[nb_dst - 1].length;
+ cmd[j++] = h;
+ cmd[j++] = dst[nb_dst - 1].addr;
+ cmd[j++] = 0;
+ }
+ }
+}
+
+static int
+odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src,
+ const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_head, ins_ring_head;
+ uint16_t iring_sz_available, i, nb, num_words;
+ uint64_t cmd[ODM_IRING_ENTRY_SIZE_MAX];
+ struct odm_dev *odm = dev_private;
+ uint32_t s_sz = 0, d_sz = 0;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.xtype = ODM_XTYPE_INTERNAL,
+ };
+
+ vq = &odm->vq[vchan];
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+ iring_sz_available = vq->iring_sz_available;
+ ins_ring_head = vq->ins_ring_head;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+
+ if (unlikely(nb_src > 4 || nb_dst > 4))
+ return -EINVAL;
+
+ for (i = 0; i < nb_src; i++)
+ s_sz += src[i].length;
+
+ for (i = 0; i < nb_dst; i++)
+ d_sz += dst[i].length;
+
+ if (s_sz != d_sz)
+ return -EINVAL;
+
+ nb = nb_src + nb_dst;
+ hdr.s.nfst = nb_src;
+ hdr.s.nlst = nb_dst;
+ num_words = 1 + 3 * (nb / 2 + (nb & 0x1));
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+ uint16_t words_avail = max_iring_words - iring_head;
+ uint16_t words_pend = num_words - words_avail;
+
+ if (unlikely(words_avail + words_pend > ODM_IRING_ENTRY_SIZE_MAX))
+ return -ENOSPC;
+
+ odm_dmadev_fill_sg(cmd, src, dst, nb_src, nb_dst, &hdr);
+ rte_memcpy((void *)&iring_head_ptr[iring_head], (void *)cmd, words_avail * 8);
+ rte_memcpy((void *)iring_head_ptr, (void *)&cmd[words_avail], words_pend * 8);
+ iring_head = words_pend;
+ } else {
+ odm_dmadev_fill_sg(&iring_head_ptr[iring_head], src, dst, nb_src, nb_dst, &hdr);
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* Save extra space used for the instruction. */
+ vq->extra_ins_sz[ins_ring_head] = num_words - 4;
+
+ vq->ins_ring_head = (ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
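The instruction size computed above is one header word plus a 3-word group (lengths word and two addresses) per pair of pointers, with an odd leftover pointer padded into its own group. The same formula in isolation, as a checkable sketch:

```c
#include <stdint.h>

/* Words consumed by an SG instruction: 1 header word, then 3 words per
 * pair of pointers; an odd pointer still costs a full 3-word group. */
static uint16_t sg_num_words(uint16_t nb_src, uint16_t nb_dst)
{
	uint16_t nb = nb_src + nb_dst;

	return 1 + 3 * (nb / 2 + (nb & 0x1));
}
```

Note that 1 src + 1 dst yields 4 words (the ODM_IRING_ENTRY_SIZE_MIN used by plain copy) and 4 src + 4 dst yields 13 words (ODM_IRING_ENTRY_SIZE_MAX).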
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -184,6 +417,9 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->dev_private = odm;
dmadev->dev_ops = &odm_dmadev_ops;
+ dmadev->fp_obj->copy = odm_dmadev_copy;
+ dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+
odm->pci_dev = pci_dev;
rc = odm_dev_init(odm);
--
2.45.1
* [PATCH v4 7/7] dma/odm: add remaining ops
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
` (5 preceding siblings ...)
2024-05-27 15:16 ` [PATCH v4 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
@ 2024-05-27 15:16 ` Anoob Joseph
2024-05-28 8:12 ` [PATCH v4 0/7] Add ODM DMA device Jerin Jacob
7 siblings, 0 replies; 37+ messages in thread
From: Anoob Joseph @ 2024-05-27 15:16 UTC (permalink / raw)
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon
Cc: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan, dev
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add all remaining ops such as fill, burst_capacity, etc. Also update the
documentation.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/dmadevs/index.rst | 1 +
doc/guides/dmadevs/odm.rst | 92 +++++++++
doc/guides/rel_notes/release_24_07.rst | 4 +
drivers/dma/odm/odm.h | 4 +
drivers/dma/odm/odm_dmadev.c | 250 +++++++++++++++++++++++++
6 files changed, 352 insertions(+)
create mode 100644 doc/guides/dmadevs/odm.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index b581207a9a..195125ee1e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1274,6 +1274,7 @@ M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
M: Vidya Sagar Velumuri <vvelumuri@marvell.com>
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/dma/odm/
+F: doc/guides/dmadevs/odm.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst
index 5bd25b32b9..ce9f6eb260 100644
--- a/doc/guides/dmadevs/index.rst
+++ b/doc/guides/dmadevs/index.rst
@@ -17,3 +17,4 @@ an application through DMA API.
hisilicon
idxd
ioat
+ odm
diff --git a/doc/guides/dmadevs/odm.rst b/doc/guides/dmadevs/odm.rst
new file mode 100644
index 0000000000..a2eaab59a0
--- /dev/null
+++ b/doc/guides/dmadevs/odm.rst
@@ -0,0 +1,92 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2024 Marvell.
+
+Odyssey ODM DMA Device Driver
+=============================
+
+The ``odm`` DMA device driver provides a poll-mode driver (PMD) for the Marvell
+Odyssey DMA hardware accelerator block found in the Odyssey SoC. The block
+supports only mem to mem DMA transfers.
+
+ODM DMA device can support up to 32 queues and 16 VFs.
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+Device Setup
+-------------
+
+The ODM DMA device is initialized by the kernel PF driver. The PF kernel driver
+is part of the Marvell software packages for Odyssey.
+
+The kernel module can be inserted as in the example below::
+
+ $ sudo insmod odyssey_odm.ko
+
+The ODM DMA device can support up to 16 VFs::
+
+ $ sudo echo 16 > /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
+
+The above command creates 16 VFs with 2 queues each.
+
+The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the
+presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma``
+will show all the Odyssey ODM DMA devices.
+
+Devices using VFIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HW devices to be used will need to be bound to a user-space IO driver.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
+For example::
+
+ $ dpdk-devbind.py -b vfio-pci 0000:08:00.1
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the devices from an application, the dmadev API can be used.
+
+Once configured, the device can then be made ready for use
+by calling the ``rte_dma_start()`` API.
+
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~
+
+Refer to the :ref:`Enqueue / Dequeue APIs <dmadev_enqueue_dequeue>` section
+of the dmadev library documentation for details on operation enqueue and
+submission API usage.
+
+Performance Tuning Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To achieve higher performance, the DMA device needs to be tuned using PF kernel
+driver module parameters.
+
+The following options are exposed by the kernel PF driver via the devlink
+interface for performance tuning.
+
+``eng_sel``
+
+ ODM DMA device has 2 engines internally. Engine to queue mapping is decided
+ by a hardware register which can be configured as below::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name eng_sel value 3435973836 cmode runtime
+
+ Each bit in the register corresponds to one queue. Each queue would be
+ associated with one engine. If the value of the bit corresponding to the queue
+ is 0, then engine 0 would be picked. If it is 1, then engine 1 would be
+ picked.
+
+ In the above command, the register value is set as
+ ``1100 1100 1100 1100 1100 1100 1100 1100`` which allows for alternate engines
+ to be used with alternate VFs (assuming the system has 16 VFs with 2 queues
+ each).
+
+``max_load_request``
+
+ Specifies the maximum outstanding load requests on the internal bus. Values
+ can range from 1 to 512. Set to 512 for the maximum number of requests in
+ flight::
+
+ $ /sbin/devlink dev param set pci/0000:08:00.0 name max_load_request value 512 cmode runtime
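The ``eng_sel`` value in the example above (3435973836, i.e. ``0xCCCCCCCC``, the bit pattern ``1100`` repeated per nibble) maps queues 0-1 of each VF to engine 0 and queues 2-3 to engine 1. A small sketch decoding that mapping (the helper is illustrative, not part of the driver or kernel module):

```c
#include <stdint.h>

/* eng_sel has one bit per queue: bit q selects engine 0 or 1 for queue q. */
static int queue_engine(uint32_t eng_sel, int queue)
{
	return (eng_sel >> queue) & 1;
}
```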
diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index a69f24cf99..3bc8451330 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added Marvell Odyssey ODM DMA device support.**
+
+ Added Marvell Odyssey ODM DMA device PMD.
+
Removed Items
-------------
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index f4b9e2c4a7..7303aa2955 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -74,6 +74,10 @@ extern int odm_logtype;
rte_log(RTE_LOG_INFO, odm_logtype, \
RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_debug(...) \
+ rte_log(RTE_LOG_DEBUG, odm_logtype, \
+ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \
+ RTE_FMT_TAIL(__VA_ARGS__, )))
/*
* Structure odm_instr_hdr_s for ODM
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index b21be83a89..57bd6923f1 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -320,6 +320,251 @@ odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *
return vq->desc_idx++;
}
+static int
+odm_dmadev_fill(void *dev_private, uint16_t vchan, uint64_t pattern, rte_iova_t dst,
+ uint32_t length, uint64_t flags)
+{
+ uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+ const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+ struct odm_dev *odm = dev_private;
+ uint64_t *iring_head_ptr;
+ struct odm_queue *vq;
+ uint64_t h;
+
+ vq = &odm->vq[vchan];
+
+ union odm_instr_hdr_s hdr = {
+ .s.ct = ODM_HDR_CT_CW_NC,
+ .s.nfst = 0,
+ .s.nlst = 1,
+ };
+
+ h = (uint64_t)length;
+
+ switch (pattern) {
+ case 0:
+ hdr.s.xtype = ODM_XTYPE_FILL0;
+ break;
+ case 0xffffffffffffffff:
+ hdr.s.xtype = ODM_XTYPE_FILL1;
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ const uint16_t max_iring_words = vq->iring_max_words;
+
+ iring_sz_available = vq->iring_sz_available;
+ pending_submit_len = vq->pending_submit_len;
+ pending_submit_cnt = vq->pending_submit_cnt;
+ iring_head_ptr = vq->iring_mz->addr;
+ iring_head = vq->iring_head;
+
+ if (iring_sz_available < num_words)
+ return -ENOSPC;
+
+ if ((iring_head + num_words) >= max_iring_words) {
+
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = h;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = dst;
+ iring_head = (iring_head + 1) % max_iring_words;
+
+ iring_head_ptr[iring_head] = 0;
+ iring_head = (iring_head + 1) % max_iring_words;
+ } else {
+ iring_head_ptr[iring_head] = hdr.u;
+ iring_head_ptr[iring_head + 1] = h;
+ iring_head_ptr[iring_head + 2] = dst;
+ iring_head_ptr[iring_head + 3] = 0;
+ iring_head += num_words;
+ }
+
+ pending_submit_len += num_words;
+
+ if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->stats.submitted += pending_submit_cnt + 1;
+ vq->pending_submit_len = 0;
+ vq->pending_submit_cnt = 0;
+ } else {
+ vq->pending_submit_len = pending_submit_len;
+ vq->pending_submit_cnt++;
+ }
+
+ vq->iring_head = iring_head;
+ vq->iring_sz_available = iring_sz_available - num_words;
+
+ /* No extra space to save. Skip entry in extra space ring. */
+ vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+ return vq->desc_idx++;
+}
+
+static uint16_t
+odm_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx,
+ bool *has_error)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint64_t nb_err = 0;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (unlikely(vq->stats.submitted == vq->stats.completed)) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr,
+ rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ if (cmpl.s.cmp_code)
+ nb_err++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr, cmpl_zero.u,
+ rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->stats.errors += nb_err;
+
+ if (unlikely(has_error != NULL && nb_err))
+ *has_error = true;
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
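The completion index returned via last_idx above is a free-running 16-bit value derived from 64-bit counters; completed_offset absorbs completions seen before a stats reset so the index stays monotonic. A minimal sketch of that masking (helper name is ours):

```c
#include <stdint.h>

/* dmadev completion index: 64-bit counters masked to 16 bits, with the
 * pre-reset completions folded in via completed_offset. */
static uint16_t last_done_idx(uint64_t completed_offset, uint64_t completed)
{
	return (uint16_t)((completed_offset + completed - 1) & 0xFFFF);
}
```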
+
+static uint16_t
+odm_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls,
+ uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+ const union odm_cmpl_ent_s cmpl_zero = {0};
+ uint16_t cring_head, iring_sz_available;
+ struct odm_dev *odm = dev_private;
+ union odm_cmpl_ent_s cmpl;
+ struct odm_queue *vq;
+ uint32_t *cmpl_ptr;
+ int cnt;
+
+ vq = &odm->vq[vchan];
+ const uint32_t *base_addr = vq->cring_mz->addr;
+ const uint16_t cring_max_entry = vq->cring_max_entry;
+
+ cring_head = vq->cring_head;
+ iring_sz_available = vq->iring_sz_available;
+
+ if (vq->stats.submitted == vq->stats.completed) {
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+ return 0;
+ }
+
+#ifdef ODM_DEBUG
+ odm_debug("cring_head: 0x%" PRIx16, cring_head);
+ odm_debug("Submitted: 0x%" PRIx64, vq->stats.submitted);
+ odm_debug("Completed: 0x%" PRIx64, vq->stats.completed);
+ odm_debug("Hardware count: 0x%" PRIx64, odm_read64(odm->rbase + ODM_VDMA_CNT(vchan)));
+#endif
+
+ for (cnt = 0; cnt < nb_cpls; cnt++) {
+ cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+ cmpl.u = rte_atomic_load_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr,
+ rte_memory_order_relaxed);
+ if (!cmpl.s.valid)
+ break;
+
+ status[cnt] = cmpl.s.cmp_code;
+
+ if (cmpl.s.cmp_code)
+ vq->stats.errors++;
+
+ /* Free space for enqueue */
+ iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+ /* Clear instruction extra space */
+ vq->extra_ins_sz[cring_head] = 0;
+
+ rte_atomic_store_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr, cmpl_zero.u,
+ rte_memory_order_relaxed);
+ cring_head = (cring_head + 1) % cring_max_entry;
+ }
+
+ vq->cring_head = cring_head;
+ vq->iring_sz_available = iring_sz_available;
+
+ vq->stats.completed += cnt;
+
+ *last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+ return cnt;
+}
+
+static int
+odm_dmadev_submit(void *dev_private, uint16_t vchan)
+{
+ struct odm_dev *odm = dev_private;
+ uint16_t pending_submit_len;
+ struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ pending_submit_len = vq->pending_submit_len;
+
+ if (pending_submit_len == 0)
+ return 0;
+
+ rte_wmb();
+ odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+ vq->pending_submit_len = 0;
+ vq->stats.submitted += vq->pending_submit_cnt;
+ vq->pending_submit_cnt = 0;
+
+ return 0;
+}
+
+static uint16_t
+odm_dmadev_burst_capacity(const void *dev_private, uint16_t vchan)
+{
+ const struct odm_dev *odm = dev_private;
+ const struct odm_queue *vq;
+
+ vq = &odm->vq[vchan];
+ return (vq->iring_sz_available / ODM_IRING_ENTRY_SIZE_MIN);
+}
+
static int
odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
uint32_t size)
@@ -419,6 +664,11 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
dmadev->fp_obj->copy = odm_dmadev_copy;
dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+ dmadev->fp_obj->fill = odm_dmadev_fill;
+ dmadev->fp_obj->submit = odm_dmadev_submit;
+ dmadev->fp_obj->completed = odm_dmadev_completed;
+ dmadev->fp_obj->completed_status = odm_dmadev_completed_status;
+ dmadev->fp_obj->burst_capacity = odm_dmadev_burst_capacity;
odm->pci_dev = pci_dev;
--
2.45.1
* Re: [PATCH v4 0/7] Add ODM DMA device
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
` (6 preceding siblings ...)
2024-05-27 15:16 ` [PATCH v4 7/7] dma/odm: add remaining ops Anoob Joseph
@ 2024-05-28 8:12 ` Jerin Jacob
7 siblings, 0 replies; 37+ messages in thread
From: Jerin Jacob @ 2024-05-28 8:12 UTC (permalink / raw)
To: Anoob Joseph
Cc: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob,
Thomas Monjalon, Gowrishankar Muthukrishnan,
Vidya Sagar Velumuri, dev
On Mon, May 27, 2024 at 8:47 PM Anoob Joseph <anoobj@marvell.com> wrote:
>
> Add Odyssey ODM DMA device. This PMD abstracts ODM hardware unit on
> Odyssey SoC which can perform mem to mem copies.
>
> The hardware unit can support up to 32 queues (vchan) and 16 VFs. It
> supports 'fill' operation with specific values. It also supports
> SG mode of operation with up to 4 src pointers and 4 destination
> pointers.
>
> The PMD is tested with both unit tests and performance applications.
>
> Changes in v4
> - Added release notes
> - Addressed review comments from Jerin
>
> Changes in v3
> - Addressed build failure with stdatomic stage in CI
>
> Changes in v2
> - Addressed build failure in CI
> - Moved update to usertools as separate patch
>
> Anoob Joseph (2):
> dma/odm: add framework for ODM DMA device
> dma/odm: add hardware defines
>
> Gowrishankar Muthukrishnan (3):
> dma/odm: add dev init and fini
> dma/odm: add device ops
> dma/odm: add stats
>
> Vidya Sagar Velumuri (2):
> dma/odm: add copy and copy sg ops
> dma/odm: add remaining ops
Series applied to dpdk-next-net-mrvl/for-main. Thanks
Thread overview: 37+ messages
2024-04-15 15:31 [PATCH 0/8] Add ODM DMA device Anoob Joseph
2024-04-15 15:31 ` [PATCH 1/8] usertools/devbind: add " Anoob Joseph
2024-04-15 15:31 ` [PATCH 2/8] dma/odm: add framework for " Anoob Joseph
2024-04-15 15:31 ` [PATCH 3/8] dma/odm: add hardware defines Anoob Joseph
2024-04-15 15:31 ` [PATCH 4/8] dma/odm: add dev init and fini Anoob Joseph
2024-04-15 15:31 ` [PATCH 5/8] dma/odm: add device ops Anoob Joseph
2024-04-15 15:31 ` [PATCH 6/8] dma/odm: add stats Anoob Joseph
2024-04-15 15:31 ` [PATCH 7/8] dma/odm: add copy and copy sg ops Anoob Joseph
2024-04-15 15:31 ` [PATCH 8/8] dma/odm: add remaining ops Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 0/7] Add ODM DMA device Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 1/7] dma/odm: add framework for " Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 2/7] dma/odm: add hardware defines Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 3/7] dma/odm: add dev init and fini Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 4/7] dma/odm: add device ops Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 5/7] dma/odm: add stats Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
2024-04-17 7:27 ` [PATCH v2 7/7] dma/odm: add remaining ops Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 0/7] Add ODM DMA device Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 1/7] dma/odm: add framework for " Anoob Joseph
2024-05-24 13:26 ` Jerin Jacob
2024-04-19 6:43 ` [PATCH v3 2/7] dma/odm: add hardware defines Anoob Joseph
2024-05-24 13:29 ` Jerin Jacob
2024-04-19 6:43 ` [PATCH v3 3/7] dma/odm: add dev init and fini Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 4/7] dma/odm: add device ops Anoob Joseph
2024-05-24 13:37 ` Jerin Jacob
2024-04-19 6:43 ` [PATCH v3 5/7] dma/odm: add stats Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
2024-04-19 6:43 ` [PATCH v3 7/7] dma/odm: add remaining ops Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 0/7] Add ODM DMA device Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 1/7] dma/odm: add framework for " Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 2/7] dma/odm: add hardware defines Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 3/7] dma/odm: add dev init and fini Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 4/7] dma/odm: add device ops Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 5/7] dma/odm: add stats Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 6/7] dma/odm: add copy and copy sg ops Anoob Joseph
2024-05-27 15:16 ` [PATCH v4 7/7] dma/odm: add remaining ops Anoob Joseph
2024-05-28 8:12 ` [PATCH v4 0/7] Add ODM DMA device Jerin Jacob