* [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver
@ 2021-03-05 13:38 Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 01/52] config/arm: add support for Marvell CN10K Nithin Dabilpuram
` (55 more replies)
0 siblings, 56 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
This patchset adds initial common-code support for the
Marvell CN10K SoC. Based on this common 'cnxk' driver, new PMDs
such as 'net/cnxk', 'mempool/cnxk' and 'event/cnxk' will be added
later on.
Initially, the 'cnxk' drivers will only support the Marvell CN106XX SoC. In
the future, when the code is ready, CN9K/octeontx2 will also be supported by
the same set of drivers, and 'common/octeontx2' and its associated drivers
will be deprecated.
Ashwin Sekhar T K (8):
common/cnxk: add base npa device support
common/cnxk: add npa irq support
common/cnxk: add npa debug support
common/cnxk: add npa pool HW ops
common/cnxk: add npa bulk alloc/free support
common/cnxk: add npa performance counter support
common/cnxk: add npa batch alloc/free support
common/cnxk: add npa lf init/fini callback support
Jerin Jacob (14):
common/cnxk: add build infrastructure and HW definition
common/cnxk: add model init and IO handling API
common/cnxk: add interrupt helper API
common/cnxk: add mbox request and response definitions
common/cnxk: add mailbox base infra
common/cnxk: add base device class
common/cnxk: add VF support to base device class
common/cnxk: add base nix support
common/cnxk: add nix irq support
common/cnxk: add nix Rx queue management API
common/cnxk: add nix Tx queue management API
common/cnxk: add nix RSS support
common/cnxk: add nix stats support
common/cnxk: add nix debug dump support
Kiran Kumar K (5):
common/cnxk: add npc support
common/cnxk: add npc helper API
common/cnxk: add mcam utility API
common/cnxk: add npc parsing API
common/cnxk: add npc init and fini support
Nithin Dabilpuram (8):
common/cnxk: add nix traffic management base support
common/cnxk: add nix tm support to add/delete node
common/cnxk: add nix tm helper to alloc and free resource
common/cnxk: add nix tm hierarchy enable/disable
common/cnxk: add nix tm support for internal hierarchy
common/cnxk: add nix tm dynamic update support
common/cnxk: add nix tm debug support and misc utils
doc: add Marvell CNXK platform guide
Pavan Nikhilesh (8):
config/arm: add support for Marvell CN10K
common/cnxk: add base sso device support
common/cnxk: add sso hws interface
common/cnxk: add sso hwgrp interface
common/cnxk: add sso irq support
common/cnxk: add sso debug support
common/cnxk: add base tim device support
common/cnxk: add tim irq support
Satha Rao (2):
common/cnxk: add support for nix extended stats
common/cnxk: add nix tm shaper profile add support
Sunil Kumar Kori (6):
common/cnxk: add nix MAC operations support
common/cnxk: add nix specific npc operations
common/cnxk: add nix ptp support
common/cnxk: add VLAN filter support
common/cnxk: add nix flow control support
common/cnxk: add nix LSO support and misc utils
Vidya Sagar Velumuri (1):
common/cnxk: add nix inline IPsec config API
MAINTAINERS | 9 +
config/arm/arm64_cn10k_linux_gcc | 20 +
doc/guides/platform/cnxk.rst | 578 ++++
.../img/cnxk_packet_flow_hw_accelerators.svg | 2795 ++++++++++++++++++++
.../platform/img/cnxk_resource_virtualization.svg | 2428 +++++++++++++++++
doc/guides/platform/index.rst | 1 +
drivers/common/cnxk/hw/nix.h | 2187 +++++++++++++++
drivers/common/cnxk/hw/npa.h | 376 +++
drivers/common/cnxk/hw/npc.h | 525 ++++
drivers/common/cnxk/hw/rvu.h | 221 ++
drivers/common/cnxk/hw/sdp.h | 182 ++
drivers/common/cnxk/hw/sso.h | 233 ++
drivers/common/cnxk/hw/ssow.h | 70 +
drivers/common/cnxk/hw/tim.h | 49 +
drivers/common/cnxk/meson.build | 49 +
drivers/common/cnxk/roc_api.h | 103 +
drivers/common/cnxk/roc_bitfield.h | 15 +
drivers/common/cnxk/roc_bits.h | 32 +
drivers/common/cnxk/roc_dev.c | 1190 +++++++++
drivers/common/cnxk/roc_dev_priv.h | 107 +
drivers/common/cnxk/roc_idev.c | 184 ++
drivers/common/cnxk/roc_idev.h | 17 +
drivers/common/cnxk/roc_idev_priv.h | 39 +
drivers/common/cnxk/roc_io.h | 187 ++
drivers/common/cnxk/roc_io_generic.h | 122 +
drivers/common/cnxk/roc_irq.c | 249 ++
drivers/common/cnxk/roc_mbox.c | 483 ++++
drivers/common/cnxk/roc_mbox.h | 1732 ++++++++++++
drivers/common/cnxk/roc_mbox_priv.h | 215 ++
drivers/common/cnxk/roc_model.c | 148 ++
drivers/common/cnxk/roc_model.h | 103 +
drivers/common/cnxk/roc_nix.c | 439 +++
drivers/common/cnxk/roc_nix.h | 573 ++++
drivers/common/cnxk/roc_nix_debug.c | 1136 ++++++++
drivers/common/cnxk/roc_nix_fc.c | 251 ++
drivers/common/cnxk/roc_nix_irq.c | 495 ++++
drivers/common/cnxk/roc_nix_mac.c | 298 +++
drivers/common/cnxk/roc_nix_mcast.c | 98 +
drivers/common/cnxk/roc_nix_npc.c | 103 +
drivers/common/cnxk/roc_nix_ops.c | 416 +++
drivers/common/cnxk/roc_nix_priv.h | 394 +++
drivers/common/cnxk/roc_nix_ptp.c | 122 +
drivers/common/cnxk/roc_nix_queue.c | 863 ++++++
drivers/common/cnxk/roc_nix_rss.c | 219 ++
drivers/common/cnxk/roc_nix_stats.c | 411 +++
drivers/common/cnxk/roc_nix_tm.c | 1385 ++++++++++
drivers/common/cnxk/roc_nix_tm_ops.c | 1031 ++++++++
drivers/common/cnxk/roc_nix_tm_utils.c | 1002 +++++++
drivers/common/cnxk/roc_nix_vlan.c | 205 ++
drivers/common/cnxk/roc_nix_xstats.h | 204 ++
drivers/common/cnxk/roc_npa.c | 823 ++++++
drivers/common/cnxk/roc_npa.h | 661 +++++
drivers/common/cnxk/roc_npa_debug.c | 184 ++
drivers/common/cnxk/roc_npa_irq.c | 298 +++
drivers/common/cnxk/roc_npa_priv.h | 63 +
drivers/common/cnxk/roc_npc.c | 713 +++++
drivers/common/cnxk/roc_npc.h | 169 ++
drivers/common/cnxk/roc_npc_mcam.c | 708 +++++
drivers/common/cnxk/roc_npc_parse.c | 703 +++++
drivers/common/cnxk/roc_npc_priv.h | 426 +++
drivers/common/cnxk/roc_npc_utils.c | 631 +++++
drivers/common/cnxk/roc_platform.c | 38 +
drivers/common/cnxk/roc_platform.h | 182 ++
drivers/common/cnxk/roc_priv.h | 35 +
drivers/common/cnxk/roc_sso.c | 540 ++++
drivers/common/cnxk/roc_sso.h | 65 +
drivers/common/cnxk/roc_sso_debug.c | 68 +
drivers/common/cnxk/roc_sso_irq.c | 164 ++
drivers/common/cnxk/roc_sso_priv.h | 50 +
drivers/common/cnxk/roc_tim.c | 314 +++
drivers/common/cnxk/roc_tim.h | 43 +
drivers/common/cnxk/roc_tim_irq.c | 104 +
drivers/common/cnxk/roc_tim_priv.h | 30 +
drivers/common/cnxk/roc_util_priv.h | 14 +
drivers/common/cnxk/roc_utils.c | 239 ++
drivers/common/cnxk/roc_utils.h | 15 +
drivers/common/cnxk/version.map | 201 ++
drivers/meson.build | 1 +
78 files changed, 31776 insertions(+)
create mode 100644 config/arm/arm64_cn10k_linux_gcc
create mode 100644 doc/guides/platform/cnxk.rst
create mode 100644 doc/guides/platform/img/cnxk_packet_flow_hw_accelerators.svg
create mode 100644 doc/guides/platform/img/cnxk_resource_virtualization.svg
create mode 100644 drivers/common/cnxk/hw/nix.h
create mode 100644 drivers/common/cnxk/hw/npa.h
create mode 100644 drivers/common/cnxk/hw/npc.h
create mode 100644 drivers/common/cnxk/hw/rvu.h
create mode 100644 drivers/common/cnxk/hw/sdp.h
create mode 100644 drivers/common/cnxk/hw/sso.h
create mode 100644 drivers/common/cnxk/hw/ssow.h
create mode 100644 drivers/common/cnxk/hw/tim.h
create mode 100644 drivers/common/cnxk/meson.build
create mode 100644 drivers/common/cnxk/roc_api.h
create mode 100644 drivers/common/cnxk/roc_bitfield.h
create mode 100644 drivers/common/cnxk/roc_bits.h
create mode 100644 drivers/common/cnxk/roc_dev.c
create mode 100644 drivers/common/cnxk/roc_dev_priv.h
create mode 100644 drivers/common/cnxk/roc_idev.c
create mode 100644 drivers/common/cnxk/roc_idev.h
create mode 100644 drivers/common/cnxk/roc_idev_priv.h
create mode 100644 drivers/common/cnxk/roc_io.h
create mode 100644 drivers/common/cnxk/roc_io_generic.h
create mode 100644 drivers/common/cnxk/roc_irq.c
create mode 100644 drivers/common/cnxk/roc_mbox.c
create mode 100644 drivers/common/cnxk/roc_mbox.h
create mode 100644 drivers/common/cnxk/roc_mbox_priv.h
create mode 100644 drivers/common/cnxk/roc_model.c
create mode 100644 drivers/common/cnxk/roc_model.h
create mode 100644 drivers/common/cnxk/roc_nix.c
create mode 100644 drivers/common/cnxk/roc_nix.h
create mode 100644 drivers/common/cnxk/roc_nix_debug.c
create mode 100644 drivers/common/cnxk/roc_nix_fc.c
create mode 100644 drivers/common/cnxk/roc_nix_irq.c
create mode 100644 drivers/common/cnxk/roc_nix_mac.c
create mode 100644 drivers/common/cnxk/roc_nix_mcast.c
create mode 100644 drivers/common/cnxk/roc_nix_npc.c
create mode 100644 drivers/common/cnxk/roc_nix_ops.c
create mode 100644 drivers/common/cnxk/roc_nix_priv.h
create mode 100644 drivers/common/cnxk/roc_nix_ptp.c
create mode 100644 drivers/common/cnxk/roc_nix_queue.c
create mode 100644 drivers/common/cnxk/roc_nix_rss.c
create mode 100644 drivers/common/cnxk/roc_nix_stats.c
create mode 100644 drivers/common/cnxk/roc_nix_tm.c
create mode 100644 drivers/common/cnxk/roc_nix_tm_ops.c
create mode 100644 drivers/common/cnxk/roc_nix_tm_utils.c
create mode 100644 drivers/common/cnxk/roc_nix_vlan.c
create mode 100644 drivers/common/cnxk/roc_nix_xstats.h
create mode 100644 drivers/common/cnxk/roc_npa.c
create mode 100644 drivers/common/cnxk/roc_npa.h
create mode 100644 drivers/common/cnxk/roc_npa_debug.c
create mode 100644 drivers/common/cnxk/roc_npa_irq.c
create mode 100644 drivers/common/cnxk/roc_npa_priv.h
create mode 100644 drivers/common/cnxk/roc_npc.c
create mode 100644 drivers/common/cnxk/roc_npc.h
create mode 100644 drivers/common/cnxk/roc_npc_mcam.c
create mode 100644 drivers/common/cnxk/roc_npc_parse.c
create mode 100644 drivers/common/cnxk/roc_npc_priv.h
create mode 100644 drivers/common/cnxk/roc_npc_utils.c
create mode 100644 drivers/common/cnxk/roc_platform.c
create mode 100644 drivers/common/cnxk/roc_platform.h
create mode 100644 drivers/common/cnxk/roc_priv.h
create mode 100644 drivers/common/cnxk/roc_sso.c
create mode 100644 drivers/common/cnxk/roc_sso.h
create mode 100644 drivers/common/cnxk/roc_sso_debug.c
create mode 100644 drivers/common/cnxk/roc_sso_irq.c
create mode 100644 drivers/common/cnxk/roc_sso_priv.h
create mode 100644 drivers/common/cnxk/roc_tim.c
create mode 100644 drivers/common/cnxk/roc_tim.h
create mode 100644 drivers/common/cnxk/roc_tim_irq.c
create mode 100644 drivers/common/cnxk/roc_tim_priv.h
create mode 100644 drivers/common/cnxk/roc_util_priv.h
create mode 100644 drivers/common/cnxk/roc_utils.c
create mode 100644 drivers/common/cnxk/roc_utils.h
create mode 100644 drivers/common/cnxk/version.map
--
2.8.4
* [dpdk-dev] [PATCH 01/52] config/arm: add support for Marvell CN10K
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-26 13:29 ` Jerin Jacob
2021-03-05 13:38 ` [dpdk-dev] [PATCH 02/52] common/cnxk: add build infrastructure and HW definition Nithin Dabilpuram
` (54 subsequent siblings)
55 siblings, 1 reply; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add config support to cross-compile for the Marvell CN10K SoC.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
config/arm/arm64_cn10k_linux_gcc | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
create mode 100644 config/arm/arm64_cn10k_linux_gcc
diff --git a/config/arm/arm64_cn10k_linux_gcc b/config/arm/arm64_cn10k_linux_gcc
new file mode 100644
index 0000000..4f8e7cb
--- /dev/null
+++ b/config/arm/arm64_cn10k_linux_gcc
@@ -0,0 +1,20 @@
+[binaries]
+c = 'aarch64-linux-gnu-gcc'
+cpp = 'aarch64-linux-gnu-cpp'
+ar = 'aarch64-linux-gnu-gcc-ar'
+strip = 'aarch64-linux-gnu-strip'
+pkgconfig = 'aarch64-linux-gnu-pkg-config'
+pcap-config = ''
+
+[host_machine]
+system = 'linux'
+cpu_family = 'aarch64'
+cpu = 'armv8.6-a'
+endian = 'little'
+
+[properties]
+implementer_id = '0x41'
+part_number = '0xd49'
+max_lcores = 36
+max_numa_nodes = 1
+numa = false
--
2.8.4
* [dpdk-dev] [PATCH 02/52] common/cnxk: add build infrastructure and HW definition
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 01/52] config/arm: add support for Marvell CN10K Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 03/52] common/cnxk: add model init and IO handling API Nithin Dabilpuram
` (53 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
From: Jerin Jacob <jerinj@marvell.com>
Add meson build infrastructure along with the HW definition
header files.
This patch also adds arm cross-compile configs for the
CN9K and CN10K series of Marvell SoCs.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/common/cnxk/hw/nix.h | 2187 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/hw/npa.h | 376 +++++++
drivers/common/cnxk/hw/npc.h | 525 +++++++++
drivers/common/cnxk/hw/rvu.h | 221 ++++
drivers/common/cnxk/hw/sdp.h | 182 +++
drivers/common/cnxk/hw/sso.h | 233 ++++
drivers/common/cnxk/hw/ssow.h | 70 ++
drivers/common/cnxk/hw/tim.h | 49 +
drivers/common/cnxk/meson.build | 15 +
drivers/common/cnxk/roc_api.h | 69 ++
drivers/common/cnxk/roc_bitfield.h | 15 +
drivers/common/cnxk/roc_bits.h | 32 +
drivers/common/cnxk/roc_platform.c | 5 +
drivers/common/cnxk/roc_platform.h | 146 +++
drivers/common/cnxk/version.map | 4 +
drivers/meson.build | 1 +
16 files changed, 4130 insertions(+)
create mode 100644 drivers/common/cnxk/hw/nix.h
create mode 100644 drivers/common/cnxk/hw/npa.h
create mode 100644 drivers/common/cnxk/hw/npc.h
create mode 100644 drivers/common/cnxk/hw/rvu.h
create mode 100644 drivers/common/cnxk/hw/sdp.h
create mode 100644 drivers/common/cnxk/hw/sso.h
create mode 100644 drivers/common/cnxk/hw/ssow.h
create mode 100644 drivers/common/cnxk/hw/tim.h
create mode 100644 drivers/common/cnxk/meson.build
create mode 100644 drivers/common/cnxk/roc_api.h
create mode 100644 drivers/common/cnxk/roc_bitfield.h
create mode 100644 drivers/common/cnxk/roc_bits.h
create mode 100644 drivers/common/cnxk/roc_platform.c
create mode 100644 drivers/common/cnxk/roc_platform.h
create mode 100644 drivers/common/cnxk/version.map
diff --git a/drivers/common/cnxk/hw/nix.h b/drivers/common/cnxk/hw/nix.h
new file mode 100644
index 0000000..c3ad9f5
--- /dev/null
+++ b/drivers/common/cnxk/hw/nix.h
@@ -0,0 +1,2187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __NIX_HW_H__
+#define __NIX_HW_H__
+
+/* Register offsets */
+
+#define NIX_AF_CFG (0x0ull)
+#define NIX_AF_STATUS (0x10ull)
+#define NIX_AF_NDC_CFG (0x18ull)
+#define NIX_AF_CONST (0x20ull)
+#define NIX_AF_CONST1 (0x28ull)
+#define NIX_AF_CONST2 (0x30ull)
+#define NIX_AF_CONST3 (0x38ull)
+#define NIX_AF_SQ_CONST (0x40ull)
+#define NIX_AF_CQ_CONST (0x48ull)
+#define NIX_AF_RQ_CONST (0x50ull)
+#define NIX_AF_PL_CONST (0x58ull) /* [CN10K, .) */
+#define NIX_AF_PSE_CONST (0x60ull)
+#define NIX_AF_TL1_CONST (0x70ull)
+#define NIX_AF_TL2_CONST (0x78ull)
+#define NIX_AF_TL3_CONST (0x80ull)
+#define NIX_AF_TL4_CONST (0x88ull)
+#define NIX_AF_MDQ_CONST (0x90ull)
+#define NIX_AF_MC_MIRROR_CONST (0x98ull)
+#define NIX_AF_LSO_CFG (0xa8ull)
+#define NIX_AF_BLK_RST (0xb0ull)
+#define NIX_AF_TX_TSTMP_CFG (0xc0ull)
+#define NIX_AF_PL_TS (0xc8ull) /* [CN10K, .) */
+#define NIX_AF_RX_CFG (0xd0ull)
+#define NIX_AF_AVG_DELAY (0xe0ull)
+#define NIX_AF_CINT_DELAY (0xf0ull)
+#define NIX_AF_VWQE_TIMER (0xf8ull) /* [CN10K, .) */
+#define NIX_AF_RX_MCAST_BASE (0x100ull)
+#define NIX_AF_RX_MCAST_CFG (0x110ull)
+#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull)
+#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull)
+#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull)
+#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull)
+#define NIX_AF_LF_RST (0x150ull)
+#define NIX_AF_GEN_INT (0x160ull)
+#define NIX_AF_GEN_INT_W1S (0x168ull)
+#define NIX_AF_GEN_INT_ENA_W1S (0x170ull)
+#define NIX_AF_GEN_INT_ENA_W1C (0x178ull)
+#define NIX_AF_ERR_INT (0x180ull)
+#define NIX_AF_ERR_INT_W1S (0x188ull)
+#define NIX_AF_ERR_INT_ENA_W1S (0x190ull)
+#define NIX_AF_ERR_INT_ENA_W1C (0x198ull)
+#define NIX_AF_RAS (0x1a0ull)
+#define NIX_AF_RAS_W1S (0x1a8ull)
+#define NIX_AF_RAS_ENA_W1S (0x1b0ull)
+#define NIX_AF_RAS_ENA_W1C (0x1b8ull)
+#define NIX_AF_RVU_INT (0x1c0ull)
+#define NIX_AF_RVU_INT_W1S (0x1c8ull)
+#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull)
+#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull)
+#define NIX_AF_TCP_TIMER (0x1e0ull)
+/* [CN10K, .) */
+#define NIX_AF_RX_DEF_ETX(a) (0x1f0ull | (uint64_t)(a) << 3)
+#define NIX_AF_RX_DEF_OL2 (0x200ull)
+#define NIX_AF_RX_DEF_GEN0_COLOR (0x208ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP4 (0x210ull)
+#define NIX_AF_RX_DEF_GEN1_COLOR (0x218ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IIP4 (0x220ull)
+#define NIX_AF_RX_DEF_VLAN0_PCP_DEI (0x228ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP6 (0x230ull)
+#define NIX_AF_RX_DEF_VLAN1_PCP_DEI (0x238ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IIP6 (0x240ull)
+#define NIX_AF_RX_DEF_OTCP (0x250ull)
+#define NIX_AF_RX_DEF_ITCP (0x260ull)
+#define NIX_AF_RX_DEF_OUDP (0x270ull)
+#define NIX_AF_RX_DEF_IUDP (0x280ull)
+#define NIX_AF_RX_DEF_OSCTP (0x290ull)
+#define NIX_AF_RX_DEF_CST_APAD_0 (0x298ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_ISCTP (0x2a0ull)
+#define NIX_AF_RX_DEF_CST_APAD_1 (0x2a8ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3)
+#define NIX_AF_RX_DEF_IIP4_DSCP (0x2e0ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP4_DSCP (0x2e8ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_IIP6_DSCP (0x2f0ull) /* [CN10K, .) */
+#define NIX_AF_RX_DEF_OIP6_DSCP (0x2f8ull) /* [CN10K, .) */
+#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull)
+#define NIX_AF_RX_IPSEC_VWQE_GEN_CFG (0x310ull) /* [CN10K, .) */
+#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3)
+#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3)
+#define NIX_AF_NDC_RX_SYNC (0x3e0ull)
+#define NIX_AF_NDC_TX_SYNC (0x3f0ull)
+#define NIX_AF_AQ_CFG (0x400ull)
+#define NIX_AF_AQ_BASE (0x410ull)
+#define NIX_AF_AQ_STATUS (0x420ull)
+#define NIX_AF_AQ_DOOR (0x430ull)
+#define NIX_AF_AQ_DONE_WAIT (0x440ull)
+#define NIX_AF_AQ_DONE (0x450ull)
+#define NIX_AF_AQ_DONE_ACK (0x460ull)
+#define NIX_AF_AQ_DONE_TIMER (0x470ull)
+#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull)
+#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull)
+#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16)
+#define NIX_AF_RX_SW_SYNC (0x550ull)
+#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16)
+#define NIX_AF_SEB_CFG (0x5f0ull) /* [CN10K, .) */
+#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull) /* [CN9K, CN10K) */
+#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull)
+#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull)
+#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull)
+#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull)
+#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3)
+#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3)
+#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16)
+#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16)
+#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull)
+#define NIX_AF_SQM_SCLK_CNT (0x780ull) /* [CN10K, .) */
+#define NIX_AF_DWRR_SDP_MTU (0x790ull) /* [CN10K, .) */
+#define NIX_AF_DWRR_RPM_MTU (0x7a0ull) /* [CN10K, .) */
+#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull)
+#define NIX_AF_PSE_SHAPER_CFG (0x810ull)
+#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull)
+#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18)
+#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TX_LINKX_NORM_CDT_ADJ(a) (0xa20ull | (uint64_t)(a) << 16)
+#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16)
+#define NIX_AF_SDP_LINK_CREDIT (0xa40ull)
+#define NIX_AF_SDP_LINK_CDT_ADJ (0xa50ull) /* [CN10K, .) */
+/* [CN9K, CN10K) */
+#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3)
+#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3)
+#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1_TW_ARB_CTL_DEBUG (0xbc0ull) /* [CN10K, .) */
+#define NIX_AF_TL1_TW_ARB_REQ_DEBUG (0xbc8ull) /* [CN10K, .) */
+#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL1X_SHAPE_STATE_CIR(a) (0xc50ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL2X_SHAPE_STATE_CIR(a) (0xcd0ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQ_MD_COUNT (0xda0ull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_AF_MDQX_OUT_MD_COUNT(a) (0xdb0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2_TW_ARB_CTL_DEBUG (0xdc0ull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_AF_TL2_TWX_ARB_REQ_DEBUG0(a) (0xdc8ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL2_TWX_ARB_REQ_DEBUG1(a) (0xdd0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL2X_SHAPE_STATE_PIR(a) (0xe50ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL3X_SHAPE_STATE_CIR(a) (0xed0ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3_TW_ARB_CTL_DEBUG (0xfc0ull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_AF_TL3_TWX_ARB_REQ_DEBUG0(a) (0xfc8ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL3_TWX_ARB_REQ_DEBUG1(a) (0xfd0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_SCHEDULE(a) (0x1000ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_SHAPE(a) (0x1010ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_CIR(a) (0x1020ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_PIR(a) (0x1030ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_SCHED_STATE(a) (0x1040ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL3X_SHAPE_STATE(a) (0x1050ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL3X_SHAPE_STATE_PIR(a) (0x1050ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_SW_XOFF(a) (0x1070ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_TOPOLOGY(a) (0x1080ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_PARENT(a) (0x1088ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_MD_DEBUG0(a) (0x10c0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3X_MD_DEBUG1(a) (0x10c8ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL3X_MD_DEBUG2(a) (0x10d0ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL4X_SHAPE_STATE_CIR(a) (0x10d0ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL3X_MD_DEBUG3(a) (0x10d8ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4_TW_ARB_CTL_DEBUG (0x11c0ull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_AF_TL4_TWX_ARB_REQ_DEBUG0(a) (0x11c8ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_TL4_TWX_ARB_REQ_DEBUG1(a) (0x11d0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_SCHEDULE(a) (0x1200ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_SHAPE(a) (0x1210ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_CIR(a) (0x1220ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_PIR(a) (0x1230ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_SCHED_STATE(a) (0x1240ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_SHAPE_STATE(a) (0x1250ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_SW_XOFF(a) (0x1270ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_TOPOLOGY(a) (0x1280ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_PARENT(a) (0x1288ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_MD_DEBUG0(a) (0x12c0ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL4X_MD_DEBUG1(a) (0x12c8ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL4X_MD_DEBUG2(a) (0x12d0ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_MDQX_SHAPE_STATE_CIR(a) (0x12d0ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL4X_MD_DEBUG3(a) (0x12d8ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQ_TW_ARB_CTL_DEBUG (0x13c0ull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_AF_MDQ_TWX_ARB_REQ_DEBUG0(a) (0x13c8ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_MDQ_TWX_ARB_REQ_DEBUG1(a) (0x13d0ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_SCHEDULE(a) (0x1400ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_SHAPE(a) (0x1410ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_CIR(a) (0x1420ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_PIR(a) (0x1430ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_SCHED_STATE(a) (0x1440ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_MDQX_SHAPE_STATE(a) (0x1450ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_MDQX_SHAPE_STATE_PIR(a) (0x1450ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_SW_XOFF(a) (0x1470ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_PARENT(a) (0x1480ull | (uint64_t)(a) << 16)
+#define NIX_AF_MDQX_MD_DEBUG(a) (0x14c0ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_MDQX_IN_MD_COUNT(a) (0x14e0ull | (uint64_t)(a) << 16)
+/* [CN9K, CN10K) */
+#define NIX_AF_TL3_TL2X_CFG(a) (0x1600ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3_TL2X_BP_STATUS(a) (0x1610ull | (uint64_t)(a) << 16)
+#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \
+ (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
+#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \
+ (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3)
+#define NIX_AF_TX_MCASTX(a) (0x1900ull | (uint64_t)(a) << 15)
+#define NIX_AF_TX_VTAG_DEFX_CTL(a) (0x1a00ull | (uint64_t)(a) << 16)
+#define NIX_AF_TX_VTAG_DEFX_DATA(a) (0x1a10ull | (uint64_t)(a) << 16)
+#define NIX_AF_RX_BPIDX_STATUS(a) (0x1a20ull | (uint64_t)(a) << 17)
+#define NIX_AF_RX_CHANX_CFG(a) (0x1a30ull | (uint64_t)(a) << 15)
+#define NIX_AF_CINT_TIMERX(a) (0x1a40ull | (uint64_t)(a) << 18)
+#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \
+ (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
+#define NIX_AF_LFX_CFG(a) (0x4000ull | (uint64_t)(a) << 17)
+/* [CN10K, .) */
+#define NIX_AF_LINKX_CFG(a) (0x4010ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_SQS_CFG(a) (0x4020ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_TX_CFG2(a) (0x4028ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_SQS_BASE(a) (0x4030ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RQS_CFG(a) (0x4040ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RQS_BASE(a) (0x4050ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_CQS_CFG(a) (0x4060ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_CQS_BASE(a) (0x4070ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_TX_CFG(a) (0x4080ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_TX_PARSE_CFG(a) (0x4090ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_CFG(a) (0x40a0ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RSS_CFG(a) (0x40c0ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RSS_BASE(a) (0x40d0ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_CINTS_CFG(a) (0x4120ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_CINTS_BASE(a) (0x4130ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_CFG0(a) (0x4140ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_CFG1(a) (0x4148ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) (0x4150ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) (0x4158ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) (0x4170ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_TX_STATUS(a) (0x4180ull | (uint64_t)(a) << 17)
+#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \
+ (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
+#define NIX_AF_LFX_LOCKX(a, b) \
+ (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
+#define NIX_AF_LFX_TX_STATX(a, b) \
+ (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
+#define NIX_AF_LFX_RX_STATX(a, b) \
+ (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
+#define NIX_AF_LFX_RSS_GRPX(a, b) \
+ (0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
+#define NIX_AF_RX_NPC_MC_RCV (0x4700ull)
+#define NIX_AF_RX_NPC_MC_DROP (0x4710ull)
+#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull)
+#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull)
+/* [CN10K, .) */
+#define NIX_AF_LFX_VWQE_NORM_COMPL(a) (0x4740ull | (uint64_t)(a) << 17)
+/* [CN10K, .) */
+#define NIX_AF_LFX_VWQE_RLS_TIMEOUT(a) (0x4750ull | (uint64_t)(a) << 17)
+/* [CN10K, .) */
+#define NIX_AF_LFX_VWQE_HASH_FULL(a) (0x4760ull | (uint64_t)(a) << 17)
+/* [CN10K, .) */
+#define NIX_AF_LFX_VWQE_SA_FULL(a) (0x4770ull | (uint64_t)(a) << 17)
+#define NIX_AF_VWQE_HASH_FUNC_MASK (0x47a0ull) /* [CN10K, .) */
+#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) (0x4800ull | (uint64_t)(a) << 16)
+/* [CN10K, .) */
+#define NIX_AF_RX_LINKX_WRR_OUT_CFG(a) (0x4a00ull | (uint64_t)(a) << 16)
+#define NIX_PRIV_AF_INT_CFG (0x8000000ull)
+#define NIX_PRIV_LFX_CFG(a) (0x8000010ull | (uint64_t)(a) << 8)
+#define NIX_PRIV_LFX_INT_CFG(a) (0x8000020ull | (uint64_t)(a) << 8)
+#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull)
+
+#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3)
+#define NIX_LF_CFG (0x100ull)
+#define NIX_LF_GINT (0x200ull)
+#define NIX_LF_GINT_W1S (0x208ull)
+#define NIX_LF_GINT_ENA_W1C (0x210ull)
+#define NIX_LF_GINT_ENA_W1S (0x218ull)
+#define NIX_LF_ERR_INT (0x220ull)
+#define NIX_LF_ERR_INT_W1S (0x228ull)
+#define NIX_LF_ERR_INT_ENA_W1C (0x230ull)
+#define NIX_LF_ERR_INT_ENA_W1S (0x238ull)
+#define NIX_LF_RAS (0x240ull)
+#define NIX_LF_RAS_W1S (0x248ull)
+#define NIX_LF_RAS_ENA_W1C (0x250ull)
+#define NIX_LF_RAS_ENA_W1S (0x258ull)
+#define NIX_LF_SQ_OP_ERR_DBG (0x260ull)
+#define NIX_LF_MNQ_ERR_DBG (0x270ull)
+#define NIX_LF_SEND_ERR_DBG (0x280ull)
+#define NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3)
+#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3)
+#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3)
+#define NIX_LF_RQ_OP_INT (0x900ull)
+#define NIX_LF_RQ_OP_OCTS (0x910ull)
+#define NIX_LF_RQ_OP_PKTS (0x920ull)
+#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull)
+#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull)
+#define NIX_LF_RQ_OP_RE_PKTS (0x950ull)
+#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull)
+#define NIX_LF_OP_VWQE_FLUSH (0x9a0ull) /* [CN10K, .) */
+#define NIX_LF_PL_OP_BAND_PROF (0x9c0ull) /* [CN10K, .) */
+#define NIX_LF_SQ_OP_INT (0xa00ull)
+#define NIX_LF_SQ_OP_OCTS (0xa10ull)
+#define NIX_LF_SQ_OP_PKTS (0xa20ull)
+#define NIX_LF_SQ_OP_STATUS (0xa30ull)
+#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull)
+#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull)
+#define NIX_LF_CQ_OP_INT (0xb00ull)
+#define NIX_LF_CQ_OP_DOOR (0xb30ull)
+#define NIX_LF_CQ_OP_STATUS (0xb40ull)
+#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12)
+#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12)
+#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12)
+#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12)
+#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12)
+#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12)
+#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12)
+#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12)
+#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12)
+#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12)
+/* [CN10K, .) */
+#define NIX_LF_RX_GEN_COLOR_CONVX(a) (0x4740ull | (uint64_t)(a) << 3)
+#define NIX_LF_RX_VLAN0_COLOR_CONV (0x4760ull) /* [CN10K, .) */
+#define NIX_LF_RX_VLAN1_COLOR_CONV (0x4768ull) /* [CN10K, .) */
+#define NIX_LF_RX_IIP_COLOR_CONV_LO (0x4770ull) /* [CN10K, .) */
+#define NIX_LF_RX_IIP_COLOR_CONV_HI (0x4778ull) /* [CN10K, .) */
+#define NIX_LF_RX_OIP_COLOR_CONV_LO (0x4780ull) /* [CN10K, .) */
+#define NIX_LF_RX_OIP_COLOR_CONV_HI (0x4788ull) /* [CN10K, .) */
+
+/* Enum offsets */
+
+#define NIX_STAT_LF_TX_TX_UCAST (0x0ull)
+#define NIX_STAT_LF_TX_TX_BCAST (0x1ull)
+#define NIX_STAT_LF_TX_TX_MCAST (0x2ull)
+#define NIX_STAT_LF_TX_TX_DROP (0x3ull)
+#define NIX_STAT_LF_TX_TX_OCTS (0x4ull)
+
+#define NIX_STAT_LF_RX_RX_OCTS (0x0ull)
+#define NIX_STAT_LF_RX_RX_UCAST (0x1ull)
+#define NIX_STAT_LF_RX_RX_BCAST (0x2ull)
+#define NIX_STAT_LF_RX_RX_MCAST (0x3ull)
+#define NIX_STAT_LF_RX_RX_DROP (0x4ull)
+#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull)
+#define NIX_STAT_LF_RX_RX_FCS (0x6ull)
+#define NIX_STAT_LF_RX_RX_ERR (0x7ull)
+#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull)
+#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull)
+#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull)
+#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull)
+
+#define NIX_STAT_LF_RX_RX_GC_OCTS_PASSED (0xcull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_GC_PKTS_PASSED (0xdull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_YC_OCTS_PASSED (0xeull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_YC_PKTS_PASSED (0xfull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_RC_OCTS_PASSED (0x10ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_RC_PKTS_PASSED (0x11ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_GC_OCTS_DROP (0x12ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_GC_PKTS_DROP (0x13ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_YC_OCTS_DROP (0x14ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_YC_PKTS_DROP (0x15ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_RC_OCTS_DROP (0x16ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_RC_PKTS_DROP (0x17ull) /* [CN10K, .) */
+#define NIX_STAT_LF_RX_RX_CPT_DROP_PKTS (0x18ull) /* [CN10K, .) */
+
+#define CGX_RX_PKT_CNT (0x0ull) /* [CN9K, CN10K) */
+#define CGX_RX_OCT_CNT (0x1ull) /* [CN9K, CN10K) */
+#define CGX_RX_PAUSE_PKT_CNT (0x2ull) /* [CN9K, CN10K) */
+#define CGX_RX_PAUSE_OCT_CNT (0x3ull) /* [CN9K, CN10K) */
+#define CGX_RX_DMAC_FILT_PKT_CNT (0x4ull) /* [CN9K, CN10K) */
+#define CGX_RX_DMAC_FILT_OCT_CNT (0x5ull) /* [CN9K, CN10K) */
+#define CGX_RX_FIFO_DROP_PKT_CNT (0x6ull) /* [CN9K, CN10K) */
+#define CGX_RX_FIFO_DROP_OCT_CNT (0x7ull) /* [CN9K, CN10K) */
+#define CGX_RX_ERR_CNT (0x8ull) /* [CN9K, CN10K) */
+
+#define CGX_TX_COLLISION_DROP (0x0ull) /* [CN9K, CN10K) */
+#define CGX_TX_FRAME_DEFER_CNT (0x1ull) /* [CN9K, CN10K) */
+#define CGX_TX_MULTIPLE_COLLISION (0x2ull) /* [CN9K, CN10K) */
+#define CGX_TX_SINGLE_COLLISION (0x3ull) /* [CN9K, CN10K) */
+#define CGX_TX_OCT_CNT (0x4ull) /* [CN9K, CN10K) */
+#define CGX_TX_PKT_CNT (0x5ull) /* [CN9K, CN10K) */
+#define CGX_TX_1_63_PKT_CNT (0x6ull) /* [CN9K, CN10K) */
+#define CGX_TX_64_PKT_CNT (0x7ull) /* [CN9K, CN10K) */
+#define CGX_TX_65_127_PKT_CNT (0x8ull) /* [CN9K, CN10K) */
+#define CGX_TX_128_255_PKT_CNT (0x9ull) /* [CN9K, CN10K) */
+#define CGX_TX_256_511_PKT_CNT (0xaull) /* [CN9K, CN10K) */
+#define CGX_TX_512_1023_PKT_CNT (0xbull) /* [CN9K, CN10K) */
+#define CGX_TX_1024_1518_PKT_CNT (0xcull) /* [CN9K, CN10K) */
+#define CGX_TX_1519_MAX_PKT_CNT (0xdull) /* [CN9K, CN10K) */
+#define CGX_TX_BCAST_PKTS (0xeull) /* [CN9K, CN10K) */
+#define CGX_TX_MCAST_PKTS (0xfull) /* [CN9K, CN10K) */
+#define CGX_TX_UFLOW_PKTS (0x10ull) /* [CN9K, CN10K) */
+#define CGX_TX_PAUSE_PKTS (0x11ull) /* [CN9K, CN10K) */
+
+#define RPM_MTI_STAT_RX_OCT_CNT (0x0ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_OCT_RECV_OK (0x1ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_ALIG_ERR (0x2ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CTRL_FRM_RECV (0x3ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_FRM_LONG (0x4ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_LEN_ERR (0x5ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_FRM_RECV (0x6ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_FRM_SEQ_ERR (0x7ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_VLAN_OK (0x8ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_IN_ERR (0x9ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_IN_UCAST_PKT (0xaull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_IN_MCAST_PKT (0xbull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_IN_BCAST_PKT (0xcull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_DRP_EVENTS (0xdull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_PKT (0xeull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_UNDER_SIZE (0xfull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_1_64_PKT_CNT (0x10ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_65_127_PKT_CNT (0x11ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_128_255_PKT_CNT (0x12ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_256_511_PKT_CNT (0x13ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_512_1023_PKT_CNT (0x14ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_1024_1518_PKT_CNT (0x15ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_1519_MAX_PKT_CNT (0x16ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_OVER_SIZE (0x17ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_JABBER (0x18ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_ETH_FRAGS (0x19ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_0 (0x1aull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_1 (0x1bull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_2 (0x1cull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_3 (0x1dull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_4 (0x1eull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_5 (0x1full) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_6 (0x20ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_7 (0x21ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_8 (0x22ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_9 (0x23ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_10 (0x24ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_11 (0x25ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_12 (0x26ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_13 (0x27ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_14 (0x28ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_CBFC_CLASS_15 (0x29ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_RX_MAC_CONTROL (0x2aull) /* [CN10K, .) */
+
+#define RPM_MTI_STAT_TX_OCT_CNT (0x0ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_OCT_TX_OK (0x1ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_PAUSE_MAC_CTRL (0x2ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_FRAMES_OK (0x3ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_VLAN_OK (0x4ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_OUT_ERR (0x5ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_UCAST_PKT_CNT (0x6ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_MCAST_PKT_CNT (0x7ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_BCAST_PKT_CNT (0x8ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_1_64_PKT_CNT (0x9ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_65_127_PKT_CNT (0xaull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_128_255_PKT_CNT (0xbull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_256_511_PKT_CNT (0xcull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_512_1023_PKT_CNT (0xdull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_1024_1518_PKT_CNT (0xeull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_1519_MAX_PKT_CNT (0xfull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_0 (0x10ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_1 (0x11ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_2 (0x12ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_3 (0x13ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_4 (0x14ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_5 (0x15ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_6 (0x16ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_7 (0x17ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_8 (0x18ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_9 (0x19ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_10 (0x1aull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_11 (0x1bull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_12 (0x1cull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_13 (0x1dull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_14 (0x1eull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_CBFC_CLASS_15 (0x1full) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_MAC_CONTROL_FRAMES (0x20ull) /* [CN10K, .) */
+#define RPM_MTI_STAT_TX_PKT_CNT (0x21ull) /* [CN10K, .) */
+
+#define NIX_SQOPERR_SQ_OOR (0x0ull)
+#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull)
+#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull)
+#define NIX_SQOPERR_SQ_DISABLED (0x3ull)
+#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull)
+#define NIX_SQOPERR_SQE_OFLOW (0x5ull)
+#define NIX_SQOPERR_SQB_NULL (0x6ull)
+#define NIX_SQOPERR_SQB_FAULT (0x7ull)
+#define NIX_SQOPERR_SQE_SIZEM1_ZERO (0x8ull) /* [CN10K, .) */
+
+#define NIX_SQINT_LMT_ERR (0x0ull)
+#define NIX_SQINT_MNQ_ERR (0x1ull)
+#define NIX_SQINT_SEND_ERR (0x2ull)
+#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull)
+
+#define NIX_SEND_STATUS_GOOD (0x0ull)
+#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull)
+#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull)
+#define NIX_SEND_STATUS_SQB_FAULT (0x3ull)
+#define NIX_SEND_STATUS_SQB_POISON (0x4ull)
+#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull)
+#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull)
+#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull)
+#define NIX_SEND_STATUS_JUMP_POISON (0x8ull)
+#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull)
+#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull)
+#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull)
+#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull)
+#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull)
+#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull)
+#define NIX_SEND_STATUS_DATA_FAULT (0x16ull)
+#define NIX_SEND_STATUS_DATA_POISON (0x17ull)
+#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull)
+#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull)
+#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull)
+#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull)
+#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull)
+#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull)
+#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull)
+#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull)
+#define NIX_SEND_STATUS_SEND_STATS_ERR (0x28ull)
+
+#define NIX_SENDSTATSALG_NOP (0x0ull)
+#define NIX_SENDSTATSALG_ADD_PKT_CNT (0x1ull)
+#define NIX_SENDSTATSALG_ADD_BYTE_CNT (0x2ull)
+#define NIX_SENDSTATSALG_ADD_PKT_BYTE_CNT (0x3ull)
+#define NIX_SENDSTATSALG_UPDATE_PKT_CNT_ON_DROP (0x4ull)
+#define NIX_SENDSTATSALG_UPDATE_BYTE_CNT_ON_DROP (0x5ull)
+#define NIX_SENDSTATSALG_UPDATE_PKT_BYTE_CNT_ON_DROP (0x6ull)
+
+#define NIX_SENDMEMDSZ_B64 (0x0ull)
+#define NIX_SENDMEMDSZ_B32 (0x1ull)
+#define NIX_SENDMEMDSZ_B16 (0x2ull)
+#define NIX_SENDMEMDSZ_B8 (0x3ull)
+
+#define NIX_SENDMEMALG_SET (0x0ull)
+#define NIX_SENDMEMALG_SETTSTMP (0x1ull)
+#define NIX_SENDMEMALG_SETRSLT (0x2ull)
+#define NIX_SENDMEMALG_ADD (0x8ull)
+#define NIX_SENDMEMALG_SUB (0x9ull)
+#define NIX_SENDMEMALG_ADDLEN (0xaull)
+#define NIX_SENDMEMALG_SUBLEN (0xbull)
+#define NIX_SENDMEMALG_ADDMBUF (0xcull)
+#define NIX_SENDMEMALG_SUBMBUF (0xdull)
+
+#define NIX_SUBDC_NOP (0x0ull)
+#define NIX_SUBDC_EXT (0x1ull)
+#define NIX_SUBDC_CRC (0x2ull)
+#define NIX_SUBDC_IMM (0x3ull)
+#define NIX_SUBDC_SG (0x4ull)
+#define NIX_SUBDC_MEM (0x5ull)
+#define NIX_SUBDC_JUMP (0x6ull)
+#define NIX_SUBDC_WORK (0x7ull)
+#define NIX_SUBDC_SG2 (0x8ull) /* [CN10K, .) */
+#define NIX_SUBDC_AGE_AND_STATS (0x9ull) /* [CN10K, .) */
+#define NIX_SUBDC_SOD (0xfull)
+
+#define NIX_STYPE_STF (0x0ull)
+#define NIX_STYPE_STT (0x1ull)
+#define NIX_STYPE_STP (0x2ull)
+
+#define NIX_RX_ACTIONOP_DROP (0x0ull)
+#define NIX_RX_ACTIONOP_UCAST (0x1ull)
+#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull)
+#define NIX_RX_ACTIONOP_MCAST (0x3ull)
+#define NIX_RX_ACTIONOP_RSS (0x4ull)
+#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull)
+#define NIX_RX_ACTIONOP_MIRROR (0x6ull)
+
+#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull)
+#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull)
+#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull)
+#define NIX_TX_VTAGACTION_VTAG0_RELPTR (sizeof(struct nix_inst_hdr_s) + 2 * 6)
+#define NIX_TX_VTAGACTION_VTAG1_RELPTR \
+ (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4)
+#define NIX_RQINT_DROP (0x0ull)
+#define NIX_RQINT_RED (0x1ull)
+#define NIX_RQINT_R2 (0x2ull)
+#define NIX_RQINT_R3 (0x3ull)
+#define NIX_RQINT_R4 (0x4ull)
+#define NIX_RQINT_R5 (0x5ull)
+#define NIX_RQINT_R6 (0x6ull)
+#define NIX_RQINT_R7 (0x7ull)
+
+#define NIX_MAXSQESZ_W16 (0x0ull)
+#define NIX_MAXSQESZ_W8 (0x1ull)
+
+#define NIX_LSOALG_NOP (0x0ull)
+#define NIX_LSOALG_ADD_SEGNUM (0x1ull)
+#define NIX_LSOALG_ADD_PAYLEN (0x2ull)
+#define NIX_LSOALG_ADD_OFFSET (0x3ull)
+#define NIX_LSOALG_TCP_FLAGS (0x4ull)
+
+#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull)
+#define NIX_MNQERR_SQ_CTX_POISON (0x1ull)
+#define NIX_MNQERR_SQB_FAULT (0x2ull)
+#define NIX_MNQERR_SQB_POISON (0x3ull)
+#define NIX_MNQERR_TOTAL_ERR (0x4ull)
+#define NIX_MNQERR_LSO_ERR (0x5ull)
+#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull)
+#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull)
+#define NIX_MNQERR_MAXLEN_ERR (0x8ull)
+#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull)
+
+#define NIX_MDTYPE_RSVD (0x0ull)
+#define NIX_MDTYPE_FLUSH (0x1ull)
+#define NIX_MDTYPE_PMD (0x2ull)
+
+#define NIX_NDC_TX_PORT_LMT (0x0ull)
+#define NIX_NDC_TX_PORT_ENQ (0x1ull)
+#define NIX_NDC_TX_PORT_MNQ (0x2ull)
+#define NIX_NDC_TX_PORT_DEQ (0x3ull)
+#define NIX_NDC_TX_PORT_DMA (0x4ull)
+#define NIX_NDC_TX_PORT_XQE (0x5ull)
+
+#define NIX_NDC_RX_PORT_AQ (0x0ull)
+#define NIX_NDC_RX_PORT_CQ (0x1ull)
+#define NIX_NDC_RX_PORT_CINT (0x2ull)
+#define NIX_NDC_RX_PORT_MC (0x3ull)
+#define NIX_NDC_RX_PORT_PKT (0x4ull)
+#define NIX_NDC_RX_PORT_RQ (0x5ull)
+
+#define NIX_RE_OPCODE_RE_NONE (0x0ull)
+#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull)
+#define NIX_RE_OPCODE_RE_JABBER (0x2ull)
+#define NIX_RE_OPCODE_RE_FCS (0x7ull)
+#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull)
+#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull)
+#define NIX_RE_OPCODE_RE_RX_CTL (0xbull)
+#define NIX_RE_OPCODE_RE_SKIP (0xcull)
+#define NIX_RE_OPCODE_RE_DMAPKT (0xfull)
+#define NIX_RE_OPCODE_UNDERSIZE (0x10ull)
+#define NIX_RE_OPCODE_OVERSIZE (0x11ull)
+#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull)
+
+#define NIX_REDALG_STD (0x0ull)
+#define NIX_REDALG_SEND (0x1ull)
+#define NIX_REDALG_STALL (0x2ull)
+#define NIX_REDALG_DISCARD (0x3ull)
+
+#define NIX_RX_BAND_PROF_ACTIONRESULT_PASS (0x0ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_ACTIONRESULT_DROP (0x1ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_ACTIONRESULT_RED (0x2ull) /* [CN10K, .) */
+
+#define NIX_RX_BAND_PROF_LAYER_LEAF (0x0ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_LAYER_MIDDLE (0x1ull) /* [CN10K, .) */
+#define NIX_RX_BAND_PROF_LAYER_TOP (0x2ull) /* [CN10K, .) */
+
+#define NIX_RX_COLORRESULT_GREEN (0x0ull) /* [CN10K, .) */
+#define NIX_RX_COLORRESULT_YELLOW (0x1ull) /* [CN10K, .) */
+#define NIX_RX_COLORRESULT_RED (0x2ull) /* [CN10K, .) */
+
+#define NIX_RX_MCOP_RQ (0x0ull)
+#define NIX_RX_MCOP_RSS (0x1ull)
+
+#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull)
+#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull)
+#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull)
+#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull)
+#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull)
+#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull)
+#define NIX_RX_PERRCODE_MEMOUT (0x9ull)
+#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull)
+#define NIX_RX_PERRCODE_OL3_LEN (0x10ull)
+#define NIX_RX_PERRCODE_OL4_LEN (0x11ull)
+#define NIX_RX_PERRCODE_OL4_CHK (0x12ull)
+#define NIX_RX_PERRCODE_OL4_PORT (0x13ull)
+#define NIX_RX_PERRCODE_IL3_LEN (0x20ull)
+#define NIX_RX_PERRCODE_IL4_LEN (0x21ull)
+#define NIX_RX_PERRCODE_IL4_CHK (0x22ull)
+#define NIX_RX_PERRCODE_IL4_PORT (0x23ull)
+
+#define NIX_SA_ALG_NON_MS (0x0ull) /* [CN10K, .) */
+#define NIX_SA_ALG_MS_CISCO (0x1ull) /* [CN10K, .) */
+#define NIX_SA_ALG_MS_VIPTELA (0x2ull) /* [CN10K, .) */
+
+#define NIX_SENDCRCALG_CRC32 (0x0ull)
+#define NIX_SENDCRCALG_CRC32C (0x1ull)
+#define NIX_SENDCRCALG_ONES16 (0x2ull)
+
+#define NIX_SENDL3TYPE_NONE (0x0ull)
+#define NIX_SENDL3TYPE_IP4 (0x2ull)
+#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull)
+#define NIX_SENDL3TYPE_IP6 (0x4ull)
+
+#define NIX_SENDL4TYPE_NONE (0x0ull)
+#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull)
+#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull)
+#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull)
+
+#define NIX_SENDLDTYPE_LDD (0x0ull)
+#define NIX_SENDLDTYPE_LDT (0x1ull)
+#define NIX_SENDLDTYPE_LDWB (0x2ull)
+
+#define NIX_XQESZ_W64 (0x0ull)
+#define NIX_XQESZ_W16 (0x1ull)
+
+#define NIX_XQE_TYPE_INVALID (0x0ull)
+#define NIX_XQE_TYPE_RX (0x1ull)
+#define NIX_XQE_TYPE_RX_IPSECS (0x2ull)
+#define NIX_XQE_TYPE_RX_IPSECH (0x3ull)
+#define NIX_XQE_TYPE_RX_IPSECD (0x4ull)
+#define NIX_XQE_TYPE_RX_VWQE (0x5ull) /* [CN10K, .) */
+#define NIX_XQE_TYPE_RES_6 (0x6ull)
+#define NIX_XQE_TYPE_RES_7 (0x7ull)
+#define NIX_XQE_TYPE_SEND (0x8ull)
+#define NIX_XQE_TYPE_RES_9 (0x9ull)
+#define NIX_XQE_TYPE_RES_A (0xAull)
+#define NIX_XQE_TYPE_RES_B (0xBull)
+#define NIX_XQE_TYPE_RES_C (0xCull)
+#define NIX_XQE_TYPE_RES_D (0xDull)
+#define NIX_XQE_TYPE_RES_E (0xEull)
+#define NIX_XQE_TYPE_RES_F (0xFull)
+
+#define NIX_TX_VTAGOP_NOP (0x0ull)
+#define NIX_TX_VTAGOP_INSERT (0x1ull)
+#define NIX_TX_VTAGOP_REPLACE (0x2ull)
+
+#define NIX_VTAGSIZE_T4 (0x0ull)
+#define NIX_VTAGSIZE_T8 (0x1ull)
+
+#define NIX_TXLAYER_OL3 (0x0ull)
+#define NIX_TXLAYER_OL4 (0x1ull)
+#define NIX_TXLAYER_IL3 (0x2ull)
+#define NIX_TXLAYER_IL4 (0x3ull)
+
+#define NIX_TX_ACTIONOP_DROP (0x0ull)
+#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull)
+#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull)
+#define NIX_TX_ACTIONOP_MCAST (0x3ull)
+#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull)
+
+#define NIX_AQ_COMP_NOTDONE (0x0ull)
+#define NIX_AQ_COMP_GOOD (0x1ull)
+#define NIX_AQ_COMP_SWERR (0x2ull)
+#define NIX_AQ_COMP_CTX_POISON (0x3ull)
+#define NIX_AQ_COMP_CTX_FAULT (0x4ull)
+#define NIX_AQ_COMP_LOCKERR (0x5ull)
+#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull)
+
+#define NIX_AF_INT_VEC_RVU (0x0ull)
+#define NIX_AF_INT_VEC_GEN (0x1ull)
+#define NIX_AF_INT_VEC_AQ_DONE (0x2ull)
+#define NIX_AF_INT_VEC_AF_ERR (0x3ull)
+#define NIX_AF_INT_VEC_POISON (0x4ull)
+
+#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull)
+#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull)
+#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull)
+#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull)
+
+#define NIX_AQ_INSTOP_NOP (0x0ull)
+#define NIX_AQ_INSTOP_INIT (0x1ull)
+#define NIX_AQ_INSTOP_WRITE (0x2ull)
+#define NIX_AQ_INSTOP_READ (0x3ull)
+#define NIX_AQ_INSTOP_LOCK (0x4ull)
+#define NIX_AQ_INSTOP_UNLOCK (0x5ull)
+
+#define NIX_AQ_CTYPE_RQ (0x0ull)
+#define NIX_AQ_CTYPE_SQ (0x1ull)
+#define NIX_AQ_CTYPE_CQ (0x2ull)
+#define NIX_AQ_CTYPE_MCE (0x3ull)
+#define NIX_AQ_CTYPE_RSS (0x4ull)
+#define NIX_AQ_CTYPE_DYNO (0x5ull)
+#define NIX_AQ_CTYPE_BAND_PROF (0x6ull) /* [CN10K, .) */
+
+#define NIX_COLORRESULT_GREEN (0x0ull)
+#define NIX_COLORRESULT_YELLOW (0x1ull)
+#define NIX_COLORRESULT_RED_SEND (0x2ull)
+#define NIX_COLORRESULT_RED_DROP (0x3ull)
+
+#define NIX_CHAN_LBKX_CHX(a, b) \
+ (0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
+#define NIX_CHAN_CPT_CH_END (0x4ffull) /* [CN10K, .) */
+#define NIX_CHAN_CPT_CH_START (0x400ull) /* [CN10K, .) */
+#define NIX_CHAN_R4 (0x400ull) /* [CN9K, CN10K) */
+#define NIX_CHAN_R5 (0x500ull)
+#define NIX_CHAN_R6 (0x600ull)
+#define NIX_CHAN_SDP_CH_END (0x7ffull)
+#define NIX_CHAN_SDP_CH_START (0x700ull)
+/* [CN9K, CN10K) */
+#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
+ (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | (uint64_t)(c))
+/* [CN10K, .) */
+#define NIX_CHAN_RPMX_LMACX_CHX(a, b, c) \
+ (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | (uint64_t)(c))
+
+#define NIX_INTF_SDP (0x4ull)
+#define NIX_INTF_CGX0 (0x0ull) /* [CN9K, CN10K) */
+#define NIX_INTF_CGX1 (0x1ull) /* [CN9K, CN10K) */
+#define NIX_INTF_CGX2 (0x2ull) /* [CN9K, CN10K) */
+#define NIX_INTF_RPM0 (0x0ull) /* [CN10K, .) */
+#define NIX_INTF_RPM1 (0x1ull) /* [CN10K, .) */
+#define NIX_INTF_RPM2 (0x2ull) /* [CN10K, .) */
+#define NIX_INTF_LBK0 (0x3ull)
+#define NIX_INTF_CPT0 (0x5ull) /* [CN10K, .) */
+
+#define NIX_CQERRINT_DOOR_ERR (0x0ull)
+#define NIX_CQERRINT_WR_FULL (0x1ull)
+#define NIX_CQERRINT_CQE_FAULT (0x2ull)
+
+#define NIX_LINK_SDP (0xdull) /* [CN10K, .) */
+#define NIX_LINK_CPT (0xeull) /* [CN10K, .) */
+#define NIX_LINK_MC (0xfull) /* [CN10K, .) */
+/* [CN10K, .) */
+#define NIX_LINK_RPMX_LMACX(a, b) \
+ (0x00ull | ((uint64_t)(a) << 2) | (uint64_t)(b))
+#define NIX_LINK_LBK0 (0xcull)
+
+#define NIX_LF_INT_VEC_GINT (0x80ull)
+#define NIX_LF_INT_VEC_ERR_INT (0x81ull)
+#define NIX_LF_INT_VEC_POISON (0x82ull)
+#define NIX_LF_INT_VEC_QINT_END (0x3full)
+#define NIX_LF_INT_VEC_QINT_START (0x0ull)
+#define NIX_LF_INT_VEC_CINT_END (0x7full)
+#define NIX_LF_INT_VEC_CINT_START (0x40ull)
+
+#define NIX_INTF_RX (0x0ull)
+#define NIX_INTF_TX (0x1ull)
+
+/* Enum definitions */
+
+/* Structure definitions */
+
+/* NIX aging and send stats subdescriptor structure */
+struct nix_age_and_send_stats_s {
+ uint64_t threshold : 29;
+ uint64_t latency_drop : 1;
+ uint64_t aging : 1;
+ uint64_t wmem : 1;
+ uint64_t ooffset : 12;
+ uint64_t ioffset : 12;
+ uint64_t sel : 1;
+ uint64_t alg : 3;
+ uint64_t subdc : 4;
+ uint64_t addr : 64; /* W1 */
+};
+
+/* NIX admin queue instruction structure */
+struct nix_aq_inst_s {
+ uint64_t op : 4;
+ uint64_t ctype : 4;
+ uint64_t lf : 7;
+ uint64_t rsvd_23_15 : 9;
+ uint64_t cindex : 20;
+ uint64_t rsvd_62_44 : 19;
+ uint64_t doneint : 1;
+ uint64_t res_addr : 64; /* W1 */
+};
+
+/* NIX admin queue result structure */
+struct nix_aq_res_s {
+ uint64_t op : 4;
+ uint64_t ctype : 4;
+ uint64_t compcode : 8;
+ uint64_t doneint : 1;
+ uint64_t rsvd_63_17 : 47;
+ uint64_t rsvd_127_64 : 64; /* W1 */
+};
+
+/* NIX bandwidth profile structure */
+struct nix_band_prof_s {
+ uint64_t pc_mode : 2;
+ uint64_t icolor : 2;
+ uint64_t tnl_ena : 1;
+ uint64_t rsvd_7_5 : 3;
+ uint64_t peir_exponent : 5;
+ uint64_t rsvd_15_13 : 3;
+ uint64_t pebs_exponent : 5;
+ uint64_t rsvd_23_21 : 3;
+ uint64_t cir_exponent : 5;
+ uint64_t rsvd_31_29 : 3;
+ uint64_t cbs_exponent : 5;
+ uint64_t rsvd_39_37 : 3;
+ uint64_t peir_mantissa : 8;
+ uint64_t pebs_mantissa : 8;
+ uint64_t cir_mantissa : 8;
+ uint64_t cbs_mantissa : 8;
+ uint64_t lmode : 1;
+ uint64_t l_sellect : 3;
+ uint64_t rdiv : 4;
+ uint64_t adjust_exponent : 5;
+ uint64_t rsvd_86_85 : 2;
+ uint64_t adjust_mantissa : 9;
+ uint64_t gc_action : 2;
+ uint64_t yc_action : 2;
+ uint64_t rc_action : 2;
+ uint64_t meter_algo : 2;
+ uint64_t band_prof_id : 7;
+ uint64_t rsvd_118_111 : 8;
+ uint64_t hl_en : 1;
+ uint64_t rsvd_127_120 : 8;
+ uint64_t ts : 48;
+ uint64_t rsvd_191_176 : 16;
+ uint64_t pe_accum : 32;
+ uint64_t c_accum : 32;
+ uint64_t green_pkt_pass : 48;
+ uint64_t rsvd_319_304 : 16;
+ uint64_t yellow_pkt_pass : 48;
+ uint64_t rsvd_383_368 : 16;
+ uint64_t red_pkt_pass : 48;
+ uint64_t rsvd_447_432 : 16;
+ uint64_t green_octs_pass : 48;
+ uint64_t rsvd_511_496 : 16;
+ uint64_t yellow_octs_pass : 48;
+ uint64_t rsvd_575_560 : 16;
+ uint64_t red_octs_pass : 48;
+ uint64_t rsvd_639_624 : 16;
+ uint64_t green_pkt_drop : 48;
+ uint64_t rsvd_703_688 : 16;
+ uint64_t yellow_pkt_drop : 48;
+ uint64_t rsvd_767_752 : 16;
+ uint64_t red_pkt_drop : 48;
+ uint64_t rsvd_831_816 : 16;
+ uint64_t green_octs_drop : 48;
+ uint64_t rsvd_895_880 : 16;
+ uint64_t yellow_octs_drop : 48;
+ uint64_t rsvd_959_944 : 16;
+ uint64_t red_octs_drop : 48;
+ uint64_t rsvd_1023_1008 : 16;
+};
+
+/* NIX completion interrupt context hardware structure */
+struct nix_cint_hw_s {
+ uint64_t ecount : 32;
+ uint64_t qcount : 16;
+ uint64_t intr : 1;
+ uint64_t ena : 1;
+ uint64_t timer_idx : 8;
+ uint64_t rsvd_63_58 : 6;
+ uint64_t ecount_wait : 32;
+ uint64_t qcount_wait : 16;
+ uint64_t time_wait : 8;
+ uint64_t rsvd_127_120 : 8;
+};
+
+/* NIX completion queue entry header structure */
+struct nix_cqe_hdr_s {
+ uint64_t tag : 32;
+ uint64_t q : 20;
+ uint64_t rsvd_57_52 : 6;
+ uint64_t node : 2;
+ uint64_t cqe_type : 4;
+};
+
+/* NIX completion queue context structure */
+struct nix_cq_ctx_s {
+ uint64_t base : 64; /* W0 */
+ uint64_t rsvd_67_64 : 4;
+ uint64_t bp_ena : 1;
+ uint64_t rsvd_71_69 : 3;
+ uint64_t bpid : 9;
+ uint64_t rsvd_83_81 : 3;
+ uint64_t qint_idx : 7;
+ uint64_t cq_err : 1;
+ uint64_t cint_idx : 7;
+ uint64_t avg_con : 9;
+ uint64_t wrptr : 20;
+ uint64_t tail : 20;
+ uint64_t head : 20;
+ uint64_t avg_level : 8;
+ uint64_t update_time : 16;
+ uint64_t bp : 8;
+ uint64_t drop : 8;
+ uint64_t drop_ena : 1;
+ uint64_t ena : 1;
+ uint64_t rsvd_211_210 : 2;
+ uint64_t substream : 20;
+ uint64_t caching : 1;
+ uint64_t rsvd_235_233 : 3;
+ uint64_t qsize : 4;
+ uint64_t cq_err_int : 8;
+ uint64_t cq_err_int_ena : 8;
+};
+
+/* NIX instruction header structure */
+struct nix_inst_hdr_s {
+ uint64_t pf_func : 16;
+ uint64_t sq : 20;
+ uint64_t rsvd_63_36 : 28;
+};
+
+/* NIX I/O virtual address structure */
+struct nix_iova_s {
+ uint64_t addr : 64; /* W0 */
+};
+
+/* NIX IPsec dynamic ordering counter structure */
+struct nix_ipsec_dyno_s {
+ uint32_t count : 32; /* W0 */
+};
+
+/* NIX memory value structure */
+struct nix_mem_result_s {
+ uint64_t v : 1;
+ uint64_t color : 2;
+ uint64_t rsvd_63_3 : 61;
+};
+
+/* NIX statistics operation write data structure */
+struct nix_op_q_wdata_s {
+ uint64_t rsvd_31_0 : 32;
+ uint64_t q : 20;
+ uint64_t rsvd_63_52 : 12;
+};
+
+/* NIX queue interrupt context hardware structure */
+struct nix_qint_hw_s {
+ uint32_t count : 22;
+ uint32_t rsvd_30_22 : 9;
+ uint32_t ena : 1;
+};
+
+/* [CN10K, .) NIX receive queue context structure */
+struct nix_cn10k_rq_ctx_hw_s {
+ uint64_t ena : 1;
+ uint64_t sso_ena : 1;
+ uint64_t ipsech_ena : 1;
+ uint64_t ena_wqwd : 1;
+ uint64_t cq : 20;
+ uint64_t rsvd_36_24 : 13;
+ uint64_t lenerr_dis : 1;
+ uint64_t csum_il4_dis : 1;
+ uint64_t csum_ol4_dis : 1;
+ uint64_t len_il4_dis : 1;
+ uint64_t len_il3_dis : 1;
+ uint64_t len_ol4_dis : 1;
+ uint64_t len_ol3_dis : 1;
+ uint64_t wqe_aura : 20;
+ uint64_t spb_aura : 20;
+ uint64_t lpb_aura : 20;
+ uint64_t sso_grp : 10;
+ uint64_t sso_tt : 2;
+ uint64_t pb_caching : 2;
+ uint64_t wqe_caching : 1;
+ uint64_t xqe_drop_ena : 1;
+ uint64_t spb_drop_ena : 1;
+ uint64_t lpb_drop_ena : 1;
+ uint64_t pb_stashing : 1;
+ uint64_t ipsecd_drop_en : 1;
+ uint64_t chi_ena : 1;
+ uint64_t rsvd_127_125 : 3;
+ uint64_t band_prof_id : 10;
+ uint64_t rsvd_138 : 1;
+ uint64_t policer_ena : 1;
+ uint64_t spb_sizem1 : 6;
+ uint64_t wqe_skip : 2;
+ uint64_t spb_high_sizem1 : 3;
+ uint64_t spb_ena : 1;
+ uint64_t lpb_sizem1 : 12;
+ uint64_t first_skip : 7;
+ uint64_t rsvd_171 : 1;
+ uint64_t later_skip : 6;
+ uint64_t xqe_imm_size : 6;
+ uint64_t rsvd_189_184 : 6;
+ uint64_t xqe_imm_copy : 1;
+ uint64_t xqe_hdr_split : 1;
+ uint64_t xqe_drop : 8;
+ uint64_t xqe_pass : 8;
+ uint64_t wqe_pool_drop : 8;
+ uint64_t wqe_pool_pass : 8;
+ uint64_t spb_aura_drop : 8;
+ uint64_t spb_aura_pass : 8;
+ uint64_t spb_pool_drop : 8;
+ uint64_t spb_pool_pass : 8;
+ uint64_t lpb_aura_drop : 8;
+ uint64_t lpb_aura_pass : 8;
+ uint64_t lpb_pool_drop : 8;
+ uint64_t lpb_pool_pass : 8;
+ uint64_t rsvd_319_288 : 32;
+ uint64_t ltag : 24;
+ uint64_t good_utag : 8;
+ uint64_t bad_utag : 8;
+ uint64_t flow_tagw : 6;
+ uint64_t ipsec_vwqe : 1;
+ uint64_t vwqe_ena : 1;
+ uint64_t vtime_wait : 8;
+ uint64_t max_vsize_exp : 4;
+ uint64_t vwqe_skip : 2;
+ uint64_t rsvd_383_382 : 2;
+ uint64_t octs : 48;
+ uint64_t rsvd_447_432 : 16;
+ uint64_t pkts : 48;
+ uint64_t rsvd_511_496 : 16;
+ uint64_t drop_octs : 48;
+ uint64_t rsvd_575_560 : 16;
+ uint64_t drop_pkts : 48;
+ uint64_t rsvd_639_624 : 16;
+ uint64_t re_pkts : 48;
+ uint64_t rsvd_702_688 : 15;
+ uint64_t ena_copy : 1;
+ uint64_t rsvd_739_704 : 36;
+ uint64_t rq_int : 8;
+ uint64_t rq_int_ena : 8;
+ uint64_t qint_idx : 7;
+ uint64_t rsvd_767_763 : 5;
+ uint64_t rsvd_831_768 : 64; /* W12 */
+ uint64_t rsvd_895_832 : 64; /* W13 */
+ uint64_t rsvd_959_896 : 64; /* W14 */
+ uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NIX receive queue context structure */
+struct nix_rq_ctx_hw_s {
+ uint64_t ena : 1;
+ uint64_t sso_ena : 1;
+ uint64_t ipsech_ena : 1;
+ uint64_t ena_wqwd : 1;
+ uint64_t cq : 20;
+ uint64_t substream : 20;
+ uint64_t wqe_aura : 20;
+ uint64_t spb_aura : 20;
+ uint64_t lpb_aura : 20;
+ uint64_t sso_grp : 10;
+ uint64_t sso_tt : 2;
+ uint64_t pb_caching : 2;
+ uint64_t wqe_caching : 1;
+ uint64_t xqe_drop_ena : 1;
+ uint64_t spb_drop_ena : 1;
+ uint64_t lpb_drop_ena : 1;
+ uint64_t wqe_skip : 2;
+ uint64_t rsvd_127_124 : 4;
+ uint64_t rsvd_139_128 : 12;
+ uint64_t spb_sizem1 : 6;
+ uint64_t rsvd_150_146 : 5;
+ uint64_t spb_ena : 1;
+ uint64_t lpb_sizem1 : 12;
+ uint64_t first_skip : 7;
+ uint64_t rsvd_171 : 1;
+ uint64_t later_skip : 6;
+ uint64_t xqe_imm_size : 6;
+ uint64_t rsvd_189_184 : 6;
+ uint64_t xqe_imm_copy : 1;
+ uint64_t xqe_hdr_split : 1;
+ uint64_t xqe_drop : 8;
+ uint64_t xqe_pass : 8;
+ uint64_t wqe_pool_drop : 8;
+ uint64_t wqe_pool_pass : 8;
+ uint64_t spb_aura_drop : 8;
+ uint64_t spb_aura_pass : 8;
+ uint64_t spb_pool_drop : 8;
+ uint64_t spb_pool_pass : 8;
+ uint64_t lpb_aura_drop : 8;
+ uint64_t lpb_aura_pass : 8;
+ uint64_t lpb_pool_drop : 8;
+ uint64_t lpb_pool_pass : 8;
+ uint64_t rsvd_319_288 : 32;
+ uint64_t ltag : 24;
+ uint64_t good_utag : 8;
+ uint64_t bad_utag : 8;
+ uint64_t flow_tagw : 6;
+ uint64_t rsvd_383_366 : 18;
+ uint64_t octs : 48;
+ uint64_t rsvd_447_432 : 16;
+ uint64_t pkts : 48;
+ uint64_t rsvd_511_496 : 16;
+ uint64_t drop_octs : 48;
+ uint64_t rsvd_575_560 : 16;
+ uint64_t drop_pkts : 48;
+ uint64_t rsvd_639_624 : 16;
+ uint64_t re_pkts : 48;
+ uint64_t rsvd_702_688 : 15;
+ uint64_t ena_copy : 1;
+ uint64_t rsvd_739_704 : 36;
+ uint64_t rq_int : 8;
+ uint64_t rq_int_ena : 8;
+ uint64_t qint_idx : 7;
+ uint64_t rsvd_767_763 : 5;
+ uint64_t rsvd_831_768 : 64; /* W12 */
+ uint64_t rsvd_895_832 : 64; /* W13 */
+ uint64_t rsvd_959_896 : 64; /* W14 */
+ uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* [CN10K, .) NIX Receive queue context structure */
+struct nix_cn10k_rq_ctx_s {
+ uint64_t ena : 1;
+ uint64_t sso_ena : 1;
+ uint64_t ipsech_ena : 1;
+ uint64_t ena_wqwd : 1;
+ uint64_t cq : 20;
+ uint64_t rsvd_36_24 : 13;
+ uint64_t lenerr_dis : 1;
+ uint64_t csum_il4_dis : 1;
+ uint64_t csum_ol4_dis : 1;
+ uint64_t len_il4_dis : 1;
+ uint64_t len_il3_dis : 1;
+ uint64_t len_ol4_dis : 1;
+ uint64_t len_ol3_dis : 1;
+ uint64_t wqe_aura : 20;
+ uint64_t spb_aura : 20;
+ uint64_t lpb_aura : 20;
+ uint64_t sso_grp : 10;
+ uint64_t sso_tt : 2;
+ uint64_t pb_caching : 2;
+ uint64_t wqe_caching : 1;
+ uint64_t xqe_drop_ena : 1;
+ uint64_t spb_drop_ena : 1;
+ uint64_t lpb_drop_ena : 1;
+ uint64_t pb_stashing : 1;
+ uint64_t ipsecd_drop_en : 1;
+ uint64_t chi_ena : 1;
+ uint64_t rsvd_127_125 : 3;
+ uint64_t band_prof_id : 10;
+ uint64_t rsvd_138 : 1;
+ uint64_t policer_ena : 1;
+ uint64_t spb_sizem1 : 6;
+ uint64_t wqe_skip : 2;
+ uint64_t spb_high_sizem1 : 3;
+ uint64_t spb_ena : 1;
+ uint64_t lpb_sizem1 : 12;
+ uint64_t first_skip : 7;
+ uint64_t rsvd_171 : 1;
+ uint64_t later_skip : 6;
+ uint64_t xqe_imm_size : 6;
+ uint64_t rsvd_189_184 : 6;
+ uint64_t xqe_imm_copy : 1;
+ uint64_t xqe_hdr_split : 1;
+ uint64_t xqe_drop : 8;
+ uint64_t xqe_pass : 8;
+ uint64_t wqe_pool_drop : 8;
+ uint64_t wqe_pool_pass : 8;
+ uint64_t spb_aura_drop : 8;
+ uint64_t spb_aura_pass : 8;
+ uint64_t spb_pool_drop : 8;
+ uint64_t spb_pool_pass : 8;
+ uint64_t lpb_aura_drop : 8;
+ uint64_t lpb_aura_pass : 8;
+ uint64_t lpb_pool_drop : 8;
+ uint64_t lpb_pool_pass : 8;
+ uint64_t rsvd_291_288 : 4;
+ uint64_t rq_int : 8;
+ uint64_t rq_int_ena : 8;
+ uint64_t qint_idx : 7;
+ uint64_t rsvd_319_315 : 5;
+ uint64_t ltag : 24;
+ uint64_t good_utag : 8;
+ uint64_t bad_utag : 8;
+ uint64_t flow_tagw : 6;
+ uint64_t ipsec_vwqe : 1;
+ uint64_t vwqe_ena : 1;
+ uint64_t vtime_wait : 8;
+ uint64_t max_vsize_exp : 4;
+ uint64_t vwqe_skip : 2;
+ uint64_t rsvd_383_382 : 2;
+ uint64_t octs : 48;
+ uint64_t rsvd_447_432 : 16;
+ uint64_t pkts : 48;
+ uint64_t rsvd_511_496 : 16;
+ uint64_t drop_octs : 48;
+ uint64_t rsvd_575_560 : 16;
+ uint64_t drop_pkts : 48;
+ uint64_t rsvd_639_624 : 16;
+ uint64_t re_pkts : 48;
+ uint64_t rsvd_703_688 : 16;
+ uint64_t rsvd_767_704 : 64; /* W11 */
+ uint64_t rsvd_831_768 : 64; /* W12 */
+ uint64_t rsvd_895_832 : 64; /* W13 */
+ uint64_t rsvd_959_896 : 64; /* W14 */
+ uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NIX receive queue context structure */
+struct nix_rq_ctx_s {
+ uint64_t ena : 1;
+ uint64_t sso_ena : 1;
+ uint64_t ipsech_ena : 1;
+ uint64_t ena_wqwd : 1;
+ uint64_t cq : 20;
+ uint64_t substream : 20;
+ uint64_t wqe_aura : 20;
+ uint64_t spb_aura : 20;
+ uint64_t lpb_aura : 20;
+ uint64_t sso_grp : 10;
+ uint64_t sso_tt : 2;
+ uint64_t pb_caching : 2;
+ uint64_t wqe_caching : 1;
+ uint64_t xqe_drop_ena : 1;
+ uint64_t spb_drop_ena : 1;
+ uint64_t lpb_drop_ena : 1;
+ uint64_t rsvd_127_122 : 6;
+ uint64_t rsvd_139_128 : 12;
+ uint64_t spb_sizem1 : 6;
+ uint64_t wqe_skip : 2;
+ uint64_t rsvd_150_148 : 3;
+ uint64_t spb_ena : 1;
+ uint64_t lpb_sizem1 : 12;
+ uint64_t first_skip : 7;
+ uint64_t rsvd_171 : 1;
+ uint64_t later_skip : 6;
+ uint64_t xqe_imm_size : 6;
+ uint64_t rsvd_189_184 : 6;
+ uint64_t xqe_imm_copy : 1;
+ uint64_t xqe_hdr_split : 1;
+ uint64_t xqe_drop : 8;
+ uint64_t xqe_pass : 8;
+ uint64_t wqe_pool_drop : 8;
+ uint64_t wqe_pool_pass : 8;
+ uint64_t spb_aura_drop : 8;
+ uint64_t spb_aura_pass : 8;
+ uint64_t spb_pool_drop : 8;
+ uint64_t spb_pool_pass : 8;
+ uint64_t lpb_aura_drop : 8;
+ uint64_t lpb_aura_pass : 8;
+ uint64_t lpb_pool_drop : 8;
+ uint64_t lpb_pool_pass : 8;
+ uint64_t rsvd_291_288 : 4;
+ uint64_t rq_int : 8;
+ uint64_t rq_int_ena : 8;
+ uint64_t qint_idx : 7;
+ uint64_t rsvd_319_315 : 5;
+ uint64_t ltag : 24;
+ uint64_t good_utag : 8;
+ uint64_t bad_utag : 8;
+ uint64_t flow_tagw : 6;
+ uint64_t rsvd_383_366 : 18;
+ uint64_t octs : 48;
+ uint64_t rsvd_447_432 : 16;
+ uint64_t pkts : 48;
+ uint64_t rsvd_511_496 : 16;
+ uint64_t drop_octs : 48;
+ uint64_t rsvd_575_560 : 16;
+ uint64_t drop_pkts : 48;
+ uint64_t rsvd_639_624 : 16;
+ uint64_t re_pkts : 48;
+ uint64_t rsvd_703_688 : 16;
+ uint64_t rsvd_767_704 : 64; /* W11 */
+ uint64_t rsvd_831_768 : 64; /* W12 */
+ uint64_t rsvd_895_832 : 64; /* W13 */
+ uint64_t rsvd_959_896 : 64; /* W14 */
+ uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NIX receive side scaling entry structure */
+struct nix_rsse_s {
+ uint32_t rq : 20;
+ uint32_t rsvd_31_20 : 12;
+};
+
+/* NIX receive action structure */
+struct nix_rx_action_s {
+ uint64_t op : 4;
+ uint64_t pf_func : 16;
+ uint64_t index : 20;
+ uint64_t match_id : 16;
+ uint64_t flow_key_alg : 5;
+ uint64_t rsvd_63_61 : 3;
+};
+
+/* NIX receive immediate sub descriptor structure */
+struct nix_rx_imm_s {
+ uint64_t size : 16;
+ uint64_t apad : 3;
+ uint64_t rsvd_59_19 : 41;
+ uint64_t subdc : 4;
+};
+
+/* NIX receive multicast/mirror entry structure */
+struct nix_rx_mce_s {
+ uint64_t op : 2;
+ uint64_t rsvd_2 : 1;
+ uint64_t eol : 1;
+ uint64_t index : 20;
+ uint64_t rsvd_31_24 : 8;
+ uint64_t pf_func : 16;
+ uint64_t next : 16;
+};
+
+/* NIX receive parse structure */
+union nix_rx_parse_u {
+ struct {
+ uint64_t chan : 12;
+ uint64_t desc_sizem1 : 5;
+ uint64_t imm_copy : 1;
+ uint64_t express : 1;
+ uint64_t wqwd : 1;
+ uint64_t errlev : 4;
+ uint64_t errcode : 8;
+ uint64_t latype : 4;
+ uint64_t lbtype : 4;
+ uint64_t lctype : 4;
+ uint64_t ldtype : 4;
+ uint64_t letype : 4;
+ uint64_t lftype : 4;
+ uint64_t lgtype : 4;
+ uint64_t lhtype : 4;
+ uint64_t pkt_lenm1 : 16;
+ uint64_t l2m : 1;
+ uint64_t l2b : 1;
+ uint64_t l3m : 1;
+ uint64_t l3b : 1;
+ uint64_t vtag0_valid : 1;
+ uint64_t vtag0_gone : 1;
+ uint64_t vtag1_valid : 1;
+ uint64_t vtag1_gone : 1;
+ uint64_t pkind : 6;
+ uint64_t nix_idx : 2;
+ uint64_t vtag0_tci : 16;
+ uint64_t vtag1_tci : 16;
+ uint64_t laflags : 8;
+ uint64_t lbflags : 8;
+ uint64_t lcflags : 8;
+ uint64_t ldflags : 8;
+ uint64_t leflags : 8;
+ uint64_t lfflags : 8;
+ uint64_t lgflags : 8;
+ uint64_t lhflags : 8;
+ uint64_t eoh_ptr : 8;
+ uint64_t wqe_aura : 20;
+ uint64_t pb_aura : 20;
+ uint64_t match_id : 16;
+ uint64_t laptr : 8;
+ uint64_t lbptr : 8;
+ uint64_t lcptr : 8;
+ uint64_t ldptr : 8;
+ uint64_t leptr : 8;
+ uint64_t lfptr : 8;
+ uint64_t lgptr : 8;
+ uint64_t lhptr : 8;
+ uint64_t vtag0_ptr : 8;
+ uint64_t vtag1_ptr : 8;
+ uint64_t flow_key_alg : 5;
+ uint64_t rsvd_341 : 1;
+ uint64_t rsvd_349_342 : 8;
+ uint64_t rsvd_353_350 : 4;
+ uint64_t rsvd_359_354 : 6;
+ uint64_t color : 2;
+ uint64_t rsvd_381_362 : 20;
+ uint64_t rsvd_382 : 1;
+ uint64_t rsvd_383 : 1;
+ uint64_t rsvd_447_384 : 64; /* W6 */
+ };
+ struct {
+ uint64_t chan : 12;
+ uint64_t desc_sizem1 : 5;
+ uint64_t imm_copy : 1;
+ uint64_t express : 1;
+ uint64_t wqwd : 1;
+ uint64_t errlev : 4;
+ uint64_t errcode : 8;
+ uint64_t latype : 4;
+ uint64_t lbtype : 4;
+ uint64_t lctype : 4;
+ uint64_t ldtype : 4;
+ uint64_t letype : 4;
+ uint64_t lftype : 4;
+ uint64_t lgtype : 4;
+ uint64_t lhtype : 4;
+ uint64_t pkt_lenm1 : 16;
+ uint64_t l2m : 1;
+ uint64_t l2b : 1;
+ uint64_t l3m : 1;
+ uint64_t l3b : 1;
+ uint64_t vtag0_valid : 1;
+ uint64_t vtag0_gone : 1;
+ uint64_t vtag1_valid : 1;
+ uint64_t vtag1_gone : 1;
+ uint64_t pkind : 6;
+ uint64_t rsvd_95_94 : 2;
+ uint64_t vtag0_tci : 16;
+ uint64_t vtag1_tci : 16;
+ uint64_t laflags : 8;
+ uint64_t lbflags : 8;
+ uint64_t lcflags : 8;
+ uint64_t ldflags : 8;
+ uint64_t leflags : 8;
+ uint64_t lfflags : 8;
+ uint64_t lgflags : 8;
+ uint64_t lhflags : 8;
+ uint64_t eoh_ptr : 8;
+ uint64_t wqe_aura : 20;
+ uint64_t pb_aura : 20;
+ uint64_t match_id : 16;
+ uint64_t laptr : 8;
+ uint64_t lbptr : 8;
+ uint64_t lcptr : 8;
+ uint64_t ldptr : 8;
+ uint64_t leptr : 8;
+ uint64_t lfptr : 8;
+ uint64_t lgptr : 8;
+ uint64_t lhptr : 8;
+ uint64_t vtag0_ptr : 8;
+ uint64_t vtag1_ptr : 8;
+ uint64_t flow_key_alg : 5;
+ uint64_t rsvd_383_341 : 43;
+ uint64_t rsvd_447_384 : 64; /* W6 */
+ } cn9k;
+};
+
+/* NIX receive scatter/gather sub descriptor structure */
+struct nix_rx_sg_s {
+ uint64_t seg1_size : 16;
+ uint64_t seg2_size : 16;
+ uint64_t seg3_size : 16;
+ uint64_t segs : 2;
+ uint64_t rsvd_59_50 : 10;
+ uint64_t subdc : 4;
+};
+
+/* NIX receive vtag action structure */
+union nix_rx_vtag_action_u {
+ struct {
+ uint64_t vtag0_relptr : 8;
+ uint64_t vtag0_lid : 3;
+ uint64_t sa_xor : 1;
+ uint64_t vtag0_type : 3;
+ uint64_t vtag0_valid : 1;
+ uint64_t sa_lo : 16;
+ uint64_t vtag1_relptr : 8;
+ uint64_t vtag1_lid : 3;
+ uint64_t rsvd_43 : 1;
+ uint64_t vtag1_type : 3;
+ uint64_t vtag1_valid : 1;
+ uint64_t sa_hi : 16;
+ };
+ struct {
+ uint64_t vtag0_relptr : 8;
+ uint64_t vtag0_lid : 3;
+ uint64_t rsvd_11 : 1;
+ uint64_t vtag0_type : 3;
+ uint64_t vtag0_valid : 1;
+ uint64_t rsvd_31_16 : 16;
+ uint64_t vtag1_relptr : 8;
+ uint64_t vtag1_lid : 3;
+ uint64_t rsvd_43 : 1;
+ uint64_t vtag1_type : 3;
+ uint64_t vtag1_valid : 1;
+ uint64_t rsvd_63_48 : 16;
+ } cn9k;
+};
+
+/* NIX send completion structure */
+struct nix_send_comp_s {
+ uint64_t status : 8;
+ uint64_t sqe_id : 16;
+ uint64_t rsvd_63_24 : 40;
+};
+
+/* NIX send CRC sub descriptor structure */
+struct nix_send_crc_s {
+ uint64_t size : 16;
+ uint64_t start : 16;
+ uint64_t insert : 16;
+ uint64_t rsvd_57_48 : 10;
+ uint64_t alg : 2;
+ uint64_t subdc : 4;
+ uint64_t iv : 32;
+ uint64_t rsvd_127_96 : 32;
+};
+
+/* NIX send extended header sub descriptor structure */
+PLT_STD_C11
+union nix_send_ext_w0_u {
+ uint64_t u;
+ struct {
+ uint64_t lso_mps : 14;
+ uint64_t lso : 1;
+ uint64_t tstmp : 1;
+ uint64_t lso_sb : 8;
+ uint64_t lso_format : 5;
+ uint64_t rsvd_31_29 : 3;
+ uint64_t shp_chg : 9;
+ uint64_t shp_dis : 1;
+ uint64_t shp_ra : 2;
+ uint64_t markptr : 8;
+ uint64_t markform : 7;
+ uint64_t mark_en : 1;
+ uint64_t subdc : 4;
+ };
+};
+
+PLT_STD_C11
+union nix_send_ext_w1_u {
+ uint64_t u;
+ struct {
+ uint64_t vlan0_ins_ptr : 8;
+ uint64_t vlan0_ins_tci : 16;
+ uint64_t vlan1_ins_ptr : 8;
+ uint64_t vlan1_ins_tci : 16;
+ uint64_t vlan0_ins_ena : 1;
+ uint64_t vlan1_ins_ena : 1;
+ uint64_t init_color : 2;
+ uint64_t rsvd_127_116 : 12;
+ };
+ struct {
+ uint64_t vlan0_ins_ptr : 8;
+ uint64_t vlan0_ins_tci : 16;
+ uint64_t vlan1_ins_ptr : 8;
+ uint64_t vlan1_ins_tci : 16;
+ uint64_t vlan0_ins_ena : 1;
+ uint64_t vlan1_ins_ena : 1;
+ uint64_t rsvd_127_114 : 14;
+ } cn9k;
+};
+
+struct nix_send_ext_s {
+ union nix_send_ext_w0_u w0;
+ union nix_send_ext_w1_u w1;
+};
+
+/* NIX send header sub descriptor structure */
+PLT_STD_C11
+union nix_send_hdr_w0_u {
+ uint64_t u;
+ struct {
+ uint64_t total : 18;
+ uint64_t rsvd_18 : 1;
+ uint64_t df : 1;
+ uint64_t aura : 20;
+ uint64_t sizem1 : 3;
+ uint64_t pnc : 1;
+ uint64_t sq : 20;
+ };
+};
+
+PLT_STD_C11
+union nix_send_hdr_w1_u {
+ uint64_t u;
+ struct {
+ uint64_t ol3ptr : 8;
+ uint64_t ol4ptr : 8;
+ uint64_t il3ptr : 8;
+ uint64_t il4ptr : 8;
+ uint64_t ol3type : 4;
+ uint64_t ol4type : 4;
+ uint64_t il3type : 4;
+ uint64_t il4type : 4;
+ uint64_t sqe_id : 16;
+ };
+};
+
+struct nix_send_hdr_s {
+ union nix_send_hdr_w0_u w0;
+ union nix_send_hdr_w1_u w1;
+};
+
+/* NIX send immediate sub descriptor structure */
+struct nix_send_imm_s {
+ uint64_t size : 16;
+ uint64_t apad : 3;
+ uint64_t rsvd_59_19 : 41;
+ uint64_t subdc : 4;
+};
+
+/* NIX send jump sub descriptor structure */
+struct nix_send_jump_s {
+ uint64_t sizem1 : 7;
+ uint64_t rsvd_13_7 : 7;
+ uint64_t ld_type : 2;
+ uint64_t aura : 20;
+ uint64_t rsvd_58_36 : 23;
+ uint64_t f : 1;
+ uint64_t subdc : 4;
+ uint64_t addr : 64; /* W1 */
+};
+
+/* NIX send memory sub descriptor structure */
+PLT_STD_C11
+union nix_send_mem_w0_u {
+ uint64_t u;
+ struct {
+ uint64_t offset : 16;
+ uint64_t rsvd_51_16 : 36;
+ uint64_t per_lso_seg : 1;
+ uint64_t wmem : 1;
+ uint64_t dsz : 2;
+ uint64_t alg : 4;
+ uint64_t subdc : 4;
+ };
+ struct {
+ uint64_t offset : 16;
+ uint64_t rsvd_52_16 : 37;
+ uint64_t wmem : 1;
+ uint64_t dsz : 2;
+ uint64_t alg : 4;
+ uint64_t subdc : 4;
+ } cn9k;
+};
+
+struct nix_send_mem_s {
+ union nix_send_mem_w0_u w0;
+ uint64_t addr : 64; /* W1 */
+};
+
+/* NIX send scatter/gather sub descriptor structure */
+PLT_STD_C11
+union nix_send_sg2_s {
+ uint64_t u;
+ struct {
+ uint64_t seg1_size : 16;
+ uint64_t aura : 20;
+ uint64_t i1 : 1;
+ uint64_t fabs : 1;
+ uint64_t foff : 8;
+ uint64_t rsvd_57_46 : 12;
+ uint64_t ld_type : 2;
+ uint64_t subdc : 4;
+ };
+};
+
+PLT_STD_C11
+union nix_send_sg_s {
+ uint64_t u;
+ struct {
+ uint64_t seg1_size : 16;
+ uint64_t seg2_size : 16;
+ uint64_t seg3_size : 16;
+ uint64_t segs : 2;
+ uint64_t rsvd_54_50 : 5;
+ uint64_t i1 : 1;
+ uint64_t i2 : 1;
+ uint64_t i3 : 1;
+ uint64_t ld_type : 2;
+ uint64_t subdc : 4;
+ };
+};
+
+/* NIX send work sub descriptor structure */
+struct nix_send_work_s {
+ uint64_t tag : 32;
+ uint64_t tt : 2;
+ uint64_t grp : 10;
+ uint64_t rsvd_59_44 : 16;
+ uint64_t subdc : 4;
+ uint64_t addr : 64; /* W1 */
+};
+
+/* [CN10K, .) NIX sq context hardware structure */
+struct nix_cn10k_sq_ctx_hw_s {
+ uint64_t ena : 1;
+ uint64_t substream : 20;
+ uint64_t max_sqe_size : 2;
+ uint64_t sqe_way_mask : 16;
+ uint64_t sqb_aura : 20;
+ uint64_t gbl_rsvd1 : 5;
+ uint64_t cq_id : 20;
+ uint64_t cq_ena : 1;
+ uint64_t qint_idx : 6;
+ uint64_t gbl_rsvd2 : 1;
+ uint64_t sq_int : 8;
+ uint64_t sq_int_ena : 8;
+ uint64_t xoff : 1;
+ uint64_t sqe_stype : 2;
+ uint64_t gbl_rsvd : 17;
+ uint64_t head_sqb : 64; /* W2 */
+ uint64_t head_offset : 6;
+ uint64_t sqb_dequeue_count : 16;
+ uint64_t default_chan : 12;
+ uint64_t sdp_mcast : 1;
+ uint64_t sso_ena : 1;
+ uint64_t dse_rsvd1 : 28;
+ uint64_t sqb_enqueue_count : 16;
+ uint64_t tail_offset : 6;
+ uint64_t lmt_dis : 1;
+ uint64_t smq_rr_weight : 14;
+ uint64_t dnq_rsvd1 : 27;
+ uint64_t tail_sqb : 64; /* W5 */
+ uint64_t next_sqb : 64; /* W6 */
+ uint64_t smq : 10;
+ uint64_t smq_pend : 1;
+ uint64_t smq_next_sq : 20;
+ uint64_t smq_next_sq_vld : 1;
+ uint64_t mnq_dis : 1;
+ uint64_t scm1_rsvd2 : 31;
+ uint64_t smenq_sqb : 64; /* W8 */
+ uint64_t smenq_offset : 6;
+ uint64_t cq_limit : 8;
+ uint64_t smq_rr_count : 32;
+ uint64_t scm_lso_rem : 18;
+ uint64_t smq_lso_segnum : 8;
+ uint64_t vfi_lso_total : 18;
+ uint64_t vfi_lso_sizem1 : 3;
+ uint64_t vfi_lso_sb : 8;
+ uint64_t vfi_lso_mps : 14;
+ uint64_t vfi_lso_vlan0_ins_ena : 1;
+ uint64_t vfi_lso_vlan1_ins_ena : 1;
+ uint64_t vfi_lso_vld : 1;
+ uint64_t smenq_next_sqb_vld : 1;
+ uint64_t scm_dq_rsvd1 : 9;
+ uint64_t smenq_next_sqb : 64; /* W11 */
+ uint64_t age_drop_octs : 32;
+ uint64_t age_drop_pkts : 32;
+ uint64_t drop_pkts : 48;
+ uint64_t drop_octs_lsw : 16;
+ uint64_t drop_octs_msw : 32;
+ uint64_t pkts_lsw : 32;
+ uint64_t pkts_msw : 16;
+ uint64_t octs : 48;
+};
+
+/* NIX sq context hardware structure */
+struct nix_sq_ctx_hw_s {
+ uint64_t ena : 1;
+ uint64_t substream : 20;
+ uint64_t max_sqe_size : 2;
+ uint64_t sqe_way_mask : 16;
+ uint64_t sqb_aura : 20;
+ uint64_t gbl_rsvd1 : 5;
+ uint64_t cq_id : 20;
+ uint64_t cq_ena : 1;
+ uint64_t qint_idx : 6;
+ uint64_t gbl_rsvd2 : 1;
+ uint64_t sq_int : 8;
+ uint64_t sq_int_ena : 8;
+ uint64_t xoff : 1;
+ uint64_t sqe_stype : 2;
+ uint64_t gbl_rsvd : 17;
+ uint64_t head_sqb : 64; /* W2 */
+ uint64_t head_offset : 6;
+ uint64_t sqb_dequeue_count : 16;
+ uint64_t default_chan : 12;
+ uint64_t sdp_mcast : 1;
+ uint64_t sso_ena : 1;
+ uint64_t dse_rsvd1 : 28;
+ uint64_t sqb_enqueue_count : 16;
+ uint64_t tail_offset : 6;
+ uint64_t lmt_dis : 1;
+ uint64_t smq_rr_quantum : 24;
+ uint64_t dnq_rsvd1 : 17;
+ uint64_t tail_sqb : 64; /* W5 */
+ uint64_t next_sqb : 64; /* W6 */
+ uint64_t mnq_dis : 1;
+ uint64_t smq : 9;
+ uint64_t smq_pend : 1;
+ uint64_t smq_next_sq : 20;
+ uint64_t smq_next_sq_vld : 1;
+ uint64_t scm1_rsvd2 : 32;
+ uint64_t smenq_sqb : 64; /* W8 */
+ uint64_t smenq_offset : 6;
+ uint64_t cq_limit : 8;
+ uint64_t smq_rr_count : 25;
+ uint64_t scm_lso_rem : 18;
+ uint64_t scm_dq_rsvd0 : 7;
+ uint64_t smq_lso_segnum : 8;
+ uint64_t vfi_lso_total : 18;
+ uint64_t vfi_lso_sizem1 : 3;
+ uint64_t vfi_lso_sb : 8;
+ uint64_t vfi_lso_mps : 14;
+ uint64_t vfi_lso_vlan0_ins_ena : 1;
+ uint64_t vfi_lso_vlan1_ins_ena : 1;
+ uint64_t vfi_lso_vld : 1;
+ uint64_t smenq_next_sqb_vld : 1;
+ uint64_t scm_dq_rsvd1 : 9;
+ uint64_t smenq_next_sqb : 64; /* W11 */
+ uint64_t seb_rsvd1 : 64; /* W12 */
+ uint64_t drop_pkts : 48;
+ uint64_t drop_octs_lsw : 16;
+ uint64_t drop_octs_msw : 32;
+ uint64_t pkts_lsw : 32;
+ uint64_t pkts_msw : 16;
+ uint64_t octs : 48;
+};
+
+/* [CN10K, .) NIX Send queue context structure */
+struct nix_cn10k_sq_ctx_s {
+ uint64_t ena : 1;
+ uint64_t qint_idx : 6;
+ uint64_t substream : 20;
+ uint64_t sdp_mcast : 1;
+ uint64_t cq : 20;
+ uint64_t sqe_way_mask : 16;
+ uint64_t smq : 10;
+ uint64_t cq_ena : 1;
+ uint64_t xoff : 1;
+ uint64_t sso_ena : 1;
+ uint64_t smq_rr_weight : 14;
+ uint64_t default_chan : 12;
+ uint64_t sqb_count : 16;
+ uint64_t rsvd_120_119 : 2;
+ uint64_t smq_rr_count_lb : 7;
+ uint64_t smq_rr_count_ub : 25;
+ uint64_t sqb_aura : 20;
+ uint64_t sq_int : 8;
+ uint64_t sq_int_ena : 8;
+ uint64_t sqe_stype : 2;
+ uint64_t rsvd_191 : 1;
+ uint64_t max_sqe_size : 2;
+ uint64_t cq_limit : 8;
+ uint64_t lmt_dis : 1;
+ uint64_t mnq_dis : 1;
+ uint64_t smq_next_sq : 20;
+ uint64_t smq_lso_segnum : 8;
+ uint64_t tail_offset : 6;
+ uint64_t smenq_offset : 6;
+ uint64_t head_offset : 6;
+ uint64_t smenq_next_sqb_vld : 1;
+ uint64_t smq_pend : 1;
+ uint64_t smq_next_sq_vld : 1;
+ uint64_t rsvd_255_253 : 3;
+ uint64_t next_sqb : 64; /* W4 */
+ uint64_t tail_sqb : 64; /* W5 */
+ uint64_t smenq_sqb : 64; /* W6 */
+ uint64_t smenq_next_sqb : 64; /* W7 */
+ uint64_t head_sqb : 64; /* W8 */
+ uint64_t rsvd_583_576 : 8;
+ uint64_t vfi_lso_total : 18;
+ uint64_t vfi_lso_sizem1 : 3;
+ uint64_t vfi_lso_sb : 8;
+ uint64_t vfi_lso_mps : 14;
+ uint64_t vfi_lso_vlan0_ins_ena : 1;
+ uint64_t vfi_lso_vlan1_ins_ena : 1;
+ uint64_t vfi_lso_vld : 1;
+ uint64_t rsvd_639_630 : 10;
+ uint64_t scm_lso_rem : 18;
+ uint64_t rsvd_703_658 : 46;
+ uint64_t octs : 48;
+ uint64_t rsvd_767_752 : 16;
+ uint64_t pkts : 48;
+ uint64_t rsvd_831_816 : 16;
+ uint64_t aged_drop_octs : 32;
+ uint64_t aged_drop_pkts : 32;
+ uint64_t drop_octs : 48;
+ uint64_t rsvd_959_944 : 16;
+ uint64_t drop_pkts : 48;
+ uint64_t rsvd_1023_1008 : 16;
+};
+
+/* NIX send queue context structure */
+struct nix_sq_ctx_s {
+ uint64_t ena : 1;
+ uint64_t qint_idx : 6;
+ uint64_t substream : 20;
+ uint64_t sdp_mcast : 1;
+ uint64_t cq : 20;
+ uint64_t sqe_way_mask : 16;
+ uint64_t smq : 9;
+ uint64_t cq_ena : 1;
+ uint64_t xoff : 1;
+ uint64_t sso_ena : 1;
+ uint64_t smq_rr_quantum : 24;
+ uint64_t default_chan : 12;
+ uint64_t sqb_count : 16;
+ uint64_t smq_rr_count : 25;
+ uint64_t sqb_aura : 20;
+ uint64_t sq_int : 8;
+ uint64_t sq_int_ena : 8;
+ uint64_t sqe_stype : 2;
+ uint64_t rsvd_191 : 1;
+ uint64_t max_sqe_size : 2;
+ uint64_t cq_limit : 8;
+ uint64_t lmt_dis : 1;
+ uint64_t mnq_dis : 1;
+ uint64_t smq_next_sq : 20;
+ uint64_t smq_lso_segnum : 8;
+ uint64_t tail_offset : 6;
+ uint64_t smenq_offset : 6;
+ uint64_t head_offset : 6;
+ uint64_t smenq_next_sqb_vld : 1;
+ uint64_t smq_pend : 1;
+ uint64_t smq_next_sq_vld : 1;
+ uint64_t rsvd_255_253 : 3;
+ uint64_t next_sqb : 64; /* W4 */
+ uint64_t tail_sqb : 64; /* W5 */
+ uint64_t smenq_sqb : 64; /* W6 */
+ uint64_t smenq_next_sqb : 64; /* W7 */
+ uint64_t head_sqb : 64; /* W8 */
+ uint64_t rsvd_583_576 : 8;
+ uint64_t vfi_lso_total : 18;
+ uint64_t vfi_lso_sizem1 : 3;
+ uint64_t vfi_lso_sb : 8;
+ uint64_t vfi_lso_mps : 14;
+ uint64_t vfi_lso_vlan0_ins_ena : 1;
+ uint64_t vfi_lso_vlan1_ins_ena : 1;
+ uint64_t vfi_lso_vld : 1;
+ uint64_t rsvd_639_630 : 10;
+ uint64_t scm_lso_rem : 18;
+ uint64_t rsvd_703_658 : 46;
+ uint64_t octs : 48;
+ uint64_t rsvd_767_752 : 16;
+ uint64_t pkts : 48;
+ uint64_t rsvd_831_816 : 16;
+ uint64_t rsvd_895_832 : 64; /* W13 */
+ uint64_t drop_octs : 48;
+ uint64_t rsvd_959_944 : 16;
+ uint64_t drop_pkts : 48;
+ uint64_t rsvd_1023_1008 : 16;
+};
+
+/* NIX transmit action structure */
+struct nix_tx_action_s {
+ uint64_t op : 4;
+ uint64_t rsvd_11_4 : 8;
+ uint64_t index : 20;
+ uint64_t match_id : 16;
+ uint64_t rsvd_63_48 : 16;
+};
+
+/* NIX transmit vtag action structure */
+struct nix_tx_vtag_action_s {
+ uint64_t vtag0_relptr : 8;
+ uint64_t vtag0_lid : 3;
+ uint64_t rsvd_11 : 1;
+ uint64_t vtag0_op : 2;
+ uint64_t rsvd_15_14 : 2;
+ uint64_t vtag0_def : 10;
+ uint64_t rsvd_31_26 : 6;
+ uint64_t vtag1_relptr : 8;
+ uint64_t vtag1_lid : 3;
+ uint64_t rsvd_43 : 1;
+ uint64_t vtag1_op : 2;
+ uint64_t rsvd_47_46 : 2;
+ uint64_t vtag1_def : 10;
+ uint64_t rsvd_63_58 : 6;
+};
+
+/* NIX work queue entry header structure */
+struct nix_wqe_hdr_s {
+ uint64_t tag : 32;
+ uint64_t tt : 2;
+ uint64_t grp : 10;
+ uint64_t node : 2;
+ uint64_t q : 14;
+ uint64_t wqe_type : 4;
+};
+
+/* NIX Rx flow key algorithm field structure */
+struct nix_rx_flowkey_alg {
+ uint64_t key_offset : 6;
+ uint64_t ln_mask : 1;
+ uint64_t fn_mask : 1;
+ uint64_t hdr_offset : 8;
+ uint64_t bytesm1 : 5;
+ uint64_t lid : 3;
+ uint64_t reserved_24_24 : 1;
+ uint64_t ena : 1;
+ uint64_t sel_chan : 1;
+ uint64_t ltype_mask : 4;
+ uint64_t ltype_match : 4;
+ uint64_t reserved_35_63 : 29;
+};
+
+/* NIX LSO format field structure */
+struct nix_lso_format {
+ uint64_t offset : 8;
+ uint64_t layer : 2;
+ uint64_t rsvd_10_11 : 2;
+ uint64_t sizem1 : 2;
+ uint64_t rsvd_14_15 : 2;
+ uint64_t alg : 3;
+ uint64_t rsvd_19_63 : 45;
+};
+
+#define NIX_LSO_FIELD_MAX (8)
+#define NIX_LSO_FIELD_ALG_MASK GENMASK(18, 16)
+#define NIX_LSO_FIELD_SZ_MASK GENMASK(13, 12)
+#define NIX_LSO_FIELD_LY_MASK GENMASK(9, 8)
+#define NIX_LSO_FIELD_OFF_MASK GENMASK(7, 0)
+
+#define NIX_LSO_FIELD_MASK \
+ (NIX_LSO_FIELD_OFF_MASK | NIX_LSO_FIELD_LY_MASK | \
+ NIX_LSO_FIELD_SZ_MASK | NIX_LSO_FIELD_ALG_MASK)
+
+#define NIX_CN9K_MAX_HW_FRS 9212UL
+#define NIX_LBK_MAX_HW_FRS 65535UL
+#define NIX_RPM_MAX_HW_FRS 16380UL
+#define NIX_MIN_HW_FRS 60UL
+
+/* NIX rate limits */
+#define NIX_TM_MAX_RATE_DIV_EXP 12
+#define NIX_TM_MAX_RATE_EXPONENT 0xf
+#define NIX_TM_MAX_RATE_MANTISSA 0xff
+
+#define NIX_TM_SHAPER_RATE_CONST ((uint64_t)2E6)
+
+/* NIX rate calculation in Bits/Sec
+ * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
+ * << NIX_*_PIR[RATE_EXPONENT]) / 256
+ * PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
+ *
+ * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
+ * << NIX_*_CIR[RATE_EXPONENT]) / 256
+ * CIR = (2E6 * CIR_ADD / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
+ */
+#define NIX_TM_SHAPER_RATE(exponent, mantissa, div_exp) \
+ ((NIX_TM_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent))) / \
+ (((1ull << (div_exp)) * 256)))
+
+/* Rate limit in Bits/Sec */
+#define NIX_TM_MIN_SHAPER_RATE NIX_TM_SHAPER_RATE(0, 0, NIX_TM_MAX_RATE_DIV_EXP)
+
+#define NIX_TM_MAX_SHAPER_RATE \
+ NIX_TM_SHAPER_RATE(NIX_TM_MAX_RATE_EXPONENT, NIX_TM_MAX_RATE_MANTISSA, \
+ 0)
+
+/* NIX burst limits */
+#define NIX_TM_MAX_BURST_EXPONENT 0xf
+#define NIX_TM_MAX_BURST_MANTISSA 0xff
+
+/* NIX burst calculation
+ * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
+ * << (NIX_*_PIR[BURST_EXPONENT] + 1))
+ * / 256
+ *
+ * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
+ * << (NIX_*_CIR[BURST_EXPONENT] + 1))
+ * / 256
+ */
+#define NIX_TM_SHAPER_BURST(exponent, mantissa) \
+ (((256 + (mantissa)) << ((exponent) + 1)) / 256)
+
+/* Burst limit in Bytes */
+#define NIX_TM_MIN_SHAPER_BURST NIX_TM_SHAPER_BURST(0, 0)
+
+#define NIX_TM_MAX_SHAPER_BURST \
+ NIX_TM_SHAPER_BURST(NIX_TM_MAX_BURST_EXPONENT, \
+ NIX_TM_MAX_BURST_MANTISSA)
+
+/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not negative */
+#define NIX_TM_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
+#define NIX_TM_LENGTH_ADJUST_MAX 255
+
+#define NIX_TM_TLX_SP_PRIO_MAX 10
+#define NIX_CN9K_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
+#define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(14) - 1)
+
+/* [CN9K, CN10K) */
+#define NIX_CN9K_TXSCH_LVL_SMQ_MAX 512
+
+/* [CN10K, .) */
+#define NIX_TXSCH_LVL_SMQ_MAX 832
+
+/* [CN9K, .) */
+#define NIX_TXSCH_LVL_TL4_MAX 512
+#define NIX_TXSCH_LVL_TL3_MAX 256
+#define NIX_TXSCH_LVL_TL2_MAX 256
+#define NIX_TXSCH_LVL_TL1_MAX 28
+
+#define NIX_CQ_OP_STAT_OP_ERR 63
+#define NIX_CQ_OP_STAT_CQ_ERR 46
+
+#define NIX_RQ_CN10K_SPB_MAX_SIZE 4096
+
+/* Software defined LSO base format IDX */
+#define NIX_LSO_FORMAT_IDX_TSOV4 0
+#define NIX_LSO_FORMAT_IDX_TSOV6 1
+
+#endif /* __NIX_HW_H__ */
diff --git a/drivers/common/cnxk/hw/npa.h b/drivers/common/cnxk/hw/npa.h
new file mode 100644
index 0000000..8fb3c46
--- /dev/null
+++ b/drivers/common/cnxk/hw/npa.h
@@ -0,0 +1,376 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __NPA_HW_H__
+#define __NPA_HW_H__
+
+/* Register offsets */
+
+#define NPA_AF_BLK_RST (0x0ull)
+#define NPA_AF_CONST (0x10ull)
+#define NPA_AF_CONST1 (0x18ull)
+#define NPA_AF_LF_RST (0x20ull)
+#define NPA_AF_GEN_CFG (0x30ull)
+#define NPA_AF_NDC_CFG (0x40ull)
+#define NPA_AF_NDC_SYNC (0x50ull)
+#define NPA_AF_INP_CTL (0xd0ull)
+#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull)
+#define NPA_AF_AVG_DELAY (0x100ull)
+#define NPA_AF_GEN_INT (0x140ull)
+#define NPA_AF_GEN_INT_W1S (0x148ull)
+#define NPA_AF_GEN_INT_ENA_W1S (0x150ull)
+#define NPA_AF_GEN_INT_ENA_W1C (0x158ull)
+#define NPA_AF_RVU_INT (0x160ull)
+#define NPA_AF_RVU_INT_W1S (0x168ull)
+#define NPA_AF_RVU_INT_ENA_W1S (0x170ull)
+#define NPA_AF_RVU_INT_ENA_W1C (0x178ull)
+#define NPA_AF_ERR_INT (0x180ull)
+#define NPA_AF_ERR_INT_W1S (0x188ull)
+#define NPA_AF_ERR_INT_ENA_W1S (0x190ull)
+#define NPA_AF_ERR_INT_ENA_W1C (0x198ull)
+#define NPA_AF_RAS (0x1a0ull)
+#define NPA_AF_RAS_W1S (0x1a8ull)
+#define NPA_AF_RAS_ENA_W1S (0x1b0ull)
+#define NPA_AF_RAS_ENA_W1C (0x1b8ull)
+#define NPA_AF_AQ_CFG (0x600ull)
+#define NPA_AF_AQ_BASE (0x610ull)
+#define NPA_AF_AQ_STATUS (0x620ull)
+#define NPA_AF_AQ_DOOR (0x630ull)
+#define NPA_AF_AQ_DONE_WAIT (0x640ull)
+#define NPA_AF_AQ_DONE (0x650ull)
+#define NPA_AF_AQ_DONE_ACK (0x660ull)
+#define NPA_AF_AQ_DONE_TIMER (0x670ull)
+#define NPA_AF_AQ_DONE_INT (0x680ull)
+#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull)
+#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull)
+#define NPA_AF_BATCH_CTL (0x6a0ull) /* [CN10K, .) */
+#define NPA_AF_BATCH_ACCEPT_CTL (0x6a8ull) /* [CN10K, .) */
+#define NPA_AF_BATCH_ERR_DATA0 (0x6c0ull) /* [CN10K, .) */
+#define NPA_AF_BATCH_ERR_DATA1 (0x6c8ull) /* [CN10K, .) */
+#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18)
+#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18)
+#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18)
+#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18)
+#define NPA_PRIV_AF_INT_CFG (0x10000ull)
+#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8)
+#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8)
+#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull)
+#define NPA_AF_DTX_FILTER_CTL (0x10040ull)
+
+#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3)
+#define NPA_LF_AURA_OP_FREE0 (0x20ull)
+#define NPA_LF_AURA_OP_FREE1 (0x28ull)
+#define NPA_LF_AURA_OP_CNT (0x30ull)
+#define NPA_LF_AURA_OP_LIMIT (0x50ull)
+#define NPA_LF_AURA_OP_INT (0x60ull)
+#define NPA_LF_AURA_OP_THRESH (0x70ull)
+#define NPA_LF_POOL_OP_PC (0x100ull)
+#define NPA_LF_POOL_OP_AVAILABLE (0x110ull)
+#define NPA_LF_POOL_OP_PTR_START0 (0x120ull)
+#define NPA_LF_POOL_OP_PTR_START1 (0x128ull)
+#define NPA_LF_POOL_OP_PTR_END0 (0x130ull)
+#define NPA_LF_POOL_OP_PTR_END1 (0x138ull)
+#define NPA_LF_POOL_OP_INT (0x160ull)
+#define NPA_LF_POOL_OP_THRESH (0x170ull)
+#define NPA_LF_ERR_INT (0x200ull)
+#define NPA_LF_ERR_INT_W1S (0x208ull)
+#define NPA_LF_ERR_INT_ENA_W1C (0x210ull)
+#define NPA_LF_ERR_INT_ENA_W1S (0x218ull)
+#define NPA_LF_RAS (0x220ull)
+#define NPA_LF_RAS_W1S (0x228ull)
+#define NPA_LF_RAS_ENA_W1C (0x230ull)
+#define NPA_LF_RAS_ENA_W1S (0x238ull)
+#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12)
+#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12)
+#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12)
+#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12)
+#define NPA_LF_AURA_BATCH_ALLOC (0x340ull) /* [CN10K, .) */
+#define NPA_LF_AURA_BATCH_FREE0 (0x400ull) /* [CN10K, .) */
+#define NPA_LF_AURA_BATCH_FREEX(a) \
+ (0x400ull | (uint64_t)(a) << 3) /* [CN10K, .) */
+
+/* Enum offsets */
+
+#define NPA_AF_BATCH_FAIL_BATCH_PASS (0x0ull) /* [CN10K, .) */
+#define NPA_AF_BATCH_FAIL_BATCH_CNT_OOR (0x1ull) /* [CN10K, .) */
+#define NPA_AF_BATCH_FAIL_BATCH_STORE_FAIL (0x2ull) /* [CN10K, .) */
+
+#define NPA_AQ_COMP_NOTDONE (0x0ull)
+#define NPA_AQ_COMP_GOOD (0x1ull)
+#define NPA_AQ_COMP_SWERR (0x2ull)
+#define NPA_AQ_COMP_CTX_POISON (0x3ull)
+#define NPA_AQ_COMP_CTX_FAULT (0x4ull)
+#define NPA_AQ_COMP_LOCKERR (0x5ull)
+
+#define NPA_AF_INT_VEC_RVU (0x0ull)
+#define NPA_AF_INT_VEC_GEN (0x1ull)
+#define NPA_AF_INT_VEC_AQ_DONE (0x2ull)
+#define NPA_AF_INT_VEC_AF_ERR (0x3ull)
+#define NPA_AF_INT_VEC_POISON (0x4ull)
+
+#define NPA_AQ_INSTOP_NOP (0x0ull)
+#define NPA_AQ_INSTOP_INIT (0x1ull)
+#define NPA_AQ_INSTOP_WRITE (0x2ull)
+#define NPA_AQ_INSTOP_READ (0x3ull)
+#define NPA_AQ_INSTOP_LOCK (0x4ull)
+#define NPA_AQ_INSTOP_UNLOCK (0x5ull)
+
+#define NPA_AQ_CTYPE_AURA (0x0ull)
+#define NPA_AQ_CTYPE_POOL (0x1ull)
+
+#define NPA_BPINTF_NIX0_RX (0x0ull)
+#define NPA_BPINTF_NIX1_RX (0x1ull)
+
+#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull)
+#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull)
+#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull)
+#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull)
+#define NPA_AURA_ERR_INT_R4 (0x4ull)
+#define NPA_AURA_ERR_INT_R5 (0x5ull)
+#define NPA_AURA_ERR_INT_R6 (0x6ull)
+#define NPA_AURA_ERR_INT_R7 (0x7ull)
+
+#define NPA_LF_INT_VEC_ERR_INT (0x40ull)
+#define NPA_LF_INT_VEC_POISON (0x41ull)
+#define NPA_LF_INT_VEC_QINT_END (0x3full)
+#define NPA_LF_INT_VEC_QINT_START (0x0ull)
+
+#define NPA_INPQ_SSO (0x4ull)
+#define NPA_INPQ_TIM (0x5ull)
+#define NPA_INPQ_DPI (0x6ull)
+#define NPA_INPQ_AURA_OP (0xeull)
+#define NPA_INPQ_INTERNAL_RSV (0xfull)
+#define NPA_INPQ_NIX0_RX (0x0ull)
+#define NPA_INPQ_NIX1_RX (0x2ull)
+#define NPA_INPQ_NIX0_TX (0x1ull)
+#define NPA_INPQ_NIX1_TX (0x3ull)
+#define NPA_INPQ_R_END (0xdull)
+#define NPA_INPQ_R_START (0x7ull)
+
+#define NPA_POOL_ERR_INT_OVFLS (0x0ull)
+#define NPA_POOL_ERR_INT_RANGE (0x1ull)
+#define NPA_POOL_ERR_INT_PERR (0x2ull)
+#define NPA_POOL_ERR_INT_R3 (0x3ull)
+#define NPA_POOL_ERR_INT_R4 (0x4ull)
+#define NPA_POOL_ERR_INT_R5 (0x5ull)
+#define NPA_POOL_ERR_INT_R6 (0x6ull)
+#define NPA_POOL_ERR_INT_R7 (0x7ull)
+
+#define NPA_NDC0_PORT_AURA0 (0x0ull)
+#define NPA_NDC0_PORT_AURA1 (0x1ull)
+#define NPA_NDC0_PORT_POOL0 (0x2ull)
+#define NPA_NDC0_PORT_POOL1 (0x3ull)
+#define NPA_NDC0_PORT_STACK0 (0x4ull)
+#define NPA_NDC0_PORT_STACK1 (0x5ull)
+
+#define NPA_LF_ERR_INT_AURA_DIS (0x0ull)
+#define NPA_LF_ERR_INT_AURA_OOR (0x1ull)
+#define NPA_LF_ERR_INT_AURA_FAULT (0xcull)
+#define NPA_LF_ERR_INT_POOL_FAULT (0xdull)
+#define NPA_LF_ERR_INT_STACK_FAULT (0xeull)
+#define NPA_LF_ERR_INT_QINT_FAULT (0xfull)
+
+#define NPA_BATCH_ALLOC_RESULT_ACCEPTED (0x0ull) /* [CN10K, .) */
+#define NPA_BATCH_ALLOC_RESULT_WAIT (0x1ull) /* [CN10K, .) */
+#define NPA_BATCH_ALLOC_RESULT_ERR (0x2ull) /* [CN10K, .) */
+#define NPA_BATCH_ALLOC_RESULT_NOCORE (0x3ull) /* [CN10K, .) */
+#define NPA_BATCH_ALLOC_RESULT_NOCORE_WAIT (0x4ull) /* [CN10K, .) */
+
+#define NPA_BATCH_ALLOC_CCODE_INVAL (0x0ull) /* [CN10K, .) */
+#define NPA_BATCH_ALLOC_CCODE_VAL (0x1ull) /* [CN10K, .) */
+#define NPA_BATCH_ALLOC_CCODE_VAL_NULL (0x2ull) /* [CN10K, .) */
+
+#define NPA_INPQ_ENAS_REMOTE_PORT (0x0ull) /* [CN10K, .) */
+#define NPA_INPQ_ENAS_RESP_DISABLE (0x702ull) /* [CN10K, .) */
+#define NPA_INPQ_ENAS_NOTIF_DISABLE (0x7cfull) /* [CN10K, .) */
+
+/* Structure definitions */
+
+/* NPA admin queue instruction structure */
+struct npa_aq_inst_s {
+ uint64_t op : 4;
+ uint64_t ctype : 4;
+ uint64_t lf : 9;
+ uint64_t rsvd_23_17 : 7;
+ uint64_t cindex : 20;
+ uint64_t rsvd_62_44 : 19;
+ uint64_t doneint : 1;
+ uint64_t res_addr : 64; /* W1 */
+};
+
+/* NPA admin queue result structure */
+struct npa_aq_res_s {
+ uint64_t op : 4;
+ uint64_t ctype : 4;
+ uint64_t compcode : 8;
+ uint64_t doneint : 1;
+ uint64_t rsvd_63_17 : 47;
+ uint64_t rsvd_127_64 : 64; /* W1 */
+};
+
+/* NPA aura operation write data structure */
+struct npa_aura_op_wdata_s {
+ uint64_t aura : 20;
+ uint64_t rsvd_62_20 : 43;
+ uint64_t drop : 1;
+};
+
+/* NPA aura context structure */
+struct npa_aura_s {
+ uint64_t pool_addr : 64; /* W0 */
+ uint64_t ena : 1;
+ uint64_t rsvd_66_65 : 2;
+ uint64_t pool_caching : 1;
+ uint64_t pool_way_mask : 16;
+ uint64_t avg_con : 9;
+ uint64_t rsvd_93 : 1;
+ uint64_t pool_drop_ena : 1;
+ uint64_t aura_drop_ena : 1;
+ uint64_t bp_ena : 2;
+ uint64_t rsvd_103_98 : 6;
+ uint64_t aura_drop : 8;
+ uint64_t shift : 6;
+ uint64_t rsvd_119_118 : 2;
+ uint64_t avg_level : 8;
+ uint64_t count : 36;
+ uint64_t rsvd_167_164 : 4;
+ uint64_t nix0_bpid : 9;
+ uint64_t rsvd_179_177 : 3;
+ uint64_t nix1_bpid : 9;
+ uint64_t rsvd_191_189 : 3;
+ uint64_t limit : 36;
+ uint64_t rsvd_231_228 : 4;
+ uint64_t bp : 8;
+ uint64_t rsvd_242_240 : 3;
+ uint64_t fc_be : 1; /* [CN10K, .) */
+ uint64_t fc_ena : 1;
+ uint64_t fc_up_crossing : 1;
+ uint64_t fc_stype : 2;
+ uint64_t fc_hyst_bits : 4;
+ uint64_t rsvd_255_252 : 4;
+ uint64_t fc_addr : 64; /* W4 */
+ uint64_t pool_drop : 8;
+ uint64_t update_time : 16;
+ uint64_t err_int : 8;
+ uint64_t err_int_ena : 8;
+ uint64_t thresh_int : 1;
+ uint64_t thresh_int_ena : 1;
+ uint64_t thresh_up : 1;
+ uint64_t rsvd_363 : 1;
+ uint64_t thresh_qint_idx : 7;
+ uint64_t rsvd_371 : 1;
+ uint64_t err_qint_idx : 7;
+ uint64_t rsvd_383_379 : 5;
+ uint64_t thresh : 36;
+ uint64_t rsvd_423_420 : 4;
+ uint64_t fc_msh_dst : 11; /* [CN10K, .) */
+ uint64_t rsvd_447_435 : 13;
+ uint64_t rsvd_511_448 : 64; /* W7 */
+};
+
+/* NPA pool context structure */
+struct npa_pool_s {
+ uint64_t stack_base : 64; /* W0 */
+ uint64_t ena : 1;
+ uint64_t nat_align : 1;
+ uint64_t rsvd_67_66 : 2;
+ uint64_t stack_caching : 1;
+ uint64_t rsvd_71_69 : 3;
+ uint64_t stack_way_mask : 16;
+ uint64_t buf_offset : 12;
+ uint64_t rsvd_103_100 : 4;
+ uint64_t buf_size : 11;
+ uint64_t rsvd_127_115 : 13;
+ uint64_t stack_max_pages : 32;
+ uint64_t stack_pages : 32;
+ uint64_t op_pc : 48;
+ uint64_t rsvd_255_240 : 16;
+ uint64_t stack_offset : 4;
+ uint64_t rsvd_263_260 : 4;
+ uint64_t shift : 6;
+ uint64_t rsvd_271_270 : 2;
+ uint64_t avg_level : 8;
+ uint64_t avg_con : 9;
+ uint64_t fc_ena : 1;
+ uint64_t fc_stype : 2;
+ uint64_t fc_hyst_bits : 4;
+ uint64_t fc_up_crossing : 1;
+ uint64_t fc_be : 1; /* [CN10K, .) */
+ uint64_t rsvd_299_298 : 2;
+ uint64_t update_time : 16;
+ uint64_t rsvd_319_316 : 4;
+ uint64_t fc_addr : 64; /* W5 */
+ uint64_t ptr_start : 64; /* W6 */
+ uint64_t ptr_end : 64; /* W7 */
+ uint64_t rsvd_535_512 : 24;
+ uint64_t err_int : 8;
+ uint64_t err_int_ena : 8;
+ uint64_t thresh_int : 1;
+ uint64_t thresh_int_ena : 1;
+ uint64_t thresh_up : 1;
+ uint64_t rsvd_555 : 1;
+ uint64_t thresh_qint_idx : 7;
+ uint64_t rsvd_563 : 1;
+ uint64_t err_qint_idx : 7;
+ uint64_t rsvd_575_571 : 5;
+ uint64_t thresh : 36;
+ uint64_t rsvd_615_612 : 4;
+ uint64_t fc_msh_dst : 11; /* [CN10K, .) */
+ uint64_t rsvd_639_627 : 13;
+ uint64_t rsvd_703_640 : 64; /* W10 */
+ uint64_t rsvd_767_704 : 64; /* W11 */
+ uint64_t rsvd_831_768 : 64; /* W12 */
+ uint64_t rsvd_895_832 : 64; /* W13 */
+ uint64_t rsvd_959_896 : 64; /* W14 */
+ uint64_t rsvd_1023_960 : 64; /* W15 */
+};
+
+/* NPA queue interrupt context hardware structure */
+struct npa_qint_hw_s {
+ uint32_t count : 22;
+ uint32_t rsvd_30_22 : 9;
+ uint32_t ena : 1;
+};
+
+/* NPA batch allocate compare hardware structure */
+struct npa_batch_alloc_compare_s {
+ uint64_t aura : 20;
+ uint64_t rsvd_31_20 : 12;
+ uint64_t count : 10;
+ uint64_t rsvd_47_42 : 6;
+ uint64_t stype : 2;
+ uint64_t rsvd_61_50 : 12;
+ uint64_t dis_wait : 1;
+ uint64_t drop : 1;
+};
+
+/* NPA batch alloc DMA write status structure */
+struct npa_batch_alloc_status_s {
+ uint8_t count : 5;
+ uint8_t ccode : 2;
+ uint8_t rsvd_7_7 : 1;
+};
+
+typedef enum {
+ ALLOC_RESULT_ACCEPTED = 0,
+ ALLOC_RESULT_WAIT = 1,
+ ALLOC_RESULT_ERR = 2,
+ ALLOC_RESULT_NOCORE = 3,
+ ALLOC_RESULT_NOCORE_WAIT = 4,
+} npa_batch_alloc_result_e;
+
+typedef enum {
+ ALLOC_CCODE_INVAL = 0,
+ ALLOC_CCODE_VAL = 1,
+ ALLOC_CCODE_VAL_NULL = 2,
+} npa_batch_alloc_ccode_e;
+
+typedef enum {
+ ALLOC_STYPE_STF = 0,
+ ALLOC_STYPE_STT = 1,
+ ALLOC_STYPE_STP = 2,
+ ALLOC_STYPE_STSTP = 3,
+} npa_batch_alloc_stype_e;
+
+#endif /* __NPA_HW_H__ */
diff --git a/drivers/common/cnxk/hw/npc.h b/drivers/common/cnxk/hw/npc.h
new file mode 100644
index 0000000..917cf25
--- /dev/null
+++ b/drivers/common/cnxk/hw/npc.h
@@ -0,0 +1,525 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __NPC_HW_H__
+#define __NPC_HW_H__
+
+/* Register offsets */
+
+#define NPC_AF_CFG (0x0ull)
+#define NPC_AF_ACTIVE_PC (0x10ull)
+#define NPC_AF_CONST (0x20ull)
+#define NPC_AF_CONST1 (0x30ull)
+#define NPC_AF_BLK_RST (0x40ull)
+#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull)
+#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull)
+#define NPC_AF_KPUX_CFG(a) (0x500ull | (uint64_t)(a) << 3)
+#define NPC_AF_PCK_CFG (0x600ull)
+#define NPC_AF_PCK_DEF_OL2 (0x610ull)
+#define NPC_AF_PCK_DEF_OIP4 (0x620ull)
+#define NPC_AF_PCK_DEF_OIP6 (0x630ull)
+#define NPC_AF_PCK_DEF_IIP4 (0x640ull)
+#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) (0x800ull | (uint64_t)(a) << 3)
+#define NPC_AF_INTFX_KEX_CFG(a) (0x1010ull | (uint64_t)(a) << 8)
+#define NPC_AF_PKINDX_ACTION0(a) (0x80000ull | (uint64_t)(a) << 6)
+#define NPC_AF_PKINDX_ACTION1(a) (0x80008ull | (uint64_t)(a) << 6)
+#define NPC_AF_PKINDX_CPI_DEFX(a, b) \
+ (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
+#define NPC_AF_CHLEN90B_PKIND (0x3bull)
+#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \
+ (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \
+ (uint64_t)(c) << 3)
+#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \
+ (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
+#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \
+ (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
+#define NPC_AF_KPUX_ENTRY_DISX(a, b) \
+ (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
+#define NPC_AF_CPIX_CFG(a) (0x200000ull | (uint64_t)(a) << 3)
+#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \
+ (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
+ (uint64_t)(c) << 5 | (uint64_t)(d) << 3)
+#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \
+ (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
+ (uint64_t)(c) << 3)
+#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \
+ (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
+ (uint64_t)(c) << 3)
+#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \
+ (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
+ (uint64_t)(c) << 3)
+#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \
+ (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
+ (uint64_t)(c) << 3)
+#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \
+ (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \
+ (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+#define NPC_AF_MATCH_STATX(a) (0x1880008ull | (uint64_t)(a) << 8)
+#define NPC_AF_INTFX_MISS_STAT_ACT(a) (0x1880040ull + 0x8 * (uint64_t)(a))
+#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \
+ (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \
+ (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+#define NPC_AF_INTFX_MISS_ACT(a) (0x1a00000ull | (uint64_t)(a) << 4)
+#define NPC_AF_INTFX_MISS_TAG_ACT(a) (0x1b00008ull | (uint64_t)(a) << 4)
+#define NPC_AF_MCAM_BANKX_HITX(a, b) \
+ (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+#define NPC_AF_LKUP_CTL (0x2000000ull)
+#define NPC_AF_LKUP_DATAX(a) (0x2000200ull | (uint64_t)(a) << 4)
+#define NPC_AF_LKUP_RESULTX(a) (0x2000400ull | (uint64_t)(a) << 4)
+#define NPC_AF_INTFX_STAT(a) (0x2000800ull | (uint64_t)(a) << 4)
+#define NPC_AF_DBG_CTL (0x3000000ull)
+#define NPC_AF_DBG_STATUS (0x3000010ull)
+#define NPC_AF_KPUX_DBG(a) (0x3000020ull | (uint64_t)(a) << 8)
+#define NPC_AF_IKPU_ERR_CTL (0x3000080ull)
+#define NPC_AF_KPUX_ERR_CTL(a) (0x30000a0ull | (uint64_t)(a) << 8)
+#define NPC_AF_MCAM_DBG (0x3001000ull)
+#define NPC_AF_DBG_DATAX(a) (0x3001400ull | (uint64_t)(a) << 4)
+#define NPC_AF_DBG_RESULTX(a) (0x3001800ull | (uint64_t)(a) << 4)
+
+/* Enum offsets */
+
+#define NPC_INTF_NIX0_RX (0x0ull)
+#define NPC_INTF_NIX0_TX (0x1ull)
+
+#define NPC_LKUPOP_PKT (0x0ull)
+#define NPC_LKUPOP_KEY (0x1ull)
+
+#define NPC_MCAM_KEY_X1 (0x0ull)
+#define NPC_MCAM_KEY_X2 (0x1ull)
+#define NPC_MCAM_KEY_X4 (0x2ull)
+
+#ifndef __NPC_ERRLEVELS__
+#define __NPC_ERRLEVELS__
+
+enum NPC_ERRLEV_E {
+ NPC_ERRLEV_RE = 0,
+ NPC_ERRLEV_LA = 1,
+ NPC_ERRLEV_LB = 2,
+ NPC_ERRLEV_LC = 3,
+ NPC_ERRLEV_LD = 4,
+ NPC_ERRLEV_LE = 5,
+ NPC_ERRLEV_LF = 6,
+ NPC_ERRLEV_LG = 7,
+ NPC_ERRLEV_LH = 8,
+ NPC_ERRLEV_R9 = 9,
+ NPC_ERRLEV_R10 = 10,
+ NPC_ERRLEV_R11 = 11,
+ NPC_ERRLEV_R12 = 12,
+ NPC_ERRLEV_R13 = 13,
+ NPC_ERRLEV_R14 = 14,
+ NPC_ERRLEV_NIX = 15,
+ NPC_ERRLEV_ENUM_LAST = 16,
+};
+
+#endif
+
+enum npc_kpu_err_code {
+ NPC_EC_NOERR = 0, /* has to be zero */
+ NPC_EC_UNK,
+ NPC_EC_IH_LENGTH,
+ NPC_EC_EDSA_UNK,
+ NPC_EC_L2_K1,
+ NPC_EC_L2_K2,
+ NPC_EC_L2_K3,
+ NPC_EC_L2_K3_ETYPE_UNK,
+ NPC_EC_L2_K4,
+ NPC_EC_MPLS_2MANY,
+ NPC_EC_MPLS_UNK,
+ NPC_EC_NSH_UNK,
+ NPC_EC_IP_TTL_0,
+ NPC_EC_IP_FRAG_OFFSET_1,
+ NPC_EC_IP_VER,
+ NPC_EC_IP6_HOP_0,
+ NPC_EC_IP6_VER,
+ NPC_EC_TCP_FLAGS_FIN_ONLY,
+ NPC_EC_TCP_FLAGS_ZERO,
+ NPC_EC_TCP_FLAGS_RST_FIN,
+ NPC_EC_TCP_FLAGS_URG_SYN,
+ NPC_EC_TCP_FLAGS_RST_SYN,
+ NPC_EC_TCP_FLAGS_SYN_FIN,
+ NPC_EC_VXLAN,
+ NPC_EC_NVGRE,
+ NPC_EC_GRE,
+ NPC_EC_GRE_VER1,
+ NPC_EC_L4,
+ NPC_EC_OIP4_CSUM,
+ NPC_EC_IIP4_CSUM,
+ NPC_EC_LAST /* has to be the last item */
+};
+
+enum NPC_LID_E {
+ NPC_LID_LA = 0,
+ NPC_LID_LB,
+ NPC_LID_LC,
+ NPC_LID_LD,
+ NPC_LID_LE,
+ NPC_LID_LF,
+ NPC_LID_LG,
+ NPC_LID_LH,
+};
+
+#ifndef __NPC_LT_TYPES__
+#define __NPC_LT_TYPES__
+#define NPC_LT_NA 0
+
+enum npc_kpu_la_ltype {
+ NPC_LT_LA_8023 = 1,
+ NPC_LT_LA_ETHER,
+ NPC_LT_LA_IH_NIX_ETHER,
+ NPC_LT_LA_IH_8_ETHER,
+ NPC_LT_LA_IH_4_ETHER,
+ NPC_LT_LA_IH_2_ETHER,
+ NPC_LT_LA_HIGIG2_ETHER,
+ NPC_LT_LA_IH_NIX_HIGIG2_ETHER,
+ NPC_LT_LA_CH_LEN_90B_ETHER,
+ NPC_LT_LA_CPT_HDR,
+ NPC_LT_LA_CUSTOM0 = 0xE,
+ NPC_LT_LA_CUSTOM1 = 0xF,
+};
+
+enum npc_kpu_lb_ltype {
+ NPC_LT_LB_ETAG = 1,
+ NPC_LT_LB_CTAG,
+ NPC_LT_LB_STAG_QINQ,
+ NPC_LT_LB_BTAG,
+ NPC_LT_LB_ITAG,
+ NPC_LT_LB_DSA,
+ NPC_LT_LB_DSA_VLAN,
+ NPC_LT_LB_EDSA,
+ NPC_LT_LB_EDSA_VLAN,
+ NPC_LT_LB_EXDSA,
+ NPC_LT_LB_EXDSA_VLAN,
+ NPC_LT_LB_FDSA,
+ NPC_LT_LB_CUSTOM0 = 0xE,
+ NPC_LT_LB_CUSTOM1 = 0xF,
+};
+
+enum npc_kpu_lc_ltype {
+ NPC_LT_LC_PTP = 1,
+ NPC_LT_LC_IP,
+ NPC_LT_LC_IP_OPT,
+ NPC_LT_LC_IP6,
+ NPC_LT_LC_IP6_EXT,
+ NPC_LT_LC_ARP,
+ NPC_LT_LC_RARP,
+ NPC_LT_LC_MPLS,
+ NPC_LT_LC_NSH,
+ NPC_LT_LC_FCOE,
+ NPC_LT_LC_CUSTOM0 = 0xE,
+ NPC_LT_LC_CUSTOM1 = 0xF,
+};
+
+/* Don't modify Ltypes up to SCTP, otherwise it will
+ * affect flow tag calculation and thus RSS.
+ */
+enum npc_kpu_ld_ltype {
+ NPC_LT_LD_TCP = 1,
+ NPC_LT_LD_UDP,
+ NPC_LT_LD_ICMP,
+ NPC_LT_LD_SCTP,
+ NPC_LT_LD_ICMP6,
+ NPC_LT_LD_CUSTOM0,
+ NPC_LT_LD_CUSTOM1,
+ NPC_LT_LD_IGMP = 8,
+ NPC_LT_LD_AH,
+ NPC_LT_LD_GRE,
+ NPC_LT_LD_NVGRE,
+ NPC_LT_LD_NSH,
+ NPC_LT_LD_TU_MPLS_IN_NSH,
+ NPC_LT_LD_TU_MPLS_IN_IP,
+};
+
+enum npc_kpu_le_ltype {
+ NPC_LT_LE_VXLAN = 1,
+ NPC_LT_LE_GENEVE,
+ NPC_LT_LE_ESP,
+ NPC_LT_LE_GTPU = 4,
+ NPC_LT_LE_VXLANGPE,
+ NPC_LT_LE_GTPC,
+ NPC_LT_LE_NSH,
+ NPC_LT_LE_TU_MPLS_IN_GRE,
+ NPC_LT_LE_TU_NSH_IN_GRE,
+ NPC_LT_LE_TU_MPLS_IN_UDP,
+ NPC_LT_LE_CUSTOM0 = 0xE,
+ NPC_LT_LE_CUSTOM1 = 0xF,
+};
+
+#endif
+
+enum npc_kpu_lf_ltype {
+ NPC_LT_LF_TU_ETHER = 1,
+ NPC_LT_LF_TU_PPP,
+ NPC_LT_LF_TU_MPLS_IN_VXLANGPE,
+ NPC_LT_LF_TU_NSH_IN_VXLANGPE,
+ NPC_LT_LF_TU_MPLS_IN_NSH,
+ NPC_LT_LF_TU_3RD_NSH,
+ NPC_LT_LF_CUSTOM0 = 0xE,
+ NPC_LT_LF_CUSTOM1 = 0xF,
+};
+
+enum npc_kpu_lg_ltype {
+ NPC_LT_LG_TU_IP = 1,
+ NPC_LT_LG_TU_IP6,
+ NPC_LT_LG_TU_ARP,
+ NPC_LT_LG_TU_ETHER_IN_NSH,
+ NPC_LT_LG_CUSTOM0 = 0xE,
+ NPC_LT_LG_CUSTOM1 = 0xF,
+};
+
+/* Don't modify Ltypes up to SCTP, otherwise it will
+ * affect flow tag calculation and thus RSS.
+ */
+enum npc_kpu_lh_ltype {
+ NPC_LT_LH_TU_TCP = 1,
+ NPC_LT_LH_TU_UDP,
+ NPC_LT_LH_TU_ICMP,
+ NPC_LT_LH_TU_SCTP,
+ NPC_LT_LH_TU_ICMP6,
+ NPC_LT_LH_TU_IGMP = 8,
+ NPC_LT_LH_TU_ESP,
+ NPC_LT_LH_TU_AH,
+ NPC_LT_LH_CUSTOM0 = 0xE,
+ NPC_LT_LH_CUSTOM1 = 0xF,
+};
+
+enum npc_kpu_lb_uflag {
+ NPC_F_LB_U_UNK_ETYPE = 0x80,
+ NPC_F_LB_U_MORE_TAG = 0x40,
+};
+
+enum npc_kpu_lb_lflag {
+ NPC_F_LB_L_WITH_CTAG = 1,
+ NPC_F_LB_L_WITH_CTAG_UNK,
+ NPC_F_LB_L_WITH_STAG_CTAG,
+ NPC_F_LB_L_WITH_STAG_STAG,
+ NPC_F_LB_L_WITH_QINQ_CTAG,
+ NPC_F_LB_L_WITH_QINQ_QINQ,
+ NPC_F_LB_L_WITH_ITAG,
+ NPC_F_LB_L_WITH_ITAG_STAG,
+ NPC_F_LB_L_WITH_ITAG_CTAG,
+ NPC_F_LB_L_WITH_ITAG_UNK,
+ NPC_F_LB_L_WITH_BTAG_ITAG,
+ NPC_F_LB_L_WITH_STAG,
+ NPC_F_LB_L_WITH_QINQ,
+ NPC_F_LB_L_DSA,
+ NPC_F_LB_L_DSA_VLAN,
+ NPC_F_LB_L_EDSA,
+ NPC_F_LB_L_EDSA_VLAN,
+ NPC_F_LB_L_EXDSA,
+ NPC_F_LB_L_EXDSA_VLAN,
+ NPC_F_LB_L_FDSA,
+};
+
+enum npc_kpu_lc_uflag {
+ NPC_F_LC_U_UNK_PROTO = 0x10,
+ NPC_F_LC_U_IP_FRAG = 0x20,
+ NPC_F_LC_U_IP6_FRAG = 0x40,
+};
+
+/* Structure definitions */
+struct npc_kpu_profile_cam {
+ uint8_t state;
+ uint8_t state_mask;
+ uint16_t dp0;
+ uint16_t dp0_mask;
+ uint16_t dp1;
+ uint16_t dp1_mask;
+ uint16_t dp2;
+ uint16_t dp2_mask;
+};
+
+struct npc_kpu_profile_action {
+ uint8_t errlev;
+ uint8_t errcode;
+ uint8_t dp0_offset;
+ uint8_t dp1_offset;
+ uint8_t dp2_offset;
+ uint8_t bypass_count;
+ uint8_t parse_done;
+ uint8_t next_state;
+ uint8_t ptr_advance;
+ uint8_t cap_ena;
+ uint8_t lid;
+ uint8_t ltype;
+ uint8_t flags;
+ uint8_t offset;
+ uint8_t mask;
+ uint8_t right;
+ uint8_t shift;
+};
+
+struct npc_kpu_profile {
+ int cam_entries;
+ int action_entries;
+ struct npc_kpu_profile_cam *cam;
+ struct npc_kpu_profile_action *action;
+};
+
+/* NPC KPU register formats */
+struct npc_kpu_cam {
+ uint64_t dp0_data : 16;
+ uint64_t dp1_data : 16;
+ uint64_t dp2_data : 16;
+ uint64_t state : 8;
+ uint64_t rsvd_63_56 : 8;
+};
+
+struct npc_kpu_action0 {
+ uint64_t var_len_shift : 3;
+ uint64_t var_len_right : 1;
+ uint64_t var_len_mask : 8;
+ uint64_t var_len_offset : 8;
+ uint64_t ptr_advance : 8;
+ uint64_t capture_flags : 8;
+ uint64_t capture_ltype : 4;
+ uint64_t capture_lid : 3;
+ uint64_t rsvd_43 : 1;
+ uint64_t next_state : 8;
+ uint64_t parse_done : 1;
+ uint64_t capture_ena : 1;
+ uint64_t byp_count : 3;
+ uint64_t rsvd_63_57 : 7;
+};
+
+struct npc_kpu_action1 {
+ uint64_t dp0_offset : 8;
+ uint64_t dp1_offset : 8;
+ uint64_t dp2_offset : 8;
+ uint64_t errcode : 8;
+ uint64_t errlev : 4;
+ uint64_t rsvd_63_36 : 28;
+};
+
+struct npc_kpu_pkind_cpi_def {
+ uint64_t cpi_base : 10;
+ uint64_t rsvd_11_10 : 2;
+ uint64_t add_shift : 3;
+ uint64_t rsvd_15 : 1;
+ uint64_t add_mask : 8;
+ uint64_t add_offset : 8;
+ uint64_t flags_mask : 8;
+ uint64_t flags_match : 8;
+ uint64_t ltype_mask : 4;
+ uint64_t ltype_match : 4;
+ uint64_t lid : 3;
+ uint64_t rsvd_62_59 : 4;
+ uint64_t ena : 1;
+};
+
+struct nix_rx_action {
+ uint64_t op : 4;
+ uint64_t pf_func : 16;
+ uint64_t index : 20;
+ uint64_t match_id : 16;
+ uint64_t flow_key_alg : 5;
+ uint64_t rsvd_63_61 : 3;
+};
+
+struct nix_tx_action {
+ uint64_t op : 4;
+ uint64_t rsvd_11_4 : 8;
+ uint64_t index : 20;
+ uint64_t match_id : 16;
+ uint64_t rsvd_63_48 : 16;
+};
+
+/* NPC layer parse information structure */
+struct npc_layer_info_s {
+ uint32_t lptr : 8;
+ uint32_t flags : 8;
+ uint32_t ltype : 4;
+ uint32_t rsvd_31_20 : 12;
+};
+
+/* NPC layer mcam search key extract structure */
+struct npc_layer_kex_s {
+ uint16_t flags : 8;
+ uint16_t ltype : 4;
+ uint16_t rsvd_15_12 : 4;
+};
+
+/* NPC mcam search key x1 structure */
+struct npc_mcam_key_x1_s {
+ uint64_t intf : 2;
+ uint64_t rsvd_63_2 : 62;
+ uint64_t kw0 : 64; /* W1 */
+ uint64_t kw1 : 48;
+ uint64_t rsvd_191_176 : 16;
+};
+
+/* NPC mcam search key x2 structure */
+struct npc_mcam_key_x2_s {
+ uint64_t intf : 2;
+ uint64_t rsvd_63_2 : 62;
+ uint64_t kw0 : 64; /* W1 */
+ uint64_t kw1 : 64; /* W2 */
+ uint64_t kw2 : 64; /* W3 */
+ uint64_t kw3 : 32;
+ uint64_t rsvd_319_288 : 32;
+};
+
+/* NPC mcam search key x4 structure */
+struct npc_mcam_key_x4_s {
+ uint64_t intf : 2;
+ uint64_t rsvd_63_2 : 62;
+ uint64_t kw0 : 64; /* W1 */
+ uint64_t kw1 : 64; /* W2 */
+ uint64_t kw2 : 64; /* W3 */
+ uint64_t kw3 : 64; /* W4 */
+ uint64_t kw4 : 64; /* W5 */
+ uint64_t kw5 : 64; /* W6 */
+ uint64_t kw6 : 64; /* W7 */
+};
+
+/* NPC parse key extract structure */
+struct npc_parse_kex_s {
+ uint64_t chan : 12;
+ uint64_t errlev : 4;
+ uint64_t errcode : 8;
+ uint64_t l2m : 1;
+ uint64_t l2b : 1;
+ uint64_t l3m : 1;
+ uint64_t l3b : 1;
+ uint64_t la : 12;
+ uint64_t lb : 12;
+ uint64_t lc : 12;
+ uint64_t ld : 12;
+ uint64_t le : 12;
+ uint64_t lf : 12;
+ uint64_t lg : 12;
+ uint64_t lh : 12;
+ uint64_t rsvd_127_124 : 4;
+};
+
+/* NPC result structure */
+struct npc_result_s {
+ uint64_t intf : 2;
+ uint64_t pkind : 6;
+ uint64_t chan : 12;
+ uint64_t errlev : 4;
+ uint64_t errcode : 8;
+ uint64_t l2m : 1;
+ uint64_t l2b : 1;
+ uint64_t l3m : 1;
+ uint64_t l3b : 1;
+ uint64_t eoh_ptr : 8;
+ uint64_t rsvd_63_44 : 20;
+ uint64_t action : 64; /* W1 */
+ uint64_t vtag_action : 64; /* W2 */
+ uint64_t la : 20;
+ uint64_t lb : 20;
+ uint64_t lc : 20;
+ uint64_t rsvd_255_252 : 4;
+ uint64_t ld : 20;
+ uint64_t le : 20;
+ uint64_t lf : 20;
+ uint64_t rsvd_319_316 : 4;
+ uint64_t lg : 20;
+ uint64_t lh : 20;
+ uint64_t rsvd_383_360 : 24;
+};
+
+#endif /* __NPC_HW_H__ */
diff --git a/drivers/common/cnxk/hw/rvu.h b/drivers/common/cnxk/hw/rvu.h
new file mode 100644
index 0000000..9dd4c25
--- /dev/null
+++ b/drivers/common/cnxk/hw/rvu.h
@@ -0,0 +1,221 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __RVU_HW_H__
+#define __RVU_HW_H__
+
+/* Register offsets */
+
+#define RVU_AF_MSIXTR_BASE (0x10ull)
+#define RVU_AF_BLK_RST (0x30ull)
+#define RVU_AF_PF_BAR4_ADDR (0x40ull)
+#define RVU_AF_RAS (0x100ull)
+#define RVU_AF_RAS_W1S (0x108ull)
+#define RVU_AF_RAS_ENA_W1S (0x110ull)
+#define RVU_AF_RAS_ENA_W1C (0x118ull)
+#define RVU_AF_GEN_INT (0x120ull)
+#define RVU_AF_GEN_INT_W1S (0x128ull)
+#define RVU_AF_GEN_INT_ENA_W1S (0x130ull)
+#define RVU_AF_GEN_INT_ENA_W1C (0x138ull)
+#define RVU_AF_AFPFX_MBOXX(a, b) \
+ (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3)
+#define RVU_AF_PFME_STATUS (0x2800ull)
+#define RVU_AF_PFTRPEND (0x2810ull)
+#define RVU_AF_PFTRPEND_W1S (0x2820ull)
+#define RVU_AF_PF_RST (0x2840ull)
+#define RVU_AF_HWVF_RST (0x2850ull)
+#define RVU_AF_PFAF_MBOX_INT (0x2880ull)
+#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull)
+#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull)
+#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull)
+#define RVU_AF_PFFLR_INT (0x28a0ull)
+#define RVU_AF_PFFLR_INT_W1S (0x28a8ull)
+#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull)
+#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull)
+#define RVU_AF_PFME_INT (0x28c0ull)
+#define RVU_AF_PFME_INT_W1S (0x28c8ull)
+#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull)
+#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull)
+#define RVU_PRIV_CONST (0x8000000ull)
+#define RVU_PRIV_GEN_CFG (0x8000010ull)
+#define RVU_PRIV_CLK_CFG (0x8000020ull)
+#define RVU_PRIV_ACTIVE_PC (0x8000030ull)
+#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_NIXX_CFG(a, b) \
+ (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
+#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_PFX_CPTX_CFG(a, b) \
+ (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
+#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3)
+#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \
+ (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
+#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16)
+#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \
+ (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
+
+#define RVU_PF_VFX_PFVF_MBOXX(a, b) \
+ (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3)
+#define RVU_PF_VF_BAR4_ADDR (0x10ull)
+#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) << 3)
+#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3)
+#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3)
+#define RVU_PF_INT (0xc20ull)
+#define RVU_PF_INT_W1S (0xc28ull)
+#define RVU_PF_INT_ENA_W1S (0xc30ull)
+#define RVU_PF_INT_ENA_W1C (0xc38ull)
+#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
+#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
+#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
+#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3)
+#define RVU_VF_INT (0x20ull)
+#define RVU_VF_INT_W1S (0x28ull)
+#define RVU_VF_INT_ENA_W1S (0x30ull)
+#define RVU_VF_INT_ENA_W1C (0x38ull)
+#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
+#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
+#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
+#define RVU_VF_MBOX_REGION (0xc0000ull) /* [CN10K, .) */
+#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
+
+/* Enum offsets */
+
+#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull)
+#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull)
+#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \
+ (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25))
+
+#define RVU_AF_INT_VEC_POISON (0x0ull)
+#define RVU_AF_INT_VEC_PFFLR (0x1ull)
+#define RVU_AF_INT_VEC_PFME (0x2ull)
+#define RVU_AF_INT_VEC_GEN (0x3ull)
+#define RVU_AF_INT_VEC_MBOX (0x4ull)
+
+#define RVU_BLOCK_TYPE_RVUM (0x0ull)
+#define RVU_BLOCK_TYPE_LMT (0x2ull)
+#define RVU_BLOCK_TYPE_NIX (0x3ull)
+#define RVU_BLOCK_TYPE_NPA (0x4ull)
+#define RVU_BLOCK_TYPE_NPC (0x5ull)
+#define RVU_BLOCK_TYPE_SSO (0x6ull)
+#define RVU_BLOCK_TYPE_SSOW (0x7ull)
+#define RVU_BLOCK_TYPE_TIM (0x8ull)
+#define RVU_BLOCK_TYPE_CPT (0x9ull)
+#define RVU_BLOCK_TYPE_NDC (0xaull)
+#define RVU_BLOCK_TYPE_DDF (0xbull)
+#define RVU_BLOCK_TYPE_ZIP (0xcull)
+#define RVU_BLOCK_TYPE_RAD (0xdull)
+#define RVU_BLOCK_TYPE_DFA (0xeull)
+#define RVU_BLOCK_TYPE_HNA (0xfull)
+
+#define RVU_BLOCK_ADDR_RVUM (0x0ull)
+#define RVU_BLOCK_ADDR_LMT (0x1ull)
+#define RVU_BLOCK_ADDR_NPA (0x3ull)
+#define RVU_BLOCK_ADDR_NIX0 (0x4ull)
+#define RVU_BLOCK_ADDR_NIX1 (0x5ull)
+#define RVU_BLOCK_ADDR_NPC (0x6ull)
+#define RVU_BLOCK_ADDR_SSO (0x7ull)
+#define RVU_BLOCK_ADDR_SSOW (0x8ull)
+#define RVU_BLOCK_ADDR_TIM (0x9ull)
+#define RVU_BLOCK_ADDR_CPT0 (0xaull)
+#define RVU_BLOCK_ADDR_NDC0 (0xcull)
+#define RVU_BLOCK_ADDR_NDC1 (0xdull)
+#define RVU_BLOCK_ADDR_NDC2 (0xeull)
+#define RVU_BLOCK_ADDR_R_END (0x1full)
+#define RVU_BLOCK_ADDR_R_START (0x14ull)
+
+#define RVU_VF_INT_VEC_MBOX (0x0ull)
+
+#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull)
+#define RVU_PF_INT_VEC_VFFLR0 (0x0ull)
+#define RVU_PF_INT_VEC_VFFLR1 (0x1ull)
+#define RVU_PF_INT_VEC_VFME0 (0x2ull)
+#define RVU_PF_INT_VEC_VFME1 (0x3ull)
+#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull)
+#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull)
+
+#define AF_BAR2_ALIASX_SIZE (0x100000ull)
+
+#define TIM_AF_BAR2_SEL (0x9000000ull)
+#define SSO_AF_BAR2_SEL (0x9000000ull)
+#define NIX_AF_BAR2_SEL (0x9000000ull)
+#define SSOW_AF_BAR2_SEL (0x9000000ull)
+#define NPA_AF_BAR2_SEL (0x9000000ull)
+#define CPT_AF_BAR2_SEL (0x9000000ull)
+#define RVU_AF_BAR2_SEL (0x9000000ull)
+
+#define AF_BAR2_ALIASX(a, b) \
+ (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b))
+#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
+#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
+#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
+#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
+#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
+#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
+#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
+
+/* Structure definitions */
+
+/* RVU admin function register address structure */
+struct rvu_af_addr_s {
+ uint64_t addr : 28;
+ uint64_t block : 5;
+ uint64_t rsvd_63_33 : 31;
+};
+
+/* RVU function-unique address structure */
+struct rvu_func_addr_s {
+ uint32_t addr : 12;
+ uint32_t lf_slot : 8;
+ uint32_t block : 5;
+ uint32_t rsvd_31_25 : 7;
+};
+
+/* RVU MSI-X vector structure */
+struct rvu_msix_vec_s {
+ uint64_t addr : 64; /* W0 */
+ uint64_t data : 32;
+ uint64_t mask : 1;
+ uint64_t pend : 1;
+ uint64_t rsvd_127_98 : 30;
+};
+
+/* RVU PF function identification structure */
+struct rvu_pf_func_s {
+ uint16_t func : 10;
+ uint16_t pf : 6;
+};
+
+#define RVU_CN9K_LMT_SLOT_MAX 256ULL
+#define RVU_CN9K_LMT_SLOT_MASK (RVU_CN9K_LMT_SLOT_MAX - 1)
+
+#define RVU_LMT_SZ 128ULL
+
+/* 2048 LMT lines in BAR4 [CN10K, .) */
+#define RVU_LMT_LINE_MAX 2048
+#define RVU_LMT_LINE_BURST_MAX (uint16_t)32 /* [CN10K, .) */
+
+#endif /* __RVU_HW_H__ */
diff --git a/drivers/common/cnxk/hw/sdp.h b/drivers/common/cnxk/hw/sdp.h
new file mode 100644
index 0000000..46d210e
--- /dev/null
+++ b/drivers/common/cnxk/hw/sdp.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __SDP_HW_H_
+#define __SDP_HW_H_
+
+/* SDP VF IOQs */
+#define SDP_MIN_RINGS_PER_VF (1)
+#define SDP_MAX_RINGS_PER_VF (8)
+
+/* SDP VF IQ configuration */
+#define SDP_VF_MAX_IQ_DESCRIPTORS (512)
+#define SDP_VF_MIN_IQ_DESCRIPTORS (128)
+
+#define SDP_VF_DB_MIN (1)
+#define SDP_VF_DB_TIMEOUT (1)
+#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF)
+
+#define SDP_VF_64BYTE_INSTR (64)
+#define SDP_VF_32BYTE_INSTR (32)
+
+/* SDP VF OQ configuration */
+#define SDP_VF_MAX_OQ_DESCRIPTORS (512)
+#define SDP_VF_MIN_OQ_DESCRIPTORS (128)
+#define SDP_VF_OQ_BUF_SIZE (2048)
+#define SDP_VF_OQ_REFIL_THRESHOLD (16)
+
+#define SDP_VF_OQ_INFOPTR_MODE (1)
+#define SDP_VF_OQ_BUFPTR_MODE (0)
+
+#define SDP_VF_OQ_INTR_PKT (1)
+#define SDP_VF_OQ_INTR_TIME (10)
+#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF
+
+/* Wait time in milliseconds for FLR */
+#define SDP_VF_PCI_FLR_WAIT (100)
+#define SDP_VF_BUSY_LOOP_COUNT (10000)
+
+#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF
+#define SDP_VF_MIN_IO_QUEUES SDP_MIN_RINGS_PER_VF
+
+/* SDP VF IOQs per rawdev */
+#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES
+#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES
+
+/* SDP VF Register definitions */
+#define SDP_VF_RING_OFFSET (0x1ull << 17)
+
+/* SDP VF IQ Registers */
+#define SDP_VF_R_IN_CONTROL_START (0x10000)
+#define SDP_VF_R_IN_ENABLE_START (0x10010)
+#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020)
+#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030)
+#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040)
+#define SDP_VF_R_IN_CNTS_START (0x10050)
+#define SDP_VF_R_IN_INT_LEVELS_START (0x10060)
+#define SDP_VF_R_IN_PKT_CNT_START (0x10080)
+#define SDP_VF_R_IN_BYTE_CNT_START (0x10090)
+
+#define SDP_VF_R_IN_CONTROL(ring) \
+ (SDP_VF_R_IN_CONTROL_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_ENABLE(ring) \
+ (SDP_VF_R_IN_ENABLE_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_INSTR_BADDR(ring) \
+ (SDP_VF_R_IN_INSTR_BADDR_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_INSTR_RSIZE(ring) \
+ (SDP_VF_R_IN_INSTR_RSIZE_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_INSTR_DBELL(ring) \
+ (SDP_VF_R_IN_INSTR_DBELL_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_CNTS(ring) \
+ (SDP_VF_R_IN_CNTS_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_INT_LEVELS(ring) \
+ (SDP_VF_R_IN_INT_LEVELS_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_PKT_CNT(ring) \
+ (SDP_VF_R_IN_PKT_CNT_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_IN_BYTE_CNT(ring) \
+ (SDP_VF_R_IN_BYTE_CNT_START + (SDP_VF_RING_OFFSET * (ring)))
+
+/* SDP VF IQ Masks */
+#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF)
+#define SDP_VF_R_IN_CTL_RPVF_POS (48)
+
+#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28)
+#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */
+#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24)
+#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8)
+#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6)
+#define SDP_VF_R_IN_CTL_D_ROR (0x1ull << 5)
+#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3)
+#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1)
+#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0)
+
+#define SDP_VF_R_IN_CTL_MASK (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B)
+
+/* SDP VF OQ Registers */
+#define SDP_VF_R_OUT_CNTS_START (0x10100)
+#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110)
+#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120)
+#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130)
+#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140)
+#define SDP_VF_R_OUT_CONTROL_START (0x10150)
+#define SDP_VF_R_OUT_ENABLE_START (0x10160)
+#define SDP_VF_R_OUT_PKT_CNT_START (0x10180)
+#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190)
+
+#define SDP_VF_R_OUT_CONTROL(ring) \
+ (SDP_VF_R_OUT_CONTROL_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_ENABLE(ring) \
+ (SDP_VF_R_OUT_ENABLE_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_SLIST_BADDR(ring) \
+ (SDP_VF_R_OUT_SLIST_BADDR_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \
+ (SDP_VF_R_OUT_SLIST_RSIZE_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_SLIST_DBELL(ring) \
+ (SDP_VF_R_OUT_SLIST_DBELL_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_CNTS(ring) \
+ (SDP_VF_R_OUT_CNTS_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_INT_LEVELS(ring) \
+ (SDP_VF_R_OUT_INT_LEVELS_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_PKT_CNT(ring) \
+ (SDP_VF_R_OUT_PKT_CNT_START + (SDP_VF_RING_OFFSET * (ring)))
+
+#define SDP_VF_R_OUT_BYTE_CNT(ring) \
+ (SDP_VF_R_OUT_BYTE_CNT_START + (SDP_VF_RING_OFFSET * (ring)))
+
+/* SDP VF OQ Masks */
+#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40)
+#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34)
+#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33)
+#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32)
+#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30)
+#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29)
+#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28)
+#define SDP_VF_R_OUT_CTL_ES_P (1ull << 26)
+#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25)
+#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24)
+#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23)
+
+#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63)
+#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32)
+
+/* SDP Instruction Header */
+struct sdp_instr_ih {
+ /* Data Len */
+ uint64_t tlen : 16;
+
+ /* Reserved1 */
+ uint64_t rsvd1 : 20;
+
+ /* PKIND for SDP */
+ uint64_t pkind : 6;
+
+ /* Front Data size */
+ uint64_t fsz : 6;
+
+ /* No. of entries in gather list */
+ uint64_t gsz : 14;
+
+ /* Gather indicator */
+ uint64_t gather : 1;
+
+ /* Reserved2 */
+ uint64_t rsvd2 : 1;
+} __plt_packed;
+
+#endif /* __SDP_HW_H_ */
diff --git a/drivers/common/cnxk/hw/sso.h b/drivers/common/cnxk/hw/sso.h
new file mode 100644
index 0000000..2311b5a
--- /dev/null
+++ b/drivers/common/cnxk/hw/sso.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __SSO_HW_H__
+#define __SSO_HW_H__
+
+/* Register offsets */
+
+#define SSO_AF_CONST (0x1000ull)
+#define SSO_AF_CONST1 (0x1008ull)
+#define SSO_AF_WQ_INT_PC (0x1020ull)
+#define SSO_AF_NOS_CNT (0x1050ull) /* [CN9K, CN10K) */
+#define SSO_AF_GWS_INV (0x1060ull) /* [CN10K, .) */
+#define SSO_AF_AW_WE (0x1080ull)
+#define SSO_AF_WS_CFG (0x1088ull)
+#define SSO_AF_GWE_CFG (0x1098ull)
+#define SSO_AF_GWE_RANDOM (0x10b0ull)
+#define SSO_AF_LF_HWGRP_RST (0x10e0ull)
+#define SSO_AF_AW_CFG (0x10f0ull)
+#define SSO_AF_BLK_RST (0x10f8ull)
+#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull)
+#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull)
+#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull)
+#define SSO_AF_ERR0 (0x1220ull)
+#define SSO_AF_ERR0_W1S (0x1228ull)
+#define SSO_AF_ERR0_ENA_W1C (0x1230ull)
+#define SSO_AF_ERR0_ENA_W1S (0x1238ull)
+#define SSO_AF_ERR2 (0x1260ull)
+#define SSO_AF_ERR2_W1S (0x1268ull)
+#define SSO_AF_ERR2_ENA_W1C (0x1270ull)
+#define SSO_AF_ERR2_ENA_W1S (0x1278ull)
+#define SSO_AF_UNMAP_INFO (0x12f0ull)
+#define SSO_AF_UNMAP_INFO2 (0x1300ull)
+#define SSO_AF_UNMAP_INFO3 (0x1310ull)
+#define SSO_AF_RAS (0x1420ull)
+#define SSO_AF_RAS_W1S (0x1430ull)
+#define SSO_AF_RAS_ENA_W1C (0x1460ull)
+#define SSO_AF_RAS_ENA_W1S (0x1470ull)
+#define SSO_AF_AW_INP_CTL (0x2070ull)
+#define SSO_AF_AW_ADD (0x2080ull)
+#define SSO_AF_AW_READ_ARB (0x2090ull)
+#define SSO_AF_XAQ_REQ_PC (0x20b0ull)
+#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull)
+#define SSO_AF_TAQ_CNT (0x20c0ull)
+#define SSO_AF_TAQ_ADD (0x20e0ull)
+#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3)
+#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3)
+#define SSO_PRIV_AF_INT_CFG (0x3000ull)
+#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull)
+#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3)
+#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3)
+#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3)
+#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3)
+#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_AW_FWD(a) \
+ (0x200030ull | (uint64_t)(a) << 12) /* [CN10K, .) */
+#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_LS_PC(a) \
+ (0x2000c0ull | (uint64_t)(a) << 12) /* [CN10K, .) */
+#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12)
+#define SSO_AF_HWSX_LSW_CFG(a) \
+ (0x400300ull | (uint64_t)(a) << 12) /* [CN10K, .) */
+#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \
+ (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \
+ (uint64_t)(c) << 3)
+#define SSO_AF_TILEMAPX(a) \
+ (0x400600ull | (uint64_t)(a) << 12) /* [CN10K, .) */
+#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3)
+#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3)
+#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3)
+#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3)
+#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3)
+#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3)
+#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3)
+#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3)
+#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3)
+#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3)
+#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3)
+#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3)
+#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3)
+#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3)
+#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3)
+#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3)
+#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3)
+#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3)
+#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3)
+#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3)
+#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3)
+#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3)
+#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3)
+#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3)
+#define SSO_AF_WS_AWE_DIGESTX(a) \
+ (0x902400ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */
+#define SSO_AF_WS_AWE_DIGESTX_W1S(a) \
+ (0x902500ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */
+#define SSO_AF_WS_GWI_DIGESTX(a) \
+ (0x902600ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */
+#define SSO_AF_WS_GWI_DIGESTX_W1S(a) \
+ (0x902700ull | (uint64_t)(a) << 3) /* [CN9K, CN10K) */
+#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3)
+#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3)
+#define SSO_AF_IENTX_LSW(a) \
+ (0xac0000ull | (uint64_t)(a) << 3) /* [CN10K, .) */
+
+#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3)
+#define SSO_AF_TAQX_WAEX_TAG(a, b) \
+ (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+#define SSO_AF_TAQX_WAEX_WQP(a, b) \
+ (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
+
+#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
+#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
+#define SSO_LF_GGRP_QCTL (0x20ull)
+#define SSO_LF_GGRP_EXE_DIS (0x80ull)
+#define SSO_LF_GGRP_INT (0x100ull)
+#define SSO_LF_GGRP_INT_W1S (0x108ull)
+#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
+#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
+#define SSO_LF_GGRP_INT_THR (0x140ull)
+#define SSO_LF_GGRP_INT_CNT (0x180ull)
+#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
+#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
+#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
+#define SSO_LF_GGRP_MISC_CNT (0x200ull)
+
+#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull
+#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull
+#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16
+#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK
+#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull
+#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16
+#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull
+#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull
+#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32
+#define SSO_HWGRP_IAQ_RSVD_THR 0x2
+
+#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull
+#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull
+#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16
+#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK
+#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull
+#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16
+#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull
+#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull
+#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32
+#define SSO_HWGRP_TAQ_RSVD_THR 0x3
+
+#define SSO_HWGRP_PRI_AFF_MASK 0xFull
+#define SSO_HWGRP_PRI_AFF_SHIFT 8
+#define SSO_HWGRP_PRI_WGT_MASK 0x3Full
+#define SSO_HWGRP_PRI_WGT_SHIFT 16
+#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full
+#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24
+
+#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0)
+#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1)
+#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2)
+#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3)
+#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4)
+
+#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8)
+#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9)
+#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull
+#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull
+
+/* Enum offsets */
+
+#define SSO_LF_INT_VEC_GRP (0x0ull)
+
+#define SSO_AF_INT_VEC_ERR0 (0x0ull)
+#define SSO_AF_INT_VEC_ERR2 (0x1ull)
+#define SSO_AF_INT_VEC_RAS (0x2ull)
+
+#define SSO_LSW_MODE_NO_LSW (0x0ull) /* [CN10K, .) */
+#define SSO_LSW_MODE_WAITW (0x1ull) /* [CN10K, .) */
+#define SSO_LSW_MODE_IMMED (0x2ull) /* [CN10K, .) */
+
+#define SSO_WA_IOBN (0x0ull)
+#define SSO_WA_ADDWQ (0x3ull)
+#define SSO_WA_DPI (0x4ull)
+#define SSO_WA_TIM (0x6ull)
+#define SSO_WA_ZIP (0x7ull) /* [CN9K, CN10K) */
+#define SSO_WA_PSM (0x7ull) /* [CN10K, .) */
+#define SSO_WA_NIXRX0 (0x1ull)
+#define SSO_WA_NIXRX1 (0x8ull) /* [CN10K, .) */
+#define SSO_WA_CPT0 (0x2ull)
+#define SSO_WA_CPT1 (0x9ull) /* [CN10K, .) */
+#define SSO_WA_NIXTX0 (0x5ull)
+#define SSO_WA_NIXTX1 (0xbull) /* [CN10K, .) */
+#define SSO_WA_ML0 (0xaull) /* [CN10K, .) */
+#define SSO_WA_ML1 (0xcull) /* [CN10K, .) */
+
+#define SSO_TT_ORDERED (0x0ull)
+#define SSO_TT_ATOMIC (0x1ull)
+#define SSO_TT_UNTAGGED (0x2ull)
+#define SSO_TT_EMPTY (0x3ull)
+
+#endif /* __SSO_HW_H__ */
diff --git a/drivers/common/cnxk/hw/ssow.h b/drivers/common/cnxk/hw/ssow.h
new file mode 100644
index 0000000..c6b3ef7
--- /dev/null
+++ b/drivers/common/cnxk/hw/ssow.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __SSOW_HW_H__
+#define __SSOW_HW_H__
+
+/* Register offsets */
+
+#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull)
+#define SSOW_AF_LF_HWS_RST (0x30ull)
+#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3)
+#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3)
+#define SSOW_AF_SCRATCH_WS (0x100000ull)
+#define SSOW_AF_SCRATCH_GW (0x200000ull)
+#define SSOW_AF_SCRATCH_AW (0x300000ull)
+
+#define SSOW_LF_GWS_LINKS (0x10ull)
+#define SSOW_LF_GWS_PENDWQP (0x40ull) /* [CN9K, CN10K) */
+#define SSOW_LF_GWS_PENDSTATE (0x50ull)
+#define SSOW_LF_GWS_NW_TIM (0x70ull)
+#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
+#define SSOW_LF_GWS_INT (0x100ull)
+#define SSOW_LF_GWS_INT_W1S (0x108ull)
+#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
+#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
+#define SSOW_LF_GWS_TAG (0x200ull)
+#define SSOW_LF_GWS_WQP (0x210ull)
+#define SSOW_LF_GWS_SWTP (0x220ull)
+#define SSOW_LF_GWS_PENDTAG (0x230ull)
+#define SSOW_LF_GWS_WQE0 (0x240ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_WQE1 (0x248ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull) /* [CN9K, CN10K) */
+#define SSOW_LF_GWS_PRF_TAG (0x400ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_PRF_WQP (0x410ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_PRF_WQE0 (0x440ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_PRF_WQE1 (0x448ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_OP_GET_WORK0 (0x600ull)
+#define SSOW_LF_GWS_OP_GET_WORK1 (0x608ull) /* [CN10K, .) */
+#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
+#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
+#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
+#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
+#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
+#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
+#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull) /* [CN9K, CN10K) */
+#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
+#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull) /* [CN9K, CN10K) */
+#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull) /* [CN9K, CN10K) */
+#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull) /* [CN9K, CN10K) */
+#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
+#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
+#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
+#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
+#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
+
+/* Enum offsets */
+
+#define SSOW_LF_INT_VEC_IOP (0x0ull)
+
+#define SSOW_GW_RESULT_GW_WORK (0x0ull) /* [CN10K, .) */
+#define SSOW_GW_RESULT_GW_NO_WORK (0x1ull) /* [CN10K, .) */
+#define SSOW_GW_RESULT_GW_ERROR (0x2ull) /* [CN10K, .) */
+
+#define SSOW_LF_GWS_TAG_PEND_GET_WORK_BIT 63
+#define SSOW_LF_GWS_TAG_PEND_SWITCH_BIT 62
+#define SSOW_LF_GWS_TAG_PEND_DESCHED_BIT 58
+#define SSOW_LF_GWS_TAG_HEAD_BIT 35
+
+#endif /* __SSOW_HW_H__ */
diff --git a/drivers/common/cnxk/hw/tim.h b/drivers/common/cnxk/hw/tim.h
new file mode 100644
index 0000000..a2d5557
--- /dev/null
+++ b/drivers/common/cnxk/hw/tim.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __TIM_HW_H__
+#define __TIM_HW_H__
+
+/* TIM */
+#define TIM_AF_CONST (0x90)
+#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3)
+#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3)
+#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000)
+#define TIM_AF_BLK_RST (0x10)
+#define TIM_AF_LF_RST (0x20)
+#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3)
+#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3)
+#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3)
+#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3)
+#define TIM_AF_FLAGS_REG (0x80)
+#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0)
+#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47)
+#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50)
+#define TIM_AF_RINGX_CLT1_CLK_10NS (0)
+#define TIM_AF_RINGX_CLT1_CLK_GPIO (1)
+#define TIM_AF_RINGX_CLT1_CLK_GTI (2)
+#define TIM_AF_RINGX_CLT1_CLK_PTP (3)
+
+/* ENUMS */
+
+#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull)
+#define TIM_LF_INT_VEC_RAS_INT (0x1ull)
+#define TIM_LF_RING_AURA (0x0)
+#define TIM_LF_RING_BASE (0x130)
+#define TIM_LF_NRSPERR_INT (0x200)
+#define TIM_LF_NRSPERR_INT_W1S (0x208)
+#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210)
+#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218)
+#define TIM_LF_RAS_INT (0x300)
+#define TIM_LF_RAS_INT_W1S (0x308)
+#define TIM_LF_RAS_INT_ENA_W1S (0x310)
+#define TIM_LF_RAS_INT_ENA_W1C (0x318)
+#define TIM_LF_RING_REL (0x400)
+
+#define TIM_MAX_INTERVAL_TICKS ((1ULL << 32) - 1)
+#define TIM_MAX_BUCKET_SIZE ((1ULL << 20) - 1)
+#define TIM_MIN_BUCKET_SIZE 3
+
+#endif /* __TIM_HW_H__ */
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
new file mode 100644
index 0000000..1f4d705
--- /dev/null
+++ b/drivers/common/cnxk/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Marvell, Inc
+#
+
+if not dpdk_conf.get('RTE_ARCH_64')
+ build = false
+ reason = 'only supported on 64-bit'
+ subdir_done()
+endif
+
+config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON'
+deps = ['eal', 'pci', 'bus_pci', 'mbuf']
+sources = files('roc_platform.c')
+includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
new file mode 100644
index 0000000..83a69ac
--- /dev/null
+++ b/drivers/common/cnxk/roc_api.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_API_H_
+#define _ROC_API_H_
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <string.h>
+
+/* Alignment */
+#define ROC_ALIGN 128
+
+/* Bits manipulation */
+#include "roc_bits.h"
+
+/* Bitfields manipulation */
+#include "roc_bitfield.h"
+
+/* Constants */
+#define PLT_ETHER_ADDR_LEN 6
+
+/* Platform definition */
+#include "roc_platform.h"
+
+#define ROC_LMT_LINES_PER_CORE_LOG2 5
+#define ROC_LMT_LINE_SIZE_LOG2 7
+#define ROC_LMT_BASE_PER_CORE_LOG2 \
+ (ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
+
+/* PCI IDs */
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVID_CNXK_RVU_PF 0xA063
+#define PCI_DEVID_CNXK_RVU_VF 0xA064
+#define PCI_DEVID_CNXK_RVU_AF 0xA065
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
+#define PCI_DEVID_CNXK_RVU_NPA_PF 0xA0FB
+#define PCI_DEVID_CNXK_RVU_NPA_VF 0xA0FC
+#define PCI_DEVID_CNXK_RVU_AF_VF 0xA0F8
+#define PCI_DEVID_CNXK_DPI_VF 0xA081
+#define PCI_DEVID_CNXK_EP_VF 0xB203
+#define PCI_DEVID_CNXK_RVU_SDP_PF 0xA0F6
+#define PCI_DEVID_CNXK_RVU_SDP_VF 0xA0F7
+
+#define PCI_DEVID_CN9K_CGX 0xA059
+#define PCI_DEVID_CN10K_RPM 0xA060
+
+#define PCI_SUBSYSTEM_DEVID_CN10KA 0xB900
+#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
+
+#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
+#define PCI_SUBSYSTEM_DEVID_CN9KB 0xB400
+#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
+#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
+#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
+
+/* HW structure definition */
+#include "hw/nix.h"
+#include "hw/npa.h"
+#include "hw/npc.h"
+#include "hw/rvu.h"
+#include "hw/sdp.h"
+#include "hw/sso.h"
+#include "hw/ssow.h"
+#include "hw/tim.h"
+
+#endif /* _ROC_API_H_ */
diff --git a/drivers/common/cnxk/roc_bitfield.h b/drivers/common/cnxk/roc_bitfield.h
new file mode 100644
index 0000000..66483d7
--- /dev/null
+++ b/drivers/common/cnxk/roc_bitfield.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_BITFIELD_H_
+#define _ROC_BITFIELD_H_
+
+#define __bf_shf(x) (__builtin_ffsll(x) - 1)
+
+#define FIELD_PREP(mask, val) (((typeof(mask))(val) << __bf_shf(mask)) & (mask))
+
+#define FIELD_GET(mask, reg) \
+ ((typeof(mask))(((reg) & (mask)) >> __bf_shf(mask)))
+
+#endif /* _ROC_BITFIELD_H_ */
diff --git a/drivers/common/cnxk/roc_bits.h b/drivers/common/cnxk/roc_bits.h
new file mode 100644
index 0000000..ca09654
--- /dev/null
+++ b/drivers/common/cnxk/roc_bits.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_BITS_H_
+#define _ROC_BITS_H_
+
+#ifndef BIT_ULL
+#define BIT_ULL(nr) (1ULL << (nr))
+#endif
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#ifndef BITS_PER_LONG
+#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
+#endif
+#ifndef BITS_PER_LONG_LONG
+#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
+#endif
+
+#ifndef GENMASK
+#define GENMASK(h, l) (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+#endif
+#ifndef GENMASK_ULL
+#define GENMASK_ULL(h, l) \
+ (((~0ULL) - (1ULL << (l)) + 1) & \
+ (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
+#endif
+
+#endif /* _ROC_BITS_H_ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
new file mode 100644
index 0000000..2c8b91c
--- /dev/null
+++ b/drivers/common/cnxk/roc_platform.c
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "roc_api.h"
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
new file mode 100644
index 0000000..575a6ac
--- /dev/null
+++ b/drivers/common/cnxk/roc_platform.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_PLATFORM_H_
+#define _ROC_PLATFORM_H_
+
+#include <rte_alarm.h>
+#include <rte_bitmap.h>
+#include <rte_bus_pci.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_cycles.h>
+#include <rte_interrupts.h>
+#include <rte_io.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_spinlock.h>
+#include <rte_string_fns.h>
+
+#include "roc_bits.h"
+
+#if defined(__ARM_FEATURE_SVE)
+#define PLT_CPU_FEATURE_PREAMBLE ".cpu generic+crc+lse+sve\n"
+#else
+#define PLT_CPU_FEATURE_PREAMBLE ".cpu generic+crc+lse\n"
+#endif
+
+#define PLT_ASSERT RTE_ASSERT
+#define PLT_MEMZONE_NAMESIZE RTE_MEMZONE_NAMESIZE
+#define PLT_STD_C11 RTE_STD_C11
+#define PLT_PTR_ADD RTE_PTR_ADD
+#define PLT_MAX_RXTX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
+#define PLT_INTR_VEC_RXTX_OFFSET RTE_INTR_VEC_RXTX_OFFSET
+#define PLT_MIN RTE_MIN
+#define PLT_MAX RTE_MAX
+#define PLT_DIM RTE_DIM
+#define PLT_SET_USED RTE_SET_USED
+#define PLT_STATIC_ASSERT(s) _Static_assert(s, #s)
+#define PLT_ALIGN RTE_ALIGN
+#define PLT_ALIGN_MUL_CEIL RTE_ALIGN_MUL_CEIL
+#define PLT_MODEL_MZ_NAME "roc_model_mz"
+#define PLT_CACHE_LINE_SIZE RTE_CACHE_LINE_SIZE
+#define BITMASK_ULL GENMASK_ULL
+
+#define __plt_cache_aligned __rte_cache_aligned
+#define __plt_always_inline __rte_always_inline
+#define __plt_packed __rte_packed
+#define __roc_api __rte_internal
+#define plt_iova_t rte_iova_t
+
+#define plt_pci_device rte_pci_device
+#define plt_pci_read_config rte_pci_read_config
+#define plt_pci_find_ext_capability rte_pci_find_ext_capability
+
+#define plt_log2_u32 rte_log2_u32
+#define plt_cpu_to_be_16 rte_cpu_to_be_16
+#define plt_be_to_cpu_16 rte_be_to_cpu_16
+#define plt_cpu_to_be_32 rte_cpu_to_be_32
+#define plt_be_to_cpu_32 rte_be_to_cpu_32
+#define plt_cpu_to_be_64 rte_cpu_to_be_64
+#define plt_be_to_cpu_64 rte_be_to_cpu_64
+
+#define plt_align32prevpow2 rte_align32prevpow2
+
+#define plt_bitmap rte_bitmap
+#define plt_bitmap_init rte_bitmap_init
+#define plt_bitmap_reset rte_bitmap_reset
+#define plt_bitmap_free rte_bitmap_free
+#define plt_bitmap_clear rte_bitmap_clear
+#define plt_bitmap_set rte_bitmap_set
+#define plt_bitmap_get rte_bitmap_get
+#define plt_bitmap_scan_init __rte_bitmap_scan_init
+#define plt_bitmap_scan rte_bitmap_scan
+#define plt_bitmap_get_memory_footprint rte_bitmap_get_memory_footprint
+
+#define plt_spinlock_t rte_spinlock_t
+#define plt_spinlock_init rte_spinlock_init
+#define plt_spinlock_lock rte_spinlock_lock
+#define plt_spinlock_unlock rte_spinlock_unlock
+
+#define plt_intr_callback_register rte_intr_callback_register
+#define plt_intr_callback_unregister rte_intr_callback_unregister
+#define plt_intr_disable rte_intr_disable
+#define plt_thread_is_intr rte_thread_is_intr
+#define plt_intr_callback_fn rte_intr_callback_fn
+
+#define plt_alarm_set rte_eal_alarm_set
+#define plt_alarm_cancel rte_eal_alarm_cancel
+
+#define plt_intr_handle rte_intr_handle
+
+#define plt_zmalloc(sz, align) rte_zmalloc("cnxk", sz, align)
+#define plt_free rte_free
+
+#define plt_read64(addr) rte_read64_relaxed((volatile void *)(addr))
+#define plt_write64(val, addr) \
+ rte_write64_relaxed((val), (volatile void *)(addr))
+
+#define plt_wmb() rte_wmb()
+#define plt_rmb() rte_rmb()
+#define plt_io_wmb() rte_io_wmb()
+#define plt_io_rmb() rte_io_rmb()
+
+#define plt_mmap mmap
+#define PLT_PROT_READ PROT_READ
+#define PLT_PROT_WRITE PROT_WRITE
+#define PLT_MAP_SHARED MAP_SHARED
+
+#define plt_memzone rte_memzone
+#define plt_memzone_lookup rte_memzone_lookup
+#define plt_memzone_reserve_cache_align(name, sz) \
+ rte_memzone_reserve_aligned(name, sz, 0, 0, RTE_CACHE_LINE_SIZE)
+#define plt_memzone_free rte_memzone_free
+
+#define plt_tsc_hz rte_get_tsc_hz
+#define plt_delay_ms rte_delay_ms
+#define plt_delay_us rte_delay_us
+
+#define plt_lcore_id rte_lcore_id
+
+#define plt_strlcpy rte_strlcpy
+
+#ifdef __cplusplus
+#define CNXK_PCI_ID(subsystem_dev, dev) \
+ { \
+ RTE_CLASS_ANY_ID, \
+ PCI_VENDOR_ID_CAVIUM, \
+ (dev), \
+ PCI_ANY_ID, \
+ (subsystem_dev), \
+ }
+#else
+#define CNXK_PCI_ID(subsystem_dev, dev) \
+ { \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, \
+ .device_id = (dev), \
+ .subsystem_vendor_id = PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+ }
+#endif
+
+#endif /* _ROC_PLATFORM_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
new file mode 100644
index 0000000..dc012a1
--- /dev/null
+++ b/drivers/common/cnxk/version.map
@@ -0,0 +1,4 @@
+INTERNAL {
+
+ local: *;
+};
diff --git a/drivers/meson.build b/drivers/meson.build
index fdf7612..b225fef 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -5,6 +5,7 @@
subdirs = [
'common',
'bus',
+ 'common/cnxk', # depends on bus.
'common/mlx5', # depends on bus.
'common/qat', # depends on bus.
'mempool', # depends on common and bus.
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 03/52] common/cnxk: add model init and IO handling API
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 01/52] config/arm: add support for Marvell CN10K Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 02/52] common/cnxk: add build infrastructure and HW definition Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-26 13:48 ` Jerin Jacob
2021-03-05 13:38 ` [dpdk-dev] [PATCH 04/52] common/cnxk: add interrupt helper API Nithin Dabilpuram
` (52 subsequent siblings)
55 siblings, 1 reply; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add routines for SoC model identification and HW IO handling
specific to the Marvell CN9K and CN10K SoCs. These are based on
the arm64 ISA and on behaviour specific to Marvell SoCs.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 4 +-
drivers/common/cnxk/roc_api.h | 13 +++
drivers/common/cnxk/roc_io.h | 187 +++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_io_generic.h | 122 +++++++++++++++++++++++
drivers/common/cnxk/roc_model.c | 148 +++++++++++++++++++++++++++
drivers/common/cnxk/roc_model.h | 103 +++++++++++++++++++
drivers/common/cnxk/roc_platform.c | 21 ++++
drivers/common/cnxk/roc_platform.h | 10 ++
drivers/common/cnxk/roc_priv.h | 11 +++
drivers/common/cnxk/roc_util_priv.h | 14 +++
drivers/common/cnxk/roc_utils.c | 35 +++++++
drivers/common/cnxk/roc_utils.h | 13 +++
drivers/common/cnxk/version.map | 5 +
13 files changed, 685 insertions(+), 1 deletion(-)
create mode 100644 drivers/common/cnxk/roc_io.h
create mode 100644 drivers/common/cnxk/roc_io_generic.h
create mode 100644 drivers/common/cnxk/roc_model.c
create mode 100644 drivers/common/cnxk/roc_model.h
create mode 100644 drivers/common/cnxk/roc_priv.h
create mode 100644 drivers/common/cnxk/roc_util_priv.h
create mode 100644 drivers/common/cnxk/roc_utils.c
create mode 100644 drivers/common/cnxk/roc_utils.h
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 1f4d705..e5de6cf 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -11,5 +11,7 @@ endif
config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON'
deps = ['eal', 'pci', 'bus_pci', 'mbuf']
-sources = files('roc_platform.c')
+sources = files('roc_model.c',
+ 'roc_platform.c',
+ 'roc_utils.c')
includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 83a69ac..7dbeb6a 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -29,6 +29,13 @@
#define ROC_LMT_BASE_PER_CORE_LOG2 \
(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
+/* IO */
+#if defined(__aarch64__)
+#include "roc_io.h"
+#else
+#include "roc_io_generic.h"
+#endif
+
/* PCI IDs */
#define PCI_VENDOR_ID_CAVIUM 0x177D
#define PCI_DEVID_CNXK_RVU_PF 0xA063
@@ -66,4 +73,10 @@
#include "hw/ssow.h"
#include "hw/tim.h"
+/* Model */
+#include "roc_model.h"
+
+/* Utils */
+#include "roc_utils.h"
+
#endif /* _ROC_API_H_ */
diff --git a/drivers/common/cnxk/roc_io.h b/drivers/common/cnxk/roc_io.h
new file mode 100644
index 0000000..31849fb
--- /dev/null
+++ b/drivers/common/cnxk/roc_io.h
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_IO_H_
+#define _ROC_IO_H_
+
+#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) \
+ do { \
+ /* 32 Lines per core */ \
+ lmt_id = plt_lcore_id() << ROC_LMT_LINES_PER_CORE_LOG2; \
+ /* Each line is of 128B */ \
+ (lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2); \
+ } while (0)
+
+#define roc_load_pair(val0, val1, addr) \
+ ({ \
+ asm volatile("ldp %x[x0], %x[x1], [%x[p1]]" \
+ : [x0] "=r"(val0), [x1] "=r"(val1) \
+ : [p1] "r"(addr)); \
+ })
+
+#define roc_store_pair(val0, val1, addr) \
+ ({ \
+ asm volatile( \
+ "stp %x[x0], %x[x1], [%x[p1], #0]!" ::[x0] "r"(val0), \
+ [x1] "r"(val1), [p1] "r"(addr)); \
+ })
+
+#define roc_prefetch_store_keep(ptr) \
+ ({ asm volatile("prfm pstl1keep, [%x0]\n" : : "r"(ptr)); })
+
+#if defined(__clang__)
+static __plt_always_inline void
+roc_atomic128_cas_noreturn(uint64_t swap0, uint64_t swap1, int64_t *ptr)
+{
+ register uint64_t x0 __asm("x0") = swap0;
+ register uint64_t x1 __asm("x1") = swap1;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "casp %[x0], %[x1], %[x0], %[x1], [%[ptr]]\n"
+ : [x0] "+r"(x0), [x1] "+r"(x1)
+ : [ptr] "r"(ptr)
+ : "memory");
+}
+#else
+static __plt_always_inline void
+roc_atomic128_cas_noreturn(uint64_t swap0, uint64_t swap1, uint64_t ptr)
+{
+ __uint128_t wdata = swap0 | ((__uint128_t)swap1 << 64);
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "casp %[wdata], %H[wdata], %[wdata], %H[wdata], [%[ptr]]\n"
+ : [wdata] "+r"(wdata)
+ : [ptr] "r"(ptr)
+ : "memory");
+}
+#endif
+
+static __plt_always_inline uint64_t
+roc_atomic64_cas(uint64_t compare, uint64_t swap, int64_t *ptr)
+{
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "cas %[compare], %[swap], [%[ptr]]\n"
+ : [compare] "+r"(compare)
+ : [swap] "r"(swap), [ptr] "r"(ptr)
+ : "memory");
+
+ return compare;
+}
+
+static __plt_always_inline uint64_t
+roc_atomic64_add_nosync(int64_t incr, int64_t *ptr)
+{
+ uint64_t result;
+
+ /* Atomic add with no ordering */
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldadd %x[i], %x[r], [%[b]]"
+ : [r] "=r"(result), "+m"(*ptr)
+ : [i] "r"(incr), [b] "r"(ptr)
+ : "memory");
+ return result;
+}
+
+static __plt_always_inline uint64_t
+roc_atomic64_add_sync(int64_t incr, int64_t *ptr)
+{
+ uint64_t result;
+
+ /* Atomic add with ordering */
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldadda %x[i], %x[r], [%[b]]"
+ : [r] "=r"(result), "+m"(*ptr)
+ : [i] "r"(incr), [b] "r"(ptr)
+ : "memory");
+ return result;
+}
+
+static __plt_always_inline uint64_t
+roc_lmt_submit_ldeor(plt_iova_t io_address)
+{
+ uint64_t result;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldeor xzr, %x[rf], [%[rs]]"
+ : [rf] "=r"(result)
+ : [rs] "r"(io_address));
+ return result;
+}
+
+static __plt_always_inline uint64_t
+roc_lmt_submit_ldeorl(plt_iova_t io_address)
+{
+ uint64_t result;
+
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE "ldeorl xzr,%x[rf],[%[rs]]"
+ : [rf] "=r"(result)
+ : [rs] "r"(io_address));
+ return result;
+}
+
+static __plt_always_inline void
+roc_lmt_submit_steor(uint64_t data, plt_iova_t io_address)
+{
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "steor %x[d], [%[rs]]" ::[d] "r"(data),
+ [rs] "r"(io_address));
+}
+
+static __plt_always_inline void
+roc_lmt_submit_steorl(uint64_t data, plt_iova_t io_address)
+{
+ asm volatile(PLT_CPU_FEATURE_PREAMBLE
+ "steorl %x[d], [%[rs]]" ::[d] "r"(data),
+ [rs] "r"(io_address));
+}
+
+static __plt_always_inline void
+roc_lmt_mov(void *out, const void *in, const uint32_t lmtext)
+{
+ volatile const __uint128_t *src128 = (const __uint128_t *)in;
+ volatile __uint128_t *dst128 = (__uint128_t *)out;
+
+ dst128[0] = src128[0];
+ dst128[1] = src128[1];
+	/* lmtext receives one of the following values:
+ * 1: NIX_SUBDC_EXT needed i.e. tx vlan case
+ * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM i.e. tstamp case
+ */
+ if (lmtext) {
+ dst128[2] = src128[2];
+ if (lmtext > 1)
+ dst128[3] = src128[3];
+ }
+}
+
+static __plt_always_inline void
+roc_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
+{
+ volatile const __uint128_t *src128 = (const __uint128_t *)in;
+ volatile __uint128_t *dst128 = (__uint128_t *)out;
+ uint8_t i;
+
+ for (i = 0; i < segdw; i++)
+ dst128[i] = src128[i];
+}
+
+static __plt_always_inline void
+roc_lmt_mov_one(void *out, const void *in)
+{
+ volatile const __uint128_t *src128 = (const __uint128_t *)in;
+ volatile __uint128_t *dst128 = (__uint128_t *)out;
+
+ *dst128 = *src128;
+}
+
+/* Non volatile version of roc_lmt_mov_seg() */
+static __plt_always_inline void
+roc_lmt_mov_seg_nv(void *out, const void *in, const uint16_t segdw)
+{
+ const __uint128_t *src128 = (const __uint128_t *)in;
+ __uint128_t *dst128 = (__uint128_t *)out;
+ uint8_t i;
+
+ for (i = 0; i < segdw; i++)
+ dst128[i] = src128[i];
+}
+
+#endif /* _ROC_IO_H_ */
diff --git a/drivers/common/cnxk/roc_io_generic.h b/drivers/common/cnxk/roc_io_generic.h
new file mode 100644
index 0000000..708c7cd
--- /dev/null
+++ b/drivers/common/cnxk/roc_io_generic.h
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_IO_GENERIC_H_
+#define _ROC_IO_GENERIC_H_
+
+#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
+
+#define roc_load_pair(val0, val1, addr) \
+ do { \
+ val0 = plt_read64((void *)(addr)); \
+ val1 = plt_read64((uint8_t *)(addr) + 8); \
+ } while (0)
+
+#define roc_store_pair(val0, val1, addr) \
+ do { \
+ plt_write64(val0, (void *)(addr)); \
+ plt_write64(val1, (((uint8_t *)(addr)) + 8)); \
+ } while (0)
+
+#define roc_prefetch_store_keep(ptr) \
+ do { \
+ } while (0)
+
+static __plt_always_inline void
+roc_atomic128_cas_noreturn(uint64_t swap0, uint64_t swap1, uint64_t ptr)
+{
+ PLT_SET_USED(swap0);
+ PLT_SET_USED(swap1);
+ PLT_SET_USED(ptr);
+}
+
+static __plt_always_inline uint64_t
+roc_atomic64_cas(uint64_t compare, uint64_t swap, int64_t *ptr)
+{
+ PLT_SET_USED(swap);
+ PLT_SET_USED(ptr);
+
+ return compare;
+}
+
+static inline uint64_t
+roc_atomic64_add_nosync(int64_t incr, int64_t *ptr)
+{
+ PLT_SET_USED(ptr);
+ PLT_SET_USED(incr);
+
+ return 0;
+}
+
+static inline uint64_t
+roc_atomic64_add_sync(int64_t incr, int64_t *ptr)
+{
+ PLT_SET_USED(ptr);
+ PLT_SET_USED(incr);
+
+ return 0;
+}
+
+static inline uint64_t
+roc_lmt_submit_ldeor(plt_iova_t io_address)
+{
+ PLT_SET_USED(io_address);
+
+ return 0;
+}
+
+static __plt_always_inline uint64_t
+roc_lmt_submit_ldeorl(plt_iova_t io_address)
+{
+ PLT_SET_USED(io_address);
+
+ return 0;
+}
+
+static inline void
+roc_lmt_submit_steor(uint64_t data, plt_iova_t io_address)
+{
+ PLT_SET_USED(data);
+ PLT_SET_USED(io_address);
+}
+
+static inline void
+roc_lmt_submit_steorl(uint64_t data, plt_iova_t io_address)
+{
+ PLT_SET_USED(data);
+ PLT_SET_USED(io_address);
+}
+
+static __plt_always_inline void
+roc_lmt_mov(void *out, const void *in, const uint32_t lmtext)
+{
+ PLT_SET_USED(in);
+ PLT_SET_USED(lmtext);
+ memset(out, 0, sizeof(__uint128_t) * (lmtext ? lmtext > 1 ? 4 : 3 : 2));
+}
+
+static __plt_always_inline void
+roc_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
+{
+ PLT_SET_USED(out);
+ PLT_SET_USED(in);
+ PLT_SET_USED(segdw);
+}
+
+static __plt_always_inline void
+roc_lmt_mov_one(void *out, const void *in)
+{
+ PLT_SET_USED(out);
+ PLT_SET_USED(in);
+}
+
+static __plt_always_inline void
+roc_lmt_mov_seg_nv(void *out, const void *in, const uint16_t segdw)
+{
+ PLT_SET_USED(out);
+ PLT_SET_USED(in);
+ PLT_SET_USED(segdw);
+}
+
+#endif /* _ROC_IO_GENERIC_H_ */
diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/roc_model.c
new file mode 100644
index 0000000..c00f90d
--- /dev/null
+++ b/drivers/common/cnxk/roc_model.c
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+struct roc_model *roc_model;
+
+/* RoC and CPU IDs and revisions */
+#define VENDOR_ARM 0x41 /* 'A' */
+#define VENDOR_CAVIUM 0x43 /* 'C' */
+
+#define PART_106XX 0xD49
+#define PART_98XX 0xB1
+#define PART_96XX 0xB2
+#define PART_95XX 0xB3
+#define PART_95XXN 0xB4
+#define PART_95XXMM 0xB5
+
+#define MODEL_IMPL_BITS 8
+#define MODEL_IMPL_SHIFT 24
+#define MODEL_IMPL_MASK ((1 << MODEL_IMPL_BITS) - 1)
+#define MODEL_PART_BITS 12
+#define MODEL_PART_SHIFT 4
+#define MODEL_PART_MASK ((1 << MODEL_PART_BITS) - 1)
+#define MODEL_MAJOR_BITS 4
+#define MODEL_MAJOR_SHIFT 20
+#define MODEL_MAJOR_MASK ((1 << MODEL_MAJOR_BITS) - 1)
+#define MODEL_MINOR_BITS 4
+#define MODEL_MINOR_SHIFT 0
+#define MODEL_MINOR_MASK ((1 << MODEL_MINOR_BITS) - 1)
+
+static const struct model_db {
+ uint32_t impl;
+ uint32_t part;
+ uint32_t major;
+ uint32_t minor;
+ uint64_t flag;
+ char name[ROC_MODEL_STR_LEN_MAX];
+} model_db[] = {
+ {VENDOR_ARM, PART_106XX, 0, 0, ROC_MODEL_CN10K, "cn10k"},
+ {VENDOR_CAVIUM, PART_98XX, 0, 0, ROC_MODEL_CN98xx_A0, "cn98xx_a0"},
+ {VENDOR_CAVIUM, PART_96XX, 0, 0, ROC_MODEL_CN96xx_A0, "cn96xx_a0"},
+ {VENDOR_CAVIUM, PART_96XX, 0, 1, ROC_MODEL_CN96xx_B0, "cn96xx_b0"},
+ {VENDOR_CAVIUM, PART_96XX, 2, 0, ROC_MODEL_CN96xx_C0, "cn96xx_c0"},
+ {VENDOR_CAVIUM, PART_95XX, 0, 0, ROC_MODEL_CNF95xx_A0, "cnf95xx_a0"},
+ {VENDOR_CAVIUM, PART_95XX, 1, 0, ROC_MODEL_CNF95xx_B0, "cnf95xx_b0"},
+ {VENDOR_CAVIUM, PART_95XXN, 0, 0, ROC_MODEL_CNF95XXN_A0, "cnf95xxn_a0"},
+ {VENDOR_CAVIUM, PART_95XXMM, 0, 0, ROC_MODEL_CNF95XXMM_A0,
+ "cnf95xxmm_a0"}
+};
+
+static bool
+populate_model(struct roc_model *model, uint32_t midr)
+{
+ uint32_t impl, major, part, minor;
+ bool found = false;
+ size_t i;
+
+ impl = (midr >> MODEL_IMPL_SHIFT) & MODEL_IMPL_MASK;
+ part = (midr >> MODEL_PART_SHIFT) & MODEL_PART_MASK;
+ major = (midr >> MODEL_MAJOR_SHIFT) & MODEL_MAJOR_MASK;
+ minor = (midr >> MODEL_MINOR_SHIFT) & MODEL_MINOR_MASK;
+
+ for (i = 0; i < PLT_DIM(model_db); i++)
+ if (model_db[i].impl == impl && model_db[i].part == part &&
+ model_db[i].major == major && model_db[i].minor == minor) {
+ model->flag = model_db[i].flag;
+ strncpy(model->name, model_db[i].name,
+ ROC_MODEL_STR_LEN_MAX - 1);
+ found = true;
+ break;
+ }
+ if (!found) {
+ model->flag = 0;
+ strncpy(model->name, "unknown", ROC_MODEL_STR_LEN_MAX - 1);
+ plt_err("Invalid RoC model (impl=0x%x, part=0x%x)", impl, part);
+ }
+
+ return found;
+}
+
+static int
+midr_get(unsigned long *val)
+{
+ const char *file =
+ "/sys/devices/system/cpu/cpu0/regs/identification/midr_el1";
+ int rc = UTIL_ERR_FS;
+ char buf[BUFSIZ];
+ char *end = NULL;
+ FILE *f;
+
+ if (val == NULL)
+ goto err;
+ f = fopen(file, "r");
+ if (f == NULL)
+ goto err;
+
+ if (fgets(buf, sizeof(buf), f) == NULL)
+ goto fclose;
+
+ *val = strtoul(buf, &end, 0);
+ if ((buf[0] == '\0') || (end == NULL) || (*end != '\n'))
+ goto fclose;
+
+ rc = 0;
+fclose:
+ fclose(f);
+err:
+ return rc;
+}
+
+static void
+detect_invalid_config(void)
+{
+#ifdef ROC_PLATFORM_CN9K
+#ifdef ROC_PLATFORM_CN10K
+ PLT_STATIC_ASSERT(0);
+#endif
+#endif
+}
+
+int
+roc_model_init(struct roc_model *model)
+{
+ int rc = UTIL_ERR_PARAM;
+ unsigned long midr;
+
+ detect_invalid_config();
+
+ if (!model)
+ goto err;
+
+ rc = midr_get(&midr);
+ if (rc)
+ goto err;
+
+ rc = UTIL_ERR_INVALID_MODEL;
+ if (!populate_model(model, midr))
+ goto err;
+
+ rc = 0;
+ plt_info("RoC Model: %s", model->name);
+ roc_model = model;
+err:
+ return rc;
+}
diff --git a/drivers/common/cnxk/roc_model.h b/drivers/common/cnxk/roc_model.h
new file mode 100644
index 0000000..5914648
--- /dev/null
+++ b/drivers/common/cnxk/roc_model.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_MODEL_H_
+#define _ROC_MODEL_H_
+
+#include <stdbool.h>
+
+extern struct roc_model *roc_model;
+
+struct roc_model {
+#define ROC_MODEL_CN96xx_A0 BIT_ULL(0)
+#define ROC_MODEL_CN96xx_B0 BIT_ULL(1)
+#define ROC_MODEL_CN96xx_C0 BIT_ULL(2)
+#define ROC_MODEL_CNF95xx_A0 BIT_ULL(4)
+#define ROC_MODEL_CNF95xx_B0 BIT_ULL(6)
+#define ROC_MODEL_CNF95XXMM_A0 BIT_ULL(8)
+#define ROC_MODEL_CNF95XXN_A0 BIT_ULL(12)
+#define ROC_MODEL_CN98xx_A0 BIT_ULL(16)
+#define ROC_MODEL_CN10K BIT_ULL(20)
+ uint64_t flag;
+#define ROC_MODEL_STR_LEN_MAX 128
+ char name[ROC_MODEL_STR_LEN_MAX];
+} __plt_cache_aligned;
+
+#define ROC_MODEL_CN96xx_Ax (ROC_MODEL_CN96xx_A0 | ROC_MODEL_CN96xx_B0)
+#define ROC_MODEL_CN9K \
+ (ROC_MODEL_CN96xx_Ax | ROC_MODEL_CN96xx_C0 | ROC_MODEL_CNF95xx_A0 | \
+ ROC_MODEL_CNF95xx_B0 | ROC_MODEL_CNF95XXMM_A0 | \
+ ROC_MODEL_CNF95XXN_A0 | ROC_MODEL_CN98xx_A0)
+
+/* Runtime variants */
+static inline uint64_t
+roc_model_runtime_is_cn9k(void)
+{
+ return (roc_model->flag & (ROC_MODEL_CN9K));
+}
+
+static inline uint64_t
+roc_model_runtime_is_cn10k(void)
+{
+ return (roc_model->flag & (ROC_MODEL_CN10K));
+}
+
+/* Compile time variants */
+#ifdef ROC_PLATFORM_CN9K
+#define roc_model_constant_is_cn9k() 1
+#define roc_model_constant_is_cn10k() 0
+#else
+#define roc_model_constant_is_cn9k() 0
+#define roc_model_constant_is_cn10k() 1
+#endif
+
+/*
+ * Compile-time variants enable an optimized version check when the library is
+ * configured for a specific platform version; otherwise fall back to runtime.
+ */
+static inline uint64_t
+roc_model_is_cn9k(void)
+{
+#ifdef ROC_PLATFORM_CN9K
+ return 1;
+#endif
+#ifdef ROC_PLATFORM_CN10K
+ return 0;
+#endif
+ return roc_model_runtime_is_cn9k();
+}
+
+static inline uint64_t
+roc_model_is_cn10k(void)
+{
+#ifdef ROC_PLATFORM_CN10K
+ return 1;
+#endif
+#ifdef ROC_PLATFORM_CN9K
+ return 0;
+#endif
+ return roc_model_runtime_is_cn10k();
+}
+
+static inline uint64_t
+roc_model_is_cn96_A0(void)
+{
+ return roc_model->flag & ROC_MODEL_CN96xx_A0;
+}
+
+static inline uint64_t
+roc_model_is_cn96_Ax(void)
+{
+ return (roc_model->flag & ROC_MODEL_CN96xx_Ax);
+}
+
+static inline uint64_t
+roc_model_is_cn95_A0(void)
+{
+ return roc_model->flag & ROC_MODEL_CNF95xx_A0;
+}
+
+int roc_model_init(struct roc_model *model);
+
+#endif
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 2c8b91c..072b3e5 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -3,3 +3,24 @@
*/
#include "roc_api.h"
+
+int
+plt_init(void)
+{
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(PLT_MODEL_MZ_NAME);
+ if (mz == NULL)
+ mz = rte_memzone_reserve(PLT_MODEL_MZ_NAME,
+ sizeof(struct roc_model),
+ SOCKET_ID_ANY, 0);
+ else
+ return 0;
+
+ if (mz == NULL) {
+ plt_err("Failed to allocate memory for roc_model");
+ return -ENOMEM;
+ }
+ roc_model_init(mz->addr);
+ return 0;
+}
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 575a6ac..7477613 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -123,6 +123,13 @@
#define plt_strlcpy rte_strlcpy
+/* Log */
+#define plt_err(fmt, args...) \
+ RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
+#define plt_info(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args)
+#define plt_warn(fmt, args...) RTE_LOG(WARNING, PMD, fmt "\n", ##args)
+#define plt_print(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args)
+
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
{ \
@@ -143,4 +150,7 @@
}
#endif
+__rte_internal
+int plt_init(void);
+
#endif /* _ROC_PLATFORM_H_ */
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
new file mode 100644
index 0000000..be37c6c
--- /dev/null
+++ b/drivers/common/cnxk/roc_priv.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_PRIV_H_
+#define _ROC_PRIV_H_
+
+/* Utils */
+#include "roc_util_priv.h"
+
+#endif /* _ROC_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_util_priv.h b/drivers/common/cnxk/roc_util_priv.h
new file mode 100644
index 0000000..f140967
--- /dev/null
+++ b/drivers/common/cnxk/roc_util_priv.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_UTIL_PRIV_H_
+#define _ROC_UTIL_PRIV_H_
+
+enum util_err_status {
+ UTIL_ERR_PARAM = -6000,
+ UTIL_ERR_FS,
+ UTIL_ERR_INVALID_MODEL,
+};
+
+#endif /* _ROC_UTIL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
new file mode 100644
index 0000000..bcba7b2
--- /dev/null
+++ b/drivers/common/cnxk/roc_utils.c
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+const char *
+roc_error_msg_get(int errorcode)
+{
+ const char *err_msg;
+
+ switch (errorcode) {
+ case UTIL_ERR_PARAM:
+ err_msg = "Invalid parameter";
+ break;
+ case UTIL_ERR_FS:
+ err_msg = "file operation failed";
+ break;
+ case UTIL_ERR_INVALID_MODEL:
+ err_msg = "Invalid RoC model";
+ break;
+ default:
+		/*
+		 * Handle general errors (as defined in Linux errno.h).
+		 */
+ if (abs(errorcode) < 300)
+ err_msg = strerror(abs(errorcode));
+ else
+ err_msg = "Unknown error code";
+ break;
+ }
+
+ return err_msg;
+}
diff --git a/drivers/common/cnxk/roc_utils.h b/drivers/common/cnxk/roc_utils.h
new file mode 100644
index 0000000..7b607a5
--- /dev/null
+++ b/drivers/common/cnxk/roc_utils.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_UTILS_H_
+#define _ROC_UTILS_H_
+
+#include "roc_platform.h"
+
+/* Utils */
+const char *__roc_api roc_error_msg_get(int errorcode);
+
+#endif /* _ROC_UTILS_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index dc012a1..227f2ce 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,4 +1,9 @@
INTERNAL {
+ global:
+
+ plt_init;
+ roc_error_msg_get;
+ roc_model;
local: *;
};
--
2.8.4
* [dpdk-dev] [PATCH 04/52] common/cnxk: add interrupt helper API
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add interrupt helper APIs in common code to register and
unregister specific interrupt vectors. These APIs
will be used by all cnxk drivers.
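As a host-only illustration (not part of the patch), these helpers are built on
Linux eventfds handed to VFIO: dev_irq_register() creates one eventfd per MSI-X
vector and the kernel signals that fd when the vector fires. The sketch below
emulates that signal with a plain write(); the function name fake_vector_fire
is hypothetical and no VFIO device is involved.

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* dev_irq_register() creates one eventfd per MSI-X vector and hands it
 * to VFIO via VFIO_DEVICE_SET_IRQS; the kernel then signals that fd
 * whenever the vector fires. This sketch emulates a single "interrupt"
 * with a plain write() and returns the pending count read back from
 * the fd, or -1 on error.
 */
static int64_t fake_vector_fire(void)
{
	uint64_t kick = 1, count = 0;
	int fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);

	if (fd < 0)
		return -1;
	if (write(fd, &kick, sizeof(kick)) != (ssize_t)sizeof(kick) ||
	    read(fd, &count, sizeof(count)) != (ssize_t)sizeof(count))
		count = (uint64_t)-1;
	close(fd);
	return (int64_t)count;
}
```

Reading the eventfd consumes the pending count, which is one reason
dev_irq_unregister() closes the per-vector fd only after unregistering the
callback and clearing the VFIO mapping.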
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 3 +-
drivers/common/cnxk/roc_dev_priv.h | 14 +++
drivers/common/cnxk/roc_irq.c | 249 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_platform.c | 4 +
drivers/common/cnxk/roc_platform.h | 11 ++
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/version.map | 1 +
7 files changed, 284 insertions(+), 1 deletion(-)
create mode 100644 drivers/common/cnxk/roc_dev_priv.h
create mode 100644 drivers/common/cnxk/roc_irq.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index e5de6cf..e2a2da2 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -11,7 +11,8 @@ endif
config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON'
deps = ['eal', 'pci', 'bus_pci', 'mbuf']
-sources = files('roc_model.c',
+sources = files('roc_irq.c',
+ 'roc_model.c',
'roc_platform.c',
'roc_utils.c')
includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
new file mode 100644
index 0000000..836e75b
--- /dev/null
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_DEV_PRIV_H
+#define _ROC_DEV_PRIV_H
+
+int dev_irq_register(struct plt_intr_handle *intr_handle,
+ plt_intr_callback_fn cb, void *data, unsigned int vec);
+void dev_irq_unregister(struct plt_intr_handle *intr_handle,
+ plt_intr_callback_fn cb, void *data, unsigned int vec);
+int dev_irqs_disable(struct plt_intr_handle *intr_handle);
+
+#endif /* _ROC_DEV_PRIV_H */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
new file mode 100644
index 0000000..75d8d7a
--- /dev/null
+++ b/drivers/common/cnxk/roc_irq.c
@@ -0,0 +1,249 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#if defined(__linux__)
+
+#include <inttypes.h>
+#include <linux/vfio.h>
+#include <sys/eventfd.h>
+#include <sys/ioctl.h>
+#include <unistd.h>
+
+#define MSIX_IRQ_SET_BUF_LEN \
+ (sizeof(struct vfio_irq_set) + sizeof(int) * (PLT_MAX_RXTX_INTR_VEC_ID))
+
+static int
+irq_get_info(struct plt_intr_handle *intr_handle)
+{
+ struct vfio_irq_info irq = {.argsz = sizeof(irq)};
+ int rc;
+
+ irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
+
+ rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ if (rc < 0) {
+ plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
+ return rc;
+ }
+
+ plt_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x",
+ irq.flags, irq.index, irq.count, PLT_MAX_RXTX_INTR_VEC_ID);
+
+ if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
+ plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
+ PLT_MAX_RXTX_INTR_VEC_ID);
+ intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ } else {
+ intr_handle->max_intr = irq.count;
+ }
+
+ return 0;
+}
+
+static int
+irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
+{
+ char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
+ struct vfio_irq_set *irq_set;
+ int32_t *fd_ptr;
+ int len, rc;
+
+ if (vec > intr_handle->max_intr) {
+ plt_err("vector=%d greater than max_intr=%d", vec,
+ intr_handle->max_intr);
+ return -EINVAL;
+ }
+
+ len = sizeof(struct vfio_irq_set) + sizeof(int32_t);
+
+ irq_set = (struct vfio_irq_set *)irq_set_buf;
+ irq_set->argsz = len;
+
+ irq_set->start = vec;
+ irq_set->count = 1;
+ irq_set->flags =
+ VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
+ irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+
+ /* Use vec fd to set interrupt vectors */
+ fd_ptr = (int32_t *)&irq_set->data[0];
+ fd_ptr[0] = intr_handle->efds[vec];
+
+ rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ if (rc)
+ plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
+
+ return rc;
+}
+
+static int
+irq_init(struct plt_intr_handle *intr_handle)
+{
+ char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
+ struct vfio_irq_set *irq_set;
+ int32_t *fd_ptr;
+ int len, rc;
+ uint32_t i;
+
+ if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
+ intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ return -ERANGE;
+ }
+
+ len = sizeof(struct vfio_irq_set) +
+ sizeof(int32_t) * intr_handle->max_intr;
+
+ irq_set = (struct vfio_irq_set *)irq_set_buf;
+ irq_set->argsz = len;
+ irq_set->start = 0;
+ irq_set->count = intr_handle->max_intr;
+ irq_set->flags =
+ VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
+ irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+
+ fd_ptr = (int32_t *)&irq_set->data[0];
+ for (i = 0; i < irq_set->count; i++)
+ fd_ptr[i] = -1;
+
+ rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ if (rc)
+ plt_err("Failed to set irqs vector rc=%d", rc);
+
+ return rc;
+}
+
+int
+dev_irqs_disable(struct plt_intr_handle *intr_handle)
+{
+ /* Clear max_intr to indicate re-init next time */
+ intr_handle->max_intr = 0;
+ return plt_intr_disable(intr_handle);
+}
+
+int
+dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
+ void *data, unsigned int vec)
+{
+ struct plt_intr_handle tmp_handle;
+ int rc;
+
+	/* If max_intr is unset, fetch IRQ info from VFIO and initialize */
+ if (intr_handle->max_intr == 0) {
+ irq_get_info(intr_handle);
+ irq_init(intr_handle);
+ }
+
+ if (vec > intr_handle->max_intr) {
+ plt_err("Vector=%d greater than max_intr=%d", vec,
+ intr_handle->max_intr);
+ return -EINVAL;
+ }
+
+ tmp_handle = *intr_handle;
+ /* Create new eventfd for interrupt vector */
+ tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (tmp_handle.fd == -1)
+ return -ENODEV;
+
+ /* Register vector interrupt callback */
+ rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ if (rc) {
+ plt_err("Failed to register vector:0x%x irq callback.", vec);
+ return rc;
+ }
+
+ intr_handle->efds[vec] = tmp_handle.fd;
+ intr_handle->nb_efd =
+ (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
+ if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
+ intr_handle->max_intr = intr_handle->nb_efd + 1;
+
+ plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ intr_handle->nb_efd, intr_handle->max_intr);
+
+ /* Enable MSIX vectors to VFIO */
+ return irq_config(intr_handle, vec);
+}
+
+void
+dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
+ void *data, unsigned int vec)
+{
+ struct plt_intr_handle tmp_handle;
+ uint8_t retries = 5; /* 5 ms */
+ int rc;
+
+ if (vec > intr_handle->max_intr) {
+ plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
+ intr_handle->max_intr);
+ return;
+ }
+
+ tmp_handle = *intr_handle;
+ tmp_handle.fd = intr_handle->efds[vec];
+ if (tmp_handle.fd == -1)
+ return;
+
+ do {
+ /* Un-register callback func from platform lib */
+ rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Retry only if -EAGAIN */
+ if (rc != -EAGAIN)
+ break;
+ plt_delay_ms(1);
+ retries--;
+ } while (retries);
+
+ if (rc < 0) {
+ plt_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
+ return;
+ }
+
+ plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ intr_handle->nb_efd, intr_handle->max_intr);
+
+ if (intr_handle->efds[vec] != -1)
+ close(intr_handle->efds[vec]);
+ /* Disable MSIX vectors from VFIO */
+ intr_handle->efds[vec] = -1;
+ irq_config(intr_handle, vec);
+}
+
+#else
+
+int
+dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
+ void *data, unsigned int vec)
+{
+ PLT_SET_USED(intr_handle);
+ PLT_SET_USED(cb);
+ PLT_SET_USED(data);
+ PLT_SET_USED(vec);
+
+ return -ENOTSUP;
+}
+
+void
+dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
+ void *data, unsigned int vec)
+{
+ PLT_SET_USED(intr_handle);
+ PLT_SET_USED(cb);
+ PLT_SET_USED(data);
+ PLT_SET_USED(vec);
+}
+
+int
+dev_irqs_disable(struct plt_intr_handle *intr_handle)
+{
+ PLT_SET_USED(intr_handle);
+
+ return -ENOTSUP;
+}
+
+#endif /* __linux__ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 072b3e5..79e9171 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -2,6 +2,8 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include <rte_log.h>
+
#include "roc_api.h"
int
@@ -24,3 +26,5 @@ plt_init(void)
roc_model_init(mz->addr);
return 0;
}
+
+RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 7477613..ea38216 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -124,12 +124,23 @@
#define plt_strlcpy rte_strlcpy
/* Log */
+extern int cnxk_logtype_base;
#define plt_err(fmt, args...) \
RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
#define plt_info(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args)
#define plt_warn(fmt, args...) RTE_LOG(WARNING, PMD, fmt "\n", ##args)
#define plt_print(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args)
+/**
+ * Log debug message if given subsystem logging is enabled.
+ */
+#define plt_dbg(subsystem, fmt, args...) \
+ rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
+ "[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
+ ##args)
+
+#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
+
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
{ \
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index be37c6c..8c250f4 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -8,4 +8,7 @@
/* Utils */
#include "roc_util_priv.h"
+/* Dev */
+#include "roc_dev_priv.h"
+
#endif /* _ROC_PRIV_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 227f2ce..e59bde9 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -1,6 +1,7 @@
INTERNAL {
global:
+ cnxk_logtype_base;
plt_init;
roc_error_msg_get;
roc_model;
--
2.8.4
* [dpdk-dev] [PATCH 05/52] common/cnxk: add mbox request and response definitions
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram, Harman Kalra, Shijith Thotton
From: Jerin Jacob <jerinj@marvell.com>
The admin function (AF) driver sits in the Linux kernel as a mailbox
server. The DPDK AF mailbox client sends messages to the mailbox
server to complete administrative tasks such as getting the MAC
address.
This patch adds the mailbox request and response definitions for the
existing mailbox shared between the AF driver and the DPDK driver.
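As a rough, host-only sketch of how these definitions are used (assumptions:
struct msghdr_sketch mirrors the layout of struct mbox_msghdr with the __io
volatile qualifiers dropped, and REQ_SIG / count_req_msgs are illustrative
names, not driver APIs), a client could walk the messages chained via
next_msgoff like this:

```c
#include <stdint.h>

/* Trimmed-down mirror of struct mbox_msghdr from this patch; the __io
 * (volatile) qualifiers are dropped so it works on plain host memory.
 */
struct msghdr_sketch {
	uint16_t pcifunc;     /* who is sending this msg */
	uint16_t id;          /* mbox message ID */
	uint16_t sig;         /* MBOX_REQ_SIG (0xdead) / MBOX_RSP_SIG */
	uint16_t ver;
	uint16_t next_msgoff; /* offset of next msg within mbox region */
	int rc;               /* msg processed response code */
};

#define REQ_SIG 0xdead

/* Count messages carrying a valid request signature, following the
 * next_msgoff chain; a zero next_msgoff ends the walk in this sketch
 * (the real driver bounds the walk with num_msgs from struct mbox_hdr).
 */
static int count_req_msgs(const uint8_t *region)
{
	const struct msghdr_sketch *hdr;
	uint16_t off = 0;
	int n = 0;

	for (;;) {
		hdr = (const struct msghdr_sketch *)(const void *)(region + off);
		if (hdr->sig != REQ_SIG)
			break;
		n++;
		if (hdr->next_msgoff == 0)
			break;
		off = hdr->next_msgoff;
	}
	return n;
}
```

The sig field is what lets a walker like this reject corrupted or stale
messages before trusting next_msgoff.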
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
drivers/common/cnxk/roc_api.h | 3 +
drivers/common/cnxk/roc_mbox.h | 1732 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 1735 insertions(+)
create mode 100644 drivers/common/cnxk/roc_mbox.h
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 7dbeb6a..25c50d8 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -76,6 +76,9 @@
/* Model */
#include "roc_model.h"
+/* Mbox */
+#include "roc_mbox.h"
+
/* Utils */
#include "roc_utils.h"
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
new file mode 100644
index 0000000..3b941d2
--- /dev/null
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -0,0 +1,1732 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __ROC_MBOX_H__
+#define __ROC_MBOX_H__
+
+#include <errno.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/* Device memory does not support unaligned access; instruct the compiler
+ * not to optimize memory accesses when working with mailbox memory.
+ */
+#define __io volatile
+
+/* Header which precedes all mbox messages */
+struct mbox_hdr {
+ uint64_t __io msg_size; /* Total msgs size embedded */
+ uint16_t __io num_msgs; /* No of msgs embedded */
+};
+
+/* Header which precedes every msg and is also part of it */
+struct mbox_msghdr {
+ uint16_t __io pcifunc; /* Who's sending this msg */
+ uint16_t __io id; /* Mbox message ID */
+#define MBOX_REQ_SIG (0xdead)
+#define MBOX_RSP_SIG (0xbeef)
+ /* Signature, for validating corrupted msgs */
+ uint16_t __io sig;
+#define MBOX_VERSION (0x000a)
+ /* Version of msg's structure for this ID */
+ uint16_t __io ver;
+ /* Offset of next msg within mailbox region */
+ uint16_t __io next_msgoff;
+ int __io rc; /* Msg processed response code */
+};
+
+/* Mailbox message types */
+#define MBOX_MSG_MASK 0xFFFF
+#define MBOX_MSG_INVALID 0xFFFE
+#define MBOX_MSG_MAX 0xFFFF
+
+#define MBOX_MESSAGES \
+ /* Generic mbox IDs (range 0x000 - 0x1FF) */ \
+ M(READY, 0x001, ready, msg_req, ready_msg_rsp) \
+ M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp) \
+ M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp) \
+ M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \
+ M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \
+ M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \
+ M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \
+ M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \
+ M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \
+ M(LMTST_TBL_SETUP, 0x00a, lmtst_tbl_setup, lmtst_tbl_setup_req, \
+ msg_rsp) \
+ /* CGX mbox IDs (range 0x200 - 0x3FF) */ \
+ M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
+ M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
+ M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \
+ M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get, \
+ cgx_mac_addr_set_or_get) \
+ M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get, \
+ cgx_mac_addr_set_or_get) \
+ M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \
+ M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \
+ M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \
+ M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \
+ M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, \
+ cgx_link_info_msg) \
+ M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \
+ M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \
+ M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \
+ M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \
+ M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \
+ cgx_pause_frm_cfg) \
+ M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \
+ M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \
+ M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \
+ cgx_mac_addr_add_rsp) \
+ M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \
+ msg_rsp) \
+ M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \
+ cgx_max_dmac_entries_get_rsp) \
+ M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \
+ cgx_set_link_state_msg, msg_rsp) \
+ M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \
+ cgx_phy_mod_type) \
+ M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \
+ msg_rsp) \
+ M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \
+ M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req, \
+ cgx_set_link_mode_rsp) \
+ M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, \
+ msg_rsp) \
+ M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \
+ M(RPM_STATS, 0x21C, rpm_stats, msg_req, rpm_stats_rsp) \
+ /* NPA mbox IDs (range 0x400 - 0x5FF) */ \
+ M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \
+ npa_lf_alloc_rsp) \
+ M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \
+ M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp) \
+ M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, \
+ msg_rsp) \
+ /* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
+ M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \
+ sso_lf_alloc_rsp) \
+ M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \
+ M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp) \
+ M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \
+ M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \
+ msg_rsp) \
+ M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \
+ msg_rsp) \
+ M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \
+ sso_grp_priority) \
+ M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \
+ M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \
+ msg_rsp) \
+ M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \
+ sso_grp_stats) \
+ M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \
+ sso_hws_stats) \
+ M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \
+ sso_hw_xaq_release, msg_rsp) \
+ /* TIM mbox IDs (range 0x800 - 0x9FF) */ \
+ M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \
+ tim_lf_alloc_rsp) \
+ M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \
+ M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp) \
+ M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \
+ tim_enable_rsp) \
+ M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \
+ /* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
+ M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, msg_rsp) \
+ M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \
+ M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \
+ cpt_rd_wr_reg_msg) \
+ M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \
+ cpt_set_crypto_grp_req_msg, msg_rsp) \
+ M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \
+ cpt_inline_ipsec_cfg_msg, msg_rsp) \
+ M(CPT_STATS, 0xA05, cpt_sts_get, cpt_sts_req, cpt_sts_rsp) \
+ M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \
+ msg_rsp) \
+ M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \
+ cpt_rx_inline_lf_cfg_msg, msg_rsp) \
+ M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \
+ M(CPT_GET_ENG_GRP, 0xBFF, cpt_eng_grp_get, cpt_eng_grp_req, \
+ cpt_eng_grp_rsp) \
+ /* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
+ M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \
+ npc_mcam_alloc_entry_req, npc_mcam_alloc_entry_rsp) \
+ M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \
+ npc_mcam_free_entry_req, msg_rsp) \
+ M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \
+ npc_mcam_write_entry_req, msg_rsp) \
+ M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \
+ npc_mcam_ena_dis_entry_req, msg_rsp) \
+ M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \
+ npc_mcam_ena_dis_entry_req, msg_rsp) \
+ M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \
+ npc_mcam_shift_entry_req, npc_mcam_shift_entry_rsp) \
+ M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \
+ npc_mcam_alloc_counter_req, npc_mcam_alloc_counter_rsp) \
+ M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \
+ npc_mcam_oper_counter_req, msg_rsp) \
+ M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \
+ npc_mcam_unmap_counter_req, msg_rsp) \
+ M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \
+ npc_mcam_oper_counter_req, msg_rsp) \
+ M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \
+ npc_mcam_oper_counter_req, npc_mcam_oper_counter_rsp) \
+ M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, \
+ npc_mcam_alloc_and_write_entry, npc_mcam_alloc_and_write_entry_req, \
+ npc_mcam_alloc_and_write_entry_rsp) \
+ M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \
+ npc_get_kex_cfg_rsp) \
+ M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, npc_install_flow_req, \
+ npc_install_flow_rsp) \
+ M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, npc_delete_flow_req, \
+ msg_rsp) \
+ M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \
+ npc_mcam_read_entry_req, npc_mcam_read_entry_rsp) \
+ M(NPC_SET_PKIND, 0x6010, npc_set_pkind, npc_set_pkind, msg_rsp) \
+ M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \
+ npc_mcam_read_base_rule_rsp) \
+ /* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \
+ M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \
+ nix_lf_alloc_rsp) \
+ M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \
+ M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, nix_aq_enq_rsp) \
+ M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \
+ msg_rsp) \
+ M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \
+ nix_txsch_alloc_rsp) \
+ M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, msg_rsp) \
+ M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \
+ nix_txschq_config) \
+ M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \
+ M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \
+ M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \
+ nix_rss_flowkey_cfg, nix_rss_flowkey_cfg_rsp) \
+ M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \
+ msg_rsp) \
+ M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \
+ M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \
+ M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \
+ M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \
+ M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \
+ nix_mark_format_cfg, nix_mark_format_cfg_rsp) \
+ M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \
+ M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \
+ nix_lso_format_cfg_rsp) \
+ M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \
+ msg_rsp) \
+ M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \
+ msg_rsp) \
+ M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \
+ msg_rsp) \
+ M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \
+ nix_bp_cfg_rsp) \
+ M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp) \
+ M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \
+ nix_get_mac_addr_rsp) \
+ M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \
+ nix_inline_ipsec_cfg, msg_rsp) \
+ M(NIX_INLINE_IPSEC_LF_CFG, 0x801a, nix_inline_ipsec_lf_cfg, \
+ nix_inline_ipsec_lf_cfg, msg_rsp) \
+ M(NIX_CN10K_AQ_ENQ, 0x801b, nix_cn10k_aq_enq, nix_cn10k_aq_enq_req, \
+ nix_cn10k_aq_enq_rsp) \
+ M(NIX_GET_HW_INFO, 0x801c, nix_get_hw_info, msg_req, nix_hw_info)
+
+/* Messages initiated by AF (range 0xC00 - 0xDFF) */
+#define MBOX_UP_CGX_MESSAGES \
+ M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp) \
+ M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, msg_rsp)
+
+enum {
+#define M(_name, _id, _1, _2, _3) MBOX_MSG_##_name = _id,
+ MBOX_MESSAGES MBOX_UP_CGX_MESSAGES
+#undef M
+};
+
+/* Mailbox message formats */
+
+#define RVU_DEFAULT_PF_FUNC 0xFFFF
+
+/* Generic request msg used for those mbox messages which
+ * don't send any data in the request.
+ */
+struct msg_req {
+ struct mbox_msghdr hdr;
+};
+
+/* Generic response msg used as an ack or response for those mbox
+ * messages which do not have a specific rsp msg format.
+ */
+struct msg_rsp {
+ struct mbox_msghdr hdr;
+};
+
+/* RVU mailbox error codes
+ * Range 256 - 300.
+ */
+enum rvu_af_status {
+ RVU_INVALID_VF_ID = -256,
+};
+
+struct ready_msg_rsp {
+ struct mbox_msghdr hdr;
+ uint16_t __io sclk_freq; /* SCLK frequency */
+ uint16_t __io rclk_freq; /* RCLK frequency */
+};
+
+/* Struct to set pkind */
+struct npc_set_pkind {
+ struct mbox_msghdr hdr;
+#define ROC_PRIV_FLAGS_DEFAULT BIT_ULL(0)
+#define ROC_PRIV_FLAGS_EDSA BIT_ULL(1)
+#define ROC_PRIV_FLAGS_HIGIG BIT_ULL(2)
+#define ROC_PRIV_FLAGS_LEN_90B BIT_ULL(3)
+#define ROC_PRIV_FLAGS_CUSTOM BIT_ULL(63)
+ uint64_t __io mode;
+#define PKIND_TX BIT_ULL(0)
+#define PKIND_RX BIT_ULL(1)
+ uint8_t __io dir;
+ uint8_t __io pkind; /* valid only when the custom flag is set */
+};
+
+/* Structure for requesting resource provisioning.
+ * 'modify' flag to be used when either requesting more
+ * resources or detaching only part of a certain resource type.
+ * The rest of the fields specify how many of each type are
+ * to be attached.
+ * To request LFs from two blocks of same type this mailbox
+ * can be sent twice as below:
+ * struct rsrc_attach *attach;
+ * .. Allocate memory for message ..
+ * attach->cptlfs = 3; <3 LFs from CPT0>
+ * .. Send message ..
+ * .. Allocate memory for message ..
+ * attach->modify = 1;
+ * attach->cpt_blkaddr = BLKADDR_CPT1;
+ * attach->cptlfs = 2; <2 LFs from CPT1>
+ * .. Send message ..
+ */
+struct rsrc_attach_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io modify : 1;
+ uint8_t __io npalf : 1;
+ uint8_t __io nixlf : 1;
+ uint16_t __io sso;
+ uint16_t __io ssow;
+ uint16_t __io timlfs;
+ uint16_t __io cptlfs;
+ uint16_t __io reelfs;
+ /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
+ int __io cpt_blkaddr;
+ /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */
+ int __io ree_blkaddr;
+};
+
+/* Structure for relinquishing resources.
+ * 'partial' flag to be used when relinquishing only resources
+ * of certain types. If not set, all resources of all
+ * types provisioned to the RVU function will be detached.
+ */
+struct rsrc_detach_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io partial : 1;
+ uint8_t __io npalf : 1;
+ uint8_t __io nixlf : 1;
+ uint8_t __io sso : 1;
+ uint8_t __io ssow : 1;
+ uint8_t __io timlfs : 1;
+ uint8_t __io cptlfs : 1;
+ uint8_t __io reelfs : 1;
+};
+
+/* NIX Transmit schedulers */
+#define NIX_TXSCH_LVL_SMQ 0x0
+#define NIX_TXSCH_LVL_MDQ 0x0
+#define NIX_TXSCH_LVL_TL4 0x1
+#define NIX_TXSCH_LVL_TL3 0x2
+#define NIX_TXSCH_LVL_TL2 0x3
+#define NIX_TXSCH_LVL_TL1 0x4
+#define NIX_TXSCH_LVL_CNT 0x5
+
+/*
+ * Number of resources available to the caller.
+ * In reply to MBOX_MSG_FREE_RSRC_CNT.
+ */
+struct free_rsrcs_rsp {
+ struct mbox_msghdr hdr;
+ uint16_t __io schq[NIX_TXSCH_LVL_CNT];
+ uint16_t __io sso;
+ uint16_t __io tim;
+ uint16_t __io ssow;
+ uint16_t __io cpt;
+ uint8_t __io npa;
+ uint8_t __io nix;
+ uint16_t __io schq_nix1[NIX_TXSCH_LVL_CNT];
+ uint8_t __io nix1;
+ uint8_t __io cpt1;
+ uint8_t __io ree0;
+ uint8_t __io ree1;
+};
+
+#define MSIX_VECTOR_INVALID 0xFFFF
+#define MAX_RVU_BLKLF_CNT 256
+
+struct msix_offset_rsp {
+ struct mbox_msghdr hdr;
+ uint16_t __io npa_msixoff;
+ uint16_t __io nix_msixoff;
+ uint16_t __io sso;
+ uint16_t __io ssow;
+ uint16_t __io timlfs;
+ uint16_t __io cptlfs;
+ uint16_t __io sso_msixoff[MAX_RVU_BLKLF_CNT];
+ uint16_t __io ssow_msixoff[MAX_RVU_BLKLF_CNT];
+ uint16_t __io timlf_msixoff[MAX_RVU_BLKLF_CNT];
+ uint16_t __io cptlf_msixoff[MAX_RVU_BLKLF_CNT];
+ uint16_t __io cpt1_lfs;
+ uint16_t __io ree0_lfs;
+ uint16_t __io ree1_lfs;
+ uint16_t __io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT];
+ uint16_t __io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT];
+ uint16_t __io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT];
+};
+
+struct lmtst_tbl_setup_req {
+ struct mbox_msghdr hdr;
+
+ uint64_t __io dis_sched_early_comp : 1;
+ uint64_t __io sched_ena : 1;
+ uint64_t __io dis_line_pref : 1;
+ uint64_t __io ssow_pf_func : 13;
+ uint16_t __io pcifunc;
+};
+
+/* CGX mbox message formats */
+
+struct cgx_stats_rsp {
+ struct mbox_msghdr hdr;
+#define CGX_RX_STATS_COUNT 13
+#define CGX_TX_STATS_COUNT 18
+ uint64_t __io rx_stats[CGX_RX_STATS_COUNT];
+ uint64_t __io tx_stats[CGX_TX_STATS_COUNT];
+};
+
+struct rpm_stats_rsp {
+ struct mbox_msghdr hdr;
+#define RPM_RX_STATS_COUNT 43
+#define RPM_TX_STATS_COUNT 34
+ uint64_t __io rx_stats[RPM_RX_STATS_COUNT];
+ uint64_t __io tx_stats[RPM_TX_STATS_COUNT];
+};
+
+struct cgx_fec_stats_rsp {
+ struct mbox_msghdr hdr;
+ uint64_t __io fec_corr_blks;
+ uint64_t __io fec_uncorr_blks;
+};
+
+/* Structure for requesting the operation for
+ * setting/getting mac address in the CGX interface
+ */
+struct cgx_mac_addr_set_or_get {
+ struct mbox_msghdr hdr;
+ uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN];
+};
+
+/* Structure for requesting the operation to
+ * add DMAC filter entry into CGX interface
+ */
+struct cgx_mac_addr_add_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN];
+};
+
+/* Structure for response against the operation to
+ * add DMAC filter entry into CGX interface
+ */
+struct cgx_mac_addr_add_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io index;
+};
+
+/* Structure for requesting the operation to
+ * delete DMAC filter entry from CGX interface
+ */
+struct cgx_mac_addr_del_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io index;
+};
+
+/* Structure for response against the operation to
+ * get maximum supported DMAC filter entries
+ */
+struct cgx_max_dmac_entries_get_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io max_dmac_filters;
+};
+
+struct cgx_link_user_info {
+ uint64_t __io link_up : 1;
+ uint64_t __io full_duplex : 1;
+ uint64_t __io lmac_type_id : 4;
+ uint64_t __io speed : 20; /* speed in Mbps */
+ uint64_t __io an : 1; /* AN supported or not */
+ uint64_t __io fec : 2; /* FEC type if enabled else 0 */
+ uint64_t __io port : 8;
+#define LMACTYPE_STR_LEN 16
+ char lmac_type[LMACTYPE_STR_LEN];
+};
+
+struct cgx_link_info_msg {
+ struct mbox_msghdr hdr;
+ struct cgx_link_user_info link_info;
+};
+
+struct cgx_ptp_rx_info_msg {
+ struct mbox_msghdr hdr;
+ uint8_t __io ptp_en;
+};
+
+struct cgx_pause_frm_cfg {
+ struct mbox_msghdr hdr;
+ uint8_t __io set;
+ /* set = 1 if the request is to config pause frames */
+ /* set = 0 if the request is to fetch pause frames config */
+ uint8_t __io rx_pause;
+ uint8_t __io tx_pause;
+};
+
+struct sfp_eeprom_s {
+#define SFP_EEPROM_SIZE 256
+ uint16_t __io sff_id;
+ uint8_t __io buf[SFP_EEPROM_SIZE];
+ uint64_t __io reserved;
+};
+
+enum fec_type {
+ ROC_FEC_NONE,
+ ROC_FEC_BASER,
+ ROC_FEC_RS,
+};
+
+struct phy_s {
+ uint64_t __io can_change_mod_type : 1;
+ uint64_t __io mod_type : 1;
+};
+
+struct cgx_lmac_fwdata_s {
+ uint16_t __io rw_valid;
+ uint64_t __io supported_fec;
+ uint64_t __io supported_an;
+ uint64_t __io supported_link_modes;
+ /* Only applicable if AN is supported */
+ uint64_t __io advertised_fec;
+ uint64_t __io advertised_link_modes;
+ /* Only applicable if SFP/QSFP slot is present */
+ struct sfp_eeprom_s sfp_eeprom;
+ struct phy_s phy;
+#define LMAC_FWDATA_RESERVED_MEM 1023
+ uint64_t __io reserved[LMAC_FWDATA_RESERVED_MEM];
+};
+
+struct cgx_fw_data {
+ struct mbox_msghdr hdr;
+ struct cgx_lmac_fwdata_s fwdata;
+};
+
+struct fec_mode {
+ struct mbox_msghdr hdr;
+ int __io fec;
+};
+
+struct cgx_set_link_state_msg {
+ struct mbox_msghdr hdr;
+ uint8_t __io enable;
+};
+
+struct cgx_phy_mod_type {
+ struct mbox_msghdr hdr;
+ int __io mod;
+};
+
+struct cgx_set_link_mode_args {
+ uint32_t __io speed;
+ uint8_t __io duplex;
+ uint8_t __io an;
+ uint8_t __io ports;
+ uint64_t __io mode;
+};
+
+struct cgx_set_link_mode_req {
+ struct mbox_msghdr hdr;
+ struct cgx_set_link_mode_args args;
+};
+
+struct cgx_set_link_mode_rsp {
+ struct mbox_msghdr hdr;
+ int __io status;
+};
+
+/* NPA mbox message formats */
+
+/* NPA mailbox error codes
+ * Range 301 - 400.
+ */
+enum npa_af_status {
+ NPA_AF_ERR_PARAM = -301,
+ NPA_AF_ERR_AQ_FULL = -302,
+ NPA_AF_ERR_AQ_ENQUEUE = -303,
+ NPA_AF_ERR_AF_LF_INVALID = -304,
+ NPA_AF_ERR_AF_LF_ALLOC = -305,
+ NPA_AF_ERR_LF_RESET = -306,
+};
+
+#define NPA_AURA_SZ_0 0
+#define NPA_AURA_SZ_128 1
+#define NPA_AURA_SZ_256 2
+#define NPA_AURA_SZ_512 3
+#define NPA_AURA_SZ_1K 4
+#define NPA_AURA_SZ_2K 5
+#define NPA_AURA_SZ_4K 6
+#define NPA_AURA_SZ_8K 7
+#define NPA_AURA_SZ_16K 8
+#define NPA_AURA_SZ_32K 9
+#define NPA_AURA_SZ_64K 10
+#define NPA_AURA_SZ_128K 11
+#define NPA_AURA_SZ_256K 12
+#define NPA_AURA_SZ_512K 13
+#define NPA_AURA_SZ_1M 14
+#define NPA_AURA_SZ_MAX 15
+
+/* For NPA LF context alloc and init */
+struct npa_lf_alloc_req {
+ struct mbox_msghdr hdr;
+ int __io node;
+ int __io aura_sz; /* No of auras. See NPA_AURA_SZ_* */
+ uint32_t __io nr_pools; /* No of pools */
+ uint64_t __io way_mask;
+};
+
+struct npa_lf_alloc_rsp {
+ struct mbox_msghdr hdr;
+ uint32_t __io stack_pg_ptrs; /* No of ptrs per stack page */
+ uint32_t __io stack_pg_bytes; /* Size of stack page */
+ uint16_t __io qints; /* NPA_AF_CONST::QINTS */
+ uint8_t __io cache_lines; /* Batch Alloc DMA */
+};
+
+/* NPA AQ enqueue msg */
+struct npa_aq_enq_req {
+ struct mbox_msghdr hdr;
+ uint32_t __io aura_id;
+ uint8_t __io ctype;
+ uint8_t __io op;
+ union {
+ /* Valid when op == WRITE/INIT and ctype == AURA.
+ * LF fills the pool_id in aura.pool_addr. AF will translate
+ * the pool_id to pool context pointer.
+ */
+ __io struct npa_aura_s aura;
+ /* Valid when op == WRITE/INIT and ctype == POOL */
+ __io struct npa_pool_s pool;
+ };
+ /* Mask data when op == WRITE (1=write, 0=don't write) */
+ union {
+ /* Valid when op == WRITE and ctype == AURA */
+ __io struct npa_aura_s aura_mask;
+ /* Valid when op == WRITE and ctype == POOL */
+ __io struct npa_pool_s pool_mask;
+ };
+};
+
+struct npa_aq_enq_rsp {
+ struct mbox_msghdr hdr;
+ union {
+ /* Valid when op == READ and ctype == AURA */
+ __io struct npa_aura_s aura;
+ /* Valid when op == READ and ctype == POOL */
+ __io struct npa_pool_s pool;
+ };
+};
+
+/* Disable all contexts of type 'ctype' */
+struct hwctx_disable_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io ctype;
+};
+
+/* NIX mbox message formats */
+
+/* NIX mailbox error codes
+ * Range 401 - 500.
+ */
+enum nix_af_status {
+ NIX_AF_ERR_PARAM = -401,
+ NIX_AF_ERR_AQ_FULL = -402,
+ NIX_AF_ERR_AQ_ENQUEUE = -403,
+ NIX_AF_ERR_AF_LF_INVALID = -404,
+ NIX_AF_ERR_AF_LF_ALLOC = -405,
+ NIX_AF_ERR_TLX_ALLOC_FAIL = -406,
+ NIX_AF_ERR_TLX_INVALID = -407,
+ NIX_AF_ERR_RSS_SIZE_INVALID = -408,
+ NIX_AF_ERR_RSS_GRPS_INVALID = -409,
+ NIX_AF_ERR_FRS_INVALID = -410,
+ NIX_AF_ERR_RX_LINK_INVALID = -411,
+ NIX_AF_INVAL_TXSCHQ_CFG = -412,
+ NIX_AF_SMQ_FLUSH_FAILED = -413,
+ NIX_AF_ERR_LF_RESET = -414,
+ NIX_AF_ERR_RSS_NOSPC_FIELD = -415,
+ NIX_AF_ERR_RSS_NOSPC_ALGO = -416,
+ NIX_AF_ERR_MARK_CFG_FAIL = -417,
+ NIX_AF_ERR_LSO_CFG_FAIL = -418,
+ NIX_AF_INVAL_NPA_PF_FUNC = -419,
+ NIX_AF_INVAL_SSO_PF_FUNC = -420,
+ NIX_AF_ERR_TX_VTAG_NOSPC = -421,
+ NIX_AF_ERR_RX_VTAG_INUSE = -422,
+ NIX_AF_ERR_PTP_CONFIG_FAIL = -423,
+};
+
+/* For NIX LF context alloc and init */
+struct nix_lf_alloc_req {
+ struct mbox_msghdr hdr;
+ int __io node;
+ uint32_t __io rq_cnt; /* No of receive queues */
+ uint32_t __io sq_cnt; /* No of send queues */
+ uint32_t __io cq_cnt; /* No of completion queues */
+ uint8_t __io xqe_sz;
+ uint16_t __io rss_sz;
+ uint8_t __io rss_grps;
+ uint16_t __io npa_func;
+ /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */
+ uint16_t __io sso_func;
+ uint64_t __io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */
+ uint64_t __io way_mask;
+#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0)
+ uint64_t flags;
+};
+
+struct nix_lf_alloc_rsp {
+ struct mbox_msghdr hdr;
+ uint16_t __io sqb_size;
+ uint16_t __io rx_chan_base;
+ uint16_t __io tx_chan_base;
+ uint8_t __io rx_chan_cnt; /* Total number of RX channels */
+ uint8_t __io tx_chan_cnt; /* Total number of TX channels */
+ uint8_t __io lso_tsov4_idx;
+ uint8_t __io lso_tsov6_idx;
+ uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN];
+ uint8_t __io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */
+ uint8_t __io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */
+ uint16_t __io cints; /* NIX_AF_CONST2::CINTS */
+ uint16_t __io qints; /* NIX_AF_CONST2::QINTS */
+ uint8_t __io hw_rx_tstamp_en; /* Set if RX timestamping enabled */
+ uint8_t __io cgx_links; /* No. of CGX links present in HW */
+ uint8_t __io lbk_links; /* No. of LBK links present in HW */
+ uint8_t __io sdp_links; /* No. of SDP links present in HW */
+ uint8_t tx_link; /* Transmit channel link number */
+};
+
+struct nix_lf_free_req {
+ struct mbox_msghdr hdr;
+#define NIX_LF_DISABLE_FLOWS BIT_ULL(0)
+#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1)
+ uint64_t __io flags;
+};
+
+/* CN10x NIX AQ enqueue msg */
+struct nix_cn10k_aq_enq_req {
+ struct mbox_msghdr hdr;
+ uint32_t __io qidx;
+ uint8_t __io ctype;
+ uint8_t __io op;
+ union {
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
+ __io struct nix_cn10k_rq_ctx_s rq;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
+ __io struct nix_cn10k_sq_ctx_s sq;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */
+ __io struct nix_cq_ctx_s cq;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */
+ __io struct nix_rsse_s rss;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */
+ __io struct nix_rx_mce_s mce;
+ };
+ /* Mask data when op == WRITE (1=write, 0=don't write) */
+ union {
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
+ __io struct nix_cn10k_rq_ctx_s rq_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
+ __io struct nix_cn10k_sq_ctx_s sq_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */
+ __io struct nix_cq_ctx_s cq_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */
+ __io struct nix_rsse_s rss_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */
+ __io struct nix_rx_mce_s mce_mask;
+ };
+};
+
+struct nix_cn10k_aq_enq_rsp {
+ struct mbox_msghdr hdr;
+ union {
+ struct nix_cn10k_rq_ctx_s rq;
+ struct nix_cn10k_sq_ctx_s sq;
+ struct nix_cq_ctx_s cq;
+ struct nix_rsse_s rss;
+ struct nix_rx_mce_s mce;
+ };
+};
+
+/* NIX AQ enqueue msg */
+struct nix_aq_enq_req {
+ struct mbox_msghdr hdr;
+ uint32_t __io qidx;
+ uint8_t __io ctype;
+ uint8_t __io op;
+ union {
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
+ __io struct nix_rq_ctx_s rq;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
+ __io struct nix_sq_ctx_s sq;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */
+ __io struct nix_cq_ctx_s cq;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */
+ __io struct nix_rsse_s rss;
+ /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */
+ __io struct nix_rx_mce_s mce;
+ };
+ /* Mask data when op == WRITE (1=write, 0=don't write) */
+ union {
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
+ __io struct nix_rq_ctx_s rq_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
+ __io struct nix_sq_ctx_s sq_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */
+ __io struct nix_cq_ctx_s cq_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */
+ __io struct nix_rsse_s rss_mask;
+ /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */
+ __io struct nix_rx_mce_s mce_mask;
+ };
+};
+
+struct nix_aq_enq_rsp {
+ struct mbox_msghdr hdr;
+ union {
+ __io struct nix_rq_ctx_s rq;
+ __io struct nix_sq_ctx_s sq;
+ __io struct nix_cq_ctx_s cq;
+ __io struct nix_rsse_s rss;
+ __io struct nix_rx_mce_s mce;
+ };
+};
+
+/* Tx scheduler/shaper mailbox messages */
+
+#define MAX_TXSCHQ_PER_FUNC 128
+
+struct nix_txsch_alloc_req {
+ struct mbox_msghdr hdr;
+ /* Scheduler queue count request at each level */
+ uint16_t __io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
+ uint16_t __io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
+};
+
+struct nix_txsch_alloc_rsp {
+ struct mbox_msghdr hdr;
+ /* Scheduler queue count allocated at each level */
+ uint16_t __io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
+ uint16_t __io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
+ /* Scheduler queue list allocated at each level */
+ uint16_t __io schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ uint16_t __io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+ /* Traffic aggregation scheduler level */
+ uint8_t __io aggr_level;
+ /* Aggregation lvl's RR_PRIO config */
+ uint8_t __io aggr_lvl_rr_prio;
+ /* Whether LINKX_CFG CSRs are mapped to TL3 or TL2's index */
+ uint8_t __io link_cfg_lvl;
+};
+
+struct nix_txsch_free_req {
+ struct mbox_msghdr hdr;
+#define TXSCHQ_FREE_ALL BIT_ULL(0)
+ uint16_t __io flags;
+ /* Scheduler queue level to be freed */
+ uint16_t __io schq_lvl;
+ /* Scheduler queue to be freed */
+ uint16_t __io schq;
+};
+
+struct nix_txschq_config {
+ struct mbox_msghdr hdr;
+ uint8_t __io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */
+ uint8_t __io read;
+#define TXSCHQ_IDX_SHIFT 16
+#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1)
+#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK)
+ uint8_t __io num_regs;
+#define MAX_REGS_PER_MBOX_MSG 20
+ uint64_t __io reg[MAX_REGS_PER_MBOX_MSG];
+ uint64_t __io regval[MAX_REGS_PER_MBOX_MSG];
+ /* All 0's => overwrite with new value */
+ uint64_t __io regval_mask[MAX_REGS_PER_MBOX_MSG];
+};
+
+struct nix_vtag_config {
+ struct mbox_msghdr hdr;
+ /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */
+ uint8_t __io vtag_size;
+ /* cfg_type is '0' for tx vlan cfg
+ * cfg_type is '1' for rx vlan cfg
+ */
+ uint8_t __io cfg_type;
+ union {
+ /* Valid when cfg_type is '0' */
+ struct {
+ uint64_t __io vtag0;
+ uint64_t __io vtag1;
+
+ /* cfg_vtag0 & cfg_vtag1 fields are valid
+ * when free_vtag0 & free_vtag1 are '0's.
+ */
+ /* cfg_vtag0 = 1 to configure vtag0 */
+ uint8_t __io cfg_vtag0 : 1;
+ /* cfg_vtag1 = 1 to configure vtag1 */
+ uint8_t __io cfg_vtag1 : 1;
+
+ /* vtag0_idx & vtag1_idx are only valid when
+ * both cfg_vtag0 & cfg_vtag1 are '0's,
+ * these fields are used along with free_vtag0
+ * & free_vtag1 to free the nix lf's tx_vlan
+ * configuration.
+ *
+ * Denotes the indices of tx_vtag def registers
+ * that needs to be cleared and freed.
+ */
+ int __io vtag0_idx;
+ int __io vtag1_idx;
+
+ /* Free_vtag0 & free_vtag1 fields are valid
+ * when cfg_vtag0 & cfg_vtag1 are '0's.
+ */
+ /* Free_vtag0 = 1 clears vtag0 configuration
+ * vtag0_idx denotes the index to be cleared.
+ */
+ uint8_t __io free_vtag0 : 1;
+ /* Free_vtag1 = 1 clears vtag1 configuration
+ * vtag1_idx denotes the index to be cleared.
+ */
+ uint8_t __io free_vtag1 : 1;
+ } tx;
+
+ /* Valid when cfg_type is '1' */
+ struct {
+ /* Rx vtag type index, valid values are in 0..7 range */
+ uint8_t __io vtag_type;
+ /* Rx vtag strip */
+ uint8_t __io strip_vtag : 1;
+ /* Rx vtag capture */
+ uint8_t __io capture_vtag : 1;
+ } rx;
+ };
+};
+
+struct nix_vtag_config_rsp {
+ struct mbox_msghdr hdr;
+ /* Indices of tx_vtag def registers used to configure
+ * tx vtag0 & vtag1 headers, these indices are valid
+ * when nix_vtag_config mbox requested for vtag0 and/
+ * or vtag1 configuration.
+ */
+ int __io vtag0_idx;
+ int __io vtag1_idx;
+};
+
+struct nix_rss_flowkey_cfg {
+ struct mbox_msghdr hdr;
+ int __io mcam_index; /* MCAM entry index to modify */
+ uint32_t __io flowkey_cfg; /* Flowkey types selected */
+#define FLOW_KEY_TYPE_PORT BIT(0)
+#define FLOW_KEY_TYPE_IPV4 BIT(1)
+#define FLOW_KEY_TYPE_IPV6 BIT(2)
+#define FLOW_KEY_TYPE_TCP BIT(3)
+#define FLOW_KEY_TYPE_UDP BIT(4)
+#define FLOW_KEY_TYPE_SCTP BIT(5)
+#define FLOW_KEY_TYPE_NVGRE BIT(6)
+#define FLOW_KEY_TYPE_VXLAN BIT(7)
+#define FLOW_KEY_TYPE_GENEVE BIT(8)
+#define FLOW_KEY_TYPE_ETH_DMAC BIT(9)
+#define FLOW_KEY_TYPE_IPV6_EXT BIT(10)
+#define FLOW_KEY_TYPE_GTPU BIT(11)
+#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12)
+#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13)
+#define FLOW_KEY_TYPE_INNR_TCP BIT(14)
+#define FLOW_KEY_TYPE_INNR_UDP BIT(15)
+#define FLOW_KEY_TYPE_INNR_SCTP BIT(16)
+#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17)
+#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18)
+#define FLOW_KEY_TYPE_CUSTOM0 BIT(19)
+#define FLOW_KEY_TYPE_VLAN BIT(20)
+#define FLOW_KEY_TYPE_L4_DST BIT(28)
+#define FLOW_KEY_TYPE_L4_SRC BIT(29)
+#define FLOW_KEY_TYPE_L3_DST BIT(30)
+#define FLOW_KEY_TYPE_L3_SRC BIT(31)
+ uint8_t __io group; /* RSS context or group */
+};
+
+struct nix_rss_flowkey_cfg_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io alg_idx; /* Selected algo index */
+};
+
+struct nix_set_mac_addr {
+ struct mbox_msghdr hdr;
+ uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN];
+};
+
+struct nix_get_mac_addr_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io mac_addr[PLT_ETHER_ADDR_LEN];
+};
+
+struct nix_mark_format_cfg {
+ struct mbox_msghdr hdr;
+ uint8_t __io offset;
+ uint8_t __io y_mask;
+ uint8_t __io y_val;
+ uint8_t __io r_mask;
+ uint8_t __io r_val;
+};
+
+struct nix_mark_format_cfg_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io mark_format_idx;
+};
+
+struct nix_lso_format_cfg {
+ struct mbox_msghdr hdr;
+ uint64_t __io field_mask;
+ uint64_t __io fields[NIX_LSO_FIELD_MAX];
+};
+
+struct nix_lso_format_cfg_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io lso_format_idx;
+};
+
+struct nix_rx_mode {
+ struct mbox_msghdr hdr;
+#define NIX_RX_MODE_UCAST BIT(0)
+#define NIX_RX_MODE_PROMISC BIT(1)
+#define NIX_RX_MODE_ALLMULTI BIT(2)
+ uint16_t __io mode;
+};
+
+struct nix_rx_cfg {
+ struct mbox_msghdr hdr;
+#define NIX_RX_OL3_VERIFY BIT(0)
+#define NIX_RX_OL4_VERIFY BIT(1)
+ uint8_t __io len_verify; /* Outer L3/L4 len check */
+#define NIX_RX_CSUM_OL4_VERIFY BIT(0)
+ uint8_t __io csum_verify; /* Outer L4 checksum verification */
+};
+
+struct nix_frs_cfg {
+ struct mbox_msghdr hdr;
+ uint8_t __io update_smq; /* Update SMQ's min/max lens */
+ uint8_t __io update_minlen; /* Set minlen also */
+ uint8_t __io sdp_link; /* Set SDP RX link */
+ uint16_t __io maxlen;
+ uint16_t __io minlen;
+};
+
+struct nix_set_vlan_tpid {
+ struct mbox_msghdr hdr;
+#define NIX_VLAN_TYPE_INNER 0
+#define NIX_VLAN_TYPE_OUTER 1
+ uint8_t __io vlan_type;
+ uint16_t __io tpid;
+};
+
+struct nix_bp_cfg_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io chan_base; /* Starting channel number */
+ uint8_t __io chan_cnt; /* Number of channels */
+ uint8_t __io bpid_per_chan;
+ /* bpid_per_chan = 0 assigns single bp id for range of channels */
+ /* bpid_per_chan = 1 assigns separate bp id for each channel */
+};
+
+/* PF can be mapped to either CGX or LBK interface,
+ * so maximum 64 channels are possible.
+ */
+#define NIX_MAX_CHAN 64
+struct nix_bp_cfg_rsp {
+ struct mbox_msghdr hdr;
+ /* Channel and bpid mapping */
+ uint16_t __io chan_bpid[NIX_MAX_CHAN];
+ /* Number of channel for which bpids are assigned */
+ uint8_t __io chan_cnt;
+};
+
+/* Global NIX inline IPSec configuration */
+struct nix_inline_ipsec_cfg {
+ struct mbox_msghdr hdr;
+ uint32_t __io cpt_credit;
+ struct {
+ uint8_t __io egrp;
+ uint8_t __io opcode;
+ } gen_cfg;
+ struct {
+ uint16_t __io cpt_pf_func;
+ uint8_t __io cpt_slot;
+ } inst_qsel;
+ uint8_t __io enable;
+};
+
+/* Per NIX LF inline IPSec configuration */
+struct nix_inline_ipsec_lf_cfg {
+ struct mbox_msghdr hdr;
+ uint64_t __io sa_base_addr;
+ struct {
+ uint32_t __io tag_const;
+ uint16_t __io lenm1_max;
+ uint8_t __io sa_pow2_size;
+ uint8_t __io tt;
+ } ipsec_cfg0;
+ struct {
+ uint32_t __io sa_idx_max;
+ uint8_t __io sa_idx_w;
+ } ipsec_cfg1;
+ uint8_t __io enable;
+};
+
+struct nix_hw_info {
+ struct mbox_msghdr hdr;
+ uint16_t __io vwqe_delay;
+ uint16_t __io rsvd[15];
+};
+
+/* SSO mailbox error codes
+ * Range 501 - 600.
+ */
+enum sso_af_status {
+ SSO_AF_ERR_PARAM = -501,
+ SSO_AF_ERR_LF_INVALID = -502,
+ SSO_AF_ERR_AF_LF_ALLOC = -503,
+ SSO_AF_ERR_GRP_EBUSY = -504,
+ SSO_AF_INVAL_NPA_PF_FUNC = -505,
+};
+
+struct sso_lf_alloc_req {
+ struct mbox_msghdr hdr;
+ int __io node;
+ uint16_t __io hwgrps;
+};
+
+struct sso_lf_alloc_rsp {
+ struct mbox_msghdr hdr;
+ uint32_t __io xaq_buf_size;
+ uint32_t __io xaq_wq_entries;
+ uint32_t __io in_unit_entries;
+ uint16_t __io hwgrps;
+};
+
+struct sso_lf_free_req {
+ struct mbox_msghdr hdr;
+ int __io node;
+ uint16_t __io hwgrps;
+};
+
+/* SSOW mailbox error codes
+ * Range 601 - 700.
+ */
+enum ssow_af_status {
+ SSOW_AF_ERR_PARAM = -601,
+ SSOW_AF_ERR_LF_INVALID = -602,
+ SSOW_AF_ERR_AF_LF_ALLOC = -603,
+};
+
+struct ssow_lf_alloc_req {
+ struct mbox_msghdr hdr;
+ int __io node;
+ uint16_t __io hws;
+};
+
+struct ssow_lf_free_req {
+ struct mbox_msghdr hdr;
+ int __io node;
+ uint16_t __io hws;
+};
+
+struct sso_hw_setconfig {
+ struct mbox_msghdr hdr;
+ uint32_t __io npa_aura_id;
+ uint16_t __io npa_pf_func;
+ uint16_t __io hwgrps;
+};
+
+struct sso_hw_xaq_release {
+ struct mbox_msghdr hdr;
+ uint16_t __io hwgrps;
+};
+
+struct sso_info_req {
+ struct mbox_msghdr hdr;
+ union {
+ uint16_t __io grp;
+ uint16_t __io hws;
+ };
+};
+
+struct sso_grp_priority {
+ struct mbox_msghdr hdr;
+ uint16_t __io grp;
+ uint8_t __io priority;
+ uint8_t __io affinity;
+ uint8_t __io weight;
+};
+
+struct sso_grp_qos_cfg {
+ struct mbox_msghdr hdr;
+ uint16_t __io grp;
+ uint32_t __io xaq_limit;
+ uint16_t __io taq_thr;
+ uint16_t __io iaq_thr;
+};
+
+struct sso_grp_stats {
+ struct mbox_msghdr hdr;
+ uint16_t __io grp;
+ uint64_t __io ws_pc;
+ uint64_t __io ext_pc;
+ uint64_t __io wa_pc;
+ uint64_t __io ts_pc;
+ uint64_t __io ds_pc;
+ uint64_t __io dq_pc;
+ uint64_t __io aw_status;
+ uint64_t __io page_cnt;
+};
+
+struct sso_hws_stats {
+ struct mbox_msghdr hdr;
+ uint16_t __io hws;
+ uint64_t __io arbitration;
+};
+
+/* CPT mailbox error codes
+ * Range 901 - 1000.
+ */
+enum cpt_af_status {
+ CPT_AF_ERR_PARAM = -901,
+ CPT_AF_ERR_GRP_INVALID = -902,
+ CPT_AF_ERR_LF_INVALID = -903,
+ CPT_AF_ERR_ACCESS_DENIED = -904,
+ CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905,
+ CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906,
+ CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907,
+ CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908
+};
+
+/* CPT mbox message formats */
+
+struct cpt_rd_wr_reg_msg {
+ struct mbox_msghdr hdr;
+ uint64_t __io reg_offset;
+ uint64_t __io *ret_val;
+ uint64_t __io val;
+ uint8_t __io is_write;
+};
+
+struct cpt_set_crypto_grp_req_msg {
+ struct mbox_msghdr hdr;
+ uint8_t __io crypto_eng_grp;
+};
+
+struct cpt_lf_alloc_req_msg {
+ struct mbox_msghdr hdr;
+ uint16_t __io nix_pf_func;
+ uint16_t __io sso_pf_func;
+ uint16_t __io eng_grpmsk;
+ uint8_t __io blkaddr;
+};
+
+#define CPT_INLINE_INBOUND 0
+#define CPT_INLINE_OUTBOUND 1
+
+struct cpt_inline_ipsec_cfg_msg {
+ struct mbox_msghdr hdr;
+ uint8_t __io enable;
+ uint8_t __io slot;
+ uint8_t __io dir;
+ uint8_t __io sso_pf_func_ovrd;
+ uint16_t __io sso_pf_func; /* Inbound path SSO_PF_FUNC */
+ uint16_t __io nix_pf_func; /* Outbound path NIX_PF_FUNC */
+};
+
+struct cpt_sts_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io blkaddr;
+};
+
+struct cpt_sts_rsp {
+ struct mbox_msghdr hdr;
+ uint64_t __io inst_req_pc;
+ uint64_t __io inst_lat_pc;
+ uint64_t __io rd_req_pc;
+ uint64_t __io rd_lat_pc;
+ uint64_t __io rd_uc_pc;
+ uint64_t __io active_cycles_pc;
+ uint64_t __io ctx_mis_pc;
+ uint64_t __io ctx_hit_pc;
+ uint64_t __io ctx_aop_pc;
+ uint64_t __io ctx_aop_lat_pc;
+ uint64_t __io ctx_ifetch_pc;
+ uint64_t __io ctx_ifetch_lat_pc;
+ uint64_t __io ctx_ffetch_pc;
+ uint64_t __io ctx_ffetch_lat_pc;
+ uint64_t __io ctx_wback_pc;
+ uint64_t __io ctx_wback_lat_pc;
+ uint64_t __io ctx_psh_pc;
+ uint64_t __io ctx_psh_lat_pc;
+ uint64_t __io ctx_err;
+ uint64_t __io ctx_enc_id;
+ uint64_t __io ctx_flush_timer;
+ uint64_t __io rxc_time;
+ uint64_t __io rxc_time_cfg;
+ uint64_t __io rxc_active_sts;
+ uint64_t __io rxc_zombie_sts;
+ uint64_t __io busy_sts_ae;
+ uint64_t __io free_sts_ae;
+ uint64_t __io busy_sts_se;
+ uint64_t __io free_sts_se;
+ uint64_t __io busy_sts_ie;
+ uint64_t __io free_sts_ie;
+ uint64_t __io exe_err_info;
+ uint64_t __io cptclk_cnt;
+ uint64_t __io diag;
+};
+
+struct cpt_rxc_time_cfg_req {
+ struct mbox_msghdr hdr;
+ int __io blkaddr;
+ uint32_t __io step;
+ uint16_t __io zombie_thres;
+ uint16_t __io zombie_limit;
+ uint16_t __io active_thres;
+ uint16_t __io active_limit;
+};
+
+struct cpt_rx_inline_lf_cfg_msg {
+ struct mbox_msghdr hdr;
+ uint16_t __io sso_pf_func;
+};
+
+enum cpt_eng_type {
+ CPT_ENG_TYPE_AE = 1,
+ CPT_ENG_TYPE_SE = 2,
+ CPT_ENG_TYPE_IE = 3,
+ CPT_MAX_ENG_TYPES,
+};
+
+/* CPT HW capabilities */
+union cpt_eng_caps {
+ uint64_t __io u;
+ struct {
+ uint64_t __io reserved_0_4 : 5;
+ uint64_t __io mul : 1;
+ uint64_t __io sha1_sha2 : 1;
+ uint64_t __io chacha20 : 1;
+ uint64_t __io zuc_snow3g : 1;
+ uint64_t __io sha3 : 1;
+ uint64_t __io aes : 1;
+ uint64_t __io kasumi : 1;
+ uint64_t __io des : 1;
+ uint64_t __io crc : 1;
+ uint64_t __io reserved_14_63 : 50;
+ };
+};
+
+struct cpt_caps_rsp_msg {
+ struct mbox_msghdr hdr;
+ uint16_t __io cpt_pf_drv_version;
+ uint8_t __io cpt_revision;
+ union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES];
+};
+
+struct cpt_eng_grp_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io eng_type;
+};
+
+struct cpt_eng_grp_rsp {
+ struct mbox_msghdr hdr;
+ uint8_t __io eng_type;
+ uint8_t __io eng_grp_num;
+};
+
+/* NPC mbox message structs */
+
+#define NPC_MCAM_ENTRY_INVALID 0xFFFF
+#define NPC_MCAM_INVALID_MAP 0xFFFF
+
+/* NPC mailbox error codes
+ * Range 701 - 800.
+ */
+enum npc_af_status {
+ NPC_MCAM_INVALID_REQ = -701,
+ NPC_MCAM_ALLOC_DENIED = -702,
+ NPC_MCAM_ALLOC_FAILED = -703,
+ NPC_MCAM_PERM_DENIED = -704,
+ NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705,
+};
+
+struct npc_mcam_alloc_entry_req {
+ struct mbox_msghdr hdr;
+#define NPC_MAX_NONCONTIG_ENTRIES 256
+ uint8_t __io contig; /* Contiguous entries ? */
+#define NPC_MCAM_ANY_PRIO 0
+#define NPC_MCAM_LOWER_PRIO 1
+#define NPC_MCAM_HIGHER_PRIO 2
+ uint8_t __io priority; /* Lower or higher w.r.t ref_entry */
+ uint16_t __io ref_entry;
+ uint16_t __io count; /* Number of entries requested */
+};
+
+struct npc_mcam_alloc_entry_rsp {
+ struct mbox_msghdr hdr;
+ /* Entry alloc'ed or start index if contiguous.
+ * Invalid in case of non-contiguous.
+ */
+ uint16_t __io entry;
+ uint16_t __io count; /* Number of entries allocated */
+ uint16_t __io free_count; /* Number of entries available */
+ uint16_t __io entry_list[NPC_MAX_NONCONTIG_ENTRIES];
+};
+
+struct npc_mcam_free_entry_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io entry; /* Entry index to be freed */
+ uint8_t __io all; /* Free all entries alloc'ed to this PFVF */
+};
+
+struct mcam_entry {
+#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */
+ uint64_t __io kw[NPC_MAX_KWS_IN_KEY];
+ uint64_t __io kw_mask[NPC_MAX_KWS_IN_KEY];
+ uint64_t __io action;
+ uint64_t __io vtag_action;
+};
+
+struct npc_mcam_write_entry_req {
+ struct mbox_msghdr hdr;
+ struct mcam_entry entry_data;
+ uint16_t __io entry; /* MCAM entry to write this match key */
+ uint16_t __io cntr; /* Counter for this MCAM entry */
+ uint8_t __io intf; /* Rx or Tx interface */
+ uint8_t __io enable_entry; /* Enable this MCAM entry ? */
+ uint8_t __io set_cntr; /* Set counter for this entry ? */
+};
+
+/* Enable/Disable a given entry */
+struct npc_mcam_ena_dis_entry_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io entry;
+};
+
+struct npc_mcam_shift_entry_req {
+ struct mbox_msghdr hdr;
+#define NPC_MCAM_MAX_SHIFTS 64
+ uint16_t __io curr_entry[NPC_MCAM_MAX_SHIFTS];
+ uint16_t __io new_entry[NPC_MCAM_MAX_SHIFTS];
+ uint16_t __io shift_count; /* Number of entries to shift */
+};
+
+struct npc_mcam_shift_entry_rsp {
+ struct mbox_msghdr hdr;
+ /* Index in 'curr_entry', not entry itself */
+ uint16_t __io failed_entry_idx;
+};
+
+struct npc_mcam_alloc_counter_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io contig; /* Contiguous counters ? */
+#define NPC_MAX_NONCONTIG_COUNTERS 64
+ uint16_t __io count; /* Number of counters requested */
+};
+
+struct npc_mcam_alloc_counter_rsp {
+ struct mbox_msghdr hdr;
+ /* Counter alloc'ed or start idx if contiguous.
+ * Invalid in case of non-contiguous.
+ */
+ uint16_t __io cntr;
+ uint16_t __io count; /* Number of counters allocated */
+ uint16_t __io cntr_list[NPC_MAX_NONCONTIG_COUNTERS];
+};
+
+struct npc_mcam_oper_counter_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io cntr; /* Free a counter or clear/fetch its stats */
+};
+
+struct npc_mcam_oper_counter_rsp {
+ struct mbox_msghdr hdr;
+ /* valid only while fetching counter's stats */
+ uint64_t __io stat;
+};
+
+struct npc_mcam_unmap_counter_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io cntr;
+ uint16_t __io entry; /* Entry and counter to be unmapped */
+ uint8_t __io all; /* Unmap all entries using this counter ? */
+};
+
+struct npc_mcam_alloc_and_write_entry_req {
+ struct mbox_msghdr hdr;
+ struct mcam_entry entry_data;
+ uint16_t __io ref_entry;
+ uint8_t __io priority; /* Lower or higher w.r.t ref_entry */
+ uint8_t __io intf; /* Rx or Tx interface */
+ uint8_t __io enable_entry; /* Enable this MCAM entry ? */
+ uint8_t __io alloc_cntr; /* Allocate counter and map ? */
+};
+
+struct npc_mcam_alloc_and_write_entry_rsp {
+ struct mbox_msghdr hdr;
+ uint16_t __io entry;
+ uint16_t __io cntr;
+};
+
+struct npc_get_kex_cfg_rsp {
+ struct mbox_msghdr hdr;
+ uint64_t __io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */
+ uint64_t __io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */
+#define NPC_MAX_INTF 2
+#define NPC_MAX_LID 8
+#define NPC_MAX_LT 16
+#define NPC_MAX_LD 2
+#define NPC_MAX_LFL 16
+ /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
+ uint64_t __io kex_ld_flags[NPC_MAX_LD];
+ /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */
+ uint64_t __io intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
+ [NPC_MAX_LD];
+ /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */
+ uint64_t __io intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
+#define MKEX_NAME_LEN 128
+ uint8_t __io mkex_pfl_name[MKEX_NAME_LEN];
+};
+
+enum header_fields {
+ NPC_DMAC,
+ NPC_SMAC,
+ NPC_ETYPE,
+ NPC_OUTER_VID,
+ NPC_TOS,
+ NPC_SIP_IPV4,
+ NPC_DIP_IPV4,
+ NPC_SIP_IPV6,
+ NPC_DIP_IPV6,
+ NPC_SPORT_TCP,
+ NPC_DPORT_TCP,
+ NPC_SPORT_UDP,
+ NPC_DPORT_UDP,
+ NPC_FDSA_VAL,
+ NPC_HEADER_FIELDS_MAX,
+};
+
+struct flow_msg {
+ unsigned char __io dmac[6];
+ unsigned char __io smac[6];
+ uint16_t __io etype;
+ uint16_t __io vlan_etype;
+ uint16_t __io vlan_tci;
+ union {
+ uint32_t __io ip4src;
+ uint32_t __io ip6src[4];
+ };
+ union {
+ uint32_t __io ip4dst;
+ uint32_t __io ip6dst[4];
+ };
+ uint8_t __io tos;
+ uint8_t __io ip_ver;
+ uint8_t __io ip_proto;
+ uint8_t __io tc;
+ uint16_t __io sport;
+ uint16_t __io dport;
+};
+
+struct npc_install_flow_req {
+ struct mbox_msghdr hdr;
+ struct flow_msg packet;
+ struct flow_msg mask;
+ uint64_t __io features;
+ uint16_t __io entry;
+ uint16_t __io channel;
+ uint8_t __io intf;
+ uint8_t __io set_cntr;
+ uint8_t __io default_rule;
+ /* Overwrite(0) or append(1) flow to default rule? */
+ uint8_t __io append;
+ uint16_t __io vf;
+ /* action */
+ uint32_t __io index;
+ uint16_t __io match_id;
+ uint8_t __io flow_key_alg;
+ uint8_t __io op;
+ /* vtag action */
+ uint8_t __io vtag0_type;
+ uint8_t __io vtag0_valid;
+ uint8_t __io vtag1_type;
+ uint8_t __io vtag1_valid;
+
+ /* vtag tx action */
+ uint16_t __io vtag0_def;
+ uint8_t __io vtag0_op;
+ uint16_t __io vtag1_def;
+ uint8_t __io vtag1_op;
+};
+
+struct npc_install_flow_rsp {
+ struct mbox_msghdr hdr;
+ /* Negative if no counter else counter number */
+ int __io counter;
+};
+
+struct npc_delete_flow_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io entry;
+ uint16_t __io start; /* Disable range of entries */
+ uint16_t __io end;
+ uint8_t __io all; /* PF + VFs */
+};
+
+struct npc_mcam_read_entry_req {
+ struct mbox_msghdr hdr;
+ /* MCAM entry to read */
+ uint16_t __io entry;
+};
+
+struct npc_mcam_read_entry_rsp {
+ struct mbox_msghdr hdr;
+ struct mcam_entry entry_data;
+ uint8_t __io intf;
+ uint8_t __io enable;
+};
+
+struct npc_mcam_read_base_rule_rsp {
+ struct mbox_msghdr hdr;
+ struct mcam_entry entry_data;
+};
+
+/* TIM mailbox error codes
+ * Range 801 - 900.
+ */
+enum tim_af_status {
+ TIM_AF_NO_RINGS_LEFT = -801,
+ TIM_AF_INVALID_NPA_PF_FUNC = -802,
+ TIM_AF_INVALID_SSO_PF_FUNC = -803,
+ TIM_AF_RING_STILL_RUNNING = -804,
+ TIM_AF_LF_INVALID = -805,
+ TIM_AF_CSIZE_NOT_ALIGNED = -806,
+ TIM_AF_CSIZE_TOO_SMALL = -807,
+ TIM_AF_CSIZE_TOO_BIG = -808,
+ TIM_AF_INTERVAL_TOO_SMALL = -809,
+ TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810,
+ TIM_AF_INVALID_CLOCK_SOURCE = -811,
+ TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812,
+ TIM_AF_INVALID_BSIZE = -813,
+ TIM_AF_INVALID_ENABLE_PERIODIC = -814,
+ TIM_AF_INVALID_ENABLE_DONTFREE = -815,
+ TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
+ TIM_AF_RING_ALREADY_DISABLED = -817,
+};
+
+enum tim_clk_srcs {
+ TIM_CLK_SRCS_TENNS = 0,
+ TIM_CLK_SRCS_GPIO = 1,
+ TIM_CLK_SRCS_GTI = 2,
+ TIM_CLK_SRCS_PTP = 3,
+ TIM_CLK_SRSC_INVALID,
+};
+
+enum tim_gpio_edge {
+ TIM_GPIO_NO_EDGE = 0,
+ TIM_GPIO_LTOH_TRANS = 1,
+ TIM_GPIO_HTOL_TRANS = 2,
+ TIM_GPIO_BOTH_TRANS = 3,
+ TIM_GPIO_INVALID,
+};
+
+enum ptp_op {
+ PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */
+ PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */
+};
+
+struct ptp_req {
+ struct mbox_msghdr hdr;
+ uint8_t __io op;
+ int64_t __io scaled_ppm;
+ uint8_t __io is_pmu;
+};
+
+struct ptp_rsp {
+ struct mbox_msghdr hdr;
+ uint64_t __io clk;
+ uint64_t __io tsc;
+};
+
+struct get_hw_cap_rsp {
+ struct mbox_msghdr hdr;
+ /* Schq mapping fixed or flexible */
+ uint8_t __io nix_fixed_txschq_mapping;
+ uint8_t __io nix_shaping; /* Is shaping and coloring supported */
+};
+
+struct ndc_sync_op {
+ struct mbox_msghdr hdr;
+ uint8_t __io nix_lf_tx_sync;
+ uint8_t __io nix_lf_rx_sync;
+ uint8_t __io npa_lf_sync;
+};
+
+struct tim_lf_alloc_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io ring;
+ uint16_t __io npa_pf_func;
+ uint16_t __io sso_pf_func;
+};
+
+struct tim_ring_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io ring;
+};
+
+struct tim_config_req {
+ struct mbox_msghdr hdr;
+ uint16_t __io ring;
+ uint8_t __io bigendian;
+ uint8_t __io clocksource;
+ uint8_t __io enableperiodic;
+ uint8_t __io enabledontfreebuffer;
+ uint32_t __io bucketsize;
+ uint32_t __io chunksize;
+ uint32_t __io interval;
+ uint8_t __io gpioedge;
+};
+
+struct tim_lf_alloc_rsp {
+ struct mbox_msghdr hdr;
+ uint64_t __io tenns_clk;
+};
+
+struct tim_enable_rsp {
+ struct mbox_msghdr hdr;
+ uint64_t __io timestarted;
+ uint32_t __io currentbucket;
+};
+
+#endif /* __ROC_MBOX_H__ */
--
2.8.4
* [dpdk-dev] [PATCH 06/52] common/cnxk: add mailbox base infra
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (4 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 05/52] common/cnxk: add mbox request and response definitions Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 07/52] common/cnxk: add base device class Nithin Dabilpuram
` (49 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
This patch adds mailbox infra APIs to communicate with the kernel AF
driver. These APIs will be used by all other cnxk drivers for mbox
init/fini and send/recv functionality.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_dev_priv.h | 3 +
drivers/common/cnxk/roc_mbox.c | 483 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_mbox_priv.h | 215 ++++++++++++++++
drivers/common/cnxk/roc_platform.c | 2 +
drivers/common/cnxk/roc_platform.h | 3 +
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/roc_utils.c | 7 +
drivers/common/cnxk/roc_utils.h | 2 +
drivers/common/cnxk/version.map | 2 +
10 files changed, 721 insertions(+)
create mode 100644 drivers/common/cnxk/roc_mbox.c
create mode 100644 drivers/common/cnxk/roc_mbox_priv.h
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index e2a2da2..17d112f 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -12,6 +12,7 @@ endif
config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON'
deps = ['eal', 'pci', 'bus_pci', 'mbuf']
sources = files('roc_irq.c',
+ 'roc_mbox.c',
'roc_model.c',
'roc_platform.c',
'roc_utils.c')
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
index 836e75b..713af4f 100644
--- a/drivers/common/cnxk/roc_dev_priv.h
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -5,6 +5,9 @@
#ifndef _ROC_DEV_PRIV_H
#define _ROC_DEV_PRIV_H
+extern uint16_t dev_rclk_freq;
+extern uint16_t dev_sclk_freq;
+
int dev_irq_register(struct plt_intr_handle *intr_handle,
plt_intr_callback_fn cb, void *data, unsigned int vec);
void dev_irq_unregister(struct plt_intr_handle *intr_handle,
diff --git a/drivers/common/cnxk/roc_mbox.c b/drivers/common/cnxk/roc_mbox.c
new file mode 100644
index 0000000..31cbe5a
--- /dev/null
+++ b/drivers/common/cnxk/roc_mbox.c
@@ -0,0 +1,483 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define RVU_AF_AFPF_MBOX0 (0x02000)
+#define RVU_AF_AFPF_MBOX1 (0x02008)
+
+#define RVU_PF_PFAF_MBOX0 (0xC00)
+#define RVU_PF_PFAF_MBOX1 (0xC08)
+
+#define RVU_PF_VFX_PFVF_MBOX0 (0x0000)
+#define RVU_PF_VFX_PFVF_MBOX1 (0x0008)
+
+#define RVU_VF_VFPF_MBOX0 (0x0000)
+#define RVU_VF_VFPF_MBOX1 (0x0008)
+
+/* RCLK, SCLK in MHz */
+uint16_t dev_rclk_freq;
+uint16_t dev_sclk_freq;
+
+static inline uint16_t
+msgs_offset(void)
+{
+ return PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+}
+
+void
+mbox_fini(struct mbox *mbox)
+{
+ mbox->reg_base = 0;
+ mbox->hwbase = 0;
+ plt_free(mbox->dev);
+ mbox->dev = NULL;
+}
+
+void
+mbox_reset(struct mbox *mbox, int devid)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_hdr *tx_hdr =
+ (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
+ struct mbox_hdr *rx_hdr =
+ (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+
+ plt_spinlock_lock(&mdev->mbox_lock);
+ mdev->msg_size = 0;
+ mdev->rsp_size = 0;
+ tx_hdr->msg_size = 0;
+ tx_hdr->num_msgs = 0;
+ rx_hdr->msg_size = 0;
+ rx_hdr->num_msgs = 0;
+ plt_spinlock_unlock(&mdev->mbox_lock);
+}
+
+int
+mbox_init(struct mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
+ int direction, int ndevs, uint64_t intr_offset)
+{
+ struct mbox_dev *mdev;
+ char *var, *var_to;
+ int devid;
+
+ mbox->intr_offset = intr_offset;
+ mbox->reg_base = reg_base;
+ mbox->hwbase = hwbase;
+
+ switch (direction) {
+ case MBOX_DIR_AFPF:
+ case MBOX_DIR_PFVF:
+ mbox->tx_start = MBOX_DOWN_TX_START;
+ mbox->rx_start = MBOX_DOWN_RX_START;
+ mbox->tx_size = MBOX_DOWN_TX_SIZE;
+ mbox->rx_size = MBOX_DOWN_RX_SIZE;
+ break;
+ case MBOX_DIR_PFAF:
+ case MBOX_DIR_VFPF:
+ mbox->tx_start = MBOX_DOWN_RX_START;
+ mbox->rx_start = MBOX_DOWN_TX_START;
+ mbox->tx_size = MBOX_DOWN_RX_SIZE;
+ mbox->rx_size = MBOX_DOWN_TX_SIZE;
+ break;
+ case MBOX_DIR_AFPF_UP:
+ case MBOX_DIR_PFVF_UP:
+ mbox->tx_start = MBOX_UP_TX_START;
+ mbox->rx_start = MBOX_UP_RX_START;
+ mbox->tx_size = MBOX_UP_TX_SIZE;
+ mbox->rx_size = MBOX_UP_RX_SIZE;
+ break;
+ case MBOX_DIR_PFAF_UP:
+ case MBOX_DIR_VFPF_UP:
+ mbox->tx_start = MBOX_UP_RX_START;
+ mbox->rx_start = MBOX_UP_TX_START;
+ mbox->tx_size = MBOX_UP_RX_SIZE;
+ mbox->rx_size = MBOX_UP_TX_SIZE;
+ break;
+ default:
+ return -ENODEV;
+ }
+
+ switch (direction) {
+ case MBOX_DIR_AFPF:
+ case MBOX_DIR_AFPF_UP:
+ mbox->trigger = RVU_AF_AFPF_MBOX0;
+ mbox->tr_shift = 4;
+ break;
+ case MBOX_DIR_PFAF:
+ case MBOX_DIR_PFAF_UP:
+ mbox->trigger = RVU_PF_PFAF_MBOX1;
+ mbox->tr_shift = 0;
+ break;
+ case MBOX_DIR_PFVF:
+ case MBOX_DIR_PFVF_UP:
+ mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
+ mbox->tr_shift = 12;
+ break;
+ case MBOX_DIR_VFPF:
+ case MBOX_DIR_VFPF_UP:
+ mbox->trigger = RVU_VF_VFPF_MBOX1;
+ mbox->tr_shift = 0;
+ break;
+ default:
+ return -ENODEV;
+ }
+
+ mbox->dev = plt_zmalloc(ndevs * sizeof(struct mbox_dev), ROC_ALIGN);
+ if (!mbox->dev) {
+ mbox_fini(mbox);
+ return -ENOMEM;
+ }
+ mbox->ndevs = ndevs;
+ for (devid = 0; devid < ndevs; devid++) {
+ mdev = &mbox->dev[devid];
+ mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE));
+ plt_spinlock_init(&mdev->mbox_lock);
+ /* Init header to reset value */
+ mbox_reset(mbox, devid);
+ }
+
+ var = getenv("ROC_CN10K_MBOX_TIMEOUT");
+ var_to = getenv("ROC_MBOX_TIMEOUT");
+
+ if (var)
+ mbox->rsp_tmo = atoi(var);
+ else if (var_to)
+ mbox->rsp_tmo = atoi(var_to);
+ else
+ mbox->rsp_tmo = MBOX_RSP_TIMEOUT;
+
+ return 0;
+}
+
+/**
+ * @internal
+ * Allocate a message response
+ */
+struct mbox_msghdr *
+mbox_alloc_msg_rsp(struct mbox *mbox, int devid, int size, int size_rsp)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_msghdr *msghdr = NULL;
+
+ plt_spinlock_lock(&mdev->mbox_lock);
+ size = PLT_ALIGN(size, MBOX_MSG_ALIGN);
+ size_rsp = PLT_ALIGN(size_rsp, MBOX_MSG_ALIGN);
+ /* Check if there is space in mailbox */
+ if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset())
+ goto exit;
+ if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset())
+ goto exit;
+ if (mdev->msg_size == 0)
+ mdev->num_msgs = 0;
+ mdev->num_msgs++;
+
+ msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase +
+ mbox->tx_start + msgs_offset() +
+ mdev->msg_size));
+
+ /* Clear the whole msg region */
+ mbox_memset(msghdr, 0, sizeof(*msghdr) + size);
+ /* Init message header with reset values */
+ msghdr->ver = MBOX_VERSION;
+ mdev->msg_size += size;
+ mdev->rsp_size += size_rsp;
+ msghdr->next_msgoff = mdev->msg_size + msgs_offset();
+exit:
+ plt_spinlock_unlock(&mdev->mbox_lock);
+
+ return msghdr;
+}
+
+/**
+ * @internal
+ * Send a mailbox message
+ */
+void
+mbox_msg_send(struct mbox *mbox, int devid)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_hdr *tx_hdr =
+ (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
+ struct mbox_hdr *rx_hdr =
+ (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+
+ /* Reset header for next messages */
+ tx_hdr->msg_size = mdev->msg_size;
+ mdev->msg_size = 0;
+ mdev->rsp_size = 0;
+ mdev->msgs_acked = 0;
+
+ /* A non-zero num_msgs tells the peer how many messages the buffer
+ * holds, so it must be written only after the message data is copied
+ */
+ tx_hdr->num_msgs = mdev->num_msgs;
+ rx_hdr->num_msgs = 0;
+
+ /* Sync mbox data into memory */
+ plt_wmb();
+
+ /* The interrupt should be fired after num_msgs is written
+ * to the shared memory
+ */
+ plt_write64(1, (volatile void *)(mbox->reg_base +
+ (mbox->trigger |
+ (devid << mbox->tr_shift))));
+}
+
+/**
+ * @internal
+ * Wait and get mailbox response
+ */
+int
+mbox_get_rsp(struct mbox *mbox, int devid, void **msg)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_msghdr *msghdr;
+ uint64_t offset;
+ int rc;
+
+ rc = mbox_wait_for_rsp(mbox, devid);
+ if (rc < 0)
+ return -EIO;
+
+ plt_rmb();
+
+ offset = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+ if (msg != NULL)
+ *msg = msghdr;
+
+ return msghdr->rc;
+}
+
+/**
+ * Poll for the given wait time for a mailbox response
+ */
+static int
+mbox_poll(struct mbox *mbox, uint32_t wait)
+{
+ uint32_t timeout = 0, sleep = 1;
+ uint32_t wait_us = wait * 1000;
+ uint64_t rsp_reg = 0;
+ uintptr_t reg_addr;
+
+ reg_addr = mbox->reg_base + mbox->intr_offset;
+ do {
+ rsp_reg = plt_read64(reg_addr);
+
+ if (timeout >= wait_us)
+ return -ETIMEDOUT;
+
+ plt_delay_us(sleep);
+ timeout += sleep;
+ } while (!rsp_reg);
+
+ plt_rmb();
+
+ /* Clear interrupt */
+ plt_write64(rsp_reg, reg_addr);
+
+ /* Reset mbox */
+ mbox_reset(mbox, 0);
+
+ return 0;
+}
+
+/**
+ * @internal
+ * Wait and get mailbox response with timeout
+ */
+int
+mbox_get_rsp_tmo(struct mbox *mbox, int devid, void **msg, uint32_t tmo)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_msghdr *msghdr;
+ uint64_t offset;
+ int rc;
+
+ rc = mbox_wait_for_rsp_tmo(mbox, devid, tmo);
+ if (rc != 1)
+ return -EIO;
+
+ plt_rmb();
+
+ offset = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+ if (msg != NULL)
+ *msg = msghdr;
+
+ return msghdr->rc;
+}
+
+static int
+mbox_wait(struct mbox *mbox, int devid, uint32_t rst_timo)
+{
+ volatile struct mbox_dev *mdev = &mbox->dev[devid];
+ uint32_t timeout = 0, sleep = 1;
+
+ rst_timo = rst_timo * 1000; /* Milliseconds to microseconds */
+ while (mdev->num_msgs > mdev->msgs_acked) {
+ plt_delay_us(sleep);
+ timeout += sleep;
+ if (timeout >= rst_timo) {
+ struct mbox_hdr *tx_hdr =
+ (struct mbox_hdr *)((uintptr_t)mdev->mbase +
+ mbox->tx_start);
+ struct mbox_hdr *rx_hdr =
+ (struct mbox_hdr *)((uintptr_t)mdev->mbase +
+ mbox->rx_start);
+
+ plt_err("MBOX[devid: %d] message wait timeout %d, "
+ "num_msgs: %d, msgs_acked: %d "
+ "(tx/rx num_msgs: %d/%d), msg_size: %d, "
+ "rsp_size: %d",
+ devid, timeout, mdev->num_msgs,
+ mdev->msgs_acked, tx_hdr->num_msgs,
+ rx_hdr->num_msgs, mdev->msg_size,
+ mdev->rsp_size);
+
+ return -EIO;
+ }
+ plt_rmb();
+ }
+ return 0;
+}
+
+int
+mbox_wait_for_rsp_tmo(struct mbox *mbox, int devid, uint32_t tmo)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ int rc = 0;
+
+ /* Sync with mbox region */
+ plt_rmb();
+
+ if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 ||
+ mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) {
+ /* In case of VF, wait a bit more to account for round-trip delay */
+ tmo = tmo * 2;
+ }
+
+ /* Wait message */
+ if (plt_thread_is_intr())
+ rc = mbox_poll(mbox, tmo);
+ else
+ rc = mbox_wait(mbox, devid, tmo);
+
+ if (!rc)
+ rc = mdev->num_msgs;
+
+ return rc;
+}
+
+/**
+ * @internal
+ * Wait for the mailbox response
+ */
+int
+mbox_wait_for_rsp(struct mbox *mbox, int devid)
+{
+ return mbox_wait_for_rsp_tmo(mbox, devid, mbox->rsp_tmo);
+}
+
+int
+mbox_get_availmem(struct mbox *mbox, int devid)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ int avail;
+
+ plt_spinlock_lock(&mdev->mbox_lock);
+ avail = mbox->tx_size - mdev->msg_size - msgs_offset();
+ plt_spinlock_unlock(&mdev->mbox_lock);
+
+ return avail;
+}
+
+int
+send_ready_msg(struct mbox *mbox, uint16_t *pcifunc)
+{
+ struct ready_msg_rsp *rsp;
+ int rc;
+
+ mbox_alloc_msg_ready(mbox);
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->hdr.ver != MBOX_VERSION) {
+ plt_err("Incompatible MBox versions(AF: 0x%04x Client: 0x%04x)",
+ rsp->hdr.ver, MBOX_VERSION);
+ return -EPIPE;
+ }
+
+ if (pcifunc)
+ *pcifunc = rsp->hdr.pcifunc;
+
+ /* Save rclk & sclk freq */
+ if (!dev_rclk_freq || !dev_sclk_freq) {
+ dev_rclk_freq = rsp->rclk_freq;
+ dev_sclk_freq = rsp->sclk_freq;
+ }
+ return 0;
+}
+
+int
+reply_invalid_msg(struct mbox *mbox, int devid, uint16_t pcifunc, uint16_t id)
+{
+ struct msg_rsp *rsp;
+
+ rsp = (struct msg_rsp *)mbox_alloc_msg(mbox, devid, sizeof(*rsp));
+ if (!rsp)
+ return -ENOMEM;
+ rsp->hdr.id = id;
+ rsp->hdr.sig = MBOX_RSP_SIG;
+ rsp->hdr.rc = MBOX_MSG_INVALID;
+ rsp->hdr.pcifunc = pcifunc;
+
+ return 0;
+}
+
+/**
+ * @internal
+ * Convert mail box ID to name
+ */
+const char *
+mbox_id2name(uint16_t id)
+{
+ switch (id) {
+ default:
+ return "INVALID ID";
+#define M(_name, _id, _1, _2, _3) \
+ case _id: \
+ return #_name;
+ MBOX_MESSAGES
+ MBOX_UP_CGX_MESSAGES
+#undef M
+ }
+}
+
+int
+mbox_id2size(uint16_t id)
+{
+ switch (id) {
+ default:
+ return 0;
+#define M(_1, _id, _2, _req_type, _3) \
+ case _id: \
+ return sizeof(struct _req_type);
+ MBOX_MESSAGES
+ MBOX_UP_CGX_MESSAGES
+#undef M
+ }
+}
diff --git a/drivers/common/cnxk/roc_mbox_priv.h b/drivers/common/cnxk/roc_mbox_priv.h
new file mode 100644
index 0000000..fdb7c82
--- /dev/null
+++ b/drivers/common/cnxk/roc_mbox_priv.h
@@ -0,0 +1,215 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef __ROC_MBOX_PRIV_H__
+#define __ROC_MBOX_PRIV_H__
+
+#include <errno.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+#define SZ_64K (64ULL * 1024ULL)
+#define SZ_1K (1ULL * 1024ULL)
+#define MBOX_SIZE SZ_64K
+
+/* AF/PF: PF initiated; PF/VF: VF initiated */
+#define MBOX_DOWN_RX_START 0
+#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
+#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
+#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
+/* AF/PF: AF initiated, PF/VF PF initiated */
+#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
+#define MBOX_UP_RX_SIZE SZ_1K
+#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
+#define MBOX_UP_TX_SIZE SZ_1K
+
+#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
+#error "Incorrect mailbox area sizes"
+#endif
+
+#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
+
+#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */
+
+#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16bytes */
+
+/* Mailbox directions */
+#define MBOX_DIR_AFPF 0 /* AF replies to PF */
+#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
+#define MBOX_DIR_PFVF 2 /* PF replies to VF */
+#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
+#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
+#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
+#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
+#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
+
+struct mbox_dev {
+ void *mbase; /* This dev's mbox region */
+ plt_spinlock_t mbox_lock;
+ uint16_t msg_size; /* Total msg size to be sent */
+ uint16_t rsp_size; /* Total rsp size needed to hold the expected replies */
+ uint16_t num_msgs; /* No of msgs sent or waiting for response */
+ uint16_t msgs_acked; /* No of msgs for which response is received */
+};
+
+struct mbox {
+ uintptr_t hwbase; /* Mbox region advertised by HW */
+ uintptr_t reg_base; /* CSR base for this dev */
+ uint64_t trigger; /* Trigger mbox notification */
+ uint16_t tr_shift; /* Mbox trigger shift */
+ uint64_t rx_start; /* Offset of Rx region in mbox memory */
+ uint64_t tx_start; /* Offset of Tx region in mbox memory */
+ uint16_t rx_size; /* Size of Rx region */
+ uint16_t tx_size; /* Size of Tx region */
+ uint16_t ndevs; /* The number of peers */
+ struct mbox_dev *dev;
+ uint64_t intr_offset; /* Offset to interrupt register */
+ uint32_t rsp_tmo;
+};
+
+const char *mbox_id2name(uint16_t id);
+int mbox_id2size(uint16_t id);
+void mbox_reset(struct mbox *mbox, int devid);
+int mbox_init(struct mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
+ int direction, int ndevs, uint64_t intr_offset);
+void mbox_fini(struct mbox *mbox);
+void mbox_msg_send(struct mbox *mbox, int devid);
+int mbox_wait_for_rsp(struct mbox *mbox, int devid);
+int mbox_wait_for_rsp_tmo(struct mbox *mbox, int devid, uint32_t tmo);
+int mbox_get_rsp(struct mbox *mbox, int devid, void **msg);
+int mbox_get_rsp_tmo(struct mbox *mbox, int devid, void **msg, uint32_t tmo);
+int mbox_get_availmem(struct mbox *mbox, int devid);
+struct mbox_msghdr *mbox_alloc_msg_rsp(struct mbox *mbox, int devid, int size,
+ int size_rsp);
+
+static inline struct mbox_msghdr *
+mbox_alloc_msg(struct mbox *mbox, int devid, int size)
+{
+ return mbox_alloc_msg_rsp(mbox, devid, size, 0);
+}
+
+static inline void
+mbox_req_init(uint16_t mbox_id, void *msghdr)
+{
+ struct mbox_msghdr *hdr = msghdr;
+
+ hdr->sig = MBOX_REQ_SIG;
+ hdr->ver = MBOX_VERSION;
+ hdr->id = mbox_id;
+ hdr->pcifunc = 0;
+}
+
+static inline void
+mbox_rsp_init(uint16_t mbox_id, void *msghdr)
+{
+ struct mbox_msghdr *hdr = msghdr;
+
+ hdr->sig = MBOX_RSP_SIG;
+ hdr->rc = -ETIMEDOUT;
+ hdr->id = mbox_id;
+}
+
+static inline bool
+mbox_nonempty(struct mbox *mbox, int devid)
+{
+ struct mbox_dev *mdev = &mbox->dev[devid];
+ bool ret;
+
+ plt_spinlock_lock(&mdev->mbox_lock);
+ ret = mdev->num_msgs != 0;
+ plt_spinlock_unlock(&mdev->mbox_lock);
+
+ return ret;
+}
+
+static inline int
+mbox_process(struct mbox *mbox)
+{
+ mbox_msg_send(mbox, 0);
+ return mbox_get_rsp(mbox, 0, NULL);
+}
+
+static inline int
+mbox_process_msg(struct mbox *mbox, void **msg)
+{
+ mbox_msg_send(mbox, 0);
+ return mbox_get_rsp(mbox, 0, msg);
+}
+
+static inline int
+mbox_process_tmo(struct mbox *mbox, uint32_t tmo)
+{
+ mbox_msg_send(mbox, 0);
+ return mbox_get_rsp_tmo(mbox, 0, NULL, tmo);
+}
+
+static inline int
+mbox_process_msg_tmo(struct mbox *mbox, void **msg, uint32_t tmo)
+{
+ mbox_msg_send(mbox, 0);
+ return mbox_get_rsp_tmo(mbox, 0, msg, tmo);
+}
+
+int send_ready_msg(struct mbox *mbox, uint16_t *pf_func /* out */);
+int reply_invalid_msg(struct mbox *mbox, int devid, uint16_t pf_func,
+ uint16_t id);
+
+#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
+ static inline struct _req_type *mbox_alloc_msg_##_fn_name( \
+ struct mbox *mbox) \
+ { \
+ struct _req_type *req; \
+ req = (struct _req_type *)mbox_alloc_msg_rsp( \
+ mbox, 0, sizeof(struct _req_type), \
+ sizeof(struct _rsp_type)); \
+ if (!req) \
+ return NULL; \
+ req->hdr.sig = MBOX_REQ_SIG; \
+ req->hdr.id = _id; \
+ plt_mbox_dbg("id=0x%x (%s)", req->hdr.id, \
+ mbox_id2name(req->hdr.id)); \
+ return req; \
+ }
+
+MBOX_MESSAGES
+#undef M
+
+/* This is required for copies from device memory, which fail for
+ * addresses that are not 16B aligned. This is due to specific
+ * optimizations in libc memcpy.
+ */
+static inline volatile void *
+mbox_memcpy(volatile void *d, const volatile void *s, size_t l)
+{
+ const volatile uint8_t *sb;
+ volatile uint8_t *db;
+ size_t i;
+
+ if (!d || !s)
+ return NULL;
+ db = (volatile uint8_t *)d;
+ sb = (const volatile uint8_t *)s;
+ for (i = 0; i < l; i++)
+ db[i] = sb[i];
+ return d;
+}
+
+/* This is required for memory operations on device memory, which fail
+ * for addresses that are not 16B aligned. This is due to specific
+ * optimizations in libc memset.
+ */
+static inline void
+mbox_memset(volatile void *d, uint8_t val, size_t l)
+{
+ volatile uint8_t *db;
+ size_t i = 0;
+
+ if (!d || !l)
+ return;
+ db = (volatile uint8_t *)d;
+ for (i = 0; i < l; i++)
+ db[i] = val;
+}
+
+#endif /* __ROC_MBOX_PRIV_H__ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 79e9171..b20ae69 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -24,7 +24,9 @@ plt_init(void)
return -ENOMEM;
}
roc_model_init(mz->addr);
+
return 0;
}
RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE);
+RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index ea38216..76b931b 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -125,6 +125,8 @@
/* Log */
extern int cnxk_logtype_base;
+extern int cnxk_logtype_mbox;
+
#define plt_err(fmt, args...) \
RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
#define plt_info(fmt, args...) RTE_LOG(INFO, PMD, fmt "\n", ##args)
@@ -140,6 +142,7 @@ extern int cnxk_logtype_base;
##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
+#define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index 8c250f4..ce5dcf3 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -8,6 +8,9 @@
/* Utils */
#include "roc_util_priv.h"
+/* Mbox */
+#include "roc_mbox_priv.h"
+
/* Dev */
#include "roc_dev_priv.h"
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
index bcba7b2..81d511e 100644
--- a/drivers/common/cnxk/roc_utils.c
+++ b/drivers/common/cnxk/roc_utils.c
@@ -33,3 +33,10 @@ roc_error_msg_get(int errorcode)
return err_msg;
}
+
+void
+roc_clk_freq_get(uint16_t *rclk_freq, uint16_t *sclk_freq)
+{
+ *rclk_freq = dev_rclk_freq;
+ *sclk_freq = dev_sclk_freq;
+}
diff --git a/drivers/common/cnxk/roc_utils.h b/drivers/common/cnxk/roc_utils.h
index 7b607a5..fffd43f 100644
--- a/drivers/common/cnxk/roc_utils.h
+++ b/drivers/common/cnxk/roc_utils.h
@@ -10,4 +10,6 @@
/* Utils */
const char *__roc_api roc_error_msg_get(int errorcode);
+void __roc_api roc_clk_freq_get(uint16_t *rclk_freq, uint16_t *sclk_freq);
+
#endif /* _ROC_UTILS_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index e59bde9..8293f2a 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -2,7 +2,9 @@ INTERNAL {
global:
cnxk_logtype_base;
+ cnxk_logtype_mbox;
plt_init;
+ roc_clk_freq_get;
roc_error_msg_get;
roc_model;
--
2.8.4
* [dpdk-dev] [PATCH 07/52] common/cnxk: add base device class
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (5 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 06/52] common/cnxk: add mailbox base infra Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 08/52] common/cnxk: add VF support to " Nithin Dabilpuram
` (48 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Introduce a 'dev' class to hold cnxk PCIe device specific
information and operations.
All cnxk PCIe drivers (ethdev, mempool, cryptodev and eventdev)
inherit this base object to access common functionality such as
mailbox creation, interrupt registration, LMT setup, VF mbox
message forwarding, etc.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 4 +-
drivers/common/cnxk/roc_api.h | 3 +
drivers/common/cnxk/roc_dev.c | 362 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_dev_priv.h | 42 +++++
drivers/common/cnxk/roc_idev.c | 77 ++++++++
drivers/common/cnxk/roc_idev.h | 12 ++
drivers/common/cnxk/roc_idev_priv.h | 22 +++
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/version.map | 2 +
9 files changed, 526 insertions(+), 1 deletion(-)
create mode 100644 drivers/common/cnxk/roc_dev.c
create mode 100644 drivers/common/cnxk/roc_idev.c
create mode 100644 drivers/common/cnxk/roc_idev.h
create mode 100644 drivers/common/cnxk/roc_idev_priv.h
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 17d112f..735d3f8 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -11,7 +11,9 @@ endif
config_flag_fmt = 'RTE_LIBRTE_@0@_COMMON'
deps = ['eal', 'pci', 'bus_pci', 'mbuf']
-sources = files('roc_irq.c',
+sources = files('roc_dev.c',
+ 'roc_idev.c',
+ 'roc_irq.c',
'roc_mbox.c',
'roc_model.c',
'roc_platform.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 25c50d8..83aa4f6 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -82,4 +82,7 @@
/* Utils */
#include "roc_utils.h"
+/* Idev */
+#include "roc_idev.h"
+
#endif /* _ROC_API_H_ */
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
new file mode 100644
index 0000000..4ef15df
--- /dev/null
+++ b/drivers/common/cnxk/roc_dev.c
@@ -0,0 +1,362 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include <fcntl.h>
+#include <inttypes.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+/* PCI Extended capability ID */
+#define ROC_PCI_EXT_CAP_ID_SRIOV 0x10 /* SRIOV cap */
+
+/* Single Root I/O Virtualization */
+#define ROC_PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */
+
+static void
+process_msgs(struct dev *dev, struct mbox *mbox)
+{
+ struct mbox_dev *mdev = &mbox->dev[0];
+ struct mbox_hdr *req_hdr;
+ struct mbox_msghdr *msg;
+ int msgs_acked = 0;
+ int offset;
+ uint16_t i;
+
+ req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+ if (req_hdr->num_msgs == 0)
+ return;
+
+ offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+ for (i = 0; i < req_hdr->num_msgs; i++) {
+ msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+
+ msgs_acked++;
+ plt_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", msg->id,
+ mbox_id2name(msg->id), dev_get_pf(msg->pcifunc),
+ dev_get_vf(msg->pcifunc));
+
+ switch (msg->id) {
+ /* Add message id's that are handled here */
+ case MBOX_MSG_READY:
+ /* Get our identity */
+ dev->pf_func = msg->pcifunc;
+ break;
+
+ default:
+ if (msg->rc)
+ plt_err("Message (%s) response has err=%d",
+ mbox_id2name(msg->id), msg->rc);
+ break;
+ }
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+
+ mbox_reset(mbox, 0);
+ /* Update acked if someone is waiting for a message */
+ mdev->msgs_acked = msgs_acked;
+ plt_wmb();
+}
+
+static int
+mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req)
+{
+ /* Check if valid, if not reply with an invalid msg */
+ if (req->sig != MBOX_REQ_SIG)
+ return -EIO;
+
+ switch (req->id) {
+ default:
+ reply_invalid_msg(&dev->mbox_up, 0, 0, req->id);
+ break;
+ }
+
+ return -ENODEV;
+}
+
+static void
+process_msgs_up(struct dev *dev, struct mbox *mbox)
+{
+ struct mbox_dev *mdev = &mbox->dev[0];
+ struct mbox_hdr *req_hdr;
+ struct mbox_msghdr *msg;
+ int i, err, offset;
+
+ req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+ if (req_hdr->num_msgs == 0)
+ return;
+
+ offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+ for (i = 0; i < req_hdr->num_msgs; i++) {
+ msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+
+ plt_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", msg->id,
+ mbox_id2name(msg->id), dev_get_pf(msg->pcifunc),
+ dev_get_vf(msg->pcifunc));
+ err = mbox_process_msgs_up(dev, msg);
+ if (err)
+ plt_err("Error %d handling 0x%x (%s)", err, msg->id,
+ mbox_id2name(msg->id));
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+ /* Send mbox responses */
+ if (mdev->num_msgs) {
+ plt_base_dbg("Reply num_msgs:%d", mdev->num_msgs);
+ mbox_msg_send(mbox, 0);
+ }
+}
+
+static void
+roc_af_pf_mbox_irq(void *param)
+{
+ struct dev *dev = param;
+ uint64_t intr;
+
+ intr = plt_read64(dev->bar2 + RVU_PF_INT);
+ if (intr == 0)
+ plt_base_dbg("Proceeding to check mbox UP messages if any");
+
+ plt_write64(intr, dev->bar2 + RVU_PF_INT);
+ plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d)", intr, dev->pf);
+
+ /* First process all configuration messages */
+ process_msgs(dev, dev->mbox);
+
+ /* Process Uplink messages */
+ process_msgs_up(dev, &dev->mbox_up);
+}
+
+static int
+mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ int rc;
+
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+
+ /* MBOX interrupt AF <-> PF */
+ rc = dev_irq_register(intr_handle, roc_af_pf_mbox_irq, dev,
+ RVU_PF_INT_VEC_AFPF_MBOX);
+ if (rc) {
+ plt_err("Fail to register AF<->PF mbox irq");
+ return rc;
+ }
+
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT);
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
+
+ return rc;
+}
+
+static int
+mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ return mbox_register_pf_irq(pci_dev, dev);
+}
+
+static void
+mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+
+ /* MBOX interrupt AF <-> PF */
+ dev_irq_unregister(intr_handle, roc_af_pf_mbox_irq, dev,
+ RVU_PF_INT_VEC_AFPF_MBOX);
+}
+
+static void
+mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ mbox_unregister_pf_irq(pci_dev, dev);
+}
+
+static uint16_t
+dev_pf_total_vfs(struct plt_pci_device *pci_dev)
+{
+ uint16_t total_vfs = 0;
+ int sriov_pos, rc;
+
+ sriov_pos =
+ plt_pci_find_ext_capability(pci_dev, ROC_PCI_EXT_CAP_ID_SRIOV);
+ if (sriov_pos <= 0) {
+ plt_warn("Unable to find SRIOV cap, rc=%d", sriov_pos);
+ return 0;
+ }
+
+ rc = plt_pci_read_config(pci_dev, &total_vfs, 2,
+ sriov_pos + ROC_PCI_SRIOV_TOTAL_VF);
+ if (rc < 0) {
+ plt_warn("Unable to read SRIOV cap, rc=%d", rc);
+ return 0;
+ }
+
+ return total_vfs;
+}
+
+static int
+dev_setup_shared_lmt_region(struct mbox *mbox)
+{
+ struct lmtst_tbl_setup_req *req;
+
+ req = mbox_alloc_msg_lmtst_tbl_setup(mbox);
+ req->pcifunc = idev_lmt_pffunc_get();
+
+ return mbox_process(mbox);
+}
+
+static int
+dev_lmt_setup(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ uint64_t bar4_mbox_sz = MBOX_SIZE;
+ struct idev_cfg *idev;
+ int rc;
+
+ if (roc_model_is_cn9k()) {
+ dev->lmt_base = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
+ return 0;
+ }
+
+ /* [CN10K, .) */
+
+ /* Set common lmt region from second pf_func onwards. */
+ if (!dev->disable_shared_lmt && idev_lmt_pffunc_get() &&
+ dev->pf_func != idev_lmt_pffunc_get()) {
+ rc = dev_setup_shared_lmt_region(dev->mbox);
+ if (!rc) {
+ dev->lmt_base = roc_idev_lmt_base_addr_get();
+ return rc;
+ }
+ plt_err("Failed to setup shared lmt region, pf_func %d err %d "
+ "Using respective LMT region per pf func",
+ dev->pf_func, rc);
+ }
+
+ /* PF BAR4 should always be sufficient to
+ * hold PF-AF MBOX + PF-VF MBOX + LMT lines.
+ */
+ if (pci_dev->mem_resource[4].len <
+ (bar4_mbox_sz + (RVU_LMT_LINE_MAX * RVU_LMT_SZ))) {
+ plt_err("Not enough bar4 space for lmt lines and mbox");
+ return -EFAULT;
+ }
+
+ /* LMT base is just after total VF MBOX area */
+ bar4_mbox_sz += (MBOX_SIZE * dev_pf_total_vfs(pci_dev));
+ dev->lmt_base = dev->bar4 + bar4_mbox_sz;
+
+ /* Base LMT address should be chosen from only those pci funcs which
+ * participate in LMT shared mode.
+ */
+ if (!dev->disable_shared_lmt) {
+ idev = idev_get_cfg();
+ if (!__atomic_load_n(&idev->lmt_pf_func, __ATOMIC_ACQUIRE)) {
+ idev->lmt_base_addr = dev->lmt_base;
+ idev->lmt_pf_func = dev->pf_func;
+ idev->num_lmtlines = RVU_LMT_LINE_MAX;
+ }
+ }
+
+ return 0;
+}
+
+int
+dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
+{
+ int direction, up_direction, rc;
+ uintptr_t bar2, bar4, mbox;
+ uint64_t intr_offset;
+
+ bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
+ bar4 = (uintptr_t)pci_dev->mem_resource[4].addr;
+ if (bar2 == 0 || bar4 == 0) {
+ plt_err("Failed to get PCI bars");
+ rc = -ENODEV;
+ goto error;
+ }
+
+ /* Trigger fault on bar2 and bar4 regions
+ * to avoid BUG_ON in remap_pfn_range()
+ * in latest kernel.
+ */
+ *(volatile uint64_t *)bar2;
+ *(volatile uint64_t *)bar4;
+
+ /* Check ROC model supported */
+ if (roc_model->flag == 0) {
+ rc = UTIL_ERR_INVALID_MODEL;
+ goto error;
+ }
+
+ dev->bar2 = bar2;
+ dev->bar4 = bar4;
+
+ mbox = bar4;
+ direction = MBOX_DIR_PFAF;
+ up_direction = MBOX_DIR_PFAF_UP;
+ intr_offset = RVU_PF_INT;
+
+ /* Initialize the local mbox */
+ rc = mbox_init(&dev->mbox_local, mbox, bar2, direction, 1, intr_offset);
+ if (rc)
+ goto error;
+ dev->mbox = &dev->mbox_local;
+
+ rc = mbox_init(&dev->mbox_up, mbox, bar2, up_direction, 1, intr_offset);
+ if (rc)
+ goto mbox_fini;
+
+ /* Register mbox interrupts */
+ rc = mbox_register_irq(pci_dev, dev);
+ if (rc)
+ goto mbox_fini;
+
+ /* Check the readiness of PF/VF */
+ rc = send_ready_msg(dev->mbox, &dev->pf_func);
+ if (rc)
+ goto mbox_unregister;
+
+ dev->pf = dev_get_pf(dev->pf_func);
+
+ dev->mbox_active = 1;
+
+ /* Setup LMT line base */
+ rc = dev_lmt_setup(pci_dev, dev);
+ if (rc)
+ goto iounmap;
+
+ return rc;
+iounmap:
+mbox_unregister:
+ mbox_unregister_irq(pci_dev, dev);
+mbox_fini:
+ mbox_fini(dev->mbox);
+ mbox_fini(&dev->mbox_up);
+error:
+ return rc;
+}
+
+int
+dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
+{
+ struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct mbox *mbox;
+
+ mbox_unregister_irq(pci_dev, dev);
+
+ /* Release PF - AF */
+ mbox = dev->mbox;
+ mbox_fini(mbox);
+ mbox = &dev->mbox_up;
+ mbox_fini(mbox);
+ dev->mbox_active = 0;
+
+ /* Disable MSIX vectors */
+ dev_irqs_disable(intr_handle);
+ return 0;
+}
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
index 713af4f..655c58b 100644
--- a/drivers/common/cnxk/roc_dev_priv.h
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -5,9 +5,51 @@
#ifndef _ROC_DEV_PRIV_H
#define _ROC_DEV_PRIV_H
+#define RVU_PFVF_PF_SHIFT 10
+#define RVU_PFVF_PF_MASK 0x3F
+#define RVU_PFVF_FUNC_SHIFT 0
+#define RVU_PFVF_FUNC_MASK 0x3FF
+#define RVU_MAX_INT_RETRY 3
+
+static inline int
+dev_get_vf(uint16_t pf_func)
+{
+ return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1);
+}
+
+static inline int
+dev_get_pf(uint16_t pf_func)
+{
+ return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
+}
+
+static inline int
+dev_pf_func(int pf, int vf)
+{
+ return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
+}
+
+struct dev {
+ uint16_t pf;
+ uint16_t pf_func;
+ uint8_t mbox_active;
+ bool drv_inited;
+ uintptr_t bar2;
+ uintptr_t bar4;
+ uintptr_t lmt_base;
+ struct mbox mbox_local;
+ struct mbox mbox_up;
+ uint64_t hwcap;
+ struct mbox *mbox;
+ bool disable_shared_lmt; /* false(default): shared lmt mode enabled */
+} __plt_cache_aligned;
+
extern uint16_t dev_rclk_freq;
extern uint16_t dev_sclk_freq;
+int dev_init(struct dev *dev, struct plt_pci_device *pci_dev);
+int dev_fini(struct dev *dev, struct plt_pci_device *pci_dev);
+
int dev_irq_register(struct plt_intr_handle *intr_handle,
plt_intr_callback_fn cb, void *data, unsigned int vec);
void dev_irq_unregister(struct plt_intr_handle *intr_handle,
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
new file mode 100644
index 0000000..be762c5
--- /dev/null
+++ b/drivers/common/cnxk/roc_idev.c
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+struct idev_cfg *
+idev_get_cfg(void)
+{
+ static const char name[] = "roc_cn10k_intra_device_conf";
+ const struct plt_memzone *mz;
+ struct idev_cfg *idev;
+
+ mz = plt_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ /* Request for the first time */
+ mz = plt_memzone_reserve_cache_align(name, sizeof(struct idev_cfg));
+ if (mz != NULL) {
+ idev = mz->addr;
+ idev_set_defaults(idev);
+ return idev;
+ }
+ return NULL;
+}
+
+void
+idev_set_defaults(struct idev_cfg *idev)
+{
+ idev->lmt_pf_func = 0;
+ idev->lmt_base_addr = 0;
+ idev->num_lmtlines = 0;
+}
+
+uint16_t
+idev_lmt_pffunc_get(void)
+{
+ struct idev_cfg *idev;
+ uint16_t lmt_pf_func;
+
+ idev = idev_get_cfg();
+ lmt_pf_func = 0;
+ if (idev != NULL)
+ lmt_pf_func = idev->lmt_pf_func;
+
+ return lmt_pf_func;
+}
+
+uint64_t
+roc_idev_lmt_base_addr_get(void)
+{
+ uint64_t lmt_base_addr;
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ lmt_base_addr = 0;
+ if (idev != NULL)
+ lmt_base_addr = idev->lmt_base_addr;
+
+ return lmt_base_addr;
+}
+
+uint16_t
+roc_idev_num_lmtlines_get(void)
+{
+ struct idev_cfg *idev;
+ uint16_t num_lmtlines;
+
+ idev = idev_get_cfg();
+ num_lmtlines = 0;
+ if (idev != NULL)
+ num_lmtlines = idev->num_lmtlines;
+
+ return num_lmtlines;
+}
diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h
new file mode 100644
index 0000000..9c45960
--- /dev/null
+++ b/drivers/common/cnxk/roc_idev.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_IDEV_H_
+#define _ROC_IDEV_H_
+
+/* LMT */
+uint64_t __roc_api roc_idev_lmt_base_addr_get(void);
+uint16_t __roc_api roc_idev_num_lmtlines_get(void);
+
+#endif /* _ROC_IDEV_H_ */
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
new file mode 100644
index 0000000..c996c5c
--- /dev/null
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_IDEV_PRIV_H_
+#define _ROC_IDEV_PRIV_H_
+
+/* Intra device related functions */
+struct idev_cfg {
+ uint16_t lmt_pf_func;
+ uint16_t num_lmtlines;
+ uint64_t lmt_base_addr;
+};
+
+/* Generic */
+struct idev_cfg *idev_get_cfg(void);
+void idev_set_defaults(struct idev_cfg *idev);
+
+/* idev lmt */
+uint16_t idev_lmt_pffunc_get(void);
+
+#endif /* _ROC_IDEV_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index ce5dcf3..090c597 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -14,4 +14,7 @@
/* Dev */
#include "roc_dev_priv.h"
+/* idev */
+#include "roc_idev_priv.h"
+
#endif /* _ROC_PRIV_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 8293f2a..a9d137d 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -6,6 +6,8 @@ INTERNAL {
plt_init;
roc_clk_freq_get;
roc_error_msg_get;
+ roc_idev_lmt_base_addr_get;
+ roc_idev_num_lmtlines_get;
roc_model;
local: *;
--
2.8.4
* [dpdk-dev] [PATCH 08/52] common/cnxk: add VF support to base device class
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (6 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 07/52] common/cnxk: add base device class Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 09/52] common/cnxk: add base npa device support Nithin Dabilpuram
` (47 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add VF-specific handling such as BAR4 setup, forwarding of
VF mbox messages to the AF and vice versa, VF FLR handling,
etc.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/roc_dev.c | 857 ++++++++++++++++++++++++++++++++++++-
drivers/common/cnxk/roc_dev_priv.h | 42 ++
2 files changed, 879 insertions(+), 20 deletions(-)
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 4ef15df..1fe1371 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -17,6 +17,337 @@
/* Single Root I/O Virtualization */
#define ROC_PCI_SRIOV_TOTAL_VF 0x0e /* Total VFs */
+static void *
+mbox_mem_map(off_t off, size_t size)
+{
+ void *va = MAP_FAILED;
+ int mem_fd;
+
+ if (size == 0 || off == 0) {
+ plt_err("Invalid mbox area off 0x%jx size %zu",
+ (uintmax_t)off, size);
+ goto error;
+ }
+
+ mem_fd = open("/dev/mem", O_RDWR);
+ if (mem_fd < 0)
+ goto error;
+
+ va = plt_mmap(NULL, size, PLT_PROT_READ | PLT_PROT_WRITE,
+ PLT_MAP_SHARED, mem_fd, off);
+ close(mem_fd);
+
+ if (va == MAP_FAILED)
+ plt_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd", size, mem_fd,
+ (intmax_t)off);
+error:
+ return va;
+}
+
+static void
+mbox_mem_unmap(void *va, size_t size)
+{
+ if (va)
+ munmap(va, size);
+}
+
+static int
+pf_af_sync_msg(struct dev *dev, struct mbox_msghdr **rsp)
+{
+ uint32_t timeout = 0, sleep = 1;
+ struct mbox *mbox = dev->mbox;
+ struct mbox_dev *mdev = &mbox->dev[0];
+
+ volatile uint64_t int_status;
+ struct mbox_msghdr *msghdr;
+ uint64_t off;
+ int rc = 0;
+
+ /* We need to disable PF interrupts. We are in timer interrupt */
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+
+ /* Send message */
+ mbox_msg_send(mbox, 0);
+
+ do {
+ plt_delay_ms(sleep);
+ timeout += sleep;
+ if (timeout >= mbox->rsp_tmo) {
+ plt_err("Message timeout: %dms", mbox->rsp_tmo);
+ rc = -EIO;
+ break;
+ }
+ int_status = plt_read64(dev->bar2 + RVU_PF_INT);
+ } while ((int_status & 0x1) != 0x1);
+
+ /* Clear */
+ plt_write64(int_status, dev->bar2 + RVU_PF_INT);
+
+ /* Enable interrupts */
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
+
+ if (rc == 0) {
+ /* Get message */
+ off = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off);
+ if (rsp)
+ *rsp = msghdr;
+ rc = msghdr->rc;
+ }
+
+ return rc;
+}
+
+static int
+af_pf_wait_msg(struct dev *dev, uint16_t vf, int num_msg)
+{
+ uint32_t timeout = 0, sleep = 1;
+ struct mbox *mbox = dev->mbox;
+ struct mbox_dev *mdev = &mbox->dev[0];
+ volatile uint64_t int_status;
+ struct mbox_hdr *req_hdr;
+ struct mbox_msghdr *msg;
+ struct mbox_msghdr *rsp;
+ uint64_t offset;
+ size_t size;
+ int i;
+
+ /* We need to disable PF interrupts. We are in timer interrupt */
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+
+ /* Send message */
+ mbox_msg_send(mbox, 0);
+
+ do {
+ plt_delay_ms(sleep);
+ timeout++;
+ if (timeout >= mbox->rsp_tmo) {
+ plt_err("Routed messages %d timeout: %dms", num_msg,
+ mbox->rsp_tmo);
+ break;
+ }
+ int_status = plt_read64(dev->bar2 + RVU_PF_INT);
+ } while ((int_status & 0x1) != 0x1);
+
+ /* Clear */
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT);
+
+ /* Enable interrupts */
+ plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
+
+ plt_spinlock_lock(&mdev->mbox_lock);
+
+ req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+ if (req_hdr->num_msgs != num_msg)
+ plt_err("Routed messages: %d received: %d", num_msg,
+ req_hdr->num_msgs);
+
+ /* Get messages from mbox */
+ offset = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ for (i = 0; i < req_hdr->num_msgs; i++) {
+ msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+ size = mbox->rx_start + msg->next_msgoff - offset;
+
+ /* Reserve PF/VF mbox message */
+ size = PLT_ALIGN(size, MBOX_MSG_ALIGN);
+ rsp = mbox_alloc_msg(&dev->mbox_vfpf, vf, size);
+ mbox_rsp_init(msg->id, rsp);
+
+ /* Copy message from AF<->PF mbox to PF<->VF mbox */
+ mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr),
+ (uint8_t *)msg + sizeof(struct mbox_msghdr),
+ size - sizeof(struct mbox_msghdr));
+
+ /* Set status and sender pf_func data */
+ rsp->rc = msg->rc;
+ rsp->pcifunc = msg->pcifunc;
+
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+ plt_spinlock_unlock(&mdev->mbox_lock);
+
+ return req_hdr->num_msgs;
+}
+
+static int
+vf_pf_process_msgs(struct dev *dev, uint16_t vf)
+{
+ struct mbox *mbox = &dev->mbox_vfpf;
+ struct mbox_dev *mdev = &mbox->dev[vf];
+ struct mbox_hdr *req_hdr;
+ struct mbox_msghdr *msg;
+ int offset, routed = 0;
+ size_t size;
+ uint16_t i;
+
+ req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+ if (!req_hdr->num_msgs)
+ return 0;
+
+ offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+
+ for (i = 0; i < req_hdr->num_msgs; i++) {
+ msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+ size = mbox->rx_start + msg->next_msgoff - offset;
+
+ /* RVU_PF_FUNC_S */
+ msg->pcifunc = dev_pf_func(dev->pf, vf);
+
+ if (msg->id == MBOX_MSG_READY) {
+ struct ready_msg_rsp *rsp;
+ uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
+
+ /* Handle READY message in PF */
+ dev->active_vfs[vf / max_bits] |=
+ BIT_ULL(vf % max_bits);
+ rsp = (struct ready_msg_rsp *)mbox_alloc_msg(
+ mbox, vf, sizeof(*rsp));
+ mbox_rsp_init(msg->id, rsp);
+
+ /* PF/VF function ID */
+ rsp->hdr.pcifunc = msg->pcifunc;
+ rsp->hdr.rc = 0;
+ } else {
+ struct mbox_msghdr *af_req;
+ /* Reserve AF/PF mbox message */
+ size = PLT_ALIGN(size, MBOX_MSG_ALIGN);
+ af_req = mbox_alloc_msg(dev->mbox, 0, size);
+ if (af_req == NULL)
+ return -ENOSPC;
+ mbox_req_init(msg->id, af_req);
+
+ /* Copy message from VF<->PF mbox to PF<->AF mbox */
+ mbox_memcpy((uint8_t *)af_req +
+ sizeof(struct mbox_msghdr),
+ (uint8_t *)msg + sizeof(struct mbox_msghdr),
+ size - sizeof(struct mbox_msghdr));
+ af_req->pcifunc = msg->pcifunc;
+ routed++;
+ }
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+
+ if (routed > 0) {
+ plt_base_dbg("pf:%d routed %d messages from vf:%d to AF",
+ dev->pf, routed, vf);
+ af_pf_wait_msg(dev, vf, routed);
+ mbox_reset(dev->mbox, 0);
+ }
+
+ /* Send mbox responses to VF */
+ if (mdev->num_msgs) {
+ plt_base_dbg("pf:%d reply %d messages to vf:%d", dev->pf,
+ mdev->num_msgs, vf);
+ mbox_msg_send(mbox, vf);
+ }
+
+ return i;
+}
+
+static int
+vf_pf_process_up_msgs(struct dev *dev, uint16_t vf)
+{
+ struct mbox *mbox = &dev->mbox_vfpf_up;
+ struct mbox_dev *mdev = &mbox->dev[vf];
+ struct mbox_hdr *req_hdr;
+ struct mbox_msghdr *msg;
+ int msgs_acked = 0;
+ int offset;
+ uint16_t i;
+
+ req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
+ if (req_hdr->num_msgs == 0)
+ return 0;
+
+ offset = mbox->rx_start + PLT_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+
+ for (i = 0; i < req_hdr->num_msgs; i++) {
+ msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
+
+ msgs_acked++;
+ /* RVU_PF_FUNC_S */
+ msg->pcifunc = dev_pf_func(dev->pf, vf);
+
+ switch (msg->id) {
+ case MBOX_MSG_CGX_LINK_EVENT:
+ plt_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
+ msg->id, mbox_id2name(msg->id),
+ msg->pcifunc, dev_get_pf(msg->pcifunc),
+ dev_get_vf(msg->pcifunc));
+ break;
+ case MBOX_MSG_CGX_PTP_RX_INFO:
+ plt_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
+ msg->id, mbox_id2name(msg->id),
+ msg->pcifunc, dev_get_pf(msg->pcifunc),
+ dev_get_vf(msg->pcifunc));
+ break;
+ default:
+ plt_err("Not handled UP msg 0x%x (%s) func:0x%x",
+ msg->id, mbox_id2name(msg->id), msg->pcifunc);
+ }
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+ mbox_reset(mbox, vf);
+ mdev->msgs_acked = msgs_acked;
+ plt_wmb();
+
+ return i;
+}
+
+static void
+roc_vf_pf_mbox_handle_msg(void *param)
+{
+ uint16_t vf, max_vf, max_bits;
+ struct dev *dev = param;
+
+ max_bits = sizeof(dev->intr.bits[0]) * 8;
+ max_vf = max_bits * MAX_VFPF_DWORD_BITS;
+
+ for (vf = 0; vf < max_vf; vf++) {
+ if (dev->intr.bits[vf / max_bits] & BIT_ULL(vf % max_bits)) {
+ plt_base_dbg("Process vf:%d request (pf:%d, vf:%d)", vf,
+ dev->pf, dev->vf);
+ vf_pf_process_msgs(dev, vf);
+ /* UP messages */
+ vf_pf_process_up_msgs(dev, vf);
+ dev->intr.bits[vf / max_bits] &=
+ ~(BIT_ULL(vf % max_bits));
+ }
+ }
+ dev->timer_set = 0;
+}
+
+static void
+roc_vf_pf_mbox_irq(void *param)
+{
+ struct dev *dev = param;
+ bool alarm_set = false;
+ uint64_t intr;
+ int vfpf;
+
+ for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) {
+ intr = plt_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
+ if (!intr)
+ continue;
+
+ plt_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)",
+ vfpf, intr, dev->pf, dev->vf);
+
+ /* Save and clear intr bits */
+ dev->intr.bits[vfpf] |= intr;
+ plt_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
+ alarm_set = true;
+ }
+
+ if (!dev->timer_set && alarm_set) {
+ dev->timer_set = 1;
+ /* Start timer to handle messages */
+ plt_alarm_set(VF_PF_MBOX_TIMER_MS, roc_vf_pf_mbox_handle_msg,
+ dev);
+ }
+}
+
static void
process_msgs(struct dev *dev, struct mbox *mbox)
{
@@ -62,6 +393,112 @@ process_msgs(struct dev *dev, struct mbox *mbox)
plt_wmb();
}
+/* Copies the message received from AF and sends it to VF */
+static void
+pf_vf_mbox_send_up_msg(struct dev *dev, void *rec_msg)
+{
+ uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
+ struct mbox *vf_mbox = &dev->mbox_vfpf_up;
+ struct msg_req *msg = rec_msg;
+ struct mbox_msghdr *vf_msg;
+ uint16_t vf;
+ size_t size;
+
+ size = PLT_ALIGN(mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN);
+ /* Send UP message to all VF's */
+ for (vf = 0; vf < vf_mbox->ndevs; vf++) {
+ /* VF active */
+ if (!(dev->active_vfs[vf / max_bits] & BIT_ULL(vf % max_bits)))
+ continue;
+
+ plt_base_dbg("(%s) size: %zx to VF: %d",
+ mbox_id2name(msg->hdr.id), size, vf);
+
+ /* Reserve PF/VF mbox message */
+ vf_msg = mbox_alloc_msg(vf_mbox, vf, size);
+ if (!vf_msg) {
+ plt_err("Failed to alloc VF%d UP message", vf);
+ continue;
+ }
+ mbox_req_init(msg->hdr.id, vf_msg);
+
+ /*
+ * Copy message from AF<->PF UP mbox
+ * to PF<->VF UP mbox
+ */
+ mbox_memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr),
+ (uint8_t *)msg + sizeof(struct mbox_msghdr),
+ size - sizeof(struct mbox_msghdr));
+
+ vf_msg->rc = msg->hdr.rc;
+ /* Set PF to be a sender */
+ vf_msg->pcifunc = dev->pf_func;
+
+ /* Send to VF */
+ mbox_msg_send(vf_mbox, vf);
+ }
+}
+
+static int
+mbox_up_handler_cgx_link_event(struct dev *dev, struct cgx_link_info_msg *msg,
+ struct msg_rsp *rsp)
+{
+ struct cgx_link_user_info *linfo = &msg->link_info;
+ void *roc_nix = dev->roc_nix;
+
+ plt_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d",
+ dev_get_pf(dev->pf_func), dev_get_vf(dev->pf_func),
+ linfo->link_up ? "UP" : "DOWN", msg->hdr.id,
+ mbox_id2name(msg->hdr.id), dev_get_pf(msg->hdr.pcifunc),
+ dev_get_vf(msg->hdr.pcifunc));
+
+ /* PF gets link notification from AF */
+ if (dev_get_pf(msg->hdr.pcifunc) == 0) {
+ if (dev->ops && dev->ops->link_status_update)
+ dev->ops->link_status_update(roc_nix, linfo);
+
+ /* Forward the same message as received from AF to VF */
+ pf_vf_mbox_send_up_msg(dev, msg);
+ } else {
+ /* VF gets link up notification */
+ if (dev->ops && dev->ops->link_status_update)
+ dev->ops->link_status_update(roc_nix, linfo);
+ }
+
+ rsp->hdr.rc = 0;
+ return 0;
+}
+
+static int
+mbox_up_handler_cgx_ptp_rx_info(struct dev *dev,
+ struct cgx_ptp_rx_info_msg *msg,
+ struct msg_rsp *rsp)
+{
+ void *roc_nix = dev->roc_nix;
+
+ plt_base_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d",
+ dev_get_pf(dev->pf_func), dev_get_vf(dev->pf_func),
+ msg->ptp_en ? "ENABLED" : "DISABLED", msg->hdr.id,
+ mbox_id2name(msg->hdr.id), dev_get_pf(msg->hdr.pcifunc),
+ dev_get_vf(msg->hdr.pcifunc));
+
+ /* PF gets PTP notification from AF */
+ if (dev_get_pf(msg->hdr.pcifunc) == 0) {
+ if (dev->ops && dev->ops->ptp_info_update)
+ dev->ops->ptp_info_update(roc_nix, msg->ptp_en);
+
+ /* Forward the same message as received from AF to VF */
+ pf_vf_mbox_send_up_msg(dev, msg);
+ } else {
+ /* VF gets PTP notification */
+ if (dev->ops && dev->ops->ptp_info_update)
+ dev->ops->ptp_info_update(roc_nix, msg->ptp_en);
+ }
+
+ rsp->hdr.rc = 0;
+ return 0;
+}
+
static int
mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req)
{
@@ -73,6 +510,24 @@ mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req)
default:
reply_invalid_msg(&dev->mbox_up, 0, 0, req->id);
break;
+#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
+ case _id: { \
+ struct _rsp_type *rsp; \
+ int err; \
+ rsp = (struct _rsp_type *)mbox_alloc_msg( \
+ &dev->mbox_up, 0, sizeof(struct _rsp_type)); \
+ if (!rsp) \
+ return -ENOMEM; \
+ rsp->hdr.id = _id; \
+ rsp->hdr.sig = MBOX_RSP_SIG; \
+ rsp->hdr.pcifunc = dev->pf_func; \
+ rsp->hdr.rc = 0; \
+ err = mbox_up_handler_##_fn_name(dev, (struct _req_type *)req, \
+ rsp); \
+ return err; \
+ }
+ MBOX_UP_CGX_MESSAGES
+#undef M
}
return -ENODEV;
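The `#define M(...)` / `MBOX_UP_CGX_MESSAGES` / `#undef M` construct above is an X-macro: the message list expands once per entry, generating one `case` per message ID inside the switch. A self-contained sketch of the technique (the message list and handler bodies here are hypothetical, not the driver's):

```c
#include <assert.h>

/* X-macro list: one entry per message (name, id). */
#define MSG_LIST \
	M(ping, 1)   \
	M(reset, 2)

/* Expand the list once to generate one handler per entry. */
#define M(name, id) static int handle_##name(void) { return (id) * 10; }
MSG_LIST
#undef M

/* Expand the same list again to generate the dispatch switch. */
static int dispatch(int msg_id)
{
	switch (msg_id) {
#define M(name, id) case id: return handle_##name();
	MSG_LIST
#undef M
	default:
		return -1; /* unknown message */
	}
}
```

The payoff is that adding a message means editing only the list; the handlers and the dispatch table can never drift out of sync.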
@@ -111,6 +566,26 @@ process_msgs_up(struct dev *dev, struct mbox *mbox)
}
static void
+roc_pf_vf_mbox_irq(void *param)
+{
+ struct dev *dev = param;
+ uint64_t intr;
+
+ intr = plt_read64(dev->bar2 + RVU_VF_INT);
+ if (intr == 0)
+ plt_base_dbg("Proceeding to check mbox UP messages if any");
+
+ plt_write64(intr, dev->bar2 + RVU_VF_INT);
+ plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
+
+ /* First process all configuration messages */
+ process_msgs(dev, dev->mbox);
+
+ /* Process Uplink messages */
+ process_msgs_up(dev, &dev->mbox_up);
+}
+
+static void
roc_af_pf_mbox_irq(void *param)
{
struct dev *dev = param;
@@ -121,7 +596,7 @@ roc_af_pf_mbox_irq(void *param)
plt_base_dbg("Proceeding to check mbox UP messages if any");
plt_write64(intr, dev->bar2 + RVU_PF_INT);
- plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d)", intr, dev->pf);
+ plt_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
/* First process all configuration messages */
process_msgs(dev, dev->mbox);
@@ -134,10 +609,33 @@ static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
- int rc;
+ int i, rc;
+
+ /* HW clear irq */
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
+ plt_write64(~0ull,
+ dev->bar2 + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+ dev->timer_set = 0;
+
+ /* MBOX interrupt for VF(0...63) <-> PF */
+ rc = dev_irq_register(intr_handle, roc_vf_pf_mbox_irq, dev,
+ RVU_PF_INT_VEC_VFPF_MBOX0);
+
+ if (rc) {
+ plt_err("Failed to register PF(VF0-63) mbox irq");
+ return rc;
+ }
+ /* MBOX interrupt for VF(64...127) <-> PF */
+ rc = dev_irq_register(intr_handle, roc_vf_pf_mbox_irq, dev,
+ RVU_PF_INT_VEC_VFPF_MBOX1);
+
+ if (rc) {
+ plt_err("Failed to register PF(VF64-127) mbox irq");
+ return rc;
+ }
/* MBOX interrupt AF <-> PF */
rc = dev_irq_register(intr_handle, roc_af_pf_mbox_irq, dev,
RVU_PF_INT_VEC_AFPF_MBOX);
@@ -146,6 +644,11 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
return rc;
}
+ /* HW enable intr */
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
+ plt_write64(~0ull,
+ dev->bar2 + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i));
+
plt_write64(~0ull, dev->bar2 + RVU_PF_INT);
plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
@@ -153,27 +656,263 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
}
static int
+mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ int rc;
+
+ /* Clear irq */
+ plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
+
+ /* MBOX interrupt PF <-> VF */
+ rc = dev_irq_register(intr_handle, roc_pf_vf_mbox_irq, dev,
+ RVU_VF_INT_VEC_MBOX);
+ if (rc) {
+ plt_err("Fail to register PF<->VF mbox irq");
+ return rc;
+ }
+
+ /* HW enable intr */
+ plt_write64(~0ull, dev->bar2 + RVU_VF_INT);
+ plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S);
+
+ return rc;
+}
+
+static int
mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- return mbox_register_pf_irq(pci_dev, dev);
+ if (dev_is_vf(dev))
+ return mbox_register_vf_irq(pci_dev, dev);
+ else
+ return mbox_register_pf_irq(pci_dev, dev);
}
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ int i;
+
+ /* HW clear irq */
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
+ plt_write64(~0ull,
+ dev->bar2 + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
plt_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
+ dev->timer_set = 0;
+
+ plt_alarm_cancel(roc_vf_pf_mbox_handle_msg, dev);
+
+ /* Unregister the interrupt handler for each vector */
+ /* MBOX interrupt for VF(0...63) <-> PF */
+ dev_irq_unregister(intr_handle, roc_vf_pf_mbox_irq, dev,
+ RVU_PF_INT_VEC_VFPF_MBOX0);
+
+ /* MBOX interrupt for VF(64...127) <-> PF */
+ dev_irq_unregister(intr_handle, roc_vf_pf_mbox_irq, dev,
+ RVU_PF_INT_VEC_VFPF_MBOX1);
+
/* MBOX interrupt AF <-> PF */
dev_irq_unregister(intr_handle, roc_af_pf_mbox_irq, dev,
RVU_PF_INT_VEC_AFPF_MBOX);
}
static void
+mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+
+ /* Clear irq */
+ plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
+
+ /* Unregister the interrupt handler */
+ dev_irq_unregister(intr_handle, roc_pf_vf_mbox_irq, dev,
+ RVU_VF_INT_VEC_MBOX);
+}
+
+static void
mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- mbox_unregister_pf_irq(pci_dev, dev);
+ if (dev_is_vf(dev))
+ mbox_unregister_vf_irq(pci_dev, dev);
+ else
+ mbox_unregister_pf_irq(pci_dev, dev);
+}
+
+static int
+vf_flr_send_msg(struct dev *dev, uint16_t vf)
+{
+ struct mbox *mbox = dev->mbox;
+ struct msg_req *req;
+ int rc;
+
+ req = mbox_alloc_msg_vf_flr(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ /* Overwrite pcifunc to indicate VF */
+ req->hdr.pcifunc = dev_pf_func(dev->pf, vf);
+
+ /* Sync message in interrupt context */
+ rc = pf_af_sync_msg(dev, NULL);
+ if (rc)
+ plt_err("Failed to send VF FLR mbox msg, rc=%d", rc);
+
+ return rc;
+}
+
+static void
+roc_pf_vf_flr_irq(void *param)
+{
+ struct dev *dev = (struct dev *)param;
+ uint16_t max_vf = 64, vf;
+ uintptr_t bar2;
+ uint64_t intr;
+ int i;
+
+ max_vf = (dev->maxvf > 0) ? dev->maxvf : 64;
+ bar2 = dev->bar2;
+
+ plt_base_dbg("FLR VF interrupt: max_vf: %d", max_vf);
+
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
+ intr = plt_read64(bar2 + RVU_PF_VFFLR_INTX(i));
+ if (!intr)
+ continue;
+
+ for (vf = 0; vf < max_vf; vf++) {
+ if (!(intr & (1ULL << vf)))
+ continue;
+
+ plt_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d", i,
+ intr, (64 * i + vf));
+ /* Clear interrupt */
+ plt_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i));
+ /* Disable the interrupt */
+ plt_write64(BIT_ULL(vf),
+ bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
+ /* Inform AF about VF reset */
+ vf_flr_send_msg(dev, vf);
+
+ /* Signal FLR finish */
+ plt_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i));
+ /* Enable interrupt */
+ plt_write64(~0ull, bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
+ }
+ }
+}
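The FLR handler above reads one interrupt word per dword and walks it bit by bit, testing every VF position. When the word is sparse, iterating only the set bits with `__builtin_ctzll` does the same walk in fewer steps; a standalone sketch (helper name and output form are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Collect the global VF index (64 * word + bit) of every set bit in
 * interrupt word 'intr' at dword index 'i'. Returns the count.
 */
static int collect_set_vfs(uint64_t intr, int i, int *out)
{
	int n = 0;

	while (intr) {
		int bit = __builtin_ctzll(intr); /* lowest set bit */

		out[n++] = 64 * i + bit;
		intr &= intr - 1; /* clear that bit */
	}
	return n;
}
```

Each set bit would then get the same per-VF treatment as in the handler: clear and mask the interrupt, notify the AF, signal FLR completion, re-enable.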
+
+static int
+vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ int i;
+
+ plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
+
+ /* HW clear irq */
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
+ plt_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
+
+ dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev,
+ RVU_PF_INT_VEC_VFFLR0);
+
+ dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev,
+ RVU_PF_INT_VEC_VFFLR1);
+
+ return 0;
+}
+
+static int
+vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ int i, rc;
+
+ plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
+
+ rc = dev_irq_register(handle, roc_pf_vf_flr_irq, dev,
+ RVU_PF_INT_VEC_VFFLR0);
+ if (rc)
+ plt_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc);
+
+ rc = dev_irq_register(handle, roc_pf_vf_flr_irq, dev,
+ RVU_PF_INT_VEC_VFFLR1);
+ if (rc)
+ plt_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc);
+
+ /* Enable HW interrupt */
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
+ plt_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i));
+ plt_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i));
+ plt_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
+ }
+ return 0;
+}
+
+int
+dev_active_vfs(struct dev *dev)
+{
+ int i, count = 0;
+
+ for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
+ count += __builtin_popcount(dev->active_vfs[i]);
+
+ return count;
+}
+
+static void
+dev_vf_hwcap_update(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ switch (pci_dev->id.device_id) {
+ case PCI_DEVID_CNXK_RVU_PF:
+ break;
+ case PCI_DEVID_CNXK_RVU_SSO_TIM_VF:
+ case PCI_DEVID_CNXK_RVU_NPA_VF:
+ case PCI_DEVID_CNXK_RVU_AF_VF:
+ case PCI_DEVID_CNXK_RVU_VF:
+ case PCI_DEVID_CNXK_RVU_SDP_VF:
+ dev->hwcap |= DEV_HWCAP_F_VF;
+ break;
+ }
+}
+
+static uintptr_t
+dev_vf_mbase_get(struct plt_pci_device *pci_dev, struct dev *dev)
+{
+ void *vf_mbase = NULL;
+ uintptr_t pa;
+
+ if (dev_is_vf(dev))
+ return 0;
+
+ /* For CN10K onwards, it is just after PF MBOX */
+ if (!roc_model_is_cn9k())
+ return dev->bar4 + MBOX_SIZE;
+
+ pa = plt_read64(dev->bar2 + RVU_PF_VF_BAR4_ADDR);
+ if (!pa) {
+ plt_err("Invalid VF mbox base pa");
+ return pa;
+ }
+
+ vf_mbase = mbox_mem_map(pa, MBOX_SIZE * pci_dev->max_vfs);
+ if (vf_mbase == MAP_FAILED) {
+ plt_err("Failed to mmap vf mbase at pa 0x%lx, errno=%d", pa,
+ errno);
+ return 0;
+ }
+ return (uintptr_t)vf_mbase;
+}
+
+static void
+dev_vf_mbase_put(struct plt_pci_device *pci_dev, uintptr_t vf_mbase)
+{
+ if (!vf_mbase || !pci_dev->max_vfs || !roc_model_is_cn9k())
+ return;
+
+ mbox_mem_unmap((void *)vf_mbase, MBOX_SIZE * pci_dev->max_vfs);
}
static uint16_t
@@ -213,7 +952,6 @@ dev_setup_shared_lmt_region(struct mbox *mbox)
static int
dev_lmt_setup(struct plt_pci_device *pci_dev, struct dev *dev)
{
- uint64_t bar4_mbox_sz = MBOX_SIZE;
struct idev_cfg *idev;
int rc;
@@ -237,19 +975,34 @@ dev_lmt_setup(struct plt_pci_device *pci_dev, struct dev *dev)
dev->pf_func, rc);
}
- /* PF BAR4 should always be sufficient enough to
- * hold PF-AF MBOX + PF-VF MBOX + LMT lines.
- */
- if (pci_dev->mem_resource[4].len <
- (bar4_mbox_sz + (RVU_LMT_LINE_MAX * RVU_LMT_SZ))) {
- plt_err("Not enough bar4 space for lmt lines and mbox");
- return -EFAULT;
+ if (dev_is_vf(dev)) {
+ /* VF BAR4 should always be large enough to
+ * hold LMT lines.
+ */
+ if (pci_dev->mem_resource[4].len <
+ (RVU_LMT_LINE_MAX * RVU_LMT_SZ)) {
+ plt_err("Not enough bar4 space for lmt lines");
+ return -EFAULT;
+ }
+
+ dev->lmt_base = dev->bar4;
+ } else {
+ uint64_t bar4_mbox_sz = MBOX_SIZE;
+
+ /* PF BAR4 should always be large enough to
+ * hold PF-AF MBOX + PF-VF MBOX + LMT lines.
+ */
+ if (pci_dev->mem_resource[4].len <
+ (bar4_mbox_sz + (RVU_LMT_LINE_MAX * RVU_LMT_SZ))) {
+ plt_err("Not enough bar4 space for lmt lines and mbox");
+ return -EFAULT;
+ }
+
+ /* LMT base is just after total VF MBOX area */
+ bar4_mbox_sz += (MBOX_SIZE * dev_pf_total_vfs(pci_dev));
+ dev->lmt_base = dev->bar4 + bar4_mbox_sz;
}
- /* LMT base is just after total VF MBOX area */
- bar4_mbox_sz += (MBOX_SIZE * dev_pf_total_vfs(pci_dev));
- dev->lmt_base = dev->bar4 + bar4_mbox_sz;
-
/* Base LMT address should be chosen from only those pci funcs which
* participate in LMT shared mode.
*/
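The PF branch above lays BAR4 out as the AF<->PF mbox, then one mbox region per VF, then the LMT lines, so the LMT base offset is `MBOX_SIZE * (1 + total_vfs)`. A standalone sketch of that arithmetic (the constant values below are placeholders; the real ones come from the driver headers):

```c
#include <assert.h>
#include <stdint.h>

#define MBOX_SIZE 0x10000u   /* assumed per-mbox region size */
#define RVU_LMT_LINE_MAX 256 /* assumed */
#define RVU_LMT_SZ 128       /* assumed */

/* Offset of the LMT lines in PF BAR4: AF mbox + one mbox per VF. */
static uint64_t pf_lmt_offset(uint16_t total_vfs)
{
	return (uint64_t)MBOX_SIZE * (1 + total_vfs);
}

/* Minimum BAR4 length needed to hold mboxes plus all LMT lines. */
static uint64_t pf_bar4_min_len(uint16_t total_vfs)
{
	return pf_lmt_offset(total_vfs) +
	       (uint64_t)RVU_LMT_LINE_MAX * RVU_LMT_SZ;
}
```

The VF case in the patch is the degenerate layout: no mboxes in BAR4, so `lmt_base` is simply `bar4`.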
@@ -270,6 +1023,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
{
int direction, up_direction, rc;
uintptr_t bar2, bar4, mbox;
+ uintptr_t vf_mbase = 0;
uint64_t intr_offset;
bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
@@ -293,13 +1047,23 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
goto error;
}
+ dev->maxvf = pci_dev->max_vfs;
dev->bar2 = bar2;
dev->bar4 = bar4;
+ dev_vf_hwcap_update(pci_dev, dev);
- mbox = bar4;
- direction = MBOX_DIR_PFAF;
- up_direction = MBOX_DIR_PFAF_UP;
- intr_offset = RVU_PF_INT;
+ if (dev_is_vf(dev)) {
+ mbox = (roc_model_is_cn9k() ?
+ bar4 : (bar2 + RVU_VF_MBOX_REGION));
+ direction = MBOX_DIR_VFPF;
+ up_direction = MBOX_DIR_VFPF_UP;
+ intr_offset = RVU_VF_INT;
+ } else {
+ mbox = bar4;
+ direction = MBOX_DIR_PFAF;
+ up_direction = MBOX_DIR_PFAF_UP;
+ intr_offset = RVU_PF_INT;
+ }
/* Initialize the local mbox */
rc = mbox_init(&dev->mbox_local, mbox, bar2, direction, 1, intr_offset);
@@ -322,7 +1086,43 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
goto mbox_unregister;
dev->pf = dev_get_pf(dev->pf_func);
+ dev->vf = dev_get_vf(dev->pf_func);
+ memset(&dev->active_vfs, 0, sizeof(dev->active_vfs));
+ /* Allocate memory for device ops */
+ dev->ops = plt_zmalloc(sizeof(struct dev_ops), 0);
+ if (dev->ops == NULL) {
+ rc = -ENOMEM;
+ goto mbox_unregister;
+ }
+
+ /* Found VF devices in a PF device */
+ if (pci_dev->max_vfs > 0) {
+ /* Remap mbox area for all vf's */
+ vf_mbase = dev_vf_mbase_get(pci_dev, dev);
+ if (!vf_mbase) {
+ rc = -ENODEV;
+ goto mbox_unregister;
+ }
+ /* Init mbox object */
+ rc = mbox_init(&dev->mbox_vfpf, vf_mbase, bar2, MBOX_DIR_PFVF,
+ pci_dev->max_vfs, intr_offset);
+ if (rc)
+ goto iounmap;
+
+ /* PF -> VF UP messages */
+ rc = mbox_init(&dev->mbox_vfpf_up, vf_mbase, bar2,
+ MBOX_DIR_PFVF_UP, pci_dev->max_vfs, intr_offset);
+ if (rc)
+ goto iounmap;
+ }
+
+ /* Register VF-FLR irq handlers */
+ if (!dev_is_vf(dev)) {
+ rc = vf_flr_register_irqs(pci_dev, dev);
+ if (rc)
+ goto iounmap;
+ }
dev->mbox_active = 1;
/* Setup LMT line base */
@@ -332,8 +1132,11 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
return rc;
iounmap:
+ dev_vf_mbase_put(pci_dev, vf_mbase);
mbox_unregister:
mbox_unregister_irq(pci_dev, dev);
+ if (dev->ops)
+ plt_free(dev->ops);
mbox_fini:
mbox_fini(dev->mbox);
mbox_fini(&dev->mbox_up);
@@ -349,6 +1152,20 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
mbox_unregister_irq(pci_dev, dev);
+ if (!dev_is_vf(dev))
+ vf_flr_unregister_irqs(pci_dev, dev);
+ /* Release PF - VF */
+ mbox = &dev->mbox_vfpf;
+ if (mbox->hwbase && mbox->dev)
+ dev_vf_mbase_put(pci_dev, mbox->hwbase);
+
+ if (dev->ops)
+ plt_free(dev->ops);
+
+ mbox_fini(mbox);
+ mbox = &dev->mbox_vfpf_up;
+ mbox_fini(mbox);
+
/* Release PF - AF */
mbox = dev->mbox;
mbox_fini(mbox);
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
index 655c58b..0996ec4 100644
--- a/drivers/common/cnxk/roc_dev_priv.h
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -5,12 +5,38 @@
#ifndef _ROC_DEV_PRIV_H
#define _ROC_DEV_PRIV_H
+#define DEV_HWCAP_F_VF BIT_ULL(0) /* VF device */
+
#define RVU_PFVF_PF_SHIFT 10
#define RVU_PFVF_PF_MASK 0x3F
#define RVU_PFVF_FUNC_SHIFT 0
#define RVU_PFVF_FUNC_MASK 0x3FF
+#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */
#define RVU_MAX_INT_RETRY 3
+/* PF/VF message handling timer */
+#define VF_PF_MBOX_TIMER_MS (20 * 1000)
+
+typedef struct {
+/* 128 devices translate to two 64 bits dwords */
+#define MAX_VFPF_DWORD_BITS 2
+ uint64_t bits[MAX_VFPF_DWORD_BITS];
+} dev_intr_t;
+
+/* Link status update callback */
+typedef void (*link_info_t)(void *roc_nix,
+ struct cgx_link_user_info *link);
+
+/* PTP info callback */
+typedef int (*ptp_info_t)(void *roc_nix, bool enable);
+
+struct dev_ops {
+ link_info_t link_status_update;
+ ptp_info_t ptp_info_update;
+};
+
+#define dev_is_vf(dev) ((dev)->hwcap & DEV_HWCAP_F_VF)
+
static inline int
dev_get_vf(uint16_t pf_func)
{
@@ -29,18 +55,33 @@ dev_pf_func(int pf, int vf)
return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
}
+static inline int
+dev_is_afvf(uint16_t pf_func)
+{
+ return !(pf_func & ~RVU_PFVF_FUNC_MASK);
+}
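The helpers above pack the PF and VF indices into a 16-bit `pf_func`: bits 10..15 carry the PF, bits 0..9 the function number, where the function number is the VF index plus one (zero means the PF itself, which is also why `dev_is_afvf()` tests the whole function field). A standalone sketch of the encoding, using the mask and shift constants from this header:

```c
#include <assert.h>
#include <stdint.h>

#define RVU_PFVF_PF_SHIFT 10
#define RVU_PFVF_PF_MASK 0x3F
#define RVU_PFVF_FUNC_SHIFT 0
#define RVU_PFVF_FUNC_MASK 0x3FF

static uint16_t pack_pf_func(int pf, int vf)
{
	/* Function field stores vf + 1; 0 would denote the PF itself. */
	return (uint16_t)((pf << RVU_PFVF_PF_SHIFT) |
			  ((vf << RVU_PFVF_FUNC_SHIFT) + 1));
}

static int unpack_pf(uint16_t pf_func)
{
	return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
}

static int unpack_vf(uint16_t pf_func)
{
	return ((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1;
}
```

An AF VF has PF index zero, so `pf_func & ~RVU_PFVF_FUNC_MASK` is zero for it, which is exactly the `dev_is_afvf()` test.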
+
struct dev {
uint16_t pf;
+ int16_t vf;
uint16_t pf_func;
uint8_t mbox_active;
bool drv_inited;
+ uint64_t active_vfs[MAX_VFPF_DWORD_BITS];
uintptr_t bar2;
uintptr_t bar4;
uintptr_t lmt_base;
struct mbox mbox_local;
struct mbox mbox_up;
+ struct mbox mbox_vfpf;
+ struct mbox mbox_vfpf_up;
+ dev_intr_t intr;
+ int timer_set; /* ~0 : no alarm handling */
uint64_t hwcap;
struct mbox *mbox;
+ uint16_t maxvf;
+ struct dev_ops *ops;
+ void *roc_nix;
bool disable_shared_lmt; /* false(default): shared lmt mode enabled */
} __plt_cache_aligned;
@@ -49,6 +90,7 @@ extern uint16_t dev_sclk_freq;
int dev_init(struct dev *dev, struct plt_pci_device *pci_dev);
int dev_fini(struct dev *dev, struct plt_pci_device *pci_dev);
+int dev_active_vfs(struct dev *dev);
int dev_irq_register(struct plt_intr_handle *intr_handle,
plt_intr_callback_fn cb, void *data, unsigned int vec);
--
2.8.4
* [dpdk-dev] [PATCH 09/52] common/cnxk: add base npa device support
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add NPA init and fini functions.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_api.h | 3 +
drivers/common/cnxk/roc_dev.c | 11 ++
drivers/common/cnxk/roc_dev_priv.h | 6 +
drivers/common/cnxk/roc_idev.c | 67 ++++++++
drivers/common/cnxk/roc_idev.h | 3 +
drivers/common/cnxk/roc_idev_priv.h | 12 ++
drivers/common/cnxk/roc_npa.c | 318 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npa.h | 20 +++
drivers/common/cnxk/roc_npa_priv.h | 59 +++++++
drivers/common/cnxk/roc_platform.c | 1 +
drivers/common/cnxk/roc_platform.h | 2 +
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/roc_utils.c | 22 +++
drivers/common/cnxk/version.map | 5 +
15 files changed, 533 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npa.c
create mode 100644 drivers/common/cnxk/roc_npa.h
create mode 100644 drivers/common/cnxk/roc_npa_priv.h
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 735d3f8..c684e1d 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('roc_dev.c',
'roc_irq.c',
'roc_mbox.c',
'roc_model.c',
+ 'roc_npa.c',
'roc_platform.c',
'roc_utils.c')
includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 83aa4f6..f2c5225 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -79,6 +79,9 @@
/* Mbox */
#include "roc_mbox.h"
+/* NPA */
+#include "roc_npa.h"
+
/* Utils */
#include "roc_utils.h"
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 1fe1371..157c155 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -1125,6 +1125,10 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
}
dev->mbox_active = 1;
+ rc = npa_lf_init(dev, pci_dev);
+ if (rc)
+ goto iounmap;
+
/* Setup LMT line base */
rc = dev_lmt_setup(pci_dev, dev);
if (rc)
@@ -1150,6 +1154,13 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
struct mbox *mbox;
+ /* Check if this dev hosts npalf and has 1+ refs */
+ if (idev_npa_lf_active(dev) > 1)
+ return -EAGAIN;
+
+ /* Clear references to this pci dev */
+ npa_lf_fini();
+
mbox_unregister_irq(pci_dev, dev);
if (!dev_is_vf(dev))
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
index 0996ec4..ac00e08 100644
--- a/drivers/common/cnxk/roc_dev_priv.h
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -78,6 +78,7 @@ struct dev {
dev_intr_t intr;
int timer_set; /* ~0 : no alarm handling */
uint64_t hwcap;
+ struct npa_lf npa;
struct mbox *mbox;
uint16_t maxvf;
struct dev_ops *ops;
@@ -85,6 +86,11 @@ struct dev {
bool disable_shared_lmt; /* false(default): shared lmt mode enabled */
} __plt_cache_aligned;
+struct npa {
+ struct plt_pci_device *pci_dev;
+ struct dev dev;
+} __plt_cache_aligned;
+
extern uint16_t dev_rclk_freq;
extern uint16_t dev_sclk_freq;
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index be762c5..dd03b2a 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -29,9 +29,76 @@ idev_get_cfg(void)
void
idev_set_defaults(struct idev_cfg *idev)
{
+ idev->npa = NULL;
+ idev->npa_pf_func = 0;
+ idev->max_pools = 128;
idev->lmt_pf_func = 0;
idev->lmt_base_addr = 0;
idev->num_lmtlines = 0;
+ __atomic_store_n(&idev->npa_refcnt, 0, __ATOMIC_RELEASE);
+}
+
+uint16_t
+idev_npa_pffunc_get(void)
+{
+ struct idev_cfg *idev;
+ uint16_t npa_pf_func;
+
+ idev = idev_get_cfg();
+ npa_pf_func = 0;
+ if (idev != NULL)
+ npa_pf_func = idev->npa_pf_func;
+
+ return npa_pf_func;
+}
+
+struct npa_lf *
+idev_npa_obj_get(void)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev && __atomic_load_n(&idev->npa_refcnt, __ATOMIC_ACQUIRE))
+ return idev->npa;
+
+ return NULL;
+}
+
+uint32_t
+roc_idev_npa_maxpools_get(void)
+{
+ struct idev_cfg *idev;
+ uint32_t max_pools;
+
+ idev = idev_get_cfg();
+ max_pools = 0;
+ if (idev != NULL)
+ max_pools = idev->max_pools;
+
+ return max_pools;
+}
+
+void
+roc_idev_npa_maxpools_set(uint32_t max_pools)
+{
+ struct idev_cfg *idev;
+
+ idev = idev_get_cfg();
+ if (idev != NULL)
+ __atomic_store_n(&idev->max_pools, max_pools, __ATOMIC_RELEASE);
+}
+
+uint16_t
+idev_npa_lf_active(struct dev *dev)
+{
+ struct idev_cfg *idev;
+
+ /* Check if npalf is actively used on this dev */
+ idev = idev_get_cfg();
+ if (!idev || !idev->npa || idev->npa->mbox != dev->mbox)
+ return 0;
+
+ return __atomic_load_n(&idev->npa_refcnt, __ATOMIC_ACQUIRE);
}
uint16_t
diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h
index 9c45960..9715bb6 100644
--- a/drivers/common/cnxk/roc_idev.h
+++ b/drivers/common/cnxk/roc_idev.h
@@ -5,6 +5,9 @@
#ifndef _ROC_IDEV_H_
#define _ROC_IDEV_H_
+uint32_t __roc_api roc_idev_npa_maxpools_get(void);
+void __roc_api roc_idev_npa_maxpools_set(uint32_t max_pools);
+
/* LMT */
uint64_t __roc_api roc_idev_lmt_base_addr_get(void);
uint16_t __roc_api roc_idev_num_lmtlines_get(void);
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index c996c5c..535575a 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -6,7 +6,12 @@
#define _ROC_IDEV_PRIV_H_
/* Intra device related functions */
+struct npa_lf;
struct idev_cfg {
+ uint16_t npa_pf_func;
+ struct npa_lf *npa;
+ uint16_t npa_refcnt;
+ uint32_t max_pools;
uint16_t lmt_pf_func;
uint16_t num_lmtlines;
uint64_t lmt_base_addr;
@@ -16,6 +21,13 @@ struct idev_cfg {
struct idev_cfg *idev_get_cfg(void);
void idev_set_defaults(struct idev_cfg *idev);
+/* idev npa */
+uint16_t idev_npa_pffunc_get(void);
+struct npa_lf *idev_npa_obj_get(void);
+uint32_t idev_npa_maxpools_get(void);
+void idev_npa_maxpools_set(uint32_t max_pools);
+uint16_t idev_npa_lf_active(struct dev *dev);
+
/* idev lmt */
uint16_t idev_lmt_pffunc_get(void);
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
new file mode 100644
index 0000000..762f025
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa.c
@@ -0,0 +1,318 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline int
+npa_attach(struct mbox *mbox)
+{
+ struct rsrc_attach_req *req;
+
+ req = mbox_alloc_msg_attach_resources(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->modify = true;
+ req->npalf = true;
+
+ return mbox_process(mbox);
+}
+
+static inline int
+npa_detach(struct mbox *mbox)
+{
+ struct rsrc_detach_req *req;
+
+ req = mbox_alloc_msg_detach_resources(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->partial = true;
+ req->npalf = true;
+
+ return mbox_process(mbox);
+}
+
+static inline int
+npa_get_msix_offset(struct mbox *mbox, uint16_t *npa_msixoff)
+{
+ struct msix_offset_rsp *msix_rsp;
+ int rc;
+
+ /* Get NPA MSIX vector offsets */
+ mbox_alloc_msg_msix_offset(mbox);
+ rc = mbox_process_msg(mbox, (void *)&msix_rsp);
+ if (rc == 0)
+ *npa_msixoff = msix_rsp->npa_msixoff;
+
+ return rc;
+}
+
+static inline int
+npa_lf_alloc(struct npa_lf *lf)
+{
+ struct mbox *mbox = lf->mbox;
+ struct npa_lf_alloc_req *req;
+ struct npa_lf_alloc_rsp *rsp;
+ int rc;
+
+ req = mbox_alloc_msg_npa_lf_alloc(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->aura_sz = lf->aura_sz;
+ req->nr_pools = lf->nr_pools;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return NPA_ERR_ALLOC;
+
+ lf->stack_pg_ptrs = rsp->stack_pg_ptrs;
+ lf->stack_pg_bytes = rsp->stack_pg_bytes;
+ lf->qints = rsp->qints;
+
+ return 0;
+}
+
+static int
+npa_lf_free(struct mbox *mbox)
+{
+ mbox_alloc_msg_npa_lf_free(mbox);
+ return mbox_process(mbox);
+}
+
+static inline uint32_t
+aura_size_to_u32(uint8_t val)
+{
+ if (val == NPA_AURA_SZ_0)
+ return 128;
+ if (val >= NPA_AURA_SZ_MAX)
+ return BIT_ULL(20);
+
+ return 1 << (val + 6);
+}
+
+static inline void
+pool_count_aura_sz_get(uint32_t *nr_pools, uint8_t *aura_sz)
+{
+ uint32_t val;
+
+ val = roc_idev_npa_maxpools_get();
+ if (val < aura_size_to_u32(NPA_AURA_SZ_128))
+ val = 128;
+ if (val > aura_size_to_u32(NPA_AURA_SZ_1M))
+ val = BIT_ULL(20);
+
+ roc_idev_npa_maxpools_set(val);
+ *nr_pools = val;
+ *aura_sz = plt_log2_u32(val) - 6;
+}
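`aura_sz` is a log-style encoding: value `v` normally means `1 << (v + 6)` pools (so the inverse is `log2(pools) - 6`), with `NPA_AURA_SZ_0` special-cased to 128 and the pool count clamped to [128, 1M] as in `pool_count_aura_sz_get()` above. A standalone sketch of the conversion pair (the enum values are assumed to mirror the mbox definitions):

```c
#include <assert.h>
#include <stdint.h>

enum { NPA_AURA_SZ_0 = 0, NPA_AURA_SZ_128 = 1, NPA_AURA_SZ_1M = 14,
       NPA_AURA_SZ_MAX = 15 }; /* assumed to match the mbox enum */

static uint32_t aura_size_to_u32(uint8_t val)
{
	if (val == NPA_AURA_SZ_0)
		return 128;
	if (val >= NPA_AURA_SZ_MAX)
		return 1u << 20;
	return 1u << (val + 6);
}

/* Inverse mapping for an in-range power-of-two pool count. */
static uint8_t u32_to_aura_size(uint32_t npools)
{
	return (uint8_t)((31 - __builtin_clz(npools)) - 6); /* log2 - 6 */
}
```

So 128 pools encode as `NPA_AURA_SZ_128` and 1M pools as `NPA_AURA_SZ_1M`, and the two functions round-trip for any power of two in range.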
+
+static int
+npa_dev_init(struct npa_lf *lf, uintptr_t base, struct mbox *mbox)
+{
+ uint32_t i, bmp_sz, nr_pools;
+ uint8_t aura_sz;
+ int rc;
+
+ /* Sanity checks */
+ if (!lf || !base || !mbox)
+ return NPA_ERR_PARAM;
+
+ if (base & ROC_AURA_ID_MASK)
+ return NPA_ERR_BASE_INVALID;
+
+ pool_count_aura_sz_get(&nr_pools, &aura_sz);
+ if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX)
+ return NPA_ERR_PARAM;
+
+ memset(lf, 0x0, sizeof(*lf));
+ lf->base = base;
+ lf->aura_sz = aura_sz;
+ lf->nr_pools = nr_pools;
+ lf->mbox = mbox;
+
+ rc = npa_lf_alloc(lf);
+ if (rc)
+ goto exit;
+
+ bmp_sz = plt_bitmap_get_memory_footprint(nr_pools);
+
+ /* Allocate memory for bitmap */
+ lf->npa_bmp_mem = plt_zmalloc(bmp_sz, ROC_ALIGN);
+ if (lf->npa_bmp_mem == NULL) {
+ rc = NPA_ERR_ALLOC;
+ goto lf_free;
+ }
+
+ /* Initialize pool resource bitmap array */
+ lf->npa_bmp = plt_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz);
+ if (lf->npa_bmp == NULL) {
+ rc = NPA_ERR_PARAM;
+ goto bmap_mem_free;
+ }
+
+ /* Mark all pools available */
+ for (i = 0; i < nr_pools; i++)
+ plt_bitmap_set(lf->npa_bmp, i);
+
+ /* Allocate memory for qint context */
+ lf->npa_qint_mem = plt_zmalloc(sizeof(struct npa_qint) * nr_pools, 0);
+ if (lf->npa_qint_mem == NULL) {
+ rc = NPA_ERR_ALLOC;
+ goto bmap_free;
+ }
+
+ /* Allocate memory for npa_aura_lim */
+ lf->aura_lim = plt_zmalloc(sizeof(struct npa_aura_lim) * nr_pools, 0);
+ if (lf->aura_lim == NULL) {
+ rc = NPA_ERR_ALLOC;
+ goto qint_free;
+ }
+
+ /* Init aura start & end limits */
+ for (i = 0; i < nr_pools; i++) {
+ lf->aura_lim[i].ptr_start = UINT64_MAX;
+ lf->aura_lim[i].ptr_end = 0x0ull;
+ }
+
+ return 0;
+
+qint_free:
+ plt_free(lf->npa_qint_mem);
+bmap_free:
+ plt_bitmap_free(lf->npa_bmp);
+bmap_mem_free:
+ plt_free(lf->npa_bmp_mem);
+lf_free:
+ npa_lf_free(lf->mbox);
+exit:
+ return rc;
+}
+
+static int
+npa_dev_fini(struct npa_lf *lf)
+{
+ if (!lf)
+ return NPA_ERR_PARAM;
+
+ plt_free(lf->aura_lim);
+ plt_free(lf->npa_qint_mem);
+ plt_bitmap_free(lf->npa_bmp);
+ plt_free(lf->npa_bmp_mem);
+
+ return npa_lf_free(lf->mbox);
+}
+
+int
+npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
+{
+ struct idev_cfg *idev;
+ uint16_t npa_msixoff;
+ struct npa_lf *lf;
+ int rc;
+
+ idev = idev_get_cfg();
+ if (idev == NULL)
+ return NPA_ERR_ALLOC;
+
+ /* Not the first PCI device */
+ if (__atomic_fetch_add(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0)
+ return 0;
+
+ rc = npa_attach(dev->mbox);
+ if (rc)
+ goto fail;
+
+ rc = npa_get_msix_offset(dev->mbox, &npa_msixoff);
+ if (rc)
+ goto npa_detach;
+
+ lf = &dev->npa;
+ rc = npa_dev_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20),
+ dev->mbox);
+ if (rc)
+ goto npa_detach;
+
+ lf->pf_func = dev->pf_func;
+ lf->npa_msixoff = npa_msixoff;
+ lf->intr_handle = &pci_dev->intr_handle;
+ lf->pci_dev = pci_dev;
+
+ idev->npa_pf_func = dev->pf_func;
+ idev->npa = lf;
+ plt_wmb();
+
+ plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf,
+ roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff);
+
+ return 0;
+
+npa_detach:
+ npa_detach(dev->mbox);
+fail:
+ __atomic_fetch_sub(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST);
+ return rc;
+}
+
+int
+npa_lf_fini(void)
+{
+ struct idev_cfg *idev;
+ int rc = 0;
+
+ idev = idev_get_cfg();
+ if (idev == NULL)
+ return NPA_ERR_ALLOC;
+
+ /* Not the last PCI device */
+ if (__atomic_sub_fetch(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0)
+ return 0;
+
+ rc |= npa_dev_fini(idev->npa);
+ rc |= npa_detach(idev->npa->mbox);
+ idev_set_defaults(idev);
+
+ return rc;
+}
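`npa_lf_init()`/`npa_lf_fini()` above implement a refcounted shared resource: only the first caller actually attaches and initializes the NPA LF, and only the last caller tears it down; intermediate callers just bump the count. The same idiom reduced to C11 atomics (the init/fini bodies are stand-ins for `npa_attach()`/`npa_dev_init()` and their inverses):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int refcnt;
static bool resource_up;

static int shared_get(void)
{
	/* Only the first reference performs the real initialization. */
	if (atomic_fetch_add(&refcnt, 1) != 0)
		return 0;
	resource_up = true; /* stand-in for attach + device init */
	return 0;
}

static void shared_put(void)
{
	/* Only the last reference performs the real teardown. */
	if (atomic_fetch_sub(&refcnt, 1) != 1)
		return;
	resource_up = false; /* stand-in for device fini + detach */
}
```

The driver additionally decrements the count on the init error path (the `fail:` label), so a failed first initialization leaves the count at zero for the next caller to retry.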
+
+int
+roc_npa_dev_init(struct roc_npa *roc_npa)
+{
+ struct plt_pci_device *pci_dev;
+ struct npa *npa;
+ struct dev *dev;
+ int rc;
+
+ if (roc_npa == NULL || roc_npa->pci_dev == NULL)
+ return NPA_ERR_PARAM;
+
+ PLT_STATIC_ASSERT(sizeof(struct npa) <= ROC_NPA_MEM_SZ);
+ npa = roc_npa_to_npa_priv(roc_npa);
+ memset(npa, 0, sizeof(*npa));
+ pci_dev = roc_npa->pci_dev;
+ dev = &npa->dev;
+
+ /* Initialize device */
+ rc = dev_init(dev, pci_dev);
+ if (rc) {
+ plt_err("Failed to init roc device");
+ goto fail;
+ }
+
+ npa->pci_dev = pci_dev;
+ dev->drv_inited = true;
+fail:
+ return rc;
+}
+
+int
+roc_npa_dev_fini(struct roc_npa *roc_npa)
+{
+ struct npa *npa = roc_npa_to_npa_priv(roc_npa);
+
+ if (npa == NULL)
+ return NPA_ERR_PARAM;
+
+ npa->dev.drv_inited = false;
+ return dev_fini(&npa->dev, npa->pci_dev);
+}
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
new file mode 100644
index 0000000..b9cf847
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_NPA_H_
+#define _ROC_NPA_H_
+
+#define ROC_AURA_ID_MASK (BIT_ULL(16) - 1)
+
+struct roc_npa {
+ struct plt_pci_device *pci_dev;
+
+#define ROC_NPA_MEM_SZ (1 * 1024)
+ uint8_t reserved[ROC_NPA_MEM_SZ] __plt_cache_aligned;
+} __plt_cache_aligned;
+
+int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa);
+int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa);
+
+#endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h
new file mode 100644
index 0000000..a2173c4
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa_priv.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_NPA_PRIV_H_
+#define _ROC_NPA_PRIV_H_
+
+enum npa_error_status {
+ NPA_ERR_PARAM = -512,
+ NPA_ERR_ALLOC = -513,
+ NPA_ERR_INVALID_BLOCK_SZ = -514,
+ NPA_ERR_AURA_ID_ALLOC = -515,
+ NPA_ERR_AURA_POOL_INIT = -516,
+ NPA_ERR_AURA_POOL_FINI = -517,
+ NPA_ERR_BASE_INVALID = -518,
+ NPA_ERR_DEVICE_NOT_BOUNDED = -519,
+};
+
+struct npa_lf {
+ struct plt_intr_handle *intr_handle;
+ struct npa_aura_lim *aura_lim;
+ struct plt_pci_device *pci_dev;
+ struct plt_bitmap *npa_bmp;
+ struct mbox *mbox;
+ uint32_t stack_pg_ptrs;
+ uint32_t stack_pg_bytes;
+ uint16_t npa_msixoff;
+ void *npa_qint_mem;
+ void *npa_bmp_mem;
+ uint32_t nr_pools;
+ uint16_t pf_func;
+ uint8_t aura_sz;
+ uint32_t qints;
+ uintptr_t base;
+};
+
+struct npa_qint {
+ struct npa_lf *lf;
+ uint8_t qintx;
+};
+
+struct npa_aura_lim {
+ uint64_t ptr_start;
+ uint64_t ptr_end;
+};
+
+struct dev;
+
+static inline struct npa *
+roc_npa_to_npa_priv(struct roc_npa *roc_npa)
+{
+ return (struct npa *)&roc_npa->reserved[0];
+}
+
+/* NPA lf */
+int npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev);
+int npa_lf_fini(void);
+
+#endif /* _ROC_NPA_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index b20ae69..4666749 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -30,3 +30,4 @@ plt_init(void)
RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE);
+RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 76b931b..ba6722c 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -126,6 +126,7 @@
/* Log */
extern int cnxk_logtype_base;
extern int cnxk_logtype_mbox;
+extern int cnxk_logtype_npa;
#define plt_err(fmt, args...) \
RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -143,6 +144,7 @@ extern int cnxk_logtype_mbox;
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__)
+#define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index 090c597..dfd6351 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -11,6 +11,9 @@
/* Mbox */
#include "roc_mbox_priv.h"
+/* NPA */
+#include "roc_npa_priv.h"
+
/* Dev */
#include "roc_dev_priv.h"
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
index 81d511e..0a88f78 100644
--- a/drivers/common/cnxk/roc_utils.c
+++ b/drivers/common/cnxk/roc_utils.c
@@ -11,9 +11,31 @@ roc_error_msg_get(int errorcode)
const char *err_msg;
switch (errorcode) {
+ case NPA_ERR_PARAM:
case UTIL_ERR_PARAM:
err_msg = "Invalid parameter";
break;
+ case NPA_ERR_ALLOC:
+ err_msg = "NPA alloc failed";
+ break;
+ case NPA_ERR_INVALID_BLOCK_SZ:
+ err_msg = "NPA invalid block size";
+ break;
+ case NPA_ERR_AURA_ID_ALLOC:
+ err_msg = "NPA aura id alloc failed";
+ break;
+ case NPA_ERR_AURA_POOL_INIT:
+ err_msg = "NPA aura pool init failed";
+ break;
+ case NPA_ERR_AURA_POOL_FINI:
+ err_msg = "NPA aura pool fini failed";
+ break;
+ case NPA_ERR_BASE_INVALID:
+ err_msg = "NPA invalid base";
+ break;
+ case NPA_ERR_DEVICE_NOT_BOUNDED:
+ err_msg = "NPA device is not bound";
+ break;
case UTIL_ERR_FS:
err_msg = "file operation failed";
break;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index a9d137d..cf34580 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -3,12 +3,17 @@ INTERNAL {
cnxk_logtype_base;
cnxk_logtype_mbox;
+ cnxk_logtype_npa;
plt_init;
roc_clk_freq_get;
roc_error_msg_get;
roc_idev_lmt_base_addr_get;
+ roc_idev_npa_maxpools_get;
+ roc_idev_npa_maxpools_set;
roc_idev_num_lmtlines_get;
roc_model;
+ roc_npa_dev_fini;
+ roc_npa_dev_init;
local: *;
};
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
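[Editor's note: the `npa_lf_init()`/`npa_lf_fini()` pair above gates the real attach/detach on a shared reference count (`idev->npa_refcnt`): only the first `__atomic_fetch_add` performs the attach, and only the decrement that reaches zero performs the teardown. A minimal host-side sketch of that pattern, using the same GCC `__atomic` builtins (the `lf_init`/`lf_fini`/`attached` names are illustrative, not the driver's):]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for idev->npa_refcnt and the attached NPA LF state. */
static int refcnt;
static bool attached;

static int lf_init(void)
{
	/* Only the first reference performs the real attach. */
	if (__atomic_fetch_add(&refcnt, 1, __ATOMIC_SEQ_CST) == 0)
		attached = true;
	return 0;
}

static int lf_fini(void)
{
	/* Not the last reference holder: nothing to tear down yet. */
	if (__atomic_sub_fetch(&refcnt, 1, __ATOMIC_SEQ_CST) != 0)
		return 0;
	/* Last reference: perform the real teardown. */
	attached = false;
	return 0;
}
```

[With two users, the first `lf_fini()` leaves the LF attached and only the second releases it, matching the "Not the last PCI device" check in the patch.]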
* [dpdk-dev] [PATCH 10/52] common/cnxk: add npa irq support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (8 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 09/52] common/cnxk: add base npa device support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 11/52] common/cnxk: add npa debug support Nithin Dabilpuram
` (45 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add support for NPA IRQs.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_npa.c | 7 +
drivers/common/cnxk/roc_npa_irq.c | 297 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npa_priv.h | 4 +
4 files changed, 309 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npa_irq.c
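[Editor's note: the err/RAS handlers below rely on write-one-to-clear (W1C) interrupt registers: the handler reads the pending bits, logs them, and writes the same value back so that only the bits it observed are cleared, leaving any interrupt that arrived in between still pending. A host-side model of that read/ack sequence (the register is simulated; names are illustrative):]

```c
#include <assert.h>
#include <stdint.h>

/* Simulated W1C register: writing a 1 to a bit clears that bit. */
static uint64_t err_int_reg;

static uint64_t reg_read(void) { return err_int_reg; }
static void reg_write_w1c(uint64_t v) { err_int_reg &= ~v; }

/* Mirrors the shape of npa_err_irq(): read pending bits, bail if
 * none, then clear exactly the bits that were observed. */
static uint64_t handle_err_irq(void)
{
	uint64_t intr = reg_read();

	if (intr == 0)
		return 0;
	reg_write_w1c(intr); /* ack what we saw; later arrivals survive */
	return intr;
}
```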
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index c684e1d..60af484 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -17,6 +17,7 @@ sources = files('roc_dev.c',
'roc_mbox.c',
'roc_model.c',
'roc_npa.c',
+ 'roc_npa_irq.c',
'roc_platform.c',
'roc_utils.c')
includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 762f025..003bd8c 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -242,11 +242,17 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
idev->npa = lf;
plt_wmb();
+ rc = npa_register_irqs(lf);
+ if (rc)
+ goto npa_fini;
+
plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf,
roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff);
return 0;
+npa_fini:
+ npa_dev_fini(idev->npa);
npa_detach:
npa_detach(dev->mbox);
fail:
@@ -268,6 +274,7 @@ npa_lf_fini(void)
if (__atomic_sub_fetch(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST) != 0)
return 0;
+ npa_unregister_irqs(idev->npa);
rc |= npa_dev_fini(idev->npa);
rc |= npa_detach(idev->npa->mbox);
idev_set_defaults(idev);
diff --git a/drivers/common/cnxk/roc_npa_irq.c b/drivers/common/cnxk/roc_npa_irq.c
new file mode 100644
index 0000000..99b57b0
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa_irq.c
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+npa_err_irq(void *param)
+{
+ struct npa_lf *lf = (struct npa_lf *)param;
+ uint64_t intr;
+
+ intr = plt_read64(lf->base + NPA_LF_ERR_INT);
+ if (intr == 0)
+ return;
+
+ plt_err("Err_intr=0x%" PRIx64 "", intr);
+
+ /* Clear interrupt */
+ plt_write64(intr, lf->base + NPA_LF_ERR_INT);
+}
+
+static int
+npa_register_err_irq(struct npa_lf *lf)
+{
+ struct plt_intr_handle *handle = lf->intr_handle;
+ int rc, vec;
+
+ vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
+ /* Register err interrupt vector */
+ rc = dev_irq_register(handle, npa_err_irq, lf, vec);
+
+ /* Enable hw interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
+
+ return rc;
+}
+
+static void
+npa_unregister_err_irq(struct npa_lf *lf)
+{
+ struct plt_intr_handle *handle = lf->intr_handle;
+ int vec;
+
+ vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
+
+ /* Clear err interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
+ dev_irq_unregister(handle, npa_err_irq, lf, vec);
+}
+
+static void
+npa_ras_irq(void *param)
+{
+ struct npa_lf *lf = (struct npa_lf *)param;
+ uint64_t intr;
+
+ intr = plt_read64(lf->base + NPA_LF_RAS);
+ if (intr == 0)
+ return;
+
+ plt_err("Ras_intr=0x%" PRIx64 "", intr);
+
+ /* Clear interrupt */
+ plt_write64(intr, lf->base + NPA_LF_RAS);
+}
+
+static int
+npa_register_ras_irq(struct npa_lf *lf)
+{
+ struct plt_intr_handle *handle = lf->intr_handle;
+ int rc, vec;
+
+ vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
+ /* Set used interrupt vectors */
+ rc = dev_irq_register(handle, npa_ras_irq, lf, vec);
+ /* Enable hw interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
+
+ return rc;
+}
+
+static void
+npa_unregister_ras_irq(struct npa_lf *lf)
+{
+ int vec;
+ struct plt_intr_handle *handle = lf->intr_handle;
+
+ vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
+
+ /* Clear err interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
+ dev_irq_unregister(handle, npa_ras_irq, lf, vec);
+}
+
+static inline uint8_t
+npa_q_irq_get_and_clear(struct npa_lf *lf, uint32_t q, uint32_t off,
+ uint64_t mask)
+{
+ uint64_t reg, wdata;
+ uint8_t qint;
+
+ wdata = (uint64_t)q << 44;
+ reg = roc_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
+
+ if (reg & BIT_ULL(42) /* OP_ERR */) {
+ plt_err("Failed to execute irq get off=0x%x", off);
+ return 0;
+ }
+
+ qint = reg & 0xff;
+ wdata &= mask;
+ plt_write64(wdata | qint, lf->base + off);
+
+ return qint;
+}
+
+static inline uint8_t
+npa_pool_irq_get_and_clear(struct npa_lf *lf, uint32_t p)
+{
+ return npa_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+npa_aura_irq_get_and_clear(struct npa_lf *lf, uint32_t a)
+{
+ return npa_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
+}
+
+static void
+npa_q_irq(void *param)
+{
+ struct npa_qint *qint = (struct npa_qint *)param;
+ struct npa_lf *lf = qint->lf;
+ uint8_t irq, qintx = qint->qintx;
+ uint32_t q, pool, aura;
+ uint64_t intr;
+
+ intr = plt_read64(lf->base + NPA_LF_QINTX_INT(qintx));
+ if (intr == 0)
+ return;
+
+ plt_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
+
+ /* Handle pool queue interrupts */
+ for (q = 0; q < lf->nr_pools; q++) {
+ /* Skip disabled POOL */
+ if (plt_bitmap_get(lf->npa_bmp, q))
+ continue;
+
+ pool = q % lf->qints;
+ irq = npa_pool_irq_get_and_clear(lf, pool);
+
+ if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
+ plt_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
+
+ if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
+ plt_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
+
+ if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
+ plt_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
+ }
+
+ /* Handle aura queue interrupts */
+ for (q = 0; q < lf->nr_pools; q++) {
+ /* Skip disabled AURA */
+ if (plt_bitmap_get(lf->npa_bmp, q))
+ continue;
+
+ aura = q % lf->qints;
+ irq = npa_aura_irq_get_and_clear(lf, aura);
+
+ if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
+ plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
+
+ if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
+ plt_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
+
+ if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
+ plt_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
+
+ if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
+ plt_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
+ }
+
+ /* Clear interrupt */
+ plt_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
+}
+
+static int
+npa_register_queue_irqs(struct npa_lf *lf)
+{
+ struct plt_intr_handle *handle = lf->intr_handle;
+ int vec, q, qs, rc = 0;
+
+ /* Figure out max qintx required */
+ qs = PLT_MIN(lf->qints, lf->nr_pools);
+
+ for (q = 0; q < qs; q++) {
+ vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+
+ /* Clear interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
+
+ struct npa_qint *qintmem = lf->npa_qint_mem;
+
+ qintmem += q;
+
+ qintmem->lf = lf;
+ qintmem->qintx = q;
+
+ /* Sync qints_mem update */
+ plt_wmb();
+
+ /* Register queue irq vector */
+ rc = dev_irq_register(handle, npa_q_irq, qintmem, vec);
+ if (rc)
+ break;
+
+ plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+ plt_write64(0, lf->base + NPA_LF_QINTX_INT(q));
+ /* Enable QINT interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
+ }
+
+ return rc;
+}
+
+static void
+npa_unregister_queue_irqs(struct npa_lf *lf)
+{
+ struct plt_intr_handle *handle = lf->intr_handle;
+ int vec, q, qs;
+
+ /* Figure out max qintx required */
+ qs = PLT_MIN(lf->qints, lf->nr_pools);
+
+ for (q = 0; q < qs; q++) {
+ vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ plt_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
+ plt_write64(0, lf->base + NPA_LF_QINTX_INT(q));
+
+ /* Clear interrupt */
+ plt_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
+
+ struct npa_qint *qintmem = lf->npa_qint_mem;
+
+ qintmem += q;
+
+ /* Unregister queue irq vector */
+ dev_irq_unregister(handle, npa_q_irq, qintmem, vec);
+
+ qintmem->lf = NULL;
+ qintmem->qintx = 0;
+ }
+}
+
+int
+npa_register_irqs(struct npa_lf *lf)
+{
+ int rc;
+
+ if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
+ plt_err("Invalid NPALF MSIX vector offset: 0x%x",
+ lf->npa_msixoff);
+ return NPA_ERR_PARAM;
+ }
+
+ /* Register lf err interrupt */
+ rc = npa_register_err_irq(lf);
+ /* Register RAS interrupt */
+ rc |= npa_register_ras_irq(lf);
+ /* Register queue interrupts */
+ rc |= npa_register_queue_irqs(lf);
+
+ return rc;
+}
+
+void
+npa_unregister_irqs(struct npa_lf *lf)
+{
+ npa_unregister_err_irq(lf);
+ npa_unregister_ras_irq(lf);
+ npa_unregister_queue_irqs(lf);
+}
diff --git a/drivers/common/cnxk/roc_npa_priv.h b/drivers/common/cnxk/roc_npa_priv.h
index a2173c4..c87de24 100644
--- a/drivers/common/cnxk/roc_npa_priv.h
+++ b/drivers/common/cnxk/roc_npa_priv.h
@@ -56,4 +56,8 @@ roc_npa_to_npa_priv(struct roc_npa *roc_npa)
int npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev);
int npa_lf_fini(void);
+/* IRQ */
+int npa_register_irqs(struct npa_lf *lf);
+void npa_unregister_irqs(struct npa_lf *lf);
+
#endif /* _ROC_NPA_PRIV_H_ */
--
2.8.4
* [dpdk-dev] [PATCH 11/52] common/cnxk: add npa debug support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (9 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 10/52] common/cnxk: add npa irq support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 12/52] common/cnxk: add npa pool HW ops Nithin Dabilpuram
` (44 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add NPA debug APIs.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_npa.h | 4 +
drivers/common/cnxk/roc_npa_debug.c | 184 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npa_irq.c | 1 +
drivers/common/cnxk/version.map | 2 +
5 files changed, 192 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npa_debug.c
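[Editor's note: all of the context dumping below funnels through one variadic macro, `npa_dump(fmt, ...)`, which appends a newline and writes to stderr. A sketch of the same macro shape, redirected into a buffer so its output can be checked (the buffer variant is for illustration only; the patch writes to stderr):]

```c
#include <stdio.h>
#include <string.h>

static char dump_buf[256];

/* Same shape as the patch's npa_dump(), but captures into a buffer
 * instead of stderr.  ##__VA_ARGS__ is the GNU extension the driver
 * uses so a bare format string (no arguments) also compiles. */
#define npa_dump(fmt, ...) \
	snprintf(dump_buf, sizeof(dump_buf), fmt "\n", ##__VA_ARGS__)
```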
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 60af484..884793f 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -17,6 +17,7 @@ sources = files('roc_dev.c',
'roc_mbox.c',
'roc_model.c',
'roc_npa.c',
+ 'roc_npa_debug.c',
'roc_npa_irq.c',
'roc_platform.c',
'roc_utils.c')
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index b9cf847..9b7a43a 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -17,4 +17,8 @@ struct roc_npa {
int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa);
int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa);
+/* Debug */
+int __roc_api roc_npa_ctx_dump(void);
+int __roc_api roc_npa_dump(void);
+
#endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/roc_npa_debug.c b/drivers/common/cnxk/roc_npa_debug.c
new file mode 100644
index 0000000..6477393
--- /dev/null
+++ b/drivers/common/cnxk/roc_npa_debug.c
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+
+static inline void
+npa_pool_dump(__io struct npa_pool_s *pool)
+{
+ npa_dump("W0: Stack base\t\t0x%" PRIx64 "", pool->stack_base);
+ npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d",
+ pool->ena, pool->nat_align, pool->stack_caching);
+ npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d",
+ pool->stack_way_mask, pool->buf_offset);
+ npa_dump("W1: buf_size \t\t%d", pool->buf_size);
+
+ npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d",
+ pool->stack_max_pages, pool->stack_pages);
+
+ npa_dump("W3: op_pc \t\t0x%" PRIx64 "", (uint64_t)pool->op_pc);
+
+ npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d",
+ pool->stack_offset, pool->shift, pool->avg_level);
+ npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d",
+ pool->avg_con, pool->fc_ena, pool->fc_stype);
+ npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d",
+ pool->fc_hyst_bits, pool->fc_up_crossing);
+ npa_dump("W4: update_time\t\t%d\n", pool->update_time);
+
+ npa_dump("W5: fc_addr\t\t0x%" PRIx64 "\n", pool->fc_addr);
+
+ npa_dump("W6: ptr_start\t\t0x%" PRIx64 "\n", pool->ptr_start);
+
+ npa_dump("W7: ptr_end\t\t0x%" PRIx64 "\n", pool->ptr_end);
+ npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d", pool->err_int,
+ pool->err_int_ena);
+ npa_dump("W8: thresh_int\t\t%d", pool->thresh_int);
+
+ npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d",
+ pool->thresh_int_ena, pool->thresh_up);
+ npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d",
+ pool->thresh_qint_idx, pool->err_qint_idx);
+}
+
+static inline void
+npa_aura_dump(__io struct npa_aura_s *aura)
+{
+ npa_dump("W0: Pool addr\t\t0x%" PRIx64 "\n", aura->pool_addr);
+
+ npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d",
+ aura->ena, aura->pool_caching, aura->pool_way_mask);
+ npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d", aura->avg_con,
+ aura->pool_drop_ena);
+ npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena);
+ npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d",
+ aura->bp_ena, aura->aura_drop, aura->shift);
+ npa_dump("W1: avg_level\t\t%d\n", aura->avg_level);
+
+ npa_dump("W2: count\t\t%" PRIx64 "\nW2: nix0_bpid\t\t%d",
+ (uint64_t)aura->count, aura->nix0_bpid);
+ npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid);
+
+ npa_dump("W3: limit\t\t%" PRIx64 "\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n",
+ (uint64_t)aura->limit, aura->bp, aura->fc_ena);
+ npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d",
+ aura->fc_up_crossing, aura->fc_stype);
+
+ npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits);
+
+ npa_dump("W4: fc_addr\t\t0x%" PRIx64 "\n", aura->fc_addr);
+
+ npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d", aura->pool_drop,
+ aura->update_time);
+ npa_dump("W5: err_int\t\t%d", aura->err_int);
+ npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d",
+ aura->err_int_ena, aura->thresh_int);
+ npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena);
+
+ npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d",
+ aura->thresh_up, aura->thresh_qint_idx);
+ npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx);
+
+ npa_dump("W6: thresh\t\t%" PRIx64 "\n", (uint64_t)aura->thresh);
+}
+
+int
+roc_npa_ctx_dump(void)
+{
+ struct npa_aq_enq_req *aq;
+ struct npa_aq_enq_rsp *rsp;
+ struct npa_lf *lf;
+ uint32_t q;
+ int rc = 0;
+
+ lf = idev_npa_obj_get();
+ if (lf == NULL)
+ return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+ for (q = 0; q < lf->nr_pools; q++) {
+ /* Skip disabled POOL */
+ if (plt_bitmap_get(lf->npa_bmp, q))
+ continue;
+
+ aq = mbox_alloc_msg_npa_aq_enq(lf->mbox);
+ if (aq == NULL)
+ return -ENOSPC;
+ aq->aura_id = q;
+ aq->ctype = NPA_AQ_CTYPE_POOL;
+ aq->op = NPA_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(lf->mbox, (void *)&rsp);
+ if (rc) {
+ plt_err("Failed to get pool(%d) context", q);
+ return rc;
+ }
+ npa_dump("============== pool=%d ===============\n", q);
+ npa_pool_dump(&rsp->pool);
+ }
+
+ for (q = 0; q < lf->nr_pools; q++) {
+ /* Skip disabled AURA */
+ if (plt_bitmap_get(lf->npa_bmp, q))
+ continue;
+
+ aq = mbox_alloc_msg_npa_aq_enq(lf->mbox);
+ if (aq == NULL)
+ return -ENOSPC;
+ aq->aura_id = q;
+ aq->ctype = NPA_AQ_CTYPE_AURA;
+ aq->op = NPA_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(lf->mbox, (void *)&rsp);
+ if (rc) {
+ plt_err("Failed to get aura(%d) context", q);
+ return rc;
+ }
+ npa_dump("============== aura=%d ===============\n", q);
+ npa_aura_dump(&rsp->aura);
+ }
+
+ return rc;
+}
+
+int
+roc_npa_dump(void)
+{
+ struct npa_lf *lf;
+ int aura_cnt = 0;
+ uint32_t i;
+
+ lf = idev_npa_obj_get();
+ if (lf == NULL)
+ return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+ for (i = 0; i < lf->nr_pools; i++) {
+ if (plt_bitmap_get(lf->npa_bmp, i))
+ continue;
+ aura_cnt++;
+ }
+
+ npa_dump("npa@%p", lf);
+ npa_dump(" pf = %d", dev_get_pf(lf->pf_func));
+ npa_dump(" vf = %d", dev_get_vf(lf->pf_func));
+ npa_dump(" aura_cnt = %d", aura_cnt);
+ npa_dump(" \tpci_dev = %p", lf->pci_dev);
+ npa_dump(" \tnpa_bmp = %p", lf->npa_bmp);
+ npa_dump(" \tnpa_bmp_mem = %p", lf->npa_bmp_mem);
+ npa_dump(" \tnpa_qint_mem = %p", lf->npa_qint_mem);
+ npa_dump(" \tintr_handle = %p", lf->intr_handle);
+ npa_dump(" \tmbox = %p", lf->mbox);
+ npa_dump(" \tbase = 0x%" PRIx64 "", lf->base);
+ npa_dump(" \tstack_pg_ptrs = %d", lf->stack_pg_ptrs);
+ npa_dump(" \tstack_pg_bytes = %d", lf->stack_pg_bytes);
+ npa_dump(" \tnpa_msixoff = 0x%x", lf->npa_msixoff);
+ npa_dump(" \tnr_pools = %d", lf->nr_pools);
+ npa_dump(" \tpf_func = 0x%x", lf->pf_func);
+ npa_dump(" \taura_sz = %d", lf->aura_sz);
+ npa_dump(" \tqints = %d", lf->qints);
+
+ return 0;
+}
diff --git a/drivers/common/cnxk/roc_npa_irq.c b/drivers/common/cnxk/roc_npa_irq.c
index 99b57b0..9b94c1c 100644
--- a/drivers/common/cnxk/roc_npa_irq.c
+++ b/drivers/common/cnxk/roc_npa_irq.c
@@ -192,6 +192,7 @@ npa_q_irq(void *param)
/* Clear interrupt */
plt_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
+ roc_npa_ctx_dump();
}
static int
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index cf34580..dd9f224 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -12,8 +12,10 @@ INTERNAL {
roc_idev_npa_maxpools_set;
roc_idev_num_lmtlines_get;
roc_model;
+ roc_npa_ctx_dump;
roc_npa_dev_fini;
roc_npa_dev_init;
+ roc_npa_dump;
local: *;
};
--
2.8.4
* [dpdk-dev] [PATCH 12/52] common/cnxk: add npa pool HW ops
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (10 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 11/52] common/cnxk: add npa debug support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 13/52] common/cnxk: add npa bulk alloc/free support Nithin Dabilpuram
` (43 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add APIs for creating, destroying, and modifying NPA pools.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/roc_npa.c | 421 ++++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npa.h | 146 ++++++++++++++
drivers/common/cnxk/version.map | 5 +
3 files changed, 572 insertions(+)
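[Editor's note: `npa_aura_pool_pair_alloc()` below makes three small calculations worth calling out: it finds a free aura ID by scanning a bitmap slab with count-trailing-zeros (a set bit means "free", clearing it reserves the ID), sizes the pool stack as a ceiling division of pointers over pointers-per-page, and derives the aura `shift` from `__builtin_clz(block_count) - 8`. A host-side sketch of all three, with `plt_bitmap` reduced to a single 64-bit slab (names are illustrative; `stack_pg_ptrs` comes from the AF in the real driver):]

```c
#include <assert.h>
#include <stdint.h>

/* Free-ID scan: set bit = free; ctz finds the lowest free ID and
 * clearing it marks the ID reserved (as plt_bitmap_clear() does). */
static int alloc_id(uint64_t *slab)
{
	int id;

	if (*slab == 0)
		return -1; /* NPA_ERR_AURA_ID_ALLOC in the driver */
	id = __builtin_ctzll(*slab);
	*slab &= ~(1ull << id);
	return id;
}

/* Stack pages needed = ceil(block_count / pointers-per-page). */
static uint32_t stack_pages(uint32_t block_count, uint32_t stack_pg_ptrs)
{
	return (block_count + stack_pg_ptrs - 1) / stack_pg_ptrs;
}

/* Aura/pool shift for average-depth accounting; callers must
 * guarantee block_count != 0 since __builtin_clz(0) is undefined. */
static int aura_shift(uint32_t block_count)
{
	return __builtin_clz(block_count) - 8;
}
```

[For example, 1000 buffers at 128 stack pointers per page need 8 stack pages, and a 1024-entry pool gets shift 13 (clz(1024) on a 32-bit word is 21).]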
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index 003bd8c..f747a9b 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -5,6 +5,427 @@
#include "roc_api.h"
#include "roc_priv.h"
+void
+roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
+ uint64_t end_iova)
+{
+ const uint64_t start = roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_POOL_OP_PTR_START0;
+ const uint64_t end = roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_POOL_OP_PTR_END0;
+ uint64_t reg = roc_npa_aura_handle_to_aura(aura_handle);
+ struct npa_lf *lf = idev_npa_obj_get();
+ struct npa_aura_lim *lim;
+
+ PLT_ASSERT(lf);
+ lim = lf->aura_lim;
+
+ lim[reg].ptr_start = PLT_MIN(lim[reg].ptr_start, start_iova);
+ lim[reg].ptr_end = PLT_MAX(lim[reg].ptr_end, end_iova);
+
+ roc_store_pair(lim[reg].ptr_start, reg, start);
+ roc_store_pair(lim[reg].ptr_end, reg, end);
+}
+
+static int
+npa_aura_pool_init(struct mbox *mbox, uint32_t aura_id, struct npa_aura_s *aura,
+ struct npa_pool_s *pool)
+{
+ struct npa_aq_enq_req *aura_init_req, *pool_init_req;
+ struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
+ struct mbox_dev *mdev = &mbox->dev[0];
+ int rc = -ENOSPC, off;
+
+ aura_init_req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (aura_init_req == NULL)
+ return rc;
+ aura_init_req->aura_id = aura_id;
+ aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_init_req->op = NPA_AQ_INSTOP_INIT;
+ mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
+
+ pool_init_req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (pool_init_req == NULL)
+ return rc;
+ pool_init_req->aura_id = aura_id;
+ pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_init_req->op = NPA_AQ_INSTOP_INIT;
+ mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool));
+
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ off = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+ off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff;
+ pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+ if (aura_init_rsp->hdr.rc == 0 && pool_init_rsp->hdr.rc == 0)
+ return 0;
+ else
+ return NPA_ERR_AURA_POOL_INIT;
+}
+
+static int
+npa_aura_pool_fini(struct mbox *mbox, uint32_t aura_id, uint64_t aura_handle)
+{
+ struct npa_aq_enq_req *aura_req, *pool_req;
+ struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
+ struct mbox_dev *mdev = &mbox->dev[0];
+ struct ndc_sync_op *ndc_req;
+ int rc = -ENOSPC, off;
+ uint64_t ptr;
+
+ /* Procedure for disabling an aura/pool */
+ plt_delay_us(10);
+
+ /* Clear all the pointers from the aura */
+ do {
+ ptr = roc_npa_aura_op_alloc(aura_handle, 0);
+ } while (ptr);
+
+ pool_req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (pool_req == NULL)
+ return rc;
+ pool_req->aura_id = aura_id;
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_WRITE;
+ pool_req->pool.ena = 0;
+ pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
+
+ aura_req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (aura_req == NULL)
+ return rc;
+ aura_req->aura_id = aura_id;
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_WRITE;
+ aura_req->aura.ena = 0;
+ aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
+
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ off = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+ off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
+ aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+ if (aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
+ return NPA_ERR_AURA_POOL_FINI;
+
+ /* Sync NDC-NPA for LF */
+ ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+ if (ndc_req == NULL)
+ return -ENOSPC;
+ ndc_req->npa_lf_sync = 1;
+ rc = mbox_process(mbox);
+ if (rc) {
+ plt_err("Error on NDC-NPA LF sync, rc %d", rc);
+ return NPA_ERR_AURA_POOL_FINI;
+ }
+ return 0;
+}
+
+static inline char *
+npa_stack_memzone_name(struct npa_lf *lf, int pool_id, char *name)
+{
+ snprintf(name, PLT_MEMZONE_NAMESIZE, "roc_npa_stack_%x_%d", lf->pf_func,
+ pool_id);
+ return name;
+}
+
+static inline const struct plt_memzone *
+npa_stack_dma_alloc(struct npa_lf *lf, char *name, int pool_id, size_t size)
+{
+ const char *mz_name = npa_stack_memzone_name(lf, pool_id, name);
+
+ return plt_memzone_reserve_cache_align(mz_name, size);
+}
+
+static inline int
+npa_stack_dma_free(struct npa_lf *lf, char *name, int pool_id)
+{
+ const struct plt_memzone *mz;
+
+ mz = plt_memzone_lookup(npa_stack_memzone_name(lf, pool_id, name));
+ if (mz == NULL)
+ return NPA_ERR_PARAM;
+
+ return plt_memzone_free(mz);
+}
+
+static inline int
+bitmap_ctzll(uint64_t slab)
+{
+ if (slab == 0)
+ return 0;
+
+ return __builtin_ctzll(slab);
+}
+
+static int
+npa_aura_pool_pair_alloc(struct npa_lf *lf, const uint32_t block_size,
+ const uint32_t block_count, struct npa_aura_s *aura,
+ struct npa_pool_s *pool, uint64_t *aura_handle)
+{
+ int rc, aura_id, pool_id, stack_size, alloc_size;
+ char name[PLT_MEMZONE_NAMESIZE];
+ const struct plt_memzone *mz;
+ uint64_t slab;
+ uint32_t pos;
+
+ /* Sanity check */
+ if (!lf || !block_size || !block_count || !pool || !aura ||
+ !aura_handle)
+ return NPA_ERR_PARAM;
+
+ /* Block size should be cache line aligned and in range of 128B-128KB */
+ if (block_size % ROC_ALIGN || block_size < 128 ||
+ block_size > 128 * 1024)
+ return NPA_ERR_INVALID_BLOCK_SZ;
+
+ pos = 0;
+ slab = 0;
+ /* Scan from the beginning */
+ plt_bitmap_scan_init(lf->npa_bmp);
+ /* Scan bitmap to get the free pool */
+ rc = plt_bitmap_scan(lf->npa_bmp, &pos, &slab);
+ /* Empty bitmap */
+ if (rc == 0) {
+ plt_err("Mempools exhausted");
+ return NPA_ERR_AURA_ID_ALLOC;
+ }
+
+ /* Get aura_id from resource bitmap */
+ aura_id = pos + bitmap_ctzll(slab);
+ /* Mark pool as reserved */
+ plt_bitmap_clear(lf->npa_bmp, aura_id);
+
+ /* Configuration based on each aura has separate pool(aura-pool pair) */
+ pool_id = aura_id;
+ rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools ||
+ aura_id >= (int)BIT_ULL(6 + lf->aura_sz)) ?
+ NPA_ERR_AURA_ID_ALLOC :
+ 0;
+ if (rc)
+ goto exit;
+
+ /* Allocate stack memory */
+ stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs;
+ alloc_size = stack_size * lf->stack_pg_bytes;
+
+ mz = npa_stack_dma_alloc(lf, name, pool_id, alloc_size);
+ if (mz == NULL) {
+ rc = NPA_ERR_ALLOC;
+ goto aura_res_put;
+ }
+
+ /* Update aura fields */
+ aura->pool_addr = pool_id; /* AF will translate to associated poolctx */
+ aura->ena = 1;
+ aura->shift = __builtin_clz(block_count) - 8;
+ aura->limit = block_count;
+ aura->pool_caching = 1;
+ aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
+ aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
+ aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
+ aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
+ /* Many to one reduction */
+ aura->err_qint_idx = aura_id % lf->qints;
+
+ /* Update pool fields */
+ pool->stack_base = mz->iova;
+ pool->ena = 1;
+ pool->buf_size = block_size / ROC_ALIGN;
+ pool->stack_max_pages = stack_size;
+ pool->shift = __builtin_clz(block_count) - 8;
+ pool->ptr_start = 0;
+ pool->ptr_end = ~0;
+ pool->stack_caching = 1;
+ pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS);
+ pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE);
+ pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR);
+
+ /* Many to one reduction */
+ pool->err_qint_idx = pool_id % lf->qints;
+
+ /* Issue AURA_INIT and POOL_INIT op */
+ rc = npa_aura_pool_init(lf->mbox, aura_id, aura, pool);
+ if (rc)
+ goto stack_mem_free;
+
+ *aura_handle = roc_npa_aura_handle_gen(aura_id, lf->base);
+ /* Update aura count */
+ roc_npa_aura_op_cnt_set(*aura_handle, 0, block_count);
+ /* Read it back to make sure aura count is updated */
+ roc_npa_aura_op_cnt_get(*aura_handle);
+
+ return 0;
+
+stack_mem_free:
+ plt_memzone_free(mz);
+aura_res_put:
+ plt_bitmap_set(lf->npa_bmp, aura_id);
+exit:
+ return rc;
+}
+
+int
+roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
+ uint32_t block_count, struct npa_aura_s *aura,
+ struct npa_pool_s *pool)
+{
+ struct npa_aura_s defaura;
+ struct npa_pool_s defpool;
+ struct idev_cfg *idev;
+ struct npa_lf *lf;
+ int rc;
+
+ lf = idev_npa_obj_get();
+ if (lf == NULL) {
+ rc = NPA_ERR_DEVICE_NOT_BOUNDED;
+ goto error;
+ }
+
+ idev = idev_get_cfg();
+ if (idev == NULL) {
+ rc = NPA_ERR_ALLOC;
+ goto error;
+ }
+
+ if (aura == NULL) {
+ memset(&defaura, 0, sizeof(struct npa_aura_s));
+ aura = &defaura;
+ }
+ if (pool == NULL) {
+ memset(&defpool, 0, sizeof(struct npa_pool_s));
+ defpool.nat_align = 1;
+ defpool.buf_offset = 1;
+ pool = &defpool;
+ }
+
+ rc = npa_aura_pool_pair_alloc(lf, block_size, block_count, aura, pool,
+ aura_handle);
+ if (rc) {
+ plt_err("Failed to alloc pool or aura rc=%d", rc);
+ goto error;
+ }
+
+ plt_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%" PRIx64,
+ lf, block_size, block_count, *aura_handle);
+
+ /* Just hold the reference of the object */
+ __atomic_fetch_add(&idev->npa_refcnt, 1, __ATOMIC_SEQ_CST);
+error:
+ return rc;
+}
+
+int
+roc_npa_aura_limit_modify(uint64_t aura_handle, uint16_t aura_limit)
+{
+ struct npa_aq_enq_req *aura_req;
+ struct npa_lf *lf;
+ int rc;
+
+ lf = idev_npa_obj_get();
+ if (lf == NULL)
+ return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+ aura_req = mbox_alloc_msg_npa_aq_enq(lf->mbox);
+ if (aura_req == NULL)
+ return -ENOMEM;
+ aura_req->aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+ aura_req->ctype = NPA_AQ_CTYPE_AURA;
+ aura_req->op = NPA_AQ_INSTOP_WRITE;
+
+ aura_req->aura.limit = aura_limit;
+ aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
+ rc = mbox_process(lf->mbox);
+
+ return rc;
+}
+
+static int
+npa_aura_pool_pair_free(struct npa_lf *lf, uint64_t aura_handle)
+{
+ char name[PLT_MEMZONE_NAMESIZE];
+ int aura_id, pool_id, rc;
+
+ if (!lf || !aura_handle)
+ return NPA_ERR_PARAM;
+
+ aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+ pool_id = aura_id;
+ rc = npa_aura_pool_fini(lf->mbox, aura_id, aura_handle);
+ rc |= npa_stack_dma_free(lf, name, pool_id);
+
+ plt_bitmap_set(lf->npa_bmp, aura_id);
+
+ return rc;
+}
+
+int
+roc_npa_pool_destroy(uint64_t aura_handle)
+{
+ struct npa_lf *lf = idev_npa_obj_get();
+ int rc = 0;
+
+ plt_npa_dbg("lf=%p aura_handle=0x%" PRIx64, lf, aura_handle);
+ rc = npa_aura_pool_pair_free(lf, aura_handle);
+ if (rc)
+ plt_err("Failed to destroy pool or aura rc=%d", rc);
+
+ /* Release the reference of npa */
+ rc |= npa_lf_fini();
+ return rc;
+}
+
+int
+roc_npa_pool_range_update_check(uint64_t aura_handle)
+{
+ uint64_t aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+ struct npa_lf *lf;
+ struct npa_aura_lim *lim;
+ __io struct npa_pool_s *pool;
+ struct npa_aq_enq_req *req;
+ struct npa_aq_enq_rsp *rsp;
+ int rc;
+
+ lf = idev_npa_obj_get();
+ if (lf == NULL)
+ return NPA_ERR_PARAM;
+
+ lim = lf->aura_lim;
+
+ req = mbox_alloc_msg_npa_aq_enq(lf->mbox);
+ if (req == NULL)
+ return -ENOSPC;
+
+ req->aura_id = aura_id;
+ req->ctype = NPA_AQ_CTYPE_POOL;
+ req->op = NPA_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(lf->mbox, (void *)&rsp);
+ if (rc) {
+ plt_err("Failed to get pool(0x%" PRIx64 ") context", aura_id);
+ return rc;
+ }
+
+ pool = &rsp->pool;
+ if (lim[aura_id].ptr_start != pool->ptr_start ||
+ lim[aura_id].ptr_end != pool->ptr_end) {
+ plt_err("Range update failed on pool(0x%" PRIx64 ")", aura_id);
+ return NPA_ERR_PARAM;
+ }
+
+ return 0;
+}
+
static inline int
npa_attach(struct mbox *mbox)
{
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index 9b7a43a..0dffede 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -6,6 +6,140 @@
#define _ROC_NPA_H_
#define ROC_AURA_ID_MASK (BIT_ULL(16) - 1)
+#define ROC_AURA_OP_LIMIT_MASK (BIT_ULL(36) - 1)
+
+/*
+ * Generate 64bit handle to have optimized alloc and free aura operation.
+ * 0 - ROC_AURA_ID_MASK for storing the aura_id.
+ * [ROC_AURA_ID_MASK+1, (2^64 - 1)] for storing the lf base address.
+ * This scheme is valid when OS can give ROC_AURA_ID_MASK
+ * aligned address for lf base address.
+ */
+static inline uint64_t
+roc_npa_aura_handle_gen(uint32_t aura_id, uintptr_t addr)
+{
+ uint64_t val;
+
+ val = aura_id & ROC_AURA_ID_MASK;
+ return (uint64_t)addr | val;
+}
+
+static inline uint64_t
+roc_npa_aura_handle_to_aura(uint64_t aura_handle)
+{
+ return aura_handle & ROC_AURA_ID_MASK;
+}
+
+static inline uintptr_t
+roc_npa_aura_handle_to_base(uint64_t aura_handle)
+{
+ return (uintptr_t)(aura_handle & ~ROC_AURA_ID_MASK);
+}
+
+static inline uint64_t
+roc_npa_aura_op_alloc(uint64_t aura_handle, const int drop)
+{
+ uint64_t wdata = roc_npa_aura_handle_to_aura(aura_handle);
+ int64_t *addr;
+
+ if (drop)
+ wdata |= BIT_ULL(63); /* DROP */
+
+ addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_OP_ALLOCX(0));
+ return roc_atomic64_add_nosync(wdata, addr);
+}
+
+static inline void
+roc_npa_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova)
+{
+ uint64_t reg = roc_npa_aura_handle_to_aura(aura_handle);
+ const uint64_t addr =
+ roc_npa_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0;
+ if (fabs)
+ reg |= BIT_ULL(63); /* FABS */
+
+ roc_store_pair(iova, reg, addr);
+}
+
+static inline uint64_t
+roc_npa_aura_op_cnt_get(uint64_t aura_handle)
+{
+ uint64_t wdata;
+ int64_t *addr;
+ uint64_t reg;
+
+ wdata = roc_npa_aura_handle_to_aura(aura_handle) << 44;
+ addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_OP_CNT);
+ reg = roc_atomic64_add_nosync(wdata, addr);
+
+ if (reg & BIT_ULL(42) /* OP_ERR */)
+ return 0;
+ else
+ return reg & 0xFFFFFFFFF;
+}
+
+static inline void
+roc_npa_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count)
+{
+ uint64_t reg = count & (BIT_ULL(36) - 1);
+
+ if (sign)
+ reg |= BIT_ULL(43); /* CNT_ADD */
+
+ reg |= (roc_npa_aura_handle_to_aura(aura_handle) << 44);
+
+ plt_write64(reg, roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_OP_CNT);
+}
+
+static inline uint64_t
+roc_npa_aura_op_limit_get(uint64_t aura_handle)
+{
+ uint64_t wdata;
+ int64_t *addr;
+ uint64_t reg;
+
+ wdata = roc_npa_aura_handle_to_aura(aura_handle) << 44;
+ addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_OP_LIMIT);
+ reg = roc_atomic64_add_nosync(wdata, addr);
+
+ if (reg & BIT_ULL(42) /* OP_ERR */)
+ return 0;
+ else
+ return reg & ROC_AURA_OP_LIMIT_MASK;
+}
+
+static inline void
+roc_npa_aura_op_limit_set(uint64_t aura_handle, uint64_t limit)
+{
+ uint64_t reg = limit & ROC_AURA_OP_LIMIT_MASK;
+
+ reg |= (roc_npa_aura_handle_to_aura(aura_handle) << 44);
+
+ plt_write64(reg, roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_OP_LIMIT);
+}
+
+static inline uint64_t
+roc_npa_aura_op_available(uint64_t aura_handle)
+{
+ uint64_t wdata;
+ uint64_t reg;
+ int64_t *addr;
+
+ wdata = roc_npa_aura_handle_to_aura(aura_handle) << 44;
+ addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_POOL_OP_AVAILABLE);
+ reg = roc_atomic64_add_nosync(wdata, addr);
+
+ if (reg & BIT_ULL(42) /* OP_ERR */)
+ return 0;
+ else
+ return reg & 0xFFFFFFFFF;
+}
struct roc_npa {
struct plt_pci_device *pci_dev;
@@ -17,6 +151,18 @@ struct roc_npa {
int __roc_api roc_npa_dev_init(struct roc_npa *roc_npa);
int __roc_api roc_npa_dev_fini(struct roc_npa *roc_npa);
+/* NPA pool */
+int __roc_api roc_npa_pool_create(uint64_t *aura_handle, uint32_t block_size,
+ uint32_t block_count, struct npa_aura_s *aura,
+ struct npa_pool_s *pool);
+int __roc_api roc_npa_aura_limit_modify(uint64_t aura_handle,
+ uint16_t aura_limit);
+int __roc_api roc_npa_pool_destroy(uint64_t aura_handle);
+int __roc_api roc_npa_pool_range_update_check(uint64_t aura_handle);
+void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
+ uint64_t start_iova,
+ uint64_t end_iova);
+
/* Debug */
int __roc_api roc_npa_ctx_dump(void);
int __roc_api roc_npa_dump(void);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index dd9f224..114034e 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -12,10 +12,15 @@ INTERNAL {
roc_idev_npa_maxpools_set;
roc_idev_num_lmtlines_get;
roc_model;
+ roc_npa_aura_limit_modify;
+ roc_npa_aura_op_range_set;
roc_npa_ctx_dump;
roc_npa_dev_fini;
roc_npa_dev_init;
roc_npa_dump;
+ roc_npa_pool_create;
+ roc_npa_pool_destroy;
+ roc_npa_pool_range_update_check;
local: *;
};
--
2.8.4
* [dpdk-dev] [PATCH 13/52] common/cnxk: add npa bulk alloc/free support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (11 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 12/52] common/cnxk: add npa pool HW ops Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 14/52] common/cnxk: add npa performance counter support Nithin Dabilpuram
` (42 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add APIs to allocate/free pointers in bulk from an NPA pool.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/roc_npa.h | 229 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 229 insertions(+)
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index 0dffede..ed63718 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -8,6 +8,11 @@
#define ROC_AURA_ID_MASK (BIT_ULL(16) - 1)
#define ROC_AURA_OP_LIMIT_MASK (BIT_ULL(36) - 1)
+/* 16 CASP instructions can be outstanding in CN9k, but we use only 15
+ * outstanding CASPs as we run out of registers.
+ */
+#define ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS 30
+
/*
* Generate 64bit handle to have optimized alloc and free aura operation.
* 0 - ROC_AURA_ID_MASK for storing the aura_id.
@@ -141,6 +146,230 @@ roc_npa_aura_op_available(uint64_t aura_handle)
return reg & 0xFFFFFFFFF;
}
+static inline void
+roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf,
+ unsigned int num, const int fabs)
+{
+ unsigned int i;
+
+ for (i = 0; i < num; i++) {
+ const uint64_t inbuf = buf[i];
+
+ roc_npa_aura_op_free(aura_handle, fabs, inbuf);
+ }
+}
+
+static inline unsigned int
+roc_npa_aura_bulk_alloc(uint64_t aura_handle, uint64_t *buf, unsigned int num,
+ const int drop)
+{
+#if defined(__aarch64__)
+ uint64_t wdata = roc_npa_aura_handle_to_aura(aura_handle);
+ unsigned int i, count;
+ uint64_t addr;
+
+ if (drop)
+ wdata |= BIT_ULL(63); /* DROP */
+
+ addr = roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_OP_ALLOCX(0);
+
+ switch (num) {
+ case 30:
+ asm volatile(
+ ".cpu generic+lse\n"
+ "mov v18.d[0], %[dst]\n"
+ "mov v18.d[1], %[loc]\n"
+ "mov v19.d[0], %[wdata]\n"
+ "mov v19.d[1], x30\n"
+ "mov v20.d[0], x24\n"
+ "mov v20.d[1], x25\n"
+ "mov v21.d[0], x26\n"
+ "mov v21.d[1], x27\n"
+ "mov v22.d[0], x28\n"
+ "mov v22.d[1], x29\n"
+ "mov x28, v19.d[0]\n"
+ "mov x29, v19.d[0]\n"
+ "mov x30, v18.d[1]\n"
+ "casp x0, x1, x28, x29, [x30]\n"
+ "casp x2, x3, x28, x29, [x30]\n"
+ "casp x4, x5, x28, x29, [x30]\n"
+ "casp x6, x7, x28, x29, [x30]\n"
+ "casp x8, x9, x28, x29, [x30]\n"
+ "casp x10, x11, x28, x29, [x30]\n"
+ "casp x12, x13, x28, x29, [x30]\n"
+ "casp x14, x15, x28, x29, [x30]\n"
+ "casp x16, x17, x28, x29, [x30]\n"
+ "casp x18, x19, x28, x29, [x30]\n"
+ "casp x20, x21, x28, x29, [x30]\n"
+ "casp x22, x23, x28, x29, [x30]\n"
+ "casp x24, x25, x28, x29, [x30]\n"
+ "casp x26, x27, x28, x29, [x30]\n"
+ "casp x28, x29, x28, x29, [x30]\n"
+ "mov x30, v18.d[0]\n"
+ "stp x0, x1, [x30]\n"
+ "stp x2, x3, [x30, #16]\n"
+ "stp x4, x5, [x30, #32]\n"
+ "stp x6, x7, [x30, #48]\n"
+ "stp x8, x9, [x30, #64]\n"
+ "stp x10, x11, [x30, #80]\n"
+ "stp x12, x13, [x30, #96]\n"
+ "stp x14, x15, [x30, #112]\n"
+ "stp x16, x17, [x30, #128]\n"
+ "stp x18, x19, [x30, #144]\n"
+ "stp x20, x21, [x30, #160]\n"
+ "stp x22, x23, [x30, #176]\n"
+ "stp x24, x25, [x30, #192]\n"
+ "stp x26, x27, [x30, #208]\n"
+ "stp x28, x29, [x30, #224]\n"
+ "mov %[dst], v18.d[0]\n"
+ "mov %[loc], v18.d[1]\n"
+ "mov %[wdata], v19.d[0]\n"
+ "mov x30, v19.d[1]\n"
+ "mov x24, v20.d[0]\n"
+ "mov x25, v20.d[1]\n"
+ "mov x26, v21.d[0]\n"
+ "mov x27, v21.d[1]\n"
+ "mov x28, v22.d[0]\n"
+ "mov x29, v22.d[1]\n"
+ :
+ : [wdata] "r"(wdata), [loc] "r"(addr), [dst] "r"(buf)
+ : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6",
+ "x7", "x8", "x9", "x10", "x11", "x12", "x13", "x14",
+ "x15", "x16", "x17", "x18", "x19", "x20", "x21",
+ "x22", "x23", "v18", "v19", "v20", "v21", "v22");
+ break;
+ case 16:
+ asm volatile(
+ ".cpu generic+lse\n"
+ "mov x16, %[wdata]\n"
+ "mov x17, %[wdata]\n"
+ "casp x0, x1, x16, x17, [%[loc]]\n"
+ "casp x2, x3, x16, x17, [%[loc]]\n"
+ "casp x4, x5, x16, x17, [%[loc]]\n"
+ "casp x6, x7, x16, x17, [%[loc]]\n"
+ "casp x8, x9, x16, x17, [%[loc]]\n"
+ "casp x10, x11, x16, x17, [%[loc]]\n"
+ "casp x12, x13, x16, x17, [%[loc]]\n"
+ "casp x14, x15, x16, x17, [%[loc]]\n"
+ "stp x0, x1, [%[dst]]\n"
+ "stp x2, x3, [%[dst], #16]\n"
+ "stp x4, x5, [%[dst], #32]\n"
+ "stp x6, x7, [%[dst], #48]\n"
+ "stp x8, x9, [%[dst], #64]\n"
+ "stp x10, x11, [%[dst], #80]\n"
+ "stp x12, x13, [%[dst], #96]\n"
+ "stp x14, x15, [%[dst], #112]\n"
+ :
+ : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr)
+ : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6",
+ "x7", "x8", "x9", "x10", "x11", "x12", "x13", "x14",
+ "x15", "x16", "x17"
+ );
+ break;
+ case 8:
+ asm volatile(
+ ".cpu generic+lse\n"
+ "mov x16, %[wdata]\n"
+ "mov x17, %[wdata]\n"
+ "casp x0, x1, x16, x17, [%[loc]]\n"
+ "casp x2, x3, x16, x17, [%[loc]]\n"
+ "casp x4, x5, x16, x17, [%[loc]]\n"
+ "casp x6, x7, x16, x17, [%[loc]]\n"
+ "stp x0, x1, [%[dst]]\n"
+ "stp x2, x3, [%[dst], #16]\n"
+ "stp x4, x5, [%[dst], #32]\n"
+ "stp x6, x7, [%[dst], #48]\n"
+ :
+ : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr)
+ : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6",
+ "x7", "x16", "x17"
+ );
+ break;
+ case 4:
+ asm volatile(
+ ".cpu generic+lse\n"
+ "mov x16, %[wdata]\n"
+ "mov x17, %[wdata]\n"
+ "casp x0, x1, x16, x17, [%[loc]]\n"
+ "casp x2, x3, x16, x17, [%[loc]]\n"
+ "stp x0, x1, [%[dst]]\n"
+ "stp x2, x3, [%[dst], #16]\n"
+ :
+ : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr)
+ : "memory", "x0", "x1", "x2", "x3", "x16", "x17"
+ );
+ break;
+ case 2:
+ asm volatile(
+ ".cpu generic+lse\n"
+ "mov x16, %[wdata]\n"
+ "mov x17, %[wdata]\n"
+ "casp x0, x1, x16, x17, [%[loc]]\n"
+ "stp x0, x1, [%[dst]]\n"
+ :
+ : [wdata] "r" (wdata), [dst] "r" (buf), [loc] "r" (addr)
+ : "memory", "x0", "x1", "x16", "x17"
+ );
+ break;
+ case 1:
+ buf[0] = roc_npa_aura_op_alloc(aura_handle, drop);
+ return !!buf[0];
+ }
+
+ /* Pack the pointers */
+ for (i = 0, count = 0; i < num; i++)
+ if (buf[i])
+ buf[count++] = buf[i];
+
+ return count;
+#else
+ unsigned int i, count;
+
+ for (i = 0, count = 0; i < num; i++) {
+ buf[count] = roc_npa_aura_op_alloc(aura_handle, drop);
+ if (buf[count])
+ count++;
+ }
+
+ return count;
+#endif
+}
+
+static inline unsigned int
+roc_npa_aura_op_bulk_alloc(uint64_t aura_handle, uint64_t *buf,
+ unsigned int num, const int drop, const int partial)
+{
+ unsigned int chunk, count, num_alloc;
+
+ count = 0;
+ while (num) {
+ chunk = (num >= ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS) ?
+ ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS :
+ plt_align32prevpow2(num);
+
+ num_alloc =
+ roc_npa_aura_bulk_alloc(aura_handle, buf, chunk, drop);
+
+ count += num_alloc;
+ buf += num_alloc;
+ num -= num_alloc;
+
+ if (unlikely(num_alloc != chunk))
+ break;
+ }
+
+ /* If the requested number of pointers was not allocated and if partial
+ * alloc is not desired, then free allocated pointers.
+ */
+ if (unlikely(num != 0 && !partial)) {
+ roc_npa_aura_op_bulk_free(aura_handle, buf - count, count, 1);
+ count = 0;
+ }
+
+ return count;
+}
+
struct roc_npa {
struct plt_pci_device *pci_dev;
--
2.8.4
* [dpdk-dev] [PATCH 14/52] common/cnxk: add npa performance counter support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (12 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 13/52] common/cnxk: add npa bulk alloc/free support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 15/52] common/cnxk: add npa batch alloc/free support Nithin Dabilpuram
` (41 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add APIs to read NPA performance counters.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/roc_npa.c | 50 +++++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npa.h | 37 ++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 1 +
3 files changed, 88 insertions(+)
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index f747a9b..c4bcad7 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -131,6 +131,56 @@ npa_aura_pool_fini(struct mbox *mbox, uint32_t aura_id, uint64_t aura_handle)
return 0;
}
+int
+roc_npa_pool_op_pc_reset(uint64_t aura_handle)
+{
+ struct npa_lf *lf = idev_npa_obj_get();
+ struct npa_aq_enq_req *pool_req;
+ struct npa_aq_enq_rsp *pool_rsp;
+ struct ndc_sync_op *ndc_req;
+ struct mbox_dev *mdev;
+ int rc = -ENOSPC, off;
+ struct mbox *mbox;
+
+ if (lf == NULL)
+ return NPA_ERR_PARAM;
+
+ mbox = lf->mbox;
+ mdev = &mbox->dev[0];
+ plt_npa_dbg("lf=%p aura_handle=0x%" PRIx64, lf, aura_handle);
+
+ pool_req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (pool_req == NULL)
+ return rc;
+ pool_req->aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+ pool_req->ctype = NPA_AQ_CTYPE_POOL;
+ pool_req->op = NPA_AQ_INSTOP_WRITE;
+ pool_req->pool.op_pc = 0;
+ pool_req->pool_mask.op_pc = ~pool_req->pool_mask.op_pc;
+
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ off = mbox->rx_start +
+ PLT_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
+
+ if (pool_rsp->hdr.rc != 0)
+ return NPA_ERR_AURA_POOL_FINI;
+
+ /* Sync NDC-NPA for LF */
+ ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+ if (ndc_req == NULL)
+ return -ENOSPC;
+ ndc_req->npa_lf_sync = 1;
+ rc = mbox_process(mbox);
+ if (rc) {
+ plt_err("Error on NDC-NPA LF sync, rc %d", rc);
+ return NPA_ERR_AURA_POOL_FINI;
+ }
+ return 0;
+}
static inline char *
npa_stack_memzone_name(struct npa_lf *lf, int pool_id, char *name)
{
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index ed63718..a7815e5 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -146,6 +146,40 @@ roc_npa_aura_op_available(uint64_t aura_handle)
return reg & 0xFFFFFFFFF;
}
+static inline uint64_t
+roc_npa_pool_op_performance_counter(uint64_t aura_handle, const int drop)
+{
+ union {
+ uint64_t u;
+ struct npa_aura_op_wdata_s s;
+ } op_wdata;
+ int64_t *addr;
+ uint64_t reg;
+
+ op_wdata.u = 0;
+ op_wdata.s.aura = roc_npa_aura_handle_to_aura(aura_handle);
+ if (drop)
+ op_wdata.s.drop |= BIT_ULL(63); /* DROP */
+
+ addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_POOL_OP_PC);
+
+ reg = roc_atomic64_add_nosync(op_wdata.u, addr);
+ /*
+ * NPA_LF_POOL_OP_PC Read Data
+ *
+ * 63 49 48 48 47 0
+ * -----------------------------
+ * | Reserved | OP_ERR | OP_PC |
+ * -----------------------------
+ */
+
+ if (reg & BIT_ULL(48) /* OP_ERR */)
+ return 0;
+ else
+ return reg & 0xFFFFFFFFFFFF;
+}
+
static inline void
roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf,
unsigned int num, const int fabs)
@@ -396,4 +430,7 @@ void __roc_api roc_npa_aura_op_range_set(uint64_t aura_handle,
int __roc_api roc_npa_ctx_dump(void);
int __roc_api roc_npa_dump(void);
+/* Reset operation performance counter. */
+int __roc_api roc_npa_pool_op_pc_reset(uint64_t aura_handle);
+
#endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 114034e..02dc3df 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -20,6 +20,7 @@ INTERNAL {
roc_npa_dump;
roc_npa_pool_create;
roc_npa_pool_destroy;
+ roc_npa_pool_op_pc_reset;
roc_npa_pool_range_update_check;
local: *;
--
2.8.4
* [dpdk-dev] [PATCH 15/52] common/cnxk: add npa batch alloc/free support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (13 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 14/52] common/cnxk: add npa performance counter support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 16/52] common/cnxk: add npa lf init/fini callback support Nithin Dabilpuram
` (40 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add APIs to perform batch allocations/frees from an NPA pool.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/roc_npa.h | 217 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 217 insertions(+)
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index a7815e5..ab0b135 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -8,6 +8,9 @@
#define ROC_AURA_ID_MASK (BIT_ULL(16) - 1)
#define ROC_AURA_OP_LIMIT_MASK (BIT_ULL(36) - 1)
+#define ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS 512
+#define ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS 15
+
/* 16 CASP instructions can be outstanding in CN9k, but we use only 15
* outstanding CASPs as we run out of registers.
*/
@@ -180,6 +183,114 @@ roc_npa_pool_op_performance_counter(uint64_t aura_handle, const int drop)
return reg & 0xFFFFFFFFFFFF;
}
+static inline int
+roc_npa_aura_batch_alloc_issue(uint64_t aura_handle, uint64_t *buf,
+ unsigned int num, const int dis_wait,
+ const int drop)
+{
+ unsigned int i;
+ int64_t *addr;
+ uint64_t res;
+ union {
+ uint64_t u;
+ struct npa_batch_alloc_compare_s compare_s;
+ } cmp;
+
+ if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS)
+ return -1;
+
+ /* Zero first word of every cache line */
+ for (i = 0; i < num; i += (ROC_ALIGN / sizeof(uint64_t)))
+ buf[i] = 0;
+
+ addr = (int64_t *)(roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_BATCH_ALLOC);
+ cmp.u = 0;
+ cmp.compare_s.aura = roc_npa_aura_handle_to_aura(aura_handle);
+ cmp.compare_s.drop = drop;
+ cmp.compare_s.stype = ALLOC_STYPE_STSTP;
+ cmp.compare_s.dis_wait = dis_wait;
+ cmp.compare_s.count = num;
+
+ res = roc_atomic64_cas(cmp.u, (uint64_t)buf, addr);
+ if (res != ALLOC_RESULT_ACCEPTED && res != ALLOC_RESULT_NOCORE)
+ return -1;
+
+ return 0;
+}
+
+static inline unsigned int
+roc_npa_aura_batch_alloc_count(uint64_t *aligned_buf, unsigned int num)
+{
+ unsigned int count, i;
+
+ if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS)
+ return 0;
+
+ count = 0;
+ /* Check each ROC cache line one by one */
+ for (i = 0; i < num; i += (ROC_ALIGN >> 3)) {
+ struct npa_batch_alloc_status_s *status;
+ int ccode;
+
+ status = (struct npa_batch_alloc_status_s *)&aligned_buf[i];
+
+ /* Status is updated in first 7 bits of each 128 byte cache
+ * line. Wait until the status gets updated.
+ */
+ do {
+ ccode = (volatile int)status->ccode;
+ } while (ccode == ALLOC_CCODE_INVAL);
+
+ count += status->count;
+ }
+
+ return count;
+}
+
+static inline unsigned int
+roc_npa_aura_batch_alloc_extract(uint64_t *buf, uint64_t *aligned_buf,
+ unsigned int num)
+{
+ unsigned int count, i;
+
+ if (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS)
+ return 0;
+
+ count = 0;
+ /* Check each ROC cache line one by one */
+ for (i = 0; i < num; i += (ROC_ALIGN >> 3)) {
+ struct npa_batch_alloc_status_s *status;
+ int line_count, ccode;
+
+ status = (struct npa_batch_alloc_status_s *)&aligned_buf[i];
+
+ /* Status is updated in first 7 bits of each 128 byte cache
+ * line. Wait until the status gets updated.
+ */
+ do {
+ ccode = (volatile int)status->ccode;
+ } while (ccode == ALLOC_CCODE_INVAL);
+
+ line_count = status->count;
+
+ /* Clear the status from the cache line */
+ status->ccode = 0;
+ status->count = 0;
+
+ /* 'Compress' the allocated buffers as there can
+ * be 'holes' at the end of the 128 byte cache
+ * lines.
+ */
+ memmove(&buf[count], &aligned_buf[i],
+ line_count * sizeof(uint64_t));
+
+ count += line_count;
+ }
+
+ return count;
+}
+
static inline void
roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf,
unsigned int num, const int fabs)
@@ -194,6 +305,112 @@ roc_npa_aura_op_bulk_free(uint64_t aura_handle, uint64_t const *buf,
}
static inline unsigned int
+roc_npa_aura_op_batch_alloc(uint64_t aura_handle, uint64_t *buf,
+ uint64_t *aligned_buf, unsigned int num,
+ const int dis_wait, const int drop,
+ const int partial)
+{
+ unsigned int count, chunk, num_alloc;
+
+ /* The buffer should be 128 byte cache line aligned */
+ if (((uint64_t)aligned_buf & (ROC_ALIGN - 1)) != 0)
+ return 0;
+
+ count = 0;
+ while (num) {
+ chunk = (num > ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS) ?
+ ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS :
+ num;
+
+ if (roc_npa_aura_batch_alloc_issue(aura_handle, aligned_buf,
+ chunk, dis_wait, drop))
+ break;
+
+ num_alloc = roc_npa_aura_batch_alloc_extract(buf, aligned_buf,
+ chunk);
+
+ count += num_alloc;
+ buf += num_alloc;
+ num -= num_alloc;
+
+ if (num_alloc != chunk)
+ break;
+ }
+
+ /* If the requested number of pointers was not allocated and if partial
+ * alloc is not desired, then free allocated pointers.
+ */
+ if (unlikely(num != 0 && !partial)) {
+ roc_npa_aura_op_bulk_free(aura_handle, buf - count, count, 1);
+ count = 0;
+ }
+
+ return count;
+}
+
+static inline void
+roc_npa_aura_batch_free(uint64_t aura_handle, uint64_t const *buf,
+ unsigned int num, const int fabs, uint64_t lmt_addr,
+ uint64_t lmt_id)
+{
+ uint64_t addr, tar_addr, free0;
+ volatile uint64_t *lmt_data;
+ unsigned int i;
+
+ if (num > ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS)
+ return;
+
+ lmt_data = (uint64_t *)lmt_addr;
+
+ addr = roc_npa_aura_handle_to_base(aura_handle) +
+ NPA_LF_AURA_BATCH_FREE0;
+
+ /*
+ * NPA_LF_AURA_BATCH_FREE0
+ *
+ * 63 63 62 33 32 32 31 20 19 0
+ * -----------------------------------------
+ * | FABS | Rsvd | COUNT_EOT | Rsvd | AURA |
+ * -----------------------------------------
+ */
+ free0 = roc_npa_aura_handle_to_aura(aura_handle);
+ if (fabs)
+ free0 |= (0x1UL << 63);
+ if (num & 0x1)
+ free0 |= (0x1UL << 32);
+
+ /* tar_addr[4:6] is LMTST size-1 in units of 128b */
+ tar_addr = addr | ((num >> 1) << 4);
+
+ lmt_data[0] = free0;
+ for (i = 0; i < num; i++)
+ lmt_data[i + 1] = buf[i];
+
+ roc_lmt_submit_steorl(lmt_id, tar_addr);
+ plt_io_wmb();
+}
+
+static inline void
+roc_npa_aura_op_batch_free(uint64_t aura_handle, uint64_t const *buf,
+ unsigned int num, const int fabs, uint64_t lmt_addr,
+ uint64_t lmt_id)
+{
+ unsigned int chunk;
+
+ while (num) {
+ chunk = (num >= ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS) ?
+ ROC_CN10K_NPA_BATCH_FREE_MAX_PTRS :
+ num;
+
+ roc_npa_aura_batch_free(aura_handle, buf, chunk, fabs, lmt_addr,
+ lmt_id);
+
+ buf += chunk;
+ num -= chunk;
+ }
+}
+
+static inline unsigned int
roc_npa_aura_bulk_alloc(uint64_t aura_handle, uint64_t *buf, unsigned int num,
const int drop)
{
--
2.8.4
* [dpdk-dev] [PATCH 16/52] common/cnxk: add npa lf init/fini callback support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (14 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 15/52] common/cnxk: add npa batch alloc/free support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 17/52] common/cnxk: add base nix support Nithin Dabilpuram
` (39 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Ashwin Sekhar T K <asekhar@marvell.com>
Add support for NPA LF init/fini callbacks.
Signed-off-by: Ashwin Sekhar T K <asekhar@marvell.com>
---
drivers/common/cnxk/roc_npa.c | 27 +++++++++++++++++++++++++++
drivers/common/cnxk/roc_npa.h | 8 ++++++++
drivers/common/cnxk/version.map | 2 ++
3 files changed, 37 insertions(+)
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index c4bcad7..4c45eef 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -5,6 +5,21 @@
#include "roc_api.h"
#include "roc_priv.h"
+static roc_npa_lf_init_cb_t npa_lf_init_cb;
+static roc_npa_lf_fini_cb_t npa_lf_fini_cb;
+
+void
+roc_npa_lf_init_cb_register(roc_npa_lf_init_cb_t cb)
+{
+ npa_lf_init_cb = cb;
+}
+
+void
+roc_npa_lf_fini_cb_register(roc_npa_lf_fini_cb_t cb)
+{
+ npa_lf_fini_cb = cb;
+}
+
void
roc_npa_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
uint64_t end_iova)
@@ -717,11 +732,20 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
if (rc)
goto npa_fini;
+ if (npa_lf_init_cb) {
+ rc = npa_lf_init_cb();
+ if (rc)
+ goto npa_irq_unregister;
+ }
+
plt_npa_dbg("npa=%p max_pools=%d pf_func=0x%x msix=0x%x", lf,
roc_idev_npa_maxpools_get(), lf->pf_func, npa_msixoff);
return 0;
+npa_irq_unregister:
+ npa_unregister_irqs(idev->npa);
+
npa_fini:
npa_dev_fini(idev->npa);
npa_detach:
@@ -750,6 +774,9 @@ npa_lf_fini(void)
rc |= npa_detach(idev->npa->mbox);
idev_set_defaults(idev);
+ if (npa_lf_fini_cb)
+ npa_lf_fini_cb();
+
return rc;
}
diff --git a/drivers/common/cnxk/roc_npa.h b/drivers/common/cnxk/roc_npa.h
index ab0b135..4288708 100644
--- a/drivers/common/cnxk/roc_npa.h
+++ b/drivers/common/cnxk/roc_npa.h
@@ -16,6 +16,10 @@
*/
#define ROC_CN9K_NPA_BULK_ALLOC_MAX_PTRS 30
+/* Callbacks that are called after NPA lf init/fini respectively */
+typedef int (*roc_npa_lf_init_cb_t)(void);
+typedef void (*roc_npa_lf_fini_cb_t)(void);
+
/*
* Generate 64bit handle to have optimized alloc and free aura operation.
* 0 - ROC_AURA_ID_MASK for storing the aura_id.
@@ -650,4 +654,8 @@ int __roc_api roc_npa_dump(void);
/* Reset operation performance counter. */
int __roc_api roc_npa_pool_op_pc_reset(uint64_t aura_handle);
+/* Callback registration */
+void __roc_api roc_npa_lf_init_cb_register(roc_npa_lf_init_cb_t cb);
+void __roc_api roc_npa_lf_fini_cb_register(roc_npa_lf_fini_cb_t cb);
+
#endif /* _ROC_NPA_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 02dc3df..c5724d6 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -18,6 +18,8 @@ INTERNAL {
roc_npa_dev_fini;
roc_npa_dev_init;
roc_npa_dump;
+ roc_npa_lf_fini_cb_register;
+ roc_npa_lf_init_cb_register;
roc_npa_pool_create;
roc_npa_pool_destroy;
roc_npa_pool_op_pc_reset;
--
2.8.4
* [dpdk-dev] [PATCH 17/52] common/cnxk: add base nix support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (15 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 16/52] common/cnxk: add npa lf init/fini callback support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 18/52] common/cnxk: add nix irq support Nithin Dabilpuram
` (38 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add base NIX support as a ROC (Rest of Chip) API which will
be used by the generic ETHDEV PMD (net/cnxk).
This patch adds support for the device init, fini, and resource
alloc and free APIs, which set up an ETHDEV PCI device of either
a CN9K or CN10K Marvell SoC.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_api.h | 3 +
drivers/common/cnxk/roc_idev.c | 13 ++
drivers/common/cnxk/roc_idev.h | 2 +
drivers/common/cnxk/roc_nix.c | 396 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix.h | 84 ++++++++
drivers/common/cnxk/roc_nix_priv.h | 101 ++++++++++
drivers/common/cnxk/roc_platform.c | 1 +
drivers/common/cnxk/roc_platform.h | 2 +
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/roc_utils.c | 42 ++++
drivers/common/cnxk/version.map | 16 ++
12 files changed, 664 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix.c
create mode 100644 drivers/common/cnxk/roc_nix.h
create mode 100644 drivers/common/cnxk/roc_nix_priv.h
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 884793f..1371450 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -16,6 +16,7 @@ sources = files('roc_dev.c',
'roc_irq.c',
'roc_mbox.c',
'roc_model.c',
+ 'roc_nix.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index f2c5225..b805425 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -82,6 +82,9 @@
/* NPA */
#include "roc_npa.h"
+/* NIX */
+#include "roc_nix.h"
+
/* Utils */
#include "roc_utils.h"
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index dd03b2a..e038956 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -142,3 +142,16 @@ roc_idev_num_lmtlines_get(void)
return num_lmtlines;
}
+
+struct roc_nix *
+roc_idev_npa_nix_get(void)
+{
+ struct npa_lf *npa_lf = idev_npa_obj_get();
+ struct dev *dev;
+
+ if (!npa_lf)
+ return NULL;
+
+ dev = container_of(npa_lf, struct dev, npa);
+ return dev->roc_nix;
+}
diff --git a/drivers/common/cnxk/roc_idev.h b/drivers/common/cnxk/roc_idev.h
index 9715bb6..3bcb78e 100644
--- a/drivers/common/cnxk/roc_idev.h
+++ b/drivers/common/cnxk/roc_idev.h
@@ -12,4 +12,6 @@ void __roc_api roc_idev_npa_maxpools_set(uint32_t max_pools);
uint64_t __roc_api roc_idev_lmt_base_addr_get(void);
uint16_t __roc_api roc_idev_num_lmtlines_get(void);
+struct roc_nix *__roc_api roc_idev_npa_nix_get(void);
+
#endif /* _ROC_IDEV_H_ */
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
new file mode 100644
index 0000000..992fe5d
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix.c
@@ -0,0 +1,396 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+bool
+roc_nix_is_lbk(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ return nix->lbk_link;
+}
+
+int
+roc_nix_get_base_chan(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ return nix->rx_chan_base;
+}
+
+uint16_t
+roc_nix_get_vwqe_interval(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ return nix->vwqe_interval;
+}
+
+bool
+roc_nix_is_sdp(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ return nix->sdp_link;
+}
+
+bool
+roc_nix_is_pf(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ return !dev_is_vf(&nix->dev);
+}
+
+int
+roc_nix_get_pf(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev_get_pf(dev->pf_func);
+}
+
+int
+roc_nix_get_vf(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev_get_vf(dev->pf_func);
+}
+
+bool
+roc_nix_is_vf_or_sdp(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ return (dev_is_vf(&nix->dev) != 0) || roc_nix_is_sdp(roc_nix);
+}
+
+uint16_t
+roc_nix_get_pf_func(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->pf_func;
+}
+
+int
+roc_nix_max_pkt_len(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ if (roc_model_is_cn9k())
+ return NIX_CN9K_MAX_HW_FRS;
+
+ if (nix->lbk_link || roc_nix_is_sdp(roc_nix))
+ return NIX_LBK_MAX_HW_FRS;
+
+ return NIX_RPM_MAX_HW_FRS;
+}
+
+int
+roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq, uint32_t nb_txq,
+ uint64_t rx_cfg)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_lf_alloc_req *req;
+ struct nix_lf_alloc_rsp *rsp;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_nix_lf_alloc(mbox);
+ if (req == NULL)
+ return rc;
+ req->rq_cnt = nb_rxq;
+ req->sq_cnt = nb_txq;
+ req->cq_cnt = nb_rxq;
+ /* XQESZ can be W64 or W16 */
+ req->xqe_sz = NIX_XQESZ_W16;
+ req->rss_sz = nix->reta_sz;
+ req->rss_grps = ROC_NIX_RSS_GRPS;
+ req->npa_func = idev_npa_pffunc_get();
+ req->rx_cfg = rx_cfg;
+
+ if (!roc_nix->rss_tag_as_xor)
+ req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto fail;
+
+ nix->sqb_size = rsp->sqb_size;
+ nix->tx_chan_base = rsp->tx_chan_base;
+ nix->rx_chan_base = rsp->rx_chan_base;
+ if (roc_nix_is_lbk(roc_nix) && roc_nix->enable_loop)
+ nix->tx_chan_base = rsp->rx_chan_base;
+ nix->rx_chan_cnt = rsp->rx_chan_cnt;
+ nix->tx_chan_cnt = rsp->tx_chan_cnt;
+ nix->lso_tsov4_idx = rsp->lso_tsov4_idx;
+ nix->lso_tsov6_idx = rsp->lso_tsov6_idx;
+ nix->lf_tx_stats = rsp->lf_tx_stats;
+ nix->lf_rx_stats = rsp->lf_rx_stats;
+ nix->cints = rsp->cints;
+ roc_nix->cints = rsp->cints;
+ nix->qints = rsp->qints;
+ nix->ptp_en = rsp->hw_rx_tstamp_en;
+ roc_nix->rx_ptp_ena = rsp->hw_rx_tstamp_en;
+ nix->cgx_links = rsp->cgx_links;
+ nix->lbk_links = rsp->lbk_links;
+ nix->sdp_links = rsp->sdp_links;
+ nix->tx_link = rsp->tx_link;
+ nix->nb_rx_queues = nb_rxq;
+ nix->nb_tx_queues = nb_txq;
+ nix->sqs = plt_zmalloc(sizeof(struct roc_nix_sq *) * nb_txq, 0);
+ if (!nix->sqs)
+ return -ENOMEM;
+fail:
+ return rc;
+}
+
+int
+roc_nix_lf_free(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_lf_free_req *req;
+ struct ndc_sync_op *ndc_req;
+ int rc = -ENOSPC;
+
+ plt_free(nix->sqs);
+ nix->sqs = NULL;
+
+ /* Sync NDC-NIX for LF */
+ ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+ if (ndc_req == NULL)
+ return rc;
+ ndc_req->nix_lf_tx_sync = 1;
+ ndc_req->nix_lf_rx_sync = 1;
+ rc = mbox_process(mbox);
+ if (rc)
+ plt_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
+
+ req = mbox_alloc_msg_nix_lf_free(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ /* Let the AF driver free all NPC entries of this NIX LF
+ * that were allocated via the NPC mailbox.
+ */
+ req->flags = 0;
+
+ return mbox_process(mbox);
+}
+
+static inline int
+nix_lf_attach(struct dev *dev)
+{
+ struct mbox *mbox = dev->mbox;
+ struct rsrc_attach_req *req;
+ int rc = -ENOSPC;
+
+ /* Attach NIX(lf) */
+ req = mbox_alloc_msg_attach_resources(mbox);
+ if (req == NULL)
+ return rc;
+ req->modify = true;
+ req->nixlf = true;
+
+ return mbox_process(mbox);
+}
+
+static inline int
+nix_lf_get_msix_offset(struct dev *dev, struct nix *nix)
+{
+ struct msix_offset_rsp *msix_rsp;
+ struct mbox *mbox = dev->mbox;
+ int rc;
+
+ /* Get MSIX vector offsets */
+ mbox_alloc_msg_msix_offset(mbox);
+ rc = mbox_process_msg(mbox, (void *)&msix_rsp);
+ if (rc == 0)
+ nix->msixoff = msix_rsp->nix_msixoff;
+
+ return rc;
+}
+
+static inline int
+nix_lf_detach(struct nix *nix)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct rsrc_detach_req *req;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_detach_resources(mbox);
+ if (req == NULL)
+ return rc;
+ req->partial = true;
+ req->nixlf = true;
+
+ return mbox_process(mbox);
+}
+
+static int
+roc_nix_get_hw_info(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_hw_info *hw_info;
+ int rc;
+
+ mbox_alloc_msg_nix_get_hw_info(mbox);
+ rc = mbox_process_msg(mbox, (void *)&hw_info);
+ if (rc == 0)
+ nix->vwqe_interval = hw_info->vwqe_delay;
+
+ return rc;
+}
+
+static void
+sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix)
+{
+ nix->sdp_link = false;
+ nix->lbk_link = false;
+
+ /* Update SDP/LBK link based on PCI device id */
+ switch (pci_dev->id.device_id) {
+ case PCI_DEVID_CNXK_RVU_SDP_PF:
+ case PCI_DEVID_CNXK_RVU_SDP_VF:
+ nix->sdp_link = true;
+ break;
+ case PCI_DEVID_CNXK_RVU_AF_VF:
+ nix->lbk_link = true;
+ break;
+ default:
+ break;
+ }
+}
+
+static inline uint64_t
+nix_get_blkaddr(struct dev *dev)
+{
+ uint64_t reg;
+
+ /* Read the discovery register to find which NIX block
+ * the LF is attached to.
+ */
+ reg = plt_read64(dev->bar2 +
+ RVU_PF_BLOCK_ADDRX_DISC(RVU_BLOCK_ADDR_NIX0));
+
+ return reg & 0x1FFULL ? RVU_BLOCK_ADDR_NIX0 : RVU_BLOCK_ADDR_NIX1;
+}
+
+int
+roc_nix_dev_init(struct roc_nix *roc_nix)
+{
+ enum roc_nix_rss_reta_sz reta_sz;
+ struct plt_pci_device *pci_dev;
+ uint16_t max_sqb_count;
+ uint64_t blkaddr;
+ struct dev *dev;
+ struct nix *nix;
+ int rc;
+
+ if (roc_nix == NULL || roc_nix->pci_dev == NULL)
+ return NIX_ERR_PARAM;
+
+ reta_sz = roc_nix->reta_sz;
+ if (reta_sz != 0 && reta_sz != 64 && reta_sz != 128 && reta_sz != 256)
+ return NIX_ERR_PARAM;
+
+ if (reta_sz == 0)
+ reta_sz = ROC_NIX_RSS_RETA_SZ_64;
+
+ max_sqb_count = roc_nix->max_sqb_count;
+ max_sqb_count = PLT_MIN(max_sqb_count, NIX_MAX_SQB);
+ max_sqb_count = PLT_MAX(max_sqb_count, NIX_MIN_SQB);
+ roc_nix->max_sqb_count = max_sqb_count;
+
+ PLT_STATIC_ASSERT(sizeof(struct nix) <= ROC_NIX_MEM_SZ);
+ nix = roc_nix_to_nix_priv(roc_nix);
+ pci_dev = roc_nix->pci_dev;
+ dev = &nix->dev;
+
+ if (nix->dev.drv_inited)
+ return 0;
+
+ if (dev->mbox_active)
+ goto skip_dev_init;
+
+ memset(nix, 0, sizeof(*nix));
+ /* Initialize device */
+ rc = dev_init(dev, pci_dev);
+ if (rc) {
+ plt_err("Failed to init roc device");
+ goto fail;
+ }
+
+skip_dev_init:
+ dev->roc_nix = roc_nix;
+
+ nix->lmt_base = dev->lmt_base;
+ /* Expose base LMT line address for
+ * "Per Core LMT line" mode.
+ */
+ roc_nix->lmt_base = dev->lmt_base;
+
+ /* Attach NIX LF */
+ rc = nix_lf_attach(dev);
+ if (rc)
+ goto dev_fini;
+
+ blkaddr = nix_get_blkaddr(dev);
+ nix->is_nix1 = (blkaddr == RVU_BLOCK_ADDR_NIX1);
+
+ /* Calculating base address based on which NIX block LF
+ * is attached to.
+ */
+ nix->base = dev->bar2 + (blkaddr << 20);
+
+ /* Get NIX MSIX offset */
+ rc = nix_lf_get_msix_offset(dev, nix);
+ if (rc)
+ goto lf_detach;
+
+ /* Update nix context */
+ sdp_lbk_id_update(pci_dev, nix);
+ nix->pci_dev = pci_dev;
+ nix->reta_sz = reta_sz;
+ nix->mtu = ROC_NIX_DEFAULT_HW_FRS;
+
+ /* Get NIX HW info */
+ roc_nix_get_hw_info(roc_nix);
+ nix->dev.drv_inited = true;
+
+ return 0;
+lf_detach:
+ nix_lf_detach(nix);
+dev_fini:
+ rc |= dev_fini(dev, pci_dev);
+fail:
+ return rc;
+}
+
+int
+roc_nix_dev_fini(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ int rc = 0;
+
+ if (nix == NULL)
+ return NIX_ERR_PARAM;
+
+ if (!nix->dev.drv_inited)
+ goto fini;
+
+ rc = nix_lf_detach(nix);
+ nix->dev.drv_inited = false;
+fini:
+ rc |= dev_fini(&nix->dev, nix->pci_dev);
+ return rc;
+}
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
new file mode 100644
index 0000000..5bbeb8b
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_NIX_H_
+#define _ROC_NIX_H_
+
+/* Constants */
+enum roc_nix_rss_reta_sz {
+ ROC_NIX_RSS_RETA_SZ_64 = 64,
+ ROC_NIX_RSS_RETA_SZ_128 = 128,
+ ROC_NIX_RSS_RETA_SZ_256 = 256,
+};
+
+enum roc_nix_sq_max_sqe_sz {
+ roc_nix_maxsqesz_w16 = NIX_MAXSQESZ_W16,
+ roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8,
+};
+
+/* NIX LF RX offload configuration flags.
+ * These are input flags to roc_nix_lf_alloc:rx_cfg
+ */
+#define ROC_NIX_LF_RX_CFG_DROP_RE BIT_ULL(32)
+#define ROC_NIX_LF_RX_CFG_L2_LEN_ERR BIT_ULL(33)
+#define ROC_NIX_LF_RX_CFG_IP6_UDP_OPT BIT_ULL(34)
+#define ROC_NIX_LF_RX_CFG_DIS_APAD BIT_ULL(35)
+#define ROC_NIX_LF_RX_CFG_CSUM_IL4 BIT_ULL(36)
+#define ROC_NIX_LF_RX_CFG_CSUM_OL4 BIT_ULL(37)
+#define ROC_NIX_LF_RX_CFG_LEN_IL4 BIT_ULL(38)
+#define ROC_NIX_LF_RX_CFG_LEN_IL3 BIT_ULL(39)
+#define ROC_NIX_LF_RX_CFG_LEN_OL4 BIT_ULL(40)
+#define ROC_NIX_LF_RX_CFG_LEN_OL3 BIT_ULL(41)
+
+/* Group 0 will be used for RSS, groups 1-7 for the npc_flow RSS action */
+#define ROC_NIX_RSS_GROUP_DEFAULT 0
+#define ROC_NIX_RSS_GRPS 8
+#define ROC_NIX_RSS_RETA_MAX ROC_NIX_RSS_RETA_SZ_256
+#define ROC_NIX_RSS_KEY_LEN 48 /* 384 bits */
+
+#define ROC_NIX_DEFAULT_HW_FRS 1514
+
+#define ROC_NIX_VWQE_MAX_SIZE_LOG2 11
+#define ROC_NIX_VWQE_MIN_SIZE_LOG2 2
+struct roc_nix {
+ /* Input parameters */
+ struct plt_pci_device *pci_dev;
+ uint16_t port_id;
+ bool rss_tag_as_xor;
+ uint16_t max_sqb_count;
+ enum roc_nix_rss_reta_sz reta_sz;
+ bool enable_loop;
+ /* End of input parameters */
+ /* LMT line base for "Per Core Tx LMT line" mode*/
+ uintptr_t lmt_base;
+ bool io_enabled;
+ bool rx_ptp_ena;
+ uint16_t cints;
+
+#define ROC_NIX_MEM_SZ (6 * 1024)
+ uint8_t reserved[ROC_NIX_MEM_SZ] __plt_cache_aligned;
+} __plt_cache_aligned;
+
+/* Dev */
+int __roc_api roc_nix_dev_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_dev_fini(struct roc_nix *roc_nix);
+
+/* Type */
+bool __roc_api roc_nix_is_lbk(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_is_sdp(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_is_pf(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_is_vf_or_sdp(struct roc_nix *roc_nix);
+int __roc_api roc_nix_get_base_chan(struct roc_nix *roc_nix);
+int __roc_api roc_nix_get_pf(struct roc_nix *roc_nix);
+int __roc_api roc_nix_get_vf(struct roc_nix *roc_nix);
+uint16_t __roc_api roc_nix_get_pf_func(struct roc_nix *roc_nix);
+uint16_t __roc_api roc_nix_get_vwqe_interval(struct roc_nix *roc_nix);
+int __roc_api roc_nix_max_pkt_len(struct roc_nix *roc_nix);
+
+/* LF ops */
+int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq,
+ uint32_t nb_txq, uint64_t rx_cfg);
+int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix);
+
+#endif /* _ROC_NIX_H_ */
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
new file mode 100644
index 0000000..3bd96d9
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_NIX_PRIV_H_
+#define _ROC_NIX_PRIV_H_
+
+/* Constants */
+#define NIX_CQ_ENTRY_SZ 128
+#define NIX_CQ_ENTRY64_SZ 512
+#define NIX_CQ_ALIGN (uint16_t)512
+#define NIX_MAX_SQB (uint16_t)512
+#define NIX_DEF_SQB (uint16_t)16
+#define NIX_MIN_SQB (uint16_t)8
+#define NIX_SQB_LIST_SPACE (uint16_t)2
+#define NIX_SQB_LOWER_THRESH (uint16_t)70
+
+/* Apply BP/DROP when CQ is 95% full */
+#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
+
+struct nix {
+ uint16_t reta[ROC_NIX_RSS_GRPS][ROC_NIX_RSS_RETA_MAX];
+ enum roc_nix_rss_reta_sz reta_sz;
+ struct plt_pci_device *pci_dev;
+ uint16_t bpid[NIX_MAX_CHAN];
+ struct roc_nix_sq **sqs;
+ uint16_t vwqe_interval;
+ uint16_t tx_chan_base;
+ uint16_t rx_chan_base;
+ uint16_t nb_rx_queues;
+ uint16_t nb_tx_queues;
+ uint8_t lso_tsov6_idx;
+ uint8_t lso_tsov4_idx;
+ uint8_t lso_base_idx;
+ uint8_t lf_rx_stats;
+ uint8_t lf_tx_stats;
+ uint8_t rx_chan_cnt;
+ uint8_t rss_alg_idx;
+ uint8_t tx_chan_cnt;
+ uintptr_t lmt_base;
+ uint8_t cgx_links;
+ uint8_t lbk_links;
+ uint8_t sdp_links;
+ uint8_t tx_link;
+ uint16_t sqb_size;
+ /* Without FCS, with L2 overhead */
+ uint16_t mtu;
+ uint16_t chan_cnt;
+ uint16_t msixoff;
+ uint8_t rx_pause;
+ uint8_t tx_pause;
+ struct dev dev;
+ uint16_t cints;
+ uint16_t qints;
+ uintptr_t base;
+ bool sdp_link;
+ bool lbk_link;
+ bool ptp_en;
+ bool is_nix1;
+
+} __plt_cache_aligned;
+
+enum nix_err_status {
+ NIX_ERR_PARAM = -2048,
+ NIX_ERR_NO_MEM,
+ NIX_ERR_INVALID_RANGE,
+ NIX_ERR_INTERNAL,
+ NIX_ERR_OP_NOTSUP,
+ NIX_ERR_QUEUE_INVALID_RANGE,
+ NIX_ERR_AQ_READ_FAILED,
+ NIX_ERR_AQ_WRITE_FAILED,
+ NIX_ERR_NDC_SYNC,
+};
+
+enum nix_q_size {
+ nix_q_size_16, /* 16 entries */
+ nix_q_size_64, /* 64 entries */
+ nix_q_size_256,
+ nix_q_size_1K,
+ nix_q_size_4K,
+ nix_q_size_16K,
+ nix_q_size_64K,
+ nix_q_size_256K,
+ nix_q_size_1M, /* Million entries */
+ nix_q_size_max
+};
+
+static inline struct nix *
+roc_nix_to_nix_priv(struct roc_nix *roc_nix)
+{
+ return (struct nix *)&roc_nix->reserved[0];
+}
+
+static inline struct roc_nix *
+nix_priv_to_roc_nix(struct nix *nix)
+{
+ return (struct roc_nix *)((char *)nix -
+ offsetof(struct roc_nix, reserved));
+}
+
+#endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 4666749..f65033d 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -31,3 +31,4 @@ plt_init(void)
RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE);
+RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index ba6722c..466c71a 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -127,6 +127,7 @@
extern int cnxk_logtype_base;
extern int cnxk_logtype_mbox;
extern int cnxk_logtype_npa;
+extern int cnxk_logtype_nix;
#define plt_err(fmt, args...) \
RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -145,6 +146,7 @@ extern int cnxk_logtype_npa;
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__)
#define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__)
+#define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index dfd6351..4a1eb9c 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -20,4 +20,7 @@
/* idev */
#include "roc_idev_priv.h"
+/* NIX */
+#include "roc_nix_priv.h"
+
#endif /* _ROC_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
index 0a88f78..8ccbe2a 100644
--- a/drivers/common/cnxk/roc_utils.c
+++ b/drivers/common/cnxk/roc_utils.c
@@ -11,10 +11,37 @@ roc_error_msg_get(int errorcode)
const char *err_msg;
switch (errorcode) {
+ case NIX_AF_ERR_PARAM:
+ case NIX_ERR_PARAM:
case NPA_ERR_PARAM:
case UTIL_ERR_PARAM:
err_msg = "Invalid parameter";
break;
+ case NIX_ERR_NO_MEM:
+ err_msg = "Out of memory";
+ break;
+ case NIX_ERR_INVALID_RANGE:
+ err_msg = "Range is not supported";
+ break;
+ case NIX_ERR_INTERNAL:
+ err_msg = "Internal error";
+ break;
+ case NIX_ERR_OP_NOTSUP:
+ err_msg = "Operation not supported";
+ break;
+ case NIX_ERR_QUEUE_INVALID_RANGE:
+ err_msg = "Invalid Queue range";
+ break;
+ case NIX_ERR_AQ_READ_FAILED:
+ err_msg = "AQ read failed";
+ break;
+ case NIX_ERR_AQ_WRITE_FAILED:
+ err_msg = "AQ write failed";
+ break;
+ case NIX_ERR_NDC_SYNC:
+ err_msg = "NDC Sync failed";
+ break;
case NPA_ERR_ALLOC:
err_msg = "NPA alloc failed";
break;
@@ -36,6 +63,21 @@ roc_error_msg_get(int errorcode)
case NPA_ERR_DEVICE_NOT_BOUNDED:
err_msg = "NPA device is not bounded";
break;
+ case NIX_AF_ERR_AQ_FULL:
+ err_msg = "AQ full";
+ break;
+ case NIX_AF_ERR_AQ_ENQUEUE:
+ err_msg = "AQ enqueue failed";
+ break;
+ case NIX_AF_ERR_AF_LF_INVALID:
+ err_msg = "Invalid NIX LF";
+ break;
+ case NIX_AF_ERR_AF_LF_ALLOC:
+ err_msg = "NIX LF alloc failed";
+ break;
+ case NIX_AF_ERR_LF_RESET:
+ err_msg = "NIX LF reset failed";
+ break;
case UTIL_ERR_FS:
err_msg = "file operation failed";
break;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index c5724d6..a7d0a6f 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -3,6 +3,7 @@ INTERNAL {
cnxk_logtype_base;
cnxk_logtype_mbox;
+ cnxk_logtype_nix;
cnxk_logtype_npa;
plt_init;
roc_clk_freq_get;
@@ -10,8 +11,23 @@ INTERNAL {
roc_idev_lmt_base_addr_get;
roc_idev_npa_maxpools_get;
roc_idev_npa_maxpools_set;
+ roc_idev_npa_nix_get;
roc_idev_num_lmtlines_get;
roc_model;
+ roc_nix_dev_fini;
+ roc_nix_dev_init;
+ roc_nix_get_base_chan;
+ roc_nix_get_pf;
+ roc_nix_get_pf_func;
+ roc_nix_get_vf;
+ roc_nix_get_vwqe_interval;
+ roc_nix_is_lbk;
+ roc_nix_is_pf;
+ roc_nix_is_sdp;
+ roc_nix_is_vf_or_sdp;
+ roc_nix_lf_alloc;
+ roc_nix_lf_free;
+ roc_nix_max_pkt_len;
roc_npa_aura_limit_modify;
roc_npa_aura_op_range_set;
roc_npa_ctx_dump;
--
2.8.4
* [dpdk-dev] [PATCH 18/52] common/cnxk: add nix irq support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (16 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 17/52] common/cnxk: add base nix support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 19/52] common/cnxk: add nix Rx queue management API Nithin Dabilpuram
` (37 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Harman Kalra
From: Jerin Jacob <jerinj@marvell.com>
Add support to register NIX error and completion
queue IRQs using the base device class IRQ helper APIs.
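The enable/disable helpers in this patch program RVU's paired write-1-to-set / write-1-to-clear registers (the *_ENA_W1S / *_ENA_W1C addresses): a write affects only the bits that are 1 in the written value, so interrupts can be enabled or disabled without a read-modify-write. A toy software model of that semantic (not real hardware access; bit positions 11 and 24 follow the RQ_DISABLED/CQ_DISABLED comment in the patch):

```c
#include <assert.h>
#include <stdint.h>

#define BIT_ULL(n) (1ULL << (n))

/* Toy W1S/W1C enable register: each write only sets (W1S) or
 * clears (W1C) the bits that are 1 in the written value. */
struct ena_reg {
	uint64_t bits;
};

static void write_w1s(struct ena_reg *r, uint64_t v) { r->bits |= v; }
static void write_w1c(struct ena_reg *r, uint64_t v) { r->bits &= ~v; }

/* Mirrors nix_err_intr_enb_dis(): enable all error interrupts except
 * bits 11 and 24 (RQ_DISABLED/CQ_DISABLED), or disable everything. */
static void err_intr_enb_dis(struct ena_reg *ena, int enb)
{
	if (enb)
		write_w1s(ena, ~(BIT_ULL(11) | BIT_ULL(24)));
	else
		write_w1c(ena, ~0ULL);
}
```

The same W1S/W1C discipline explains the ordering used when registering queue IRQs below: disable via W1C, clear the pending interrupt, hook the vector, then enable via W1S.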
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.c | 7 +
drivers/common/cnxk/roc_nix.h | 12 +
drivers/common/cnxk/roc_nix_irq.c | 484 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_priv.h | 18 ++
drivers/common/cnxk/version.map | 8 +
6 files changed, 530 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_irq.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 1371450..39aa4ae 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -17,6 +17,7 @@ sources = files('roc_dev.c',
'roc_mbox.c',
'roc_model.c',
'roc_nix.c',
+ 'roc_nix_irq.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index 992fe5d..41b4572 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -363,6 +363,11 @@ roc_nix_dev_init(struct roc_nix *roc_nix)
nix->reta_sz = reta_sz;
nix->mtu = ROC_NIX_DEFAULT_HW_FRS;
+ /* Register error and ras interrupts */
+ rc = nix_register_irqs(nix);
+ if (rc)
+ goto lf_detach;
+
/* Get NIX HW info */
roc_nix_get_hw_info(roc_nix);
nix->dev.drv_inited = true;
@@ -388,6 +393,8 @@ roc_nix_dev_fini(struct roc_nix *roc_nix)
if (!nix->dev.drv_inited)
goto fini;
+ nix_unregister_irqs(nix);
+
rc = nix_lf_detach(nix);
nix->dev.drv_inited = false;
fini:
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 5bbeb8b..f32f69d 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -81,4 +81,16 @@ int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq,
uint32_t nb_txq, uint64_t rx_cfg);
int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix);
+/* IRQ */
+void __roc_api roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix,
+ uint16_t rxq_id);
+void __roc_api roc_nix_rx_queue_intr_disable(struct roc_nix *roc_nix,
+ uint16_t rxq_id);
+void __roc_api roc_nix_err_intr_ena_dis(struct roc_nix *roc_nix, bool enb);
+void __roc_api roc_nix_ras_intr_ena_dis(struct roc_nix *roc_nix, bool enb);
+int __roc_api roc_nix_register_queue_irqs(struct roc_nix *roc_nix);
+void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix);
+int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix);
+void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix);
+
#endif /* _ROC_NIX_H_ */
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
new file mode 100644
index 0000000..d7390d4
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -0,0 +1,484 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+nix_err_intr_enb_dis(struct nix *nix, bool enb)
+{
+ /* Enable all nix lf error irqs except RQ_DISABLED and CQ_DISABLED */
+ if (enb)
+ plt_write64(~(BIT_ULL(11) | BIT_ULL(24)),
+ nix->base + NIX_LF_ERR_INT_ENA_W1S);
+ else
+ plt_write64(~0ull, nix->base + NIX_LF_ERR_INT_ENA_W1C);
+}
+
+static void
+nix_ras_intr_enb_dis(struct nix *nix, bool enb)
+{
+ if (enb)
+ plt_write64(~0ull, nix->base + NIX_LF_RAS_ENA_W1S);
+ else
+ plt_write64(~0ull, nix->base + NIX_LF_RAS_ENA_W1C);
+}
+
+void
+roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix, uint16_t rx_queue_id)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ /* Enable CINT interrupt */
+ plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1S(rx_queue_id));
+}
+
+void
+roc_nix_rx_queue_intr_disable(struct roc_nix *roc_nix, uint16_t rx_queue_id)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ /* Clear and disable CINT interrupt */
+ plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1C(rx_queue_id));
+}
+
+void
+roc_nix_err_intr_ena_dis(struct roc_nix *roc_nix, bool enb)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ nix_err_intr_enb_dis(nix, enb);
+}
+
+void
+roc_nix_ras_intr_ena_dis(struct roc_nix *roc_nix, bool enb)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ nix_ras_intr_enb_dis(nix, enb);
+}
+
+static void
+nix_lf_err_irq(void *param)
+{
+ struct nix *nix = (struct nix *)param;
+ struct dev *dev = &nix->dev;
+ uint64_t intr;
+
+ intr = plt_read64(nix->base + NIX_LF_ERR_INT);
+ if (intr == 0)
+ return;
+
+ plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+ /* Clear interrupt */
+ plt_write64(intr, nix->base + NIX_LF_ERR_INT);
+}
+
+static int
+nix_lf_register_err_irq(struct nix *nix)
+{
+ struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ int rc, vec;
+
+ vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
+ /* Clear err interrupt */
+ nix_err_intr_enb_dis(nix, false);
+ /* Set used interrupt vectors */
+ rc = dev_irq_register(handle, nix_lf_err_irq, nix, vec);
+ /* Enable all error interrupts except RQ_DISABLED and CQ_DISABLED */
+ nix_err_intr_enb_dis(nix, true);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_err_irq(struct nix *nix)
+{
+ struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ int vec;
+
+ vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
+ /* Clear err interrupt */
+ nix_err_intr_enb_dis(nix, false);
+ dev_irq_unregister(handle, nix_lf_err_irq, nix, vec);
+}
+
+static void
+nix_lf_ras_irq(void *param)
+{
+ struct nix *nix = (struct nix *)param;
+ struct dev *dev = &nix->dev;
+ uint64_t intr;
+
+ intr = plt_read64(nix->base + NIX_LF_RAS);
+ if (intr == 0)
+ return;
+
+ plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+ /* Clear interrupt */
+ plt_write64(intr, nix->base + NIX_LF_RAS);
+}
+
+static int
+nix_lf_register_ras_irq(struct nix *nix)
+{
+ struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ int rc, vec;
+
+ vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
+ /* Clear err interrupt */
+ nix_ras_intr_enb_dis(nix, false);
+ /* Set used interrupt vectors */
+ rc = dev_irq_register(handle, nix_lf_ras_irq, nix, vec);
+ /* Enable dev interrupt */
+ nix_ras_intr_enb_dis(nix, true);
+
+ return rc;
+}
+
+static void
+nix_lf_unregister_ras_irq(struct nix *nix)
+{
+ struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ int vec;
+
+ vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
+ /* Clear err interrupt */
+ nix_ras_intr_enb_dis(nix, false);
+ dev_irq_unregister(handle, nix_lf_ras_irq, nix, vec);
+}
+
+static inline uint8_t
+nix_lf_q_irq_get_and_clear(struct nix *nix, uint16_t q, uint32_t off,
+ uint64_t mask)
+{
+ uint64_t reg, wdata;
+ uint8_t qint;
+
+ wdata = (uint64_t)q << 44;
+ reg = roc_atomic64_add_nosync(wdata, (int64_t *)(nix->base + off));
+
+ if (reg & BIT_ULL(42) /* OP_ERR */) {
+ plt_err("Failed execute irq get off=0x%x", off);
+ return 0;
+ }
+ qint = reg & 0xff;
+ wdata &= mask;
+ plt_write64(wdata | qint, nix->base + off);
+
+ return qint;
+}
+
+static inline uint8_t
+nix_lf_rq_irq_get_and_clear(struct nix *nix, uint16_t rq)
+{
+ return nix_lf_q_irq_get_and_clear(nix, rq, NIX_LF_RQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_cq_irq_get_and_clear(struct nix *nix, uint16_t cq)
+{
+ return nix_lf_q_irq_get_and_clear(nix, cq, NIX_LF_CQ_OP_INT, ~0xff00);
+}
+
+static inline uint8_t
+nix_lf_sq_irq_get_and_clear(struct nix *nix, uint16_t sq)
+{
+ return nix_lf_q_irq_get_and_clear(nix, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
+}
+
+static inline void
+nix_lf_sq_debug_reg(struct nix *nix, uint32_t off)
+{
+ uint64_t reg;
+
+ reg = plt_read64(nix->base + off);
+ if (reg & BIT_ULL(44))
+ plt_err("SQ=%d err_code=0x%x", (int)((reg >> 8) & 0xfffff),
+ (uint8_t)(reg & 0xff));
+}
+
+static void
+nix_lf_cq_irq(void *param)
+{
+ struct nix_qint *cint = (struct nix_qint *)param;
+ struct nix *nix = cint->nix;
+
+ /* Clear interrupt */
+ plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_INT(cint->qintx));
+}
+
+static void
+nix_lf_q_irq(void *param)
+{
+ struct nix_qint *qint = (struct nix_qint *)param;
+ uint8_t irq, qintx = qint->qintx;
+ struct nix *nix = qint->nix;
+ struct dev *dev = &nix->dev;
+ int q, cq, rq, sq;
+ uint64_t intr;
+
+ intr = plt_read64(nix->base + NIX_LF_QINTX_INT(qintx));
+ if (intr == 0)
+ return;
+
+ plt_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d", intr, qintx,
+ dev->pf, dev->vf);
+
+ /* Handle RQ interrupts */
+ for (q = 0; q < nix->nb_rx_queues; q++) {
+ rq = q % nix->qints;
+ irq = nix_lf_rq_irq_get_and_clear(nix, rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_DROP))
+ plt_err("RQ=%d NIX_RQINT_DROP", rq);
+
+ if (irq & BIT_ULL(NIX_RQINT_RED))
+ plt_err("RQ=%d NIX_RQINT_RED", rq);
+ }
+
+ /* Handle CQ interrupts */
+ for (q = 0; q < nix->nb_rx_queues; q++) {
+ cq = q % nix->qints;
+ irq = nix_lf_cq_irq_get_and_clear(nix, cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
+ plt_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
+ plt_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
+
+ if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
+ plt_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
+ }
+
+ /* Handle SQ interrupts */
+ for (q = 0; q < nix->nb_tx_queues; q++) {
+ sq = q % nix->qints;
+ irq = nix_lf_sq_irq_get_and_clear(nix, sq);
+
+ if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
+ plt_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
+ nix_lf_sq_debug_reg(nix, NIX_LF_SQ_OP_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
+ plt_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
+ nix_lf_sq_debug_reg(nix, NIX_LF_MNQ_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
+ plt_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
+ nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG);
+ }
+ if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
+ plt_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
+ nix_lf_sq_debug_reg(nix, NIX_LF_SEND_ERR_DBG);
+ }
+ }
+
+ /* Clear interrupt */
+ plt_write64(intr, nix->base + NIX_LF_QINTX_INT(qintx));
+}
+
+int
+roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
+{
+ int vec, q, sqs, rqs, qs, rc = 0;
+ struct plt_intr_handle *handle;
+ struct nix *nix;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+ handle = &nix->pci_dev->intr_handle;
+
+ /* Figure out max qintx required */
+ rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
+ sqs = PLT_MIN(nix->qints, nix->nb_tx_queues);
+ qs = PLT_MAX(rqs, sqs);
+
+ nix->configured_qints = qs;
+
+ nix->qints_mem =
+ plt_zmalloc(nix->configured_qints * sizeof(struct nix_qint), 0);
+ if (nix->qints_mem == NULL)
+ return -ENOMEM;
+
+ for (q = 0; q < qs; q++) {
+ vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ plt_write64(0, nix->base + NIX_LF_QINTX_CNT(q));
+
+ /* Clear interrupt */
+ plt_write64(~0ull, nix->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ nix->qints_mem[q].nix = nix;
+ nix->qints_mem[q].qintx = q;
+
+ /* Sync qints_mem update */
+ plt_wmb();
+
+ /* Register queue irq vector */
+ rc = dev_irq_register(handle, nix_lf_q_irq, &nix->qints_mem[q],
+ vec);
+ if (rc)
+ break;
+
+ plt_write64(0, nix->base + NIX_LF_QINTX_CNT(q));
+ plt_write64(0, nix->base + NIX_LF_QINTX_INT(q));
+ /* Enable QINT interrupt */
+ plt_write64(~0ull, nix->base + NIX_LF_QINTX_ENA_W1S(q));
+ }
+
+ return rc;
+}
+
+void
+roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
+{
+ struct plt_intr_handle *handle;
+ struct nix *nix;
+ int vec, q;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+ handle = &nix->pci_dev->intr_handle;
+
+ for (q = 0; q < nix->configured_qints; q++) {
+ vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
+
+ /* Clear QINT CNT */
+ plt_write64(0, nix->base + NIX_LF_QINTX_CNT(q));
+ plt_write64(0, nix->base + NIX_LF_QINTX_INT(q));
+
+ /* Clear interrupt */
+ plt_write64(~0ull, nix->base + NIX_LF_QINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ dev_irq_unregister(handle, nix_lf_q_irq, &nix->qints_mem[q],
+ vec);
+ }
+ nix->configured_qints = 0;
+
+ plt_free(nix->qints_mem);
+ nix->qints_mem = NULL;
+}
+
+int
+roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
+{
+ struct plt_intr_handle *handle;
+ int rc = 0, vec, q;
+ struct nix *nix;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+ handle = &nix->pci_dev->intr_handle;
+
+ nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
+
+ nix->cints_mem =
+ plt_zmalloc(nix->configured_cints * sizeof(struct nix_qint), 0);
+ if (nix->cints_mem == NULL)
+ return -ENOMEM;
+
+ for (q = 0; q < nix->configured_cints; q++) {
+ vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
+
+ /* Clear CINT CNT */
+ plt_write64(0, nix->base + NIX_LF_CINTX_CNT(q));
+
+ /* Clear interrupt */
+ plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1C(q));
+
+ nix->cints_mem[q].nix = nix;
+ nix->cints_mem[q].qintx = q;
+
+ /* Sync cints_mem update */
+ plt_wmb();
+
+ /* Register queue irq vector */
+ rc = dev_irq_register(handle, nix_lf_cq_irq, &nix->cints_mem[q],
+ vec);
+ if (rc) {
+ plt_err("Fail to register CQ irq, rc=%d", rc);
+ return rc;
+ }
+
+ if (!handle->intr_vec) {
+ handle->intr_vec = plt_zmalloc(
+ nix->configured_cints * sizeof(int), 0);
+ if (!handle->intr_vec) {
+ plt_err("Failed to allocate %d rx intr_vec",
+ nix->configured_cints);
+ return -ENOMEM;
+ }
+ }
+ /* VFIO vector zero is reserved for the misc interrupt, so
+ * do the required adjustment. (b13bfab4cd)
+ */
+ handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+
+ /* Configure CQE interrupt coalescing parameters */
+ plt_write64(((CQ_CQE_THRESH_DEFAULT) |
+ (CQ_CQE_THRESH_DEFAULT << 32) |
+ (CQ_TIMER_THRESH_DEFAULT << 48)),
+ nix->base + NIX_LF_CINTX_WAIT((q)));
+
+ /* Keeping the CQ interrupt disabled as the rx interrupt
+ * feature needs to be enabled/disabled on demand.
+ */
+ }
+
+ return rc;
+}
+
+void
+roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
+{
+ struct plt_intr_handle *handle;
+ struct nix *nix;
+ int vec, q;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+ handle = &nix->pci_dev->intr_handle;
+
+ for (q = 0; q < nix->configured_cints; q++) {
+ vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
+
+ /* Clear CINT CNT */
+ plt_write64(0, nix->base + NIX_LF_CINTX_CNT(q));
+
+ /* Clear interrupt */
+ plt_write64(BIT_ULL(0), nix->base + NIX_LF_CINTX_ENA_W1C(q));
+
+ /* Unregister queue irq vector */
+ dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
+ vec);
+ }
+ plt_free(nix->cints_mem);
+}
+
+int
+nix_register_irqs(struct nix *nix)
+{
+ int rc;
+
+ if (nix->msixoff == MSIX_VECTOR_INVALID) {
+ plt_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
+ nix->msixoff);
+ return NIX_ERR_PARAM;
+ }
+
+ /* Register lf err interrupt */
+ rc = nix_lf_register_err_irq(nix);
+ /* Register RAS interrupt */
+ rc |= nix_lf_register_ras_irq(nix);
+
+ return rc;
+}
+
+void
+nix_unregister_irqs(struct nix *nix)
+{
+ nix_lf_unregister_err_irq(nix);
+ nix_lf_unregister_ras_irq(nix);
+}
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 3bd96d9..6e50a1b 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -18,11 +18,25 @@
/* Apply BP/DROP when CQ is 95% full */
#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
+/* IRQ triggered when NIX_LF_CINTX_CNT[QCOUNT] crosses this value */
+#define CQ_CQE_THRESH_DEFAULT 0x1ULL
+#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
+#define CQ_TIMER_THRESH_MAX 255
+
+struct nix_qint {
+ struct nix *nix;
+ uint8_t qintx;
+};
+
struct nix {
uint16_t reta[ROC_NIX_RSS_GRPS][ROC_NIX_RSS_RETA_MAX];
enum roc_nix_rss_reta_sz reta_sz;
struct plt_pci_device *pci_dev;
uint16_t bpid[NIX_MAX_CHAN];
+ struct nix_qint *qints_mem;
+ struct nix_qint *cints_mem;
+ uint8_t configured_qints;
+ uint8_t configured_cints;
struct roc_nix_sq **sqs;
uint16_t vwqe_interval;
uint16_t tx_chan_base;
@@ -98,4 +112,8 @@ nix_priv_to_roc_nix(struct nix *nix)
offsetof(struct roc_nix, reserved));
}
+/* IRQ */
+int nix_register_irqs(struct nix *nix);
+void nix_unregister_irqs(struct nix *nix);
+
#endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index a7d0a6f..3a51c7a 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -16,6 +16,7 @@ INTERNAL {
roc_model;
roc_nix_dev_fini;
roc_nix_dev_init;
+ roc_nix_err_intr_ena_dis;
roc_nix_get_base_chan;
roc_nix_get_pf;
roc_nix_get_pf_func;
@@ -28,6 +29,13 @@ INTERNAL {
roc_nix_lf_alloc;
roc_nix_lf_free;
roc_nix_max_pkt_len;
+ roc_nix_ras_intr_ena_dis;
+ roc_nix_register_cq_irqs;
+ roc_nix_register_queue_irqs;
+ roc_nix_rx_queue_intr_disable;
+ roc_nix_rx_queue_intr_enable;
+ roc_nix_unregister_cq_irqs;
+ roc_nix_unregister_queue_irqs;
roc_npa_aura_limit_modify;
roc_npa_aura_op_range_set;
roc_npa_ctx_dump;
--
2.8.4
* [dpdk-dev] [PATCH 19/52] common/cnxk: add nix Rx queue management API
@ 2021-03-05 13:38 ` Nithin Dabilpuram
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add nix Rx queue management API to init/modify/fini
RQ context and also set up CQ (completion queue) context.
Both CN9K and CN10K devices are supported.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 52 ++++
drivers/common/cnxk/roc_nix_queue.c | 496 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 6 +
4 files changed, 555 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_queue.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 39aa4ae..ebd659d 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -18,6 +18,7 @@ sources = files('roc_dev.c',
'roc_model.c',
'roc_nix.c',
'roc_nix_irq.c',
+ 'roc_nix_queue.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index f32f69d..383ddb4 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -41,6 +41,48 @@ enum roc_nix_sq_max_sqe_sz {
#define ROC_NIX_VWQE_MAX_SIZE_LOG2 11
#define ROC_NIX_VWQE_MIN_SIZE_LOG2 2
+
+struct roc_nix_rq {
+ /* Input parameters */
+ uint16_t qid;
+ uint64_t aura_handle;
+ bool ipsech_ena;
+ uint16_t first_skip;
+ uint16_t later_skip;
+ uint16_t wqe_skip;
+ uint16_t lpb_size;
+ uint32_t tag_mask;
+ uint32_t flow_tag_width;
+ uint8_t tt; /* Valid when SSO is enabled */
+ uint16_t hwgrp; /* Valid when SSO is enabled */
+ bool sso_ena;
+ bool vwqe_ena;
+ uint64_t spb_aura_handle; /* Valid when SPB is enabled */
+ uint16_t spb_size; /* Valid when SPB is enabled */
+ bool spb_ena;
+ uint8_t vwqe_first_skip;
+ uint32_t vwqe_max_sz_exp;
+ uint64_t vwqe_wait_tmo;
+ uint64_t vwqe_aura_handle;
+ /* End of Input parameters */
+ struct roc_nix *roc_nix;
+};
+
+struct roc_nix_cq {
+ /* Input parameters */
+ uint16_t qid;
+ uint16_t nb_desc;
+ /* End of Input parameters */
+ uint16_t drop_thresh;
+ struct roc_nix *roc_nix;
+ uintptr_t door;
+ int64_t *status;
+ uint64_t wdata;
+ void *desc_base;
+ uint32_t qmask;
+ uint32_t head;
+};
+
struct roc_nix {
/* Input parameters */
struct plt_pci_device *pci_dev;
@@ -93,4 +135,14 @@ void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix);
int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix);
void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix);
+/* Queue */
+int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
+ bool ena);
+int __roc_api roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
+ bool ena);
+int __roc_api roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable);
+int __roc_api roc_nix_rq_fini(struct roc_nix_rq *rq);
+int __roc_api roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq);
+int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq);
+
#endif /* _ROC_NIX_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
new file mode 100644
index 0000000..716bcec
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -0,0 +1,496 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline uint32_t
+nix_qsize_to_val(enum nix_q_size qsize)
+{
+ return (16UL << (qsize * 2));
+}
+
+static inline enum nix_q_size
+nix_qsize_clampup(uint32_t val)
+{
+ int i = nix_q_size_16;
+
+ for (; i < nix_q_size_max; i++)
+ if (val <= nix_qsize_to_val(i))
+ break;
+
+ if (i >= nix_q_size_max)
+ i = nix_q_size_max - 1;
+
+ return i;
+}
+
+int
+roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable)
+{
+ struct nix *nix = roc_nix_to_nix_priv(rq->roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ int rc;
+
+ /* Pkts will be dropped silently if RQ is disabled */
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rq->qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.ena = enable;
+ aq->rq_mask.ena = ~(aq->rq_mask.ena);
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = rq->qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.ena = enable;
+ aq->rq_mask.ena = ~(aq->rq_mask.ena);
+ }
+
+ rc = mbox_process(mbox);
+
+ if (roc_model_is_cn10k())
+ plt_write64(rq->qid, nix->base + NIX_LF_OP_VWQE_FLUSH);
+ return rc;
+}
+
+static int
+rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = rq->qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = cfg ? NIX_AQ_INSTOP_WRITE : NIX_AQ_INSTOP_INIT;
+
+ if (rq->sso_ena) {
+ /* SSO mode */
+ aq->rq.sso_ena = 1;
+ aq->rq.sso_tt = rq->tt;
+ aq->rq.sso_grp = rq->hwgrp;
+ aq->rq.ena_wqwd = 1;
+ aq->rq.wqe_skip = rq->wqe_skip;
+ aq->rq.wqe_caching = 1;
+
+ aq->rq.good_utag = rq->tag_mask >> 24;
+ aq->rq.bad_utag = rq->tag_mask >> 24;
+ aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
+ } else {
+ /* CQ mode */
+ aq->rq.sso_ena = 0;
+ aq->rq.good_utag = rq->tag_mask >> 24;
+ aq->rq.bad_utag = rq->tag_mask >> 24;
+ aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
+ aq->rq.cq = rq->qid;
+ }
+
+ if (rq->ipsech_ena)
+ aq->rq.ipsech_ena = 1;
+
+ aq->rq.spb_ena = 0;
+ aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
+
+ /* Sizes must be aligned to 8 bytes */
+ if (rq->first_skip & 0x7 || rq->later_skip & 0x7 || rq->lpb_size & 0x7)
+ return -EINVAL;
+
+ /* Expressed in number of dwords */
+ aq->rq.first_skip = rq->first_skip / 8;
+ aq->rq.later_skip = rq->later_skip / 8;
+ aq->rq.flow_tagw = rq->flow_tag_width; /* 32-bits */
+ aq->rq.lpb_sizem1 = rq->lpb_size / 8;
+ aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
+ aq->rq.ena = ena;
+ aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+ aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+ aq->rq.rq_int_ena = 0;
+ /* Many to one reduction */
+ aq->rq.qint_idx = rq->qid % nix->qints;
+ aq->rq.xqe_drop_ena = 1;
+
+ if (cfg) {
+ if (rq->sso_ena) {
+ /* SSO mode */
+ aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena;
+ aq->rq_mask.sso_tt = ~aq->rq_mask.sso_tt;
+ aq->rq_mask.sso_grp = ~aq->rq_mask.sso_grp;
+ aq->rq_mask.ena_wqwd = ~aq->rq_mask.ena_wqwd;
+ aq->rq_mask.wqe_skip = ~aq->rq_mask.wqe_skip;
+ aq->rq_mask.wqe_caching = ~aq->rq_mask.wqe_caching;
+ aq->rq_mask.good_utag = ~aq->rq_mask.good_utag;
+ aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
+ aq->rq_mask.ltag = ~aq->rq_mask.ltag;
+ } else {
+ /* CQ mode */
+ aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena;
+ aq->rq_mask.good_utag = ~aq->rq_mask.good_utag;
+ aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
+ aq->rq_mask.ltag = ~aq->rq_mask.ltag;
+ aq->rq_mask.cq = ~aq->rq_mask.cq;
+ }
+
+ if (rq->ipsech_ena)
+ aq->rq_mask.ipsech_ena = ~aq->rq_mask.ipsech_ena;
+
+ aq->rq_mask.spb_ena = ~aq->rq_mask.spb_ena;
+ aq->rq_mask.lpb_aura = ~aq->rq_mask.lpb_aura;
+ aq->rq_mask.first_skip = ~aq->rq_mask.first_skip;
+ aq->rq_mask.later_skip = ~aq->rq_mask.later_skip;
+ aq->rq_mask.flow_tagw = ~aq->rq_mask.flow_tagw;
+ aq->rq_mask.lpb_sizem1 = ~aq->rq_mask.lpb_sizem1;
+ aq->rq_mask.ena = ~aq->rq_mask.ena;
+ aq->rq_mask.pb_caching = ~aq->rq_mask.pb_caching;
+ aq->rq_mask.xqe_imm_size = ~aq->rq_mask.xqe_imm_size;
+ aq->rq_mask.rq_int_ena = ~aq->rq_mask.rq_int_ena;
+ aq->rq_mask.qint_idx = ~aq->rq_mask.qint_idx;
+ aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
+ }
+
+ return 0;
+}
+
+static int
+rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = rq->qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = cfg ? NIX_AQ_INSTOP_WRITE : NIX_AQ_INSTOP_INIT;
+
+ if (rq->sso_ena) {
+ /* SSO mode */
+ aq->rq.sso_ena = 1;
+ aq->rq.sso_tt = rq->tt;
+ aq->rq.sso_grp = rq->hwgrp;
+ aq->rq.ena_wqwd = 1;
+ aq->rq.wqe_skip = rq->wqe_skip;
+ aq->rq.wqe_caching = 1;
+
+ aq->rq.good_utag = rq->tag_mask >> 24;
+ aq->rq.bad_utag = rq->tag_mask >> 24;
+ aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
+
+ if (rq->vwqe_ena) {
+ aq->rq.vwqe_ena = true;
+ aq->rq.vwqe_skip = rq->vwqe_first_skip;
+ /* Maximal Vector size is (2^(MAX_VSIZE_EXP+2)) */
+ aq->rq.max_vsize_exp = rq->vwqe_max_sz_exp - 2;
+ aq->rq.vtime_wait = rq->vwqe_wait_tmo;
+ aq->rq.wqe_aura = rq->vwqe_aura_handle;
+ }
+ } else {
+ /* CQ mode */
+ aq->rq.sso_ena = 0;
+ aq->rq.good_utag = rq->tag_mask >> 24;
+ aq->rq.bad_utag = rq->tag_mask >> 24;
+ aq->rq.ltag = rq->tag_mask & BITMASK_ULL(24, 0);
+ aq->rq.cq = rq->qid;
+ }
+
+ if (rq->ipsech_ena)
+ aq->rq.ipsech_ena = 1;
+
+ aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
+
+ /* Sizes must be aligned to 8 bytes */
+ if (rq->first_skip & 0x7 || rq->later_skip & 0x7 || rq->lpb_size & 0x7)
+ return -EINVAL;
+
+ /* Expressed in number of dwords */
+ aq->rq.first_skip = rq->first_skip / 8;
+ aq->rq.later_skip = rq->later_skip / 8;
+ aq->rq.flow_tagw = rq->flow_tag_width; /* 32-bits */
+ aq->rq.lpb_sizem1 = rq->lpb_size / 8;
+ aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
+ aq->rq.ena = ena;
+
+ if (rq->spb_ena) {
+ uint32_t spb_sizem1;
+
+ aq->rq.spb_ena = 1;
+ aq->rq.spb_aura =
+ roc_npa_aura_handle_to_aura(rq->spb_aura_handle);
+
+ if (rq->spb_size & 0x7 ||
+ rq->spb_size > NIX_RQ_CN10K_SPB_MAX_SIZE)
+ return -EINVAL;
+
+ spb_sizem1 = rq->spb_size / 8; /* Expressed in no. of dwords */
+ spb_sizem1 -= 1; /* Expressed in size minus one */
+ aq->rq.spb_sizem1 = spb_sizem1 & 0x3F;
+ aq->rq.spb_high_sizem1 = (spb_sizem1 >> 6) & 0x7;
+ } else {
+ aq->rq.spb_ena = 0;
+ }
+
+ aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
+ aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
+ aq->rq.rq_int_ena = 0;
+ /* Many to one reduction */
+ aq->rq.qint_idx = rq->qid % nix->qints;
+ aq->rq.xqe_drop_ena = 1;
+
+ if (cfg) {
+ if (rq->sso_ena) {
+ /* SSO mode */
+ aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena;
+ aq->rq_mask.sso_tt = ~aq->rq_mask.sso_tt;
+ aq->rq_mask.sso_grp = ~aq->rq_mask.sso_grp;
+ aq->rq_mask.ena_wqwd = ~aq->rq_mask.ena_wqwd;
+ aq->rq_mask.wqe_skip = ~aq->rq_mask.wqe_skip;
+ aq->rq_mask.wqe_caching = ~aq->rq_mask.wqe_caching;
+ aq->rq_mask.good_utag = ~aq->rq_mask.good_utag;
+ aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
+ aq->rq_mask.ltag = ~aq->rq_mask.ltag;
+ if (rq->vwqe_ena) {
+ aq->rq_mask.vwqe_ena = ~aq->rq_mask.vwqe_ena;
+ aq->rq_mask.vwqe_skip = ~aq->rq_mask.vwqe_skip;
+ aq->rq_mask.max_vsize_exp =
+ ~aq->rq_mask.max_vsize_exp;
+ aq->rq_mask.vtime_wait =
+ ~aq->rq_mask.vtime_wait;
+ aq->rq_mask.wqe_aura = ~aq->rq_mask.wqe_aura;
+ }
+ } else {
+ /* CQ mode */
+ aq->rq_mask.sso_ena = ~aq->rq_mask.sso_ena;
+ aq->rq_mask.good_utag = ~aq->rq_mask.good_utag;
+ aq->rq_mask.bad_utag = ~aq->rq_mask.bad_utag;
+ aq->rq_mask.ltag = ~aq->rq_mask.ltag;
+ aq->rq_mask.cq = ~aq->rq_mask.cq;
+ }
+
+ if (rq->ipsech_ena)
+ aq->rq_mask.ipsech_ena = ~aq->rq_mask.ipsech_ena;
+
+ if (rq->spb_ena) {
+ aq->rq_mask.spb_aura = ~aq->rq_mask.spb_aura;
+ aq->rq_mask.spb_sizem1 = ~aq->rq_mask.spb_sizem1;
+ aq->rq_mask.spb_high_sizem1 =
+ ~aq->rq_mask.spb_high_sizem1;
+ }
+
+ aq->rq_mask.spb_ena = ~aq->rq_mask.spb_ena;
+ aq->rq_mask.lpb_aura = ~aq->rq_mask.lpb_aura;
+ aq->rq_mask.first_skip = ~aq->rq_mask.first_skip;
+ aq->rq_mask.later_skip = ~aq->rq_mask.later_skip;
+ aq->rq_mask.flow_tagw = ~aq->rq_mask.flow_tagw;
+ aq->rq_mask.lpb_sizem1 = ~aq->rq_mask.lpb_sizem1;
+ aq->rq_mask.ena = ~aq->rq_mask.ena;
+ aq->rq_mask.pb_caching = ~aq->rq_mask.pb_caching;
+ aq->rq_mask.xqe_imm_size = ~aq->rq_mask.xqe_imm_size;
+ aq->rq_mask.rq_int_ena = ~aq->rq_mask.rq_int_ena;
+ aq->rq_mask.qint_idx = ~aq->rq_mask.qint_idx;
+ aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
+ }
+
+ return 0;
+}
+
+int
+roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ bool is_cn9k = roc_model_is_cn9k();
+ int rc;
+
+ if (roc_nix == NULL || rq == NULL)
+ return NIX_ERR_PARAM;
+
+ if (rq->qid >= nix->nb_rx_queues)
+ return NIX_ERR_QUEUE_INVALID_RANGE;
+
+ rq->roc_nix = roc_nix;
+
+ if (is_cn9k)
+ rc = rq_cn9k_cfg(nix, rq, false, ena);
+ else
+ rc = rq_cfg(nix, rq, false, ena);
+
+ if (rc)
+ return rc;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ bool is_cn9k = roc_model_is_cn9k();
+ int rc;
+
+ if (roc_nix == NULL || rq == NULL)
+ return NIX_ERR_PARAM;
+
+ if (rq->qid >= nix->nb_rx_queues)
+ return NIX_ERR_QUEUE_INVALID_RANGE;
+
+ rq->roc_nix = roc_nix;
+
+ if (is_cn9k)
+ rc = rq_cn9k_cfg(nix, rq, true, ena);
+ else
+ rc = rq_cfg(nix, rq, true, ena);
+
+ if (rc)
+ return rc;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_rq_fini(struct roc_nix_rq *rq)
+{
+ /* Disabling RQ is sufficient */
+ return roc_nix_rq_ena_dis(rq, false);
+}
+
+int
+roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ volatile struct nix_cq_ctx_s *cq_ctx;
+ enum nix_q_size qsize;
+ size_t desc_sz;
+ int rc;
+
+ if (cq == NULL)
+ return NIX_ERR_PARAM;
+
+ if (cq->qid >= nix->nb_rx_queues)
+ return NIX_ERR_QUEUE_INVALID_RANGE;
+
+ qsize = nix_qsize_clampup(cq->nb_desc);
+ cq->nb_desc = nix_qsize_to_val(qsize);
+ cq->qmask = cq->nb_desc - 1;
+ cq->door = nix->base + NIX_LF_CQ_OP_DOOR;
+ cq->status = (int64_t *)(nix->base + NIX_LF_CQ_OP_STATUS);
+ cq->wdata = (uint64_t)cq->qid << 32;
+ cq->roc_nix = roc_nix;
+ cq->drop_thresh = NIX_CQ_THRESH_LEVEL;
+
+ /* CQE of W16 */
+ desc_sz = cq->nb_desc * NIX_CQ_ENTRY_SZ;
+ cq->desc_base = plt_zmalloc(desc_sz, NIX_CQ_ALIGN);
+ if (cq->desc_base == NULL) {
+ rc = NIX_ERR_NO_MEM;
+ goto fail;
+ }
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = cq->qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+ cq_ctx = &aq->cq;
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = cq->qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+ cq_ctx = &aq->cq;
+ }
+
+ cq_ctx->ena = 1;
+ cq_ctx->caching = 1;
+ cq_ctx->qsize = qsize;
+ cq_ctx->base = (uint64_t)cq->desc_base;
+ cq_ctx->avg_level = 0xff;
+ cq_ctx->cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
+ cq_ctx->cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
+
+ /* Many to one reduction */
+ cq_ctx->qint_idx = cq->qid % nix->qints;
+ /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
+ cq_ctx->cint_idx = cq->qid;
+
+ cq_ctx->drop = cq->drop_thresh;
+ cq_ctx->drop_ena = 1;
+
+ /* TX pause frames enable flow ctrl on RX side */
+ if (nix->tx_pause) {
+ /* Single BPID is allocated for all rx channels for now */
+ cq_ctx->bpid = nix->bpid[0];
+ cq_ctx->bp = cq_ctx->drop;
+ cq_ctx->bp_ena = 1;
+ }
+
+ rc = mbox_process(mbox);
+ if (rc)
+ goto free_mem;
+
+ return 0;
+
+free_mem:
+ plt_free(cq->desc_base);
+fail:
+ return rc;
+}
+
+int
+roc_nix_cq_fini(struct roc_nix_cq *cq)
+{
+ struct mbox *mbox;
+ struct nix *nix;
+ int rc;
+
+ if (cq == NULL)
+ return NIX_ERR_PARAM;
+
+ nix = roc_nix_to_nix_priv(cq->roc_nix);
+ mbox = (&nix->dev)->mbox;
+
+ /* Disable CQ */
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = cq->qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ aq->cq.ena = 0;
+ aq->cq.bp_ena = 0;
+ aq->cq_mask.ena = ~aq->cq_mask.ena;
+ aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena;
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = cq->qid;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ aq->cq.ena = 0;
+ aq->cq.bp_ena = 0;
+ aq->cq_mask.ena = ~aq->cq_mask.ena;
+ aq->cq_mask.bp_ena = ~aq->cq_mask.bp_ena;
+ }
+
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ plt_free(cq->desc_base);
+ return 0;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 3a51c7a..0f56582 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -14,6 +14,8 @@ INTERNAL {
roc_idev_npa_nix_get;
roc_idev_num_lmtlines_get;
roc_model;
+ roc_nix_cq_fini;
+ roc_nix_cq_init;
roc_nix_dev_fini;
roc_nix_dev_init;
roc_nix_err_intr_ena_dis;
@@ -32,6 +34,10 @@ INTERNAL {
roc_nix_ras_intr_ena_dis;
roc_nix_register_cq_irqs;
roc_nix_register_queue_irqs;
+ roc_nix_rq_ena_dis;
+ roc_nix_rq_fini;
+ roc_nix_rq_init;
+ roc_nix_rq_modify;
roc_nix_rx_queue_intr_disable;
roc_nix_rx_queue_intr_enable;
roc_nix_unregister_cq_irqs;
--
2.8.4
* [dpdk-dev] [PATCH 20/52] common/cnxk: add nix Tx queue management API
@ 2021-03-05 13:38 ` Nithin Dabilpuram
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
This patch adds support to init/modify/fini the NIX
SQ (send queue) for both CN9K and CN10K platforms.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 19 ++
drivers/common/cnxk/roc_nix_queue.c | 358 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 2 +
3 files changed, 379 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 383ddb4..a6221d2 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -83,6 +83,23 @@ struct roc_nix_cq {
uint32_t head;
};
+struct roc_nix_sq {
+ /* Input parameters */
+ enum roc_nix_sq_max_sqe_sz max_sqe_sz;
+ uint32_t nb_desc;
+ uint16_t qid;
+ /* End of Input parameters */
+ uint16_t sqes_per_sqb_log2;
+ struct roc_nix *roc_nix;
+ uint64_t aura_handle;
+ int16_t nb_sqb_bufs_adj;
+ uint16_t nb_sqb_bufs;
+ plt_iova_t io_addr;
+ void *lmt_addr;
+ void *sqe_mem;
+ void *fc;
+};
+
struct roc_nix {
/* Input parameters */
struct plt_pci_device *pci_dev;
@@ -144,5 +161,7 @@ int __roc_api roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable);
int __roc_api roc_nix_rq_fini(struct roc_nix_rq *rq);
int __roc_api roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq);
int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq);
+int __roc_api roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq);
+int __roc_api roc_nix_sq_fini(struct roc_nix_sq *sq);
#endif /* _ROC_NIX_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 716bcec..bb410d5 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -494,3 +494,361 @@ roc_nix_cq_fini(struct roc_nix_cq *cq)
plt_free(cq->desc_base);
return 0;
}
+
+static int
+sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint16_t sqes_per_sqb, count, nb_sqb_bufs;
+ struct npa_pool_s pool;
+ struct npa_aura_s aura;
+ uint64_t blk_sz;
+ uint64_t iova;
+ int rc;
+
+ blk_sz = nix->sqb_size;
+ if (sq->max_sqe_sz == roc_nix_maxsqesz_w16)
+ sqes_per_sqb = (blk_sz / 8) / 16;
+ else
+ sqes_per_sqb = (blk_sz / 8) / 8;
+
+ sq->nb_desc = PLT_MAX(256U, sq->nb_desc);
+ nb_sqb_bufs = sq->nb_desc / sqes_per_sqb;
+ nb_sqb_bufs += NIX_SQB_LIST_SPACE;
+ /* Clamp up the SQB count */
+ nb_sqb_bufs = PLT_MIN(roc_nix->max_sqb_count,
+ (uint16_t)PLT_MAX(NIX_DEF_SQB, nb_sqb_bufs));
+
+ sq->nb_sqb_bufs = nb_sqb_bufs;
+ sq->sqes_per_sqb_log2 = (uint16_t)plt_log2_u32(sqes_per_sqb);
+ sq->nb_sqb_bufs_adj =
+ nb_sqb_bufs -
+ (PLT_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb);
+ sq->nb_sqb_bufs_adj =
+ (sq->nb_sqb_bufs_adj * NIX_SQB_LOWER_THRESH) / 100;
+
+ /* Explicitly set nat_align alone as by default pool is with both
+ * nat_align and buf_offset = 1 which we don't want for SQB.
+ */
+ memset(&pool, 0, sizeof(struct npa_pool_s));
+ pool.nat_align = 1;
+
+ memset(&aura, 0, sizeof(aura));
+ aura.fc_ena = 1;
+ aura.fc_addr = (uint64_t)sq->fc;
+ aura.fc_hyst_bits = 0; /* Store count on all updates */
+ rc = roc_npa_pool_create(&sq->aura_handle, blk_sz, nb_sqb_bufs, &aura,
+ &pool);
+ if (rc)
+ goto fail;
+
+ sq->sqe_mem = plt_zmalloc(blk_sz * nb_sqb_bufs, blk_sz);
+ if (sq->sqe_mem == NULL) {
+ rc = NIX_ERR_NO_MEM;
+ goto nomem;
+ }
+
+ /* Fill the initial buffers */
+ iova = (uint64_t)sq->sqe_mem;
+ for (count = 0; count < nb_sqb_bufs; count++) {
+ roc_npa_aura_op_free(sq->aura_handle, 0, iova);
+ iova += blk_sz;
+ }
+ roc_npa_aura_op_range_set(sq->aura_handle, (uint64_t)sq->sqe_mem, iova);
+
+ return rc;
+nomem:
+ roc_npa_pool_destroy(sq->aura_handle);
+fail:
+ return rc;
+}
+
+static void
+sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
+ uint16_t smq)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+ aq->sq.max_sqe_size = sq->max_sqe_sz;
+ aq->sq.smq = smq;
+ aq->sq.smq_rr_quantum = rr_quantum;
+ aq->sq.default_chan = nix->tx_chan_base;
+ aq->sq.sqe_stype = NIX_STYPE_STF;
+ aq->sq.ena = 1;
+ if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
+ aq->sq.sqe_stype = NIX_STYPE_STP;
+ aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
+ aq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
+ aq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
+ aq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
+ aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
+
+ /* Many to one reduction */
+ aq->sq.qint_idx = sq->qid % nix->qints;
+}
+
+static int
+sq_cn9k_fini(struct nix *nix, struct roc_nix_sq *sq)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+ uint16_t sqes_per_sqb;
+ void *sqb_buf;
+ int rc, count;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Check if sq is already cleaned up */
+ if (!rsp->sq.ena)
+ return 0;
+
+ /* Disable sq */
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ aq->sq_mask.ena = ~aq->sq_mask.ena;
+ aq->sq.ena = 0;
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read SQ and free sqb's */
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (aq->sq.smq_pend)
+ plt_err("SQ has pending SQE's");
+
+ count = aq->sq.sqb_count;
+ sqes_per_sqb = 1 << sq->sqes_per_sqb_log2;
+ /* Free SQB's that are used */
+ sqb_buf = (void *)rsp->sq.head_sqb;
+ while (count) {
+ void *next_sqb;
+
+ next_sqb = *(void **)((uintptr_t)sqb_buf +
+ (uint32_t)((sqes_per_sqb - 1) *
+ sq->max_sqe_sz));
+ roc_npa_aura_op_free(sq->aura_handle, 1, (uint64_t)sqb_buf);
+ sqb_buf = next_sqb;
+ count--;
+ }
+
+ /* Free next to use sqb */
+ if (rsp->sq.next_sqb)
+ roc_npa_aura_op_free(sq->aura_handle, 1, rsp->sq.next_sqb);
+ return 0;
+}
+
+static void
+sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
+ uint16_t smq)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_INIT;
+ aq->sq.max_sqe_size = sq->max_sqe_sz;
+ aq->sq.smq = smq;
+ aq->sq.smq_rr_weight = rr_quantum;
+ aq->sq.default_chan = nix->tx_chan_base;
+ aq->sq.sqe_stype = NIX_STYPE_STF;
+ aq->sq.ena = 1;
+ if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
+ aq->sq.sqe_stype = NIX_STYPE_STP;
+ aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
+ aq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
+ aq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
+ aq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
+ aq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
+
+ /* Many to one reduction */
+ aq->sq.qint_idx = sq->qid % nix->qints;
+}
+
+static int
+sq_fini(struct nix *nix, struct roc_nix_sq *sq)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_cn10k_aq_enq_rsp *rsp;
+ struct nix_cn10k_aq_enq_req *aq;
+ uint16_t sqes_per_sqb;
+ void *sqb_buf;
+ int rc, count;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Check if sq is already cleaned up */
+ if (!rsp->sq.ena)
+ return 0;
+
+ /* Disable sq */
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ aq->sq_mask.ena = ~aq->sq_mask.ena;
+ aq->sq.ena = 0;
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read SQ and free sqb's */
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = sq->qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->sq.smq_pend)
+ plt_err("SQ has pending SQEs");
+
+ count = rsp->sq.sqb_count;
+ sqes_per_sqb = 1 << sq->sqes_per_sqb_log2;
+ /* Free SQB's that are used */
+ sqb_buf = (void *)rsp->sq.head_sqb;
+ while (count) {
+ void *next_sqb;
+
+ next_sqb = *(void **)((uintptr_t)sqb_buf +
+ (uint32_t)((sqes_per_sqb - 1) *
+ sq->max_sqe_sz));
+ roc_npa_aura_op_free(sq->aura_handle, 1, (uint64_t)sqb_buf);
+ sqb_buf = next_sqb;
+ count--;
+ }
+
+ /* Free next to use sqb */
+ if (rsp->sq.next_sqb)
+ roc_npa_aura_op_free(sq->aura_handle, 1, rsp->sq.next_sqb);
+ return 0;
+}
+
+int
+roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ uint16_t qid, smq = UINT16_MAX;
+ uint32_t rr_quantum = 0;
+ int rc;
+
+ if (sq == NULL)
+ return NIX_ERR_PARAM;
+
+ qid = sq->qid;
+ if (qid >= nix->nb_tx_queues)
+ return NIX_ERR_QUEUE_INVALID_RANGE;
+
+ sq->roc_nix = roc_nix;
+ /*
+ * Allocate memory for flow control updates from HW.
+ * Alloc one cache line so that it fits all FC_STYPE modes.
+ */
+ sq->fc = plt_zmalloc(ROC_ALIGN, ROC_ALIGN);
+ if (sq->fc == NULL) {
+ rc = NIX_ERR_NO_MEM;
+ goto fail;
+ }
+
+ rc = sqb_pool_populate(roc_nix, sq);
+ if (rc)
+ goto nomem;
+
+ /* Init SQ context */
+ if (roc_model_is_cn9k())
+ sq_cn9k_init(nix, sq, rr_quantum, smq);
+ else
+ sq_init(nix, sq, rr_quantum, smq);
+
+ rc = mbox_process(mbox);
+ if (rc)
+ goto nomem;
+
+ nix->sqs[qid] = sq;
+ sq->io_addr = nix->base + NIX_LF_OP_SENDX(0);
+ /* Evenly distribute LMT slot for each sq */
+ if (roc_model_is_cn9k()) {
+ /* Multiple cores/SQ's can use same LMTLINE safely in CN9K */
+ sq->lmt_addr = (void *)(nix->lmt_base +
+ ((qid & RVU_CN9K_LMT_SLOT_MASK) << 12));
+ }
+
+ return rc;
+nomem:
+ plt_free(sq->fc);
+fail:
+ return rc;
+}
+
+int
+roc_nix_sq_fini(struct roc_nix_sq *sq)
+{
+ struct nix *nix;
+ struct mbox *mbox;
+ struct ndc_sync_op *ndc_req;
+ uint16_t qid;
+ int rc = 0;
+
+ if (sq == NULL)
+ return NIX_ERR_PARAM;
+
+ nix = roc_nix_to_nix_priv(sq->roc_nix);
+ mbox = (&nix->dev)->mbox;
+
+ qid = sq->qid;
+
+ /* Release SQ context */
+ if (roc_model_is_cn9k())
+ rc |= sq_cn9k_fini(roc_nix_to_nix_priv(sq->roc_nix), sq);
+ else
+ rc |= sq_fini(roc_nix_to_nix_priv(sq->roc_nix), sq);
+
+ /* Sync NDC-NIX-TX for LF */
+ ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+ if (ndc_req == NULL)
+ return -ENOSPC;
+ ndc_req->nix_lf_tx_sync = 1;
+ if (mbox_process(mbox))
+ rc |= NIX_ERR_NDC_SYNC;
+
+ rc |= roc_npa_pool_destroy(sq->aura_handle);
+ plt_free(sq->fc);
+ plt_free(sq->sqe_mem);
+ nix->sqs[qid] = NULL;
+
+ return rc;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 0f56582..a25fe15 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -40,6 +40,8 @@ INTERNAL {
roc_nix_rq_modify;
roc_nix_rx_queue_intr_disable;
roc_nix_rx_queue_intr_enable;
+ roc_nix_sq_fini;
+ roc_nix_sq_init;
roc_nix_unregister_cq_irqs;
roc_nix_unregister_queue_irqs;
roc_npa_aura_limit_modify;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 21/52] common/cnxk: add nix MAC operations support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (19 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 20/52] common/cnxk: add nix Tx " Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 22/52] common/cnxk: add nix specific npc operations Nithin Dabilpuram
` (34 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Sunil Kumar Kori <skori@marvell.com>
Add support for different MAC-related operations such as
MAC address set/get, link set/get, link status callback,
etc.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 41 ++++++
drivers/common/cnxk/roc_nix_mac.c | 298 ++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 15 ++
4 files changed, 355 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_mac.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index ebd659d..21d9ad3 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -18,6 +18,7 @@ sources = files('roc_dev.c',
'roc_model.c',
'roc_nix.c',
'roc_nix_irq.c',
+ 'roc_nix_mac.c',
'roc_nix_queue.c',
'roc_npa.c',
'roc_npa_debug.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index a6221d2..a2910b8 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -100,6 +100,23 @@ struct roc_nix_sq {
void *fc;
};
+struct roc_nix_link_info {
+ uint64_t status : 1;
+ uint64_t full_duplex : 1;
+ uint64_t lmac_type_id : 4;
+ uint64_t speed : 20;
+ uint64_t autoneg : 1;
+ uint64_t fec : 2;
+ uint64_t port : 8;
+};
+
+/* Link status update callback */
+typedef void (*link_status_t)(struct roc_nix *roc_nix,
+ struct roc_nix_link_info *link);
+
+/* PTP info update callback */
+typedef int (*ptp_info_update_t)(struct roc_nix *roc_nix, bool enable);
+
struct roc_nix {
/* Input parameters */
struct plt_pci_device *pci_dev;
@@ -152,6 +169,30 @@ void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix);
int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix);
void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix);
+/* MAC */
+int __roc_api roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start);
+int __roc_api roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix,
+ bool start);
+int __roc_api roc_nix_mac_loopback_enable(struct roc_nix *roc_nix, bool enable);
+int __roc_api roc_nix_mac_addr_set(struct roc_nix *roc_nix,
+ const uint8_t addr[]);
+int __roc_api roc_nix_mac_max_entries_get(struct roc_nix *roc_nix);
+int __roc_api roc_nix_mac_addr_add(struct roc_nix *roc_nix, uint8_t addr[]);
+int __roc_api roc_nix_mac_addr_del(struct roc_nix *roc_nix, uint32_t index);
+int __roc_api roc_nix_mac_promisc_mode_enable(struct roc_nix *roc_nix,
+ int enable);
+int __roc_api roc_nix_mac_link_state_set(struct roc_nix *roc_nix, uint8_t up);
+int __roc_api roc_nix_mac_link_info_set(struct roc_nix *roc_nix,
+ struct roc_nix_link_info *link_info);
+int __roc_api roc_nix_mac_link_info_get(struct roc_nix *roc_nix,
+ struct roc_nix_link_info *link_info);
+int __roc_api roc_nix_mac_mtu_set(struct roc_nix *roc_nix, uint16_t mtu);
+int __roc_api roc_nix_mac_max_rx_len_set(struct roc_nix *roc_nix,
+ uint16_t maxlen);
+int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix,
+ link_status_t link_update);
+void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix);
+
/* Queue */
int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
bool ena);
diff --git a/drivers/common/cnxk/roc_nix_mac.c b/drivers/common/cnxk/roc_nix_mac.c
new file mode 100644
index 0000000..bedd95b
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_mac.c
@@ -0,0 +1,298 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline struct mbox *
+nix_to_mbox(struct nix *nix)
+{
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+int
+roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ if (start)
+ mbox_alloc_msg_cgx_start_rxtx(mbox);
+ else
+ mbox_alloc_msg_cgx_stop_rxtx(mbox);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix, bool start)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ if (start)
+ mbox_alloc_msg_cgx_start_linkevents(mbox);
+ else
+ mbox_alloc_msg_cgx_stop_linkevents(mbox);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_loopback_enable(struct roc_nix *roc_nix, bool enable)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+
+ if (enable && roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ if (enable)
+ mbox_alloc_msg_cgx_intlbk_enable(mbox);
+ else
+ mbox_alloc_msg_cgx_intlbk_disable(mbox);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_addr_set(struct roc_nix *roc_nix, const uint8_t addr[])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct cgx_mac_addr_set_or_get *req;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ if (dev_active_vfs(&nix->dev))
+ return NIX_ERR_OP_NOTSUP;
+
+ req = mbox_alloc_msg_cgx_mac_addr_set(mbox);
+ mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_max_entries_get(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct cgx_max_dmac_entries_get_rsp *rsp;
+ struct mbox *mbox = nix_to_mbox(nix);
+ int rc;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->max_dmac_filters ? rsp->max_dmac_filters : 1;
+}
+
+int
+roc_nix_mac_addr_add(struct roc_nix *roc_nix, uint8_t addr[])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct cgx_mac_addr_add_req *req;
+ struct cgx_mac_addr_add_rsp *rsp;
+ int rc;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ if (dev_active_vfs(&nix->dev))
+ return NIX_ERR_OP_NOTSUP;
+
+ req = mbox_alloc_msg_cgx_mac_addr_add(mbox);
+ mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN);
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc < 0)
+ return rc;
+
+ return rsp->index;
+}
+
+int
+roc_nix_mac_addr_del(struct roc_nix *roc_nix, uint32_t index)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct cgx_mac_addr_del_req *req;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ req = mbox_alloc_msg_cgx_mac_addr_del(mbox);
+ if (req == NULL)
+ return rc;
+ req->index = index;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_promisc_mode_enable(struct roc_nix *roc_nix, int enable)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ if (enable)
+ mbox_alloc_msg_cgx_promisc_enable(mbox);
+ else
+ mbox_alloc_msg_cgx_promisc_disable(mbox);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_link_info_get(struct roc_nix *roc_nix,
+ struct roc_nix_link_info *link_info)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct cgx_link_info_msg *rsp;
+ int rc;
+
+ mbox_alloc_msg_cgx_get_linkinfo(mbox);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ link_info->status = rsp->link_info.link_up;
+ link_info->full_duplex = rsp->link_info.full_duplex;
+ link_info->lmac_type_id = rsp->link_info.lmac_type_id;
+ link_info->speed = rsp->link_info.speed;
+ link_info->autoneg = rsp->link_info.an;
+ link_info->fec = rsp->link_info.fec;
+ link_info->port = rsp->link_info.port;
+
+ return 0;
+}
+
+int
+roc_nix_mac_link_state_set(struct roc_nix *roc_nix, uint8_t up)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct cgx_set_link_state_msg *req;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_cgx_set_link_state(mbox);
+ if (req == NULL)
+ return rc;
+ req->enable = up;
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_link_info_set(struct roc_nix *roc_nix,
+ struct roc_nix_link_info *link_info)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct cgx_set_link_mode_req *req;
+ int rc;
+
+ rc = roc_nix_mac_link_state_set(roc_nix, link_info->status);
+ if (rc)
+ return rc;
+
+ req = mbox_alloc_msg_cgx_set_link_mode(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->args.speed = link_info->speed;
+ req->args.duplex = link_info->full_duplex;
+ req->args.an = link_info->autoneg;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_mtu_set(struct roc_nix *roc_nix, uint16_t mtu)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct nix_frs_cfg *req;
+ bool sdp_link = false;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_sdp(roc_nix))
+ sdp_link = true;
+
+ req = mbox_alloc_msg_nix_set_hw_frs(mbox);
+ if (req == NULL)
+ return rc;
+ req->maxlen = mtu;
+ req->update_smq = true;
+ req->sdp_link = sdp_link;
+
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Save MTU for later use */
+ nix->mtu = mtu;
+ return 0;
+}
+
+int
+roc_nix_mac_max_rx_len_set(struct roc_nix *roc_nix, uint16_t maxlen)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = nix_to_mbox(nix);
+ struct nix_frs_cfg *req;
+ bool sdp_link = false;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_sdp(roc_nix))
+ sdp_link = true;
+
+ req = mbox_alloc_msg_nix_set_hw_frs(mbox);
+ if (req == NULL)
+ return rc;
+ req->sdp_link = sdp_link;
+ req->maxlen = maxlen;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mac_link_cb_register(struct roc_nix *roc_nix, link_status_t link_update)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ if (link_update == NULL)
+ return NIX_ERR_PARAM;
+
+ dev->ops->link_status_update = (link_info_t)link_update;
+ return 0;
+}
+
+void
+roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ dev->ops->link_status_update = NULL;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index a25fe15..4d63514 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -30,6 +30,21 @@ INTERNAL {
roc_nix_is_vf_or_sdp;
roc_nix_lf_alloc;
roc_nix_lf_free;
+ roc_nix_mac_addr_add;
+ roc_nix_mac_addr_del;
+ roc_nix_mac_addr_set;
+ roc_nix_mac_link_cb_register;
+ roc_nix_mac_link_cb_unregister;
+ roc_nix_mac_link_event_start_stop;
+ roc_nix_mac_link_info_get;
+ roc_nix_mac_link_info_set;
+ roc_nix_mac_link_state_set;
+ roc_nix_mac_loopback_enable;
+ roc_nix_mac_max_entries_get;
+ roc_nix_mac_max_rx_len_set;
+ roc_nix_mac_mtu_set;
+ roc_nix_mac_promisc_mode_enable;
+ roc_nix_mac_rxtx_start_stop;
roc_nix_max_pkt_len;
roc_nix_ras_intr_ena_dis;
roc_nix_register_cq_irqs;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 22/52] common/cnxk: add nix specific npc operations
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (20 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 21/52] common/cnxk: add nix MAC operations support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 23/52] common/cnxk: add nix inline IPsec config API Nithin Dabilpuram
` (33 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Sunil Kumar Kori <skori@marvell.com>
Add NIX-specific NPC operations such as NPC MAC address get/set,
mcast entry add/delete, promiscuous mode enable/disable, etc.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 2 +
drivers/common/cnxk/roc_nix.h | 25 +++++++++
drivers/common/cnxk/roc_nix_mcast.c | 98 ++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_npc.c | 103 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 9 ++++
5 files changed, 237 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_mcast.c
create mode 100644 drivers/common/cnxk/roc_nix_npc.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 21d9ad3..bde5728 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -19,6 +19,8 @@ sources = files('roc_dev.c',
'roc_nix.c',
'roc_nix_irq.c',
'roc_nix_mac.c',
+ 'roc_nix_mcast.c',
+ 'roc_nix_npc.c',
'roc_nix_queue.c',
'roc_npa.c',
'roc_npa_debug.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index a2910b8..ccb31be 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -193,6 +193,18 @@ int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix,
link_status_t link_update);
void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix);
+/* NPC */
+int __roc_api roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable);
+
+int __roc_api roc_nix_npc_mac_addr_set(struct roc_nix *roc_nix, uint8_t addr[]);
+
+int __roc_api roc_nix_npc_mac_addr_get(struct roc_nix *roc_nix, uint8_t *addr);
+
+int __roc_api roc_nix_npc_rx_ena_dis(struct roc_nix *roc_nix, bool enable);
+
+int __roc_api roc_nix_npc_mcast_config(struct roc_nix *roc_nix,
+ bool mcast_enable, bool prom_enable);
+
/* Queue */
int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
bool ena);
@@ -205,4 +217,17 @@ int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq);
int __roc_api roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq);
int __roc_api roc_nix_sq_fini(struct roc_nix_sq *sq);
+/* MCAST */
+int __roc_api roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix,
+ uint16_t nb_entries,
+ uint8_t priority,
+ uint16_t index[]);
+int __roc_api roc_nix_mcast_mcam_entry_free(struct roc_nix *roc_nix,
+ uint32_t index);
+int __roc_api roc_nix_mcast_mcam_entry_write(struct roc_nix *roc_nix,
+ struct mcam_entry *entry,
+ uint32_t index, uint8_t intf,
+ uint64_t action);
+int __roc_api roc_nix_mcast_mcam_entry_ena_dis(struct roc_nix *roc_nix,
+ uint32_t index, bool enable);
#endif /* _ROC_NIX_H_ */
diff --git a/drivers/common/cnxk/roc_nix_mcast.c b/drivers/common/cnxk/roc_nix_mcast.c
new file mode 100644
index 0000000..51576cf
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_mcast.c
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline struct mbox *
+get_mbox(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+int
+roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix, uint16_t nb_entries,
+ uint8_t priority, uint16_t index[])
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ int rc = -ENOSPC, i;
+
+ req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->priority = priority;
+ req->count = nb_entries;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ for (i = 0; i < rsp->count; i++)
+ index[i] = rsp->entry_list[i];
+
+ return rsp->count;
+}
+
+int
+roc_nix_mcast_mcam_entry_free(struct roc_nix *roc_nix, uint32_t index)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_mcam_free_entry_req *req;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = index;
+
+ return mbox_process_msg(mbox, NULL);
+}
+
+int
+roc_nix_mcast_mcam_entry_write(struct roc_nix *roc_nix,
+ struct mcam_entry *entry, uint32_t index,
+ uint8_t intf, uint64_t action)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_mcam_write_entry_req *req;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = index;
+ req->intf = intf;
+ req->enable_entry = true;
+ mbox_memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+ req->entry_data.action = action;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_mcast_mcam_entry_ena_dis(struct roc_nix *roc_nix, uint32_t index,
+ bool enable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ struct mbox *mbox = get_mbox(roc_nix);
+ int rc = -ENOSPC;
+
+ if (enable) {
+ req = mbox_alloc_msg_npc_mcam_ena_entry(mbox);
+ if (req == NULL)
+ return rc;
+ } else {
+ req = mbox_alloc_msg_npc_mcam_dis_entry(mbox);
+ if (req == NULL)
+ return rc;
+ }
+
+ req->entry = index;
+ return mbox_process(mbox);
+}
diff --git a/drivers/common/cnxk/roc_nix_npc.c b/drivers/common/cnxk/roc_nix_npc.c
new file mode 100644
index 0000000..6efb19d
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_npc.c
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline struct mbox *
+get_mbox(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+int
+roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_rx_mode *req;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return NIX_ERR_PARAM;
+
+ req = mbox_alloc_msg_nix_set_rx_mode(mbox);
+ if (req == NULL)
+ return rc;
+
+ if (enable)
+ req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_npc_mac_addr_set(struct roc_nix *roc_nix, uint8_t addr[])
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_set_mac_addr *req;
+
+ req = mbox_alloc_msg_nix_set_mac_addr(mbox);
+ mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN);
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_npc_mac_addr_get(struct roc_nix *roc_nix, uint8_t *addr)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_get_mac_addr_rsp *rsp;
+ int rc;
+
+ mbox_alloc_msg_nix_get_mac_addr(mbox);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ mbox_memcpy(addr, rsp->mac_addr, PLT_ETHER_ADDR_LEN);
+ return 0;
+}
+
+int
+roc_nix_npc_rx_ena_dis(struct roc_nix *roc_nix, bool enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ int rc;
+
+ if (enable)
+ mbox_alloc_msg_nix_lf_start_rx(mbox);
+ else
+ mbox_alloc_msg_nix_lf_stop_rx(mbox);
+
+ rc = mbox_process(mbox);
+ if (!rc)
+ roc_nix->io_enabled = enable;
+ return rc;
+}
+
+int
+roc_nix_npc_mcast_config(struct roc_nix *roc_nix, bool mcast_enable,
+ bool prom_enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_rx_mode *req;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return 0;
+
+ req = mbox_alloc_msg_nix_set_rx_mode(mbox);
+ if (req == NULL)
+ return rc;
+
+ if (mcast_enable)
+ req->mode = NIX_RX_MODE_ALLMULTI;
+ else if (prom_enable)
+ req->mode = NIX_RX_MODE_PROMISC;
+
+ return mbox_process(mbox);
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 4d63514..0649b01 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -46,6 +46,15 @@ INTERNAL {
roc_nix_mac_promisc_mode_enable;
roc_nix_mac_rxtx_start_stop;
roc_nix_max_pkt_len;
+ roc_nix_mcast_mcam_entry_alloc;
+ roc_nix_mcast_mcam_entry_ena_dis;
+ roc_nix_mcast_mcam_entry_free;
+ roc_nix_mcast_mcam_entry_write;
+ roc_nix_npc_mac_addr_get;
+ roc_nix_npc_mac_addr_set;
+ roc_nix_npc_mcast_config;
+ roc_nix_npc_promisc_ena_dis;
+ roc_nix_npc_rx_ena_dis;
roc_nix_ras_intr_ena_dis;
roc_nix_register_cq_irqs;
roc_nix_register_queue_irqs;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 23/52] common/cnxk: add nix inline IPsec config API
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (21 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 22/52] common/cnxk: add nix specific npc operations Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 24/52] common/cnxk: add nix RSS support Nithin Dabilpuram
` (32 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Vidya Sagar Velumuri
From: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Add an API to configure the NIX block for inline IPsec.
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
drivers/common/cnxk/roc_nix.c | 28 ++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix.h | 10 ++++++++++
drivers/common/cnxk/version.map | 1 +
3 files changed, 39 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index 41b4572..7662e59 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -81,6 +81,34 @@ roc_nix_get_pf_func(struct roc_nix *roc_nix)
}
int
+roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix, struct roc_nix_ipsec_cfg *cfg,
+ bool enb)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_inline_ipsec_lf_cfg *lf_cfg;
+ struct mbox *mbox = (&nix->dev)->mbox;
+
+ lf_cfg = mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
+ if (lf_cfg == NULL)
+ return -ENOSPC;
+
+ if (enb) {
+ lf_cfg->enable = 1;
+ lf_cfg->sa_base_addr = cfg->iova;
+ lf_cfg->ipsec_cfg1.sa_idx_w = plt_log2_u32(cfg->max_sa);
+ lf_cfg->ipsec_cfg0.lenm1_max = roc_nix_max_pkt_len(roc_nix) - 1;
+ lf_cfg->ipsec_cfg1.sa_idx_max = cfg->max_sa - 1;
+ lf_cfg->ipsec_cfg0.sa_pow2_size = plt_log2_u32(cfg->sa_size);
+ lf_cfg->ipsec_cfg0.tag_const = cfg->tag_const;
+ lf_cfg->ipsec_cfg0.tt = cfg->tt;
+ } else {
+ lf_cfg->enable = 0;
+ }
+
+ return mbox_process(mbox);
+}
+
+int
roc_nix_max_pkt_len(struct roc_nix *roc_nix)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index ccb31be..34057b9 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -110,6 +110,14 @@ struct roc_nix_link_info {
uint64_t port : 8;
};
+struct roc_nix_ipsec_cfg {
+ uint32_t sa_size;
+ uint32_t tag_const;
+ plt_iova_t iova;
+ uint16_t max_sa;
+ uint8_t tt;
+};
+
/* Link status update callback */
typedef void (*link_status_t)(struct roc_nix *roc_nix,
struct roc_nix_link_info *link);
@@ -156,6 +164,8 @@ int __roc_api roc_nix_max_pkt_len(struct roc_nix *roc_nix);
int __roc_api roc_nix_lf_alloc(struct roc_nix *roc_nix, uint32_t nb_rxq,
uint32_t nb_txq, uint64_t rx_cfg);
int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix);
+int __roc_api roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix,
+ struct roc_nix_ipsec_cfg *cfg, bool enb);
/* IRQ */
void __roc_api roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix,
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 0649b01..53ed0e1 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -29,6 +29,7 @@ INTERNAL {
roc_nix_is_sdp;
roc_nix_is_vf_or_sdp;
roc_nix_lf_alloc;
+ roc_nix_lf_inl_ipsec_cfg;
roc_nix_lf_free;
roc_nix_mac_addr_add;
roc_nix_mac_addr_del;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 24/52] common/cnxk: add nix RSS support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (22 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 23/52] common/cnxk: add nix inline IPsec config API Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 25/52] common/cnxk: add nix ptp support Nithin Dabilpuram
` (31 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add APIs for default/non-default RETA table setup,
key set/get, and flow algorithm setup for CN9K and CN10K.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 17 +++
| 219 ++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 7 ++
4 files changed, 244 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_rss.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index bde5728..36c3ada 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -22,6 +22,7 @@ sources = files('roc_dev.c',
'roc_nix_mcast.c',
'roc_nix_npc.c',
'roc_nix_queue.c',
+ 'roc_nix_rss.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 34057b9..0409040 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -215,6 +215,23 @@ int __roc_api roc_nix_npc_rx_ena_dis(struct roc_nix *roc_nix, bool enable);
int __roc_api roc_nix_npc_mcast_config(struct roc_nix *roc_nix,
bool mcast_enable, bool prom_enable);
+/* RSS */
+void __roc_api roc_nix_rss_key_default_fill(struct roc_nix *roc_nix,
+ uint8_t key[ROC_NIX_RSS_KEY_LEN]);
+void __roc_api roc_nix_rss_key_set(struct roc_nix *roc_nix,
+ uint8_t key[ROC_NIX_RSS_KEY_LEN]);
+void __roc_api roc_nix_rss_key_get(struct roc_nix *roc_nix,
+ uint8_t key[ROC_NIX_RSS_KEY_LEN]);
+int __roc_api roc_nix_rss_reta_set(struct roc_nix *roc_nix, uint8_t group,
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX]);
+int __roc_api roc_nix_rss_reta_get(struct roc_nix *roc_nix, uint8_t group,
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX]);
+int __roc_api roc_nix_rss_flowkey_set(struct roc_nix *roc_nix, uint8_t *alg_idx,
+ uint32_t flowkey, uint8_t group,
+ int mcam_index);
+int __roc_api roc_nix_rss_default_setup(struct roc_nix *roc_nix,
+ uint32_t flowkey);
+
/* Queue */
int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
bool ena);
diff --git a/drivers/common/cnxk/roc_nix_rss.c b/drivers/common/cnxk/roc_nix_rss.c
new file mode 100644
index 0000000..add411e
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_rss.c
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+void
+roc_nix_rss_key_default_fill(struct roc_nix *roc_nix,
+ uint8_t key[ROC_NIX_RSS_KEY_LEN])
+{
+ PLT_SET_USED(roc_nix);
+ const uint8_t default_key[ROC_NIX_RSS_KEY_LEN] = {
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED,
+ 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED,
+ 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
+ 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD};
+
+ memcpy(key, default_key, ROC_NIX_RSS_KEY_LEN);
+}
+
+void
+roc_nix_rss_key_set(struct roc_nix *roc_nix, uint8_t key[ROC_NIX_RSS_KEY_LEN])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint64_t *keyptr;
+ uint64_t val;
+ uint32_t idx;
+
+ keyptr = (uint64_t *)key;
+ for (idx = 0; idx < (ROC_NIX_RSS_KEY_LEN >> 3); idx++) {
+ val = plt_cpu_to_be_64(keyptr[idx]);
+ plt_write64(val, nix->base + NIX_LF_RX_SECRETX(idx));
+ }
+}
+
+void
+roc_nix_rss_key_get(struct roc_nix *roc_nix, uint8_t key[ROC_NIX_RSS_KEY_LEN])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint64_t *keyptr = (uint64_t *)key;
+ uint64_t val;
+ uint32_t idx;
+
+ for (idx = 0; idx < (ROC_NIX_RSS_KEY_LEN >> 3); idx++) {
+ val = plt_read64(nix->base + NIX_LF_RX_SECRETX(idx));
+ keyptr[idx] = plt_be_to_cpu_64(val);
+ }
+}
+
+static int
+nix_cn9k_rss_reta_set(struct nix *nix, uint8_t group,
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX])
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_aq_enq_req *req;
+ uint16_t idx;
+ int rc;
+
+ for (idx = 0; idx < nix->reta_sz; idx++) {
+ req = mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+ req = mbox_alloc_msg_nix_aq_enq(mbox);
+ if (!req)
+ return NIX_ERR_NO_MEM;
+ }
+ req->rss.rq = reta[idx];
+ /* Fill AQ info */
+ req->qidx = (group * nix->reta_sz) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_INIT;
+ }
+
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+static int
+nix_rss_reta_set(struct nix *nix, uint8_t group,
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX])
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_cn10k_aq_enq_req *req;
+ uint16_t idx;
+ int rc;
+
+ for (idx = 0; idx < nix->reta_sz; idx++) {
+ req = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ if (!req) {
+ /* The shared memory buffer can be full.
+ * Flush it and retry
+ */
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+ req = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ if (!req)
+ return NIX_ERR_NO_MEM;
+ }
+ req->rss.rq = reta[idx];
+ /* Fill AQ info */
+ req->qidx = (group * nix->reta_sz) + idx;
+ req->ctype = NIX_AQ_CTYPE_RSS;
+ req->op = NIX_AQ_INSTOP_INIT;
+ }
+
+ rc = mbox_process(mbox);
+ if (rc < 0)
+ return rc;
+
+ return 0;
+}
+
+int
+roc_nix_rss_reta_set(struct roc_nix *roc_nix, uint8_t group,
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ int rc;
+
+ if (group >= ROC_NIX_RSS_GRPS)
+ return NIX_ERR_PARAM;
+
+ if (roc_model_is_cn9k())
+ rc = nix_cn9k_rss_reta_set(nix, group, reta);
+ else
+ rc = nix_rss_reta_set(nix, group, reta);
+ if (rc)
+ return rc;
+
+ memcpy(&nix->reta[group], reta, ROC_NIX_RSS_RETA_MAX * sizeof(uint16_t));
+ return 0;
+}
+
+int
+roc_nix_rss_reta_get(struct roc_nix *roc_nix, uint8_t group,
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ if (group >= ROC_NIX_RSS_GRPS)
+ return NIX_ERR_PARAM;
+
+ memcpy(reta, &nix->reta[group], ROC_NIX_RSS_RETA_MAX * sizeof(uint16_t));
+ return 0;
+}
+
+int
+roc_nix_rss_flowkey_set(struct roc_nix *roc_nix, uint8_t *alg_idx,
+ uint32_t flowkey, uint8_t group, int mcam_index)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_rss_flowkey_cfg_rsp *rss_rsp;
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_rss_flowkey_cfg *cfg;
+ int rc = -ENOSPC;
+
+ if (group >= ROC_NIX_RSS_GRPS)
+ return NIX_ERR_PARAM;
+
+ cfg = mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
+ if (cfg == NULL)
+ return rc;
+ cfg->flowkey_cfg = flowkey;
+ cfg->mcam_index = mcam_index; /* -1 indicates default group */
+ cfg->group = group; /* 0 is default group */
+ rc = mbox_process_msg(mbox, (void *)&rss_rsp);
+ if (rc)
+ return rc;
+ if (alg_idx)
+ *alg_idx = rss_rsp->alg_idx;
+
+ return rc;
+}
+
+int
+roc_nix_rss_default_setup(struct roc_nix *roc_nix, uint32_t flowkey)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint16_t idx, qcnt = nix->nb_rx_queues;
+ uint16_t reta[ROC_NIX_RSS_RETA_MAX];
+ uint8_t key[ROC_NIX_RSS_KEY_LEN];
+ uint8_t alg_idx;
+ int rc;
+
+ roc_nix_rss_key_default_fill(roc_nix, key);
+ roc_nix_rss_key_set(roc_nix, key);
+
+ /* Update default RSS RETA */
+ for (idx = 0; idx < nix->reta_sz; idx++)
+ reta[idx] = idx % qcnt;
+ rc = roc_nix_rss_reta_set(roc_nix, 0, reta);
+ if (rc) {
+ plt_err("Failed to set RSS reta table rc=%d", rc);
+ goto fail;
+ }
+
+ /* Update the default flowkey */
+ rc = roc_nix_rss_flowkey_set(roc_nix, &alg_idx, flowkey,
+ ROC_NIX_RSS_GROUP_DEFAULT, -1);
+ if (rc) {
+ plt_err("Failed to set RSS flowkey rc=%d", rc);
+ goto fail;
+ }
+
+ nix->rss_alg_idx = alg_idx;
+fail:
+ return rc;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 53ed0e1..f8f6b89 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -63,6 +63,13 @@ INTERNAL {
roc_nix_rq_fini;
roc_nix_rq_init;
roc_nix_rq_modify;
+ roc_nix_rss_default_setup;
+ roc_nix_rss_flowkey_set;
+ roc_nix_rss_key_default_fill;
+ roc_nix_rss_key_get;
+ roc_nix_rss_key_set;
+ roc_nix_rss_reta_get;
+ roc_nix_rss_reta_set;
roc_nix_rx_queue_intr_disable;
roc_nix_rx_queue_intr_enable;
roc_nix_sq_fini;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 25/52] common/cnxk: add nix ptp support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (23 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 24/52] common/cnxk: add nix RSS support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 26/52] common/cnxk: add nix stats support Nithin Dabilpuram
` (30 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Sunil Kumar Kori <skori@marvell.com>
Add support to enable/disable Rx and Tx PTP timestamping.
Also provide APIs to register PTP info callbacks to get
configuration change updates from the kernel.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 11 ++++
drivers/common/cnxk/roc_nix_ptp.c | 122 ++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 6 ++
4 files changed, 140 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_ptp.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 36c3ada..65dadd9 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -21,6 +21,7 @@ sources = files('roc_dev.c',
'roc_nix_mac.c',
'roc_nix_mcast.c',
'roc_nix_npc.c',
+ 'roc_nix_ptp.c',
'roc_nix_queue.c',
'roc_nix_rss.c',
'roc_npa.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 0409040..90a5ac9 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -244,6 +244,17 @@ int __roc_api roc_nix_cq_fini(struct roc_nix_cq *cq);
int __roc_api roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq);
int __roc_api roc_nix_sq_fini(struct roc_nix_sq *sq);
+/* PTP */
+int __roc_api roc_nix_ptp_rx_ena_dis(struct roc_nix *roc_nix, int enable);
+int __roc_api roc_nix_ptp_tx_ena_dis(struct roc_nix *roc_nix, int enable);
+int __roc_api roc_nix_ptp_clock_read(struct roc_nix *roc_nix, uint64_t *clock,
+ uint64_t *tsc, uint8_t is_pmu);
+int __roc_api roc_nix_ptp_sync_time_adjust(struct roc_nix *roc_nix,
+ int64_t delta);
+int __roc_api roc_nix_ptp_info_cb_register(struct roc_nix *roc_nix,
+ ptp_info_update_t ptp_update);
+void __roc_api roc_nix_ptp_info_cb_unregister(struct roc_nix *roc_nix);
+
/* MCAST*/
int __roc_api roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix,
uint16_t nb_entries,
diff --git a/drivers/common/cnxk/roc_nix_ptp.c b/drivers/common/cnxk/roc_nix_ptp.c
new file mode 100644
index 0000000..05c40ca
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_ptp.c
@@ -0,0 +1,122 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define PTP_FREQ_ADJUST (1 << 9)
+
+static inline struct mbox *
+get_mbox(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+int
+roc_nix_ptp_rx_ena_dis(struct roc_nix *roc_nix, int enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+
+ if (roc_nix_is_vf_or_sdp(roc_nix) || roc_nix_is_lbk(roc_nix))
+ return NIX_ERR_PARAM;
+
+ if (enable)
+ mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
+ else
+ mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_ptp_tx_ena_dis(struct roc_nix *roc_nix, int enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+
+ if (roc_nix_is_vf_or_sdp(roc_nix) || roc_nix_is_lbk(roc_nix))
+ return NIX_ERR_PARAM;
+
+ if (enable)
+ mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
+ else
+ mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_ptp_clock_read(struct roc_nix *roc_nix, uint64_t *clock, uint64_t *tsc,
+ uint8_t is_pmu)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_ptp_op(mbox);
+ if (req == NULL)
+ return rc;
+ req->op = PTP_OP_GET_CLOCK;
+ req->is_pmu = is_pmu;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (clock)
+ *clock = rsp->clk;
+
+ if (tsc)
+ *tsc = rsp->tsc;
+
+ return 0;
+}
+
+int
+roc_nix_ptp_sync_time_adjust(struct roc_nix *roc_nix, int64_t delta)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct ptp_req *req;
+ struct ptp_rsp *rsp;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix) || roc_nix_is_lbk(roc_nix))
+ return NIX_ERR_PARAM;
+
+ if ((delta <= -PTP_FREQ_ADJUST) || (delta >= PTP_FREQ_ADJUST))
+ return NIX_ERR_INVALID_RANGE;
+
+ req = mbox_alloc_msg_ptp_op(mbox);
+ if (req == NULL)
+ return rc;
+ req->op = PTP_OP_ADJFINE;
+ req->scaled_ppm = delta;
+
+ return mbox_process_msg(mbox, (void *)&rsp);
+}
+
+int
+roc_nix_ptp_info_cb_register(struct roc_nix *roc_nix,
+ ptp_info_update_t ptp_update)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ if (ptp_update == NULL)
+ return NIX_ERR_PARAM;
+
+ dev->ops->ptp_info_update = (ptp_info_t)ptp_update;
+ return 0;
+}
+
+void
+roc_nix_ptp_info_cb_unregister(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ dev->ops->ptp_info_update = NULL;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index f8f6b89..bd3b834 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -56,6 +56,12 @@ INTERNAL {
roc_nix_npc_promisc_ena_dis;
roc_nix_npc_rx_ena_dis;
roc_nix_npc_mcast_config;
+ roc_nix_ptp_clock_read;
+ roc_nix_ptp_info_cb_register;
+ roc_nix_ptp_info_cb_unregister;
+ roc_nix_ptp_rx_ena_dis;
+ roc_nix_ptp_sync_time_adjust;
+ roc_nix_ptp_tx_ena_dis;
roc_nix_ras_intr_ena_dis;
roc_nix_register_cq_irqs;
roc_nix_register_queue_irqs;
--
2.8.4
* [dpdk-dev] [PATCH 26/52] common/cnxk: add nix stats support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (24 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 25/52] common/cnxk: add nix ptp support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 27/52] common/cnxk: add support for nix extended stats Nithin Dabilpuram
` (29 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add APIs to get and reset Rx and Tx stats for a given NIX LF,
both aggregate and per queue.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 53 ++++++++
drivers/common/cnxk/roc_nix_stats.c | 239 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 4 +
4 files changed, 297 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_stats.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 65dadd9..199c6d4 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -24,6 +24,7 @@ sources = files('roc_dev.c',
'roc_nix_ptp.c',
'roc_nix_queue.c',
'roc_nix_rss.c',
+ 'roc_nix_stats.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 90a5ac9..ee9e78b 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -42,6 +42,49 @@ enum roc_nix_sq_max_sqe_sz {
#define ROC_NIX_VWQE_MAX_SIZE_LOG2 11
#define ROC_NIX_VWQE_MIN_SIZE_LOG2 2
+struct roc_nix_stats {
+ /* Rx */
+ uint64_t rx_octs;
+ uint64_t rx_ucast;
+ uint64_t rx_bcast;
+ uint64_t rx_mcast;
+ uint64_t rx_drop;
+ uint64_t rx_drop_octs;
+ uint64_t rx_fcs;
+ uint64_t rx_err;
+ uint64_t rx_drop_bcast;
+ uint64_t rx_drop_mcast;
+ uint64_t rx_drop_l3_bcast;
+ uint64_t rx_drop_l3_mcast;
+ /* Tx */
+ uint64_t tx_ucast;
+ uint64_t tx_bcast;
+ uint64_t tx_mcast;
+ uint64_t tx_drop;
+ uint64_t tx_octs;
+};
+
+struct roc_nix_stats_queue {
+ PLT_STD_C11
+ union {
+ struct {
+ /* Rx */
+ uint64_t rx_pkts;
+ uint64_t rx_octs;
+ uint64_t rx_drop_pkts;
+ uint64_t rx_drop_octs;
+ uint64_t rx_error_pkts;
+ };
+ struct {
+ /* Tx */
+ uint64_t tx_pkts;
+ uint64_t tx_octs;
+ uint64_t tx_drop_pkts;
+ uint64_t tx_drop_octs;
+ };
+ };
+};
+
struct roc_nix_rq {
/* Input parameters */
uint16_t qid;
@@ -232,6 +275,16 @@ int __roc_api roc_nix_rss_flowkey_set(struct roc_nix *roc_nix, uint8_t *alg_idx,
int __roc_api roc_nix_rss_default_setup(struct roc_nix *roc_nix,
uint32_t flowkey);
+/* Stats */
+int __roc_api roc_nix_stats_get(struct roc_nix *roc_nix,
+ struct roc_nix_stats *stats);
+int __roc_api roc_nix_stats_reset(struct roc_nix *roc_nix);
+int __roc_api roc_nix_stats_queue_get(struct roc_nix *roc_nix, uint16_t qid,
+ bool is_rx,
+ struct roc_nix_stats_queue *qstats);
+int __roc_api roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid,
+ bool is_rx);
+
/* Queue */
int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
bool ena);
diff --git a/drivers/common/cnxk/roc_nix_stats.c b/drivers/common/cnxk/roc_nix_stats.c
new file mode 100644
index 0000000..dce496c
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_stats.c
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include <inttypes.h>
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define NIX_RX_STATS(val) plt_read64(nix->base + NIX_LF_RX_STATX(val))
+#define NIX_TX_STATS(val) plt_read64(nix->base + NIX_LF_TX_STATX(val))
+
+int
+roc_nix_stats_get(struct roc_nix *roc_nix, struct roc_nix_stats *stats)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ if (stats == NULL)
+ return NIX_ERR_PARAM;
+
+ stats->rx_octs = NIX_RX_STATS(NIX_STAT_LF_RX_RX_OCTS);
+ stats->rx_ucast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_UCAST);
+ stats->rx_bcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_BCAST);
+ stats->rx_mcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_MCAST);
+ stats->rx_drop = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DROP);
+ stats->rx_drop_octs = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DROP_OCTS);
+ stats->rx_fcs = NIX_RX_STATS(NIX_STAT_LF_RX_RX_FCS);
+ stats->rx_err = NIX_RX_STATS(NIX_STAT_LF_RX_RX_ERR);
+ stats->rx_drop_bcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_BCAST);
+ stats->rx_drop_mcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_MCAST);
+ stats->rx_drop_l3_bcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_L3BCAST);
+ stats->rx_drop_l3_mcast = NIX_RX_STATS(NIX_STAT_LF_RX_RX_DRP_L3MCAST);
+
+ stats->tx_ucast = NIX_TX_STATS(NIX_STAT_LF_TX_TX_UCAST);
+ stats->tx_bcast = NIX_TX_STATS(NIX_STAT_LF_TX_TX_BCAST);
+ stats->tx_mcast = NIX_TX_STATS(NIX_STAT_LF_TX_TX_MCAST);
+ stats->tx_drop = NIX_TX_STATS(NIX_STAT_LF_TX_TX_DROP);
+ stats->tx_octs = NIX_TX_STATS(NIX_STAT_LF_TX_TX_OCTS);
+ return 0;
+}
+
+int
+roc_nix_stats_reset(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+
+ if (mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
+ return -ENOMEM;
+
+ return mbox_process(mbox);
+}
+
+static int
+queue_is_valid(struct nix *nix, uint16_t qid, bool is_rx)
+{
+ uint16_t nb_queues;
+
+ if (is_rx)
+ nb_queues = nix->nb_rx_queues;
+ else
+ nb_queues = nix->nb_tx_queues;
+
+ if (qid >= nb_queues)
+ return NIX_ERR_QUEUE_INVALID_RANGE;
+
+ return 0;
+}
+
+static uint64_t
+qstat_read(struct nix *nix, uint16_t qid, uint32_t off)
+{
+ uint64_t reg, val;
+ int64_t *addr;
+
+ addr = (int64_t *)(nix->base + off);
+ reg = (((uint64_t)qid) << 32);
+ val = roc_atomic64_add_nosync(reg, addr);
+ if (val & BIT_ULL(NIX_CQ_OP_STAT_OP_ERR))
+ val = 0;
+ return val;
+}
+
+static void
+nix_stat_rx_queue_get(struct nix *nix, uint16_t qid,
+ struct roc_nix_stats_queue *qstats)
+{
+ qstats->rx_pkts = qstat_read(nix, qid, NIX_LF_RQ_OP_PKTS);
+ qstats->rx_octs = qstat_read(nix, qid, NIX_LF_RQ_OP_OCTS);
+ qstats->rx_drop_pkts = qstat_read(nix, qid, NIX_LF_RQ_OP_DROP_PKTS);
+ qstats->rx_drop_octs = qstat_read(nix, qid, NIX_LF_RQ_OP_DROP_OCTS);
+ qstats->rx_error_pkts = qstat_read(nix, qid, NIX_LF_RQ_OP_RE_PKTS);
+}
+
+static void
+nix_stat_tx_queue_get(struct nix *nix, uint16_t qid,
+ struct roc_nix_stats_queue *qstats)
+{
+ qstats->tx_pkts = qstat_read(nix, qid, NIX_LF_SQ_OP_PKTS);
+ qstats->tx_octs = qstat_read(nix, qid, NIX_LF_SQ_OP_OCTS);
+ qstats->tx_drop_pkts = qstat_read(nix, qid, NIX_LF_SQ_OP_DROP_PKTS);
+ qstats->tx_drop_octs = qstat_read(nix, qid, NIX_LF_SQ_OP_DROP_OCTS);
+}
+
+static int
+nix_stat_rx_queue_reset(struct nix *nix, uint16_t qid)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ int rc;
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.octs = 0;
+ aq->rq.pkts = 0;
+ aq->rq.drop_octs = 0;
+ aq->rq.drop_pkts = 0;
+ aq->rq.re_pkts = 0;
+
+ aq->rq_mask.octs = ~(aq->rq_mask.octs);
+ aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
+ aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
+ aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
+ aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_RQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ aq->rq.octs = 0;
+ aq->rq.pkts = 0;
+ aq->rq.drop_octs = 0;
+ aq->rq.drop_pkts = 0;
+ aq->rq.re_pkts = 0;
+
+ aq->rq_mask.octs = ~(aq->rq_mask.octs);
+ aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
+ aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
+ aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
+ aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
+ }
+
+ rc = mbox_process(mbox);
+ return rc ? NIX_ERR_AQ_WRITE_FAILED : 0;
+}
+
+static int
+nix_stat_tx_queue_reset(struct nix *nix, uint16_t qid)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ int rc;
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ aq->sq.octs = 0;
+ aq->sq.pkts = 0;
+ aq->sq.drop_octs = 0;
+ aq->sq.drop_pkts = 0;
+
+ aq->sq_mask.octs = ~(aq->sq_mask.octs);
+ aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
+ aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
+ aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+ aq->sq.octs = 0;
+ aq->sq.pkts = 0;
+ aq->sq.drop_octs = 0;
+ aq->sq.drop_pkts = 0;
+
+ aq->sq_mask.octs = ~(aq->sq_mask.octs);
+ aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
+ aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
+ aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
+ }
+
+ rc = mbox_process(mbox);
+ return rc ? NIX_ERR_AQ_WRITE_FAILED : 0;
+}
+
+int
+roc_nix_stats_queue_get(struct roc_nix *roc_nix, uint16_t qid, bool is_rx,
+ struct roc_nix_stats_queue *qstats)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ int rc;
+
+ if (qstats == NULL)
+ return NIX_ERR_PARAM;
+
+ rc = queue_is_valid(nix, qid, is_rx);
+ if (rc)
+ goto fail;
+
+ if (is_rx)
+ nix_stat_rx_queue_get(nix, qid, qstats);
+ else
+ nix_stat_tx_queue_get(nix, qid, qstats);
+
+fail:
+ return rc;
+}
+
+int
+roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid, bool is_rx)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ int rc;
+
+ rc = queue_is_valid(nix, qid, is_rx);
+ if (rc)
+ goto fail;
+
+ if (is_rx)
+ rc = nix_stat_rx_queue_reset(nix, qid);
+ else
+ rc = nix_stat_tx_queue_reset(nix, qid);
+
+fail:
+ return rc;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index bd3b834..8c6df6d 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -80,6 +80,10 @@ INTERNAL {
roc_nix_rx_queue_intr_enable;
roc_nix_sq_fini;
roc_nix_sq_init;
+ roc_nix_stats_get;
+ roc_nix_stats_queue_get;
+ roc_nix_stats_queue_reset;
+ roc_nix_stats_reset;
roc_nix_unregister_cq_irqs;
roc_nix_unregister_queue_irqs;
roc_npa_aura_limit_modify;
--
2.8.4
* [dpdk-dev] [PATCH 27/52] common/cnxk: add support for nix extended stats
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (25 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 26/52] common/cnxk: add nix stats support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-26 14:17 ` Jerin Jacob
2021-03-05 13:38 ` [dpdk-dev] [PATCH 28/52] common/cnxk: add nix debug dump support Nithin Dabilpuram
` (28 subsequent siblings)
55 siblings, 1 reply; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Satha Rao <skoteshwar@marvell.com>
Add support for retrieving NIX extended stats that are
per NIX LF and per LMAC.
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 18 ++++
drivers/common/cnxk/roc_nix_stats.c | 172 +++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_xstats.h | 204 +++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 3 +
4 files changed, 397 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_xstats.h
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index ee9e78b..37f84fc 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -153,6 +153,18 @@ struct roc_nix_link_info {
uint64_t port : 8;
};
+/** Maximum name length for extended statistics counters */
+#define ROC_NIX_XSTATS_NAME_SIZE 64
+
+struct roc_nix_xstat {
+ uint64_t id; /**< The index in xstats name array. */
+ uint64_t value; /**< The statistic counter value. */
+};
+
+struct roc_nix_xstat_name {
+ char name[ROC_NIX_XSTATS_NAME_SIZE];
+};
+
struct roc_nix_ipsec_cfg {
uint32_t sa_size;
uint32_t tag_const;
@@ -284,6 +296,12 @@ int __roc_api roc_nix_stats_queue_get(struct roc_nix *roc_nix, uint16_t qid,
struct roc_nix_stats_queue *qstats);
int __roc_api roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid,
bool is_rx);
+int __roc_api roc_nix_num_xstats_get(struct roc_nix *roc_nix);
+int __roc_api roc_nix_xstats_get(struct roc_nix *roc_nix,
+ struct roc_nix_xstat *xstats, unsigned int n);
+int __roc_api roc_nix_xstats_names_get(struct roc_nix *roc_nix,
+ struct roc_nix_xstat_name *xstats_names,
+ unsigned int limit);
/* Queue */
int __roc_api roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq,
diff --git a/drivers/common/cnxk/roc_nix_stats.c b/drivers/common/cnxk/roc_nix_stats.c
index dce496c..1f11799 100644
--- a/drivers/common/cnxk/roc_nix_stats.c
+++ b/drivers/common/cnxk/roc_nix_stats.c
@@ -5,12 +5,24 @@
#include <inttypes.h>
#include "roc_api.h"
+#include "roc_nix_xstats.h"
#include "roc_priv.h"
#define NIX_RX_STATS(val) plt_read64(nix->base + NIX_LF_RX_STATX(val))
#define NIX_TX_STATS(val) plt_read64(nix->base + NIX_LF_TX_STATX(val))
int
+roc_nix_num_xstats_get(struct roc_nix *roc_nix)
+{
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return CNXK_NIX_NUM_XSTATS_REG;
+ else if (roc_model_is_cn9k())
+ return CNXK_NIX_NUM_XSTATS_CGX;
+
+ return CNXK_NIX_NUM_XSTATS_RPM;
+}
+
+int
roc_nix_stats_get(struct roc_nix *roc_nix, struct roc_nix_stats *stats)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
@@ -237,3 +249,163 @@ roc_nix_stats_queue_reset(struct roc_nix *roc_nix, uint16_t qid, bool is_rx)
fail:
return rc;
}
+
+int
+roc_nix_xstats_get(struct roc_nix *roc_nix, struct roc_nix_xstat *xstats,
+ unsigned int n)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct cgx_stats_rsp *cgx_resp;
+ struct rpm_stats_rsp *rpm_resp;
+ unsigned long int i, count = 0;
+ unsigned int xstat_cnt;
+ struct msg_req *req;
+ int rc;
+
+ xstat_cnt = roc_nix_num_xstats_get(roc_nix);
+ if (n < xstat_cnt)
+ return xstat_cnt;
+
+ if (xstats == NULL)
+ return -EINVAL;
+
+ memset(xstats, 0, (xstat_cnt * sizeof(*xstats)));
+ for (i = 0; i < CNXK_NIX_NUM_TX_XSTATS; i++) {
+ xstats[count].value = NIX_TX_STATS(nix_tx_xstats[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < CNXK_NIX_NUM_RX_XSTATS; i++) {
+ xstats[count].value = NIX_RX_STATS(nix_rx_xstats[i].offset);
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < nix->nb_rx_queues; i++)
+ xstats[count].value +=
+ qstat_read(nix, i, nix_q_xstats[0].offset);
+
+ xstats[count].id = count;
+ count++;
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return count;
+
+ if (roc_model_is_cn9k()) {
+ req = mbox_alloc_msg_cgx_stats(mbox);
+ req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix);
+
+ rc = mbox_process_msg(mbox, (void *)&cgx_resp);
+ if (rc)
+ return rc;
+
+ for (i = 0; i < roc_nix_num_rx_xstats(); i++) {
+ xstats[count].value =
+ cgx_resp->rx_stats[nix_rx_xstats_cgx[i].offset];
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < roc_nix_num_tx_xstats(); i++) {
+ xstats[count].value =
+ cgx_resp->tx_stats[nix_tx_xstats_cgx[i].offset];
+ xstats[count].id = count;
+ count++;
+ }
+ } else {
+ req = mbox_alloc_msg_rpm_stats(mbox);
+ req->hdr.pcifunc = roc_nix_get_pf_func(roc_nix);
+
+ rc = mbox_process_msg(mbox, (void *)&rpm_resp);
+ if (rc)
+ return rc;
+
+ for (i = 0; i < roc_nix_num_rx_xstats(); i++) {
+ xstats[count].value =
+ rpm_resp->rx_stats[nix_rx_xstats_rpm[i].offset];
+ xstats[count].id = count;
+ count++;
+ }
+
+ for (i = 0; i < roc_nix_num_tx_xstats(); i++) {
+ xstats[count].value =
+ rpm_resp->tx_stats[nix_tx_xstats_rpm[i].offset];
+ xstats[count].id = count;
+ count++;
+ }
+ }
+
+ return count;
+}
+
+int
+roc_nix_xstats_names_get(struct roc_nix *roc_nix,
+ struct roc_nix_xstat_name *xstats_names,
+ unsigned int limit)
+{
+ unsigned long int i, count = 0;
+ unsigned int xstat_cnt;
+
+ xstat_cnt = roc_nix_num_xstats_get(roc_nix);
+ if (limit < xstat_cnt && xstats_names != NULL)
+ return -ENOMEM;
+
+ if (xstats_names) {
+ for (i = 0; i < CNXK_NIX_NUM_TX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_tx_xstats[i].name);
+ count++;
+ }
+
+ for (i = 0; i < CNXK_NIX_NUM_RX_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_rx_xstats[i].name);
+ count++;
+ }
+ for (i = 0; i < CNXK_NIX_NUM_QUEUE_XSTATS; i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_q_xstats[i].name);
+ count++;
+ }
+
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return count;
+
+ if (roc_model_is_cn9k()) {
+ for (i = 0; i < roc_nix_num_rx_xstats(); i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_rx_xstats_cgx[i].name);
+ count++;
+ }
+
+ for (i = 0; i < roc_nix_num_tx_xstats(); i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_tx_xstats_cgx[i].name);
+ count++;
+ }
+ } else {
+ for (i = 0; i < roc_nix_num_rx_xstats(); i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_rx_xstats_rpm[i].name);
+ count++;
+ }
+
+ for (i = 0; i < roc_nix_num_tx_xstats(); i++) {
+ snprintf(xstats_names[count].name,
+ sizeof(xstats_names[count].name), "%s",
+ nix_tx_xstats_rpm[i].name);
+ count++;
+ }
+ }
+ }
+
+ return xstat_cnt;
+}
diff --git a/drivers/common/cnxk/roc_nix_xstats.h b/drivers/common/cnxk/roc_nix_xstats.h
new file mode 100644
index 0000000..0077c84
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_xstats.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+#ifndef _ROC_NIX_XSTAT_H_
+#define _ROC_NIX_XSTAT_H_
+
+#include <inttypes.h>
+
+struct cnxk_nix_xstats_name {
+ char name[ROC_NIX_XSTATS_NAME_SIZE];
+ uint32_t offset;
+};
+
+static const struct cnxk_nix_xstats_name nix_tx_xstats[] = {
+ {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
+ {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
+ {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
+ {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
+ {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
+};
+
+static const struct cnxk_nix_xstats_name nix_rx_xstats[] = {
+ {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
+ {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
+ {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
+ {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
+ {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
+ {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
+ {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
+ {"rx_err", NIX_STAT_LF_RX_RX_ERR},
+ {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
+ {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
+ {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
+ {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
+};
+
+static const struct cnxk_nix_xstats_name nix_q_xstats[] = {
+ {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
+};
+
+static const struct cnxk_nix_xstats_name nix_rx_xstats_rpm[] = {
+ {"rpm_rx_etherStatsOctets", RPM_MTI_STAT_RX_OCT_CNT},
+ {"rpm_rx_OctetsReceivedOK", RPM_MTI_STAT_RX_OCT_RECV_OK},
+ {"rpm_rx_aAlignmentErrors", RPM_MTI_STAT_RX_ALIG_ERR},
+ {"rpm_rx_aPAUSEMACCtrlFramesReceived", RPM_MTI_STAT_RX_CTRL_FRM_RECV},
+ {"rpm_rx_aFrameTooLongErrors", RPM_MTI_STAT_RX_FRM_LONG},
+ {"rpm_rx_aInRangeLengthErrors", RPM_MTI_STAT_RX_LEN_ERR},
+ {"rpm_rx_aFramesReceivedOK", RPM_MTI_STAT_RX_FRM_RECV},
+ {"rpm_rx_aFrameCheckSequenceErrors", RPM_MTI_STAT_RX_FRM_SEQ_ERR},
+ {"rpm_rx_VLANReceivedOK", RPM_MTI_STAT_RX_VLAN_OK},
+ {"rpm_rx_ifInErrors", RPM_MTI_STAT_RX_IN_ERR},
+ {"rpm_rx_ifInUcastPkts", RPM_MTI_STAT_RX_IN_UCAST_PKT},
+ {"rpm_rx_ifInMulticastPkts", RPM_MTI_STAT_RX_IN_MCAST_PKT},
+ {"rpm_rx_ifInBroadcastPkts", RPM_MTI_STAT_RX_IN_BCAST_PKT},
+ {"rpm_rx_etherStatsDropEvents", RPM_MTI_STAT_RX_DRP_EVENTS},
+ {"rpm_rx_etherStatsPkts", RPM_MTI_STAT_RX_PKT},
+ {"rpm_rx_etherStatsUndersizePkts", RPM_MTI_STAT_RX_UNDER_SIZE},
+ {"rpm_rx_etherStatsPkts64Octets", RPM_MTI_STAT_RX_1_64_PKT_CNT},
+ {"rpm_rx_etherStatsPkts65to127Octets", RPM_MTI_STAT_RX_65_127_PKT_CNT},
+ {"rpm_rx_etherStatsPkts128to255Octets",
+ RPM_MTI_STAT_RX_128_255_PKT_CNT},
+ {"rpm_rx_etherStatsPkts256to511Octets",
+ RPM_MTI_STAT_RX_256_511_PKT_CNT},
+ {"rpm_rx_etherStatsPkts512to1023Octets",
+ RPM_MTI_STAT_RX_512_1023_PKT_CNT},
+ {"rpm_rx_etherStatsPkts1024to1518Octets",
+ RPM_MTI_STAT_RX_1024_1518_PKT_CNT},
+ {"rpm_rx_etherStatsPkts1519toMaxOctets",
+ RPM_MTI_STAT_RX_1519_MAX_PKT_CNT},
+ {"rpm_rx_etherStatsOversizePkts", RPM_MTI_STAT_RX_OVER_SIZE},
+ {"rpm_rx_etherStatsJabbers", RPM_MTI_STAT_RX_JABBER},
+ {"rpm_rx_etherStatsFragments", RPM_MTI_STAT_RX_ETH_FRAGS},
+ {"rpm_rx_CBFC_pause_frames_class_0", RPM_MTI_STAT_RX_CBFC_CLASS_0},
+ {"rpm_rx_CBFC_pause_frames_class_1", RPM_MTI_STAT_RX_CBFC_CLASS_1},
+ {"rpm_rx_CBFC_pause_frames_class_2", RPM_MTI_STAT_RX_CBFC_CLASS_2},
+ {"rpm_rx_CBFC_pause_frames_class_3", RPM_MTI_STAT_RX_CBFC_CLASS_3},
+ {"rpm_rx_CBFC_pause_frames_class_4", RPM_MTI_STAT_RX_CBFC_CLASS_4},
+ {"rpm_rx_CBFC_pause_frames_class_5", RPM_MTI_STAT_RX_CBFC_CLASS_5},
+ {"rpm_rx_CBFC_pause_frames_class_6", RPM_MTI_STAT_RX_CBFC_CLASS_6},
+ {"rpm_rx_CBFC_pause_frames_class_7", RPM_MTI_STAT_RX_CBFC_CLASS_7},
+ {"rpm_rx_CBFC_pause_frames_class_8", RPM_MTI_STAT_RX_CBFC_CLASS_8},
+ {"rpm_rx_CBFC_pause_frames_class_9", RPM_MTI_STAT_RX_CBFC_CLASS_9},
+ {"rpm_rx_CBFC_pause_frames_class_10", RPM_MTI_STAT_RX_CBFC_CLASS_10},
+ {"rpm_rx_CBFC_pause_frames_class_11", RPM_MTI_STAT_RX_CBFC_CLASS_11},
+ {"rpm_rx_CBFC_pause_frames_class_12", RPM_MTI_STAT_RX_CBFC_CLASS_12},
+ {"rpm_rx_CBFC_pause_frames_class_13", RPM_MTI_STAT_RX_CBFC_CLASS_13},
+ {"rpm_rx_CBFC_pause_frames_class_14", RPM_MTI_STAT_RX_CBFC_CLASS_14},
+ {"rpm_rx_CBFC_pause_frames_class_15", RPM_MTI_STAT_RX_CBFC_CLASS_15},
+ {"rpm_rx_aMACControlFramesReceived", RPM_MTI_STAT_RX_MAC_CONTROL},
+};
+
+static const struct cnxk_nix_xstats_name nix_tx_xstats_rpm[] = {
+ {"rpm_tx_etherStatsOctets", RPM_MTI_STAT_TX_OCT_CNT},
+ {"rpm_tx_OctetsTransmittedOK", RPM_MTI_STAT_TX_OCT_TX_OK},
+ {"rpm_tx_aPAUSEMACCtrlFramesTransmitted",
+ RPM_MTI_STAT_TX_PAUSE_MAC_CTRL},
+ {"rpm_tx_aFramesTransmittedOK", RPM_MTI_STAT_TX_FRAMES_OK},
+ {"rpm_tx_VLANTransmittedOK", RPM_MTI_STAT_TX_VLAN_OK},
+ {"rpm_tx_ifOutErrors", RPM_MTI_STAT_TX_OUT_ERR},
+ {"rpm_tx_ifOutUcastPkts", RPM_MTI_STAT_TX_UCAST_PKT_CNT},
+ {"rpm_tx_ifOutMulticastPkts", RPM_MTI_STAT_TX_MCAST_PKT_CNT},
+ {"rpm_tx_ifOutBroadcastPkts", RPM_MTI_STAT_TX_BCAST_PKT_CNT},
+ {"rpm_tx_etherStatsPkts64Octets", RPM_MTI_STAT_TX_1_64_PKT_CNT},
+ {"rpm_tx_etherStatsPkts65to127Octets", RPM_MTI_STAT_TX_65_127_PKT_CNT},
+ {"rpm_tx_etherStatsPkts128to255Octets",
+ RPM_MTI_STAT_TX_128_255_PKT_CNT},
+ {"rpm_tx_etherStatsPkts256to511Octets",
+ RPM_MTI_STAT_TX_256_511_PKT_CNT},
+ {"rpm_tx_etherStatsPkts512to1023Octets",
+ RPM_MTI_STAT_TX_512_1023_PKT_CNT},
+ {"rpm_tx_etherStatsPkts1024to1518Octets",
+ RPM_MTI_STAT_TX_1024_1518_PKT_CNT},
+ {"rpm_tx_etherStatsPkts1519toMaxOctets",
+ RPM_MTI_STAT_TX_1519_MAX_PKT_CNT},
+ {"rpm_tx_CBFC_pause_frames_class_0", RPM_MTI_STAT_TX_CBFC_CLASS_0},
+ {"rpm_tx_CBFC_pause_frames_class_1", RPM_MTI_STAT_TX_CBFC_CLASS_1},
+ {"rpm_tx_CBFC_pause_frames_class_2", RPM_MTI_STAT_TX_CBFC_CLASS_2},
+ {"rpm_tx_CBFC_pause_frames_class_3", RPM_MTI_STAT_TX_CBFC_CLASS_3},
+ {"rpm_tx_CBFC_pause_frames_class_4", RPM_MTI_STAT_TX_CBFC_CLASS_4},
+ {"rpm_tx_CBFC_pause_frames_class_5", RPM_MTI_STAT_TX_CBFC_CLASS_5},
+ {"rpm_tx_CBFC_pause_frames_class_6", RPM_MTI_STAT_TX_CBFC_CLASS_6},
+ {"rpm_tx_CBFC_pause_frames_class_7", RPM_MTI_STAT_TX_CBFC_CLASS_7},
+ {"rpm_tx_CBFC_pause_frames_class_8", RPM_MTI_STAT_TX_CBFC_CLASS_8},
+ {"rpm_tx_CBFC_pause_frames_class_9", RPM_MTI_STAT_TX_CBFC_CLASS_9},
+ {"rpm_tx_CBFC_pause_frames_class_10", RPM_MTI_STAT_TX_CBFC_CLASS_10},
+ {"rpm_tx_CBFC_pause_frames_class_11", RPM_MTI_STAT_TX_CBFC_CLASS_11},
+ {"rpm_tx_CBFC_pause_frames_class_12", RPM_MTI_STAT_TX_CBFC_CLASS_12},
+ {"rpm_tx_CBFC_pause_frames_class_13", RPM_MTI_STAT_TX_CBFC_CLASS_13},
+ {"rpm_tx_CBFC_pause_frames_class_14", RPM_MTI_STAT_TX_CBFC_CLASS_14},
+ {"rpm_tx_CBFC_pause_frames_class_15", RPM_MTI_STAT_TX_CBFC_CLASS_15},
+ {"rpm_tx_aMACControlFramesTransmitted",
+ RPM_MTI_STAT_TX_MAC_CONTROL_FRAMES},
+ {"rpm_tx_etherStatsPkts", RPM_MTI_STAT_TX_PKT_CNT},
+};
+
+static const struct cnxk_nix_xstats_name nix_rx_xstats_cgx[] = {
+ {"cgx_rx_pkts", CGX_RX_PKT_CNT},
+ {"cgx_rx_octs", CGX_RX_OCT_CNT},
+ {"cgx_rx_pause_pkts", CGX_RX_PAUSE_PKT_CNT},
+ {"cgx_rx_pause_octs", CGX_RX_PAUSE_OCT_CNT},
+ {"cgx_rx_dmac_filt_pkts", CGX_RX_DMAC_FILT_PKT_CNT},
+ {"cgx_rx_dmac_filt_octs", CGX_RX_DMAC_FILT_OCT_CNT},
+ {"cgx_rx_fifo_drop_pkts", CGX_RX_FIFO_DROP_PKT_CNT},
+ {"cgx_rx_fifo_drop_octs", CGX_RX_FIFO_DROP_OCT_CNT},
+ {"cgx_rx_errors", CGX_RX_ERR_CNT},
+};
+
+static const struct cnxk_nix_xstats_name nix_tx_xstats_cgx[] = {
+ {"cgx_tx_collision_drop", CGX_TX_COLLISION_DROP},
+ {"cgx_tx_frame_deferred_cnt", CGX_TX_FRAME_DEFER_CNT},
+ {"cgx_tx_multiple_collision", CGX_TX_MULTIPLE_COLLISION},
+ {"cgx_tx_single_collision", CGX_TX_SINGLE_COLLISION},
+ {"cgx_tx_octs", CGX_TX_OCT_CNT},
+ {"cgx_tx_pkts", CGX_TX_PKT_CNT},
+ {"cgx_tx_1_to_63_oct_frames", CGX_TX_1_63_PKT_CNT},
+ {"cgx_tx_64_oct_frames", CGX_TX_64_PKT_CNT},
+ {"cgx_tx_65_to_127_oct_frames", CGX_TX_65_127_PKT_CNT},
+ {"cgx_tx_128_to_255_oct_frames", CGX_TX_128_255_PKT_CNT},
+ {"cgx_tx_256_to_511_oct_frames", CGX_TX_256_511_PKT_CNT},
+ {"cgx_tx_512_to_1023_oct_frames", CGX_TX_512_1023_PKT_CNT},
+ {"cgx_tx_1024_to_1518_oct_frames", CGX_TX_1024_1518_PKT_CNT},
+ {"cgx_tx_1519_to_max_oct_frames", CGX_TX_1519_MAX_PKT_CNT},
+ {"cgx_tx_broadcast_packets", CGX_TX_BCAST_PKTS},
+ {"cgx_tx_multicast_packets", CGX_TX_MCAST_PKTS},
+ {"cgx_tx_underflow_packets", CGX_TX_UFLOW_PKTS},
+ {"cgx_tx_pause_packets", CGX_TX_PAUSE_PKTS},
+};
+
+#define CNXK_NIX_NUM_RX_XSTATS PLT_DIM(nix_rx_xstats)
+#define CNXK_NIX_NUM_TX_XSTATS PLT_DIM(nix_tx_xstats)
+#define CNXK_NIX_NUM_QUEUE_XSTATS PLT_DIM(nix_q_xstats)
+#define CNXK_NIX_NUM_RX_XSTATS_CGX PLT_DIM(nix_rx_xstats_cgx)
+#define CNXK_NIX_NUM_TX_XSTATS_CGX PLT_DIM(nix_tx_xstats_cgx)
+#define CNXK_NIX_NUM_RX_XSTATS_RPM PLT_DIM(nix_rx_xstats_rpm)
+#define CNXK_NIX_NUM_TX_XSTATS_RPM PLT_DIM(nix_tx_xstats_rpm)
+
+#define CNXK_NIX_NUM_XSTATS_REG \
+ (CNXK_NIX_NUM_RX_XSTATS + CNXK_NIX_NUM_TX_XSTATS + \
+ CNXK_NIX_NUM_QUEUE_XSTATS)
+#define CNXK_NIX_NUM_XSTATS_CGX \
+ (CNXK_NIX_NUM_XSTATS_REG + CNXK_NIX_NUM_RX_XSTATS_CGX + \
+ CNXK_NIX_NUM_TX_XSTATS_CGX)
+#define CNXK_NIX_NUM_XSTATS_RPM \
+ (CNXK_NIX_NUM_XSTATS_REG + CNXK_NIX_NUM_RX_XSTATS_RPM + \
+ CNXK_NIX_NUM_TX_XSTATS_RPM)
+
+static inline unsigned long int
+roc_nix_num_rx_xstats(void)
+{
+ if (roc_model_is_cn9k())
+ return CNXK_NIX_NUM_RX_XSTATS_CGX;
+
+ return CNXK_NIX_NUM_RX_XSTATS_RPM;
+}
+
+static inline unsigned long int
+roc_nix_num_tx_xstats(void)
+{
+ if (roc_model_is_cn9k())
+ return CNXK_NIX_NUM_TX_XSTATS_CGX;
+
+ return CNXK_NIX_NUM_TX_XSTATS_RPM;
+}
+#endif /* _ROC_NIX_XSTAT_H_ */
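The CGX/RPM split above makes the exported xstat count model-dependent: CN9K exposes the CGX tables, CN10K the RPM ones. A minimal self-contained sketch of that selection pattern follows; the `demo_*` names are hypothetical stand-ins (the CGX Rx table above has 9 entries, the RPM size here is illustrative since the real value is `PLT_DIM(nix_rx_xstats_rpm)`):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for PLT_DIM(nix_rx_xstats_cgx/rpm); RPM size is illustrative */
#define DEMO_NUM_RX_XSTATS_CGX 9u
#define DEMO_NUM_RX_XSTATS_RPM 44u

/* Stand-in for roc_model_is_cn9k(); the real helper probes the SoC model */
static bool demo_model_is_cn9k(void)
{
	return false; /* pretend we run on CN10K, which has RPM MACs */
}

/* Same shape as roc_nix_num_rx_xstats(): CGX tables on CN9K, else RPM */
static size_t demo_num_rx_xstats(void)
{
	if (demo_model_is_cn9k())
		return DEMO_NUM_RX_XSTATS_CGX;

	return DEMO_NUM_RX_XSTATS_RPM;
}
```

A PMD would call the count helper once to size its xstats name/value arrays before filling them.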
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 8c6df6d..1b65477 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -84,6 +84,9 @@ INTERNAL {
roc_nix_stats_queue_get;
roc_nix_stats_queue_reset;
roc_nix_stats_reset;
+ roc_nix_num_xstats_get;
+ roc_nix_xstats_get;
+ roc_nix_xstats_names_get;
roc_nix_unregister_cq_irqs;
roc_nix_unregister_queue_irqs;
roc_npa_aura_limit_modify;
--
2.8.4
* [dpdk-dev] [PATCH 28/52] common/cnxk: add nix debug dump support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (26 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 27/52] common/cnxk: add support for nix extended stats Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 29/52] common/cnxk: add VLAN filter support Nithin Dabilpuram
` (27 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Jerin Jacob <jerinj@marvell.com>
Add support to dump NIX RQ, SQ and CQ contexts apart
from NIX LF registers.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 10 +
drivers/common/cnxk/roc_nix_debug.c | 806 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_irq.c | 11 +
drivers/common/cnxk/version.map | 8 +
5 files changed, 836 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_debug.c
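The register-dump API added below follows a two-call contract: `roc_nix_lf_get_reg_count()` tells the caller how many registers will be reported, and `roc_nix_lf_reg_dump()` either prints to stderr (when `data` is NULL) or fills a caller-sized buffer. A minimal self-contained sketch of that contract, with hypothetical `demo_*` names, register offsets, and values:

```c
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical register table mirroring the nix_lf_reg[] pattern */
struct demo_reg_info {
	uint32_t offset;
	const char *name;
};

static const struct demo_reg_info demo_regs[] = {
	{0x00, "DEMO_CFG"},
	{0x10, "DEMO_ERR_INT"},
	{0x20, "DEMO_RAS"},
};

/* Fake MMIO space standing in for the NIX LF BAR, one word per register */
static const uint64_t demo_mmio[] = {0x1, 0x0, 0xdead};

/* Counterpart of roc_nix_lf_get_reg_count(): callers size their buffer */
static int demo_get_reg_count(void)
{
	return (int)(sizeof(demo_regs) / sizeof(demo_regs[0]));
}

/* Counterpart of roc_nix_lf_reg_dump(): data == NULL prints, else fills */
static int demo_reg_dump(uint64_t *data)
{
	uint64_t reg;
	size_t i;

	for (i = 0; i < sizeof(demo_regs) / sizeof(demo_regs[0]); i++) {
		reg = demo_mmio[demo_regs[i].offset / 0x10];
		if (data)
			*data++ = reg;
		else if (reg) /* like the driver, skip zero registers */
			fprintf(stderr, "%32s = 0x%" PRIx64 "\n",
				demo_regs[i].name, reg);
	}
	return 0;
}
```

Separating the count from the dump lets ethdev callbacks such as `get_reg` report the required buffer size before the expensive read pass.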
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 199c6d4..4f00b1d 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -17,6 +17,7 @@ sources = files('roc_dev.c',
'roc_mbox.c',
'roc_model.c',
'roc_nix.c',
+ 'roc_nix_debug.c',
'roc_nix_irq.c',
'roc_nix_mac.c',
'roc_nix_mcast.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 37f84fc..441b752 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -222,6 +222,16 @@ int __roc_api roc_nix_lf_free(struct roc_nix *roc_nix);
int __roc_api roc_nix_lf_inl_ipsec_cfg(struct roc_nix *roc_nix,
struct roc_nix_ipsec_cfg *cfg, bool enb);
+/* Debug */
+int __roc_api roc_nix_lf_get_reg_count(struct roc_nix *roc_nix);
+int __roc_api roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data);
+int __roc_api roc_nix_queues_ctx_dump(struct roc_nix *roc_nix);
+void __roc_api roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
+void __roc_api roc_nix_rq_dump(struct roc_nix_rq *rq);
+void __roc_api roc_nix_cq_dump(struct roc_nix_cq *cq);
+void __roc_api roc_nix_sq_dump(struct roc_nix_sq *sq);
+void __roc_api roc_nix_dump(struct roc_nix *roc_nix);
+
/* IRQ */
void __roc_api roc_nix_rx_queue_intr_enable(struct roc_nix *roc_nix,
uint16_t rxq_id);
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
new file mode 100644
index 0000000..9794f2c
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -0,0 +1,806 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
+#define NIX_REG_INFO(reg) \
+ { \
+ reg, #reg \
+ }
+#define NIX_REG_NAME_SZ 48
+
+#define nix_dump_no_nl(fmt, ...) fprintf(stderr, fmt, ##__VA_ARGS__)
+
+struct nix_lf_reg_info {
+ uint32_t offset;
+ const char *name;
+};
+
+static const struct nix_lf_reg_info nix_lf_reg[] = {
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
+ NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
+ NIX_REG_INFO(NIX_LF_CFG),
+ NIX_REG_INFO(NIX_LF_GINT),
+ NIX_REG_INFO(NIX_LF_GINT_W1S),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT),
+ NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_RAS),
+ NIX_REG_INFO(NIX_LF_RAS_W1S),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
+ NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
+ NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
+ NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
+};
+
+int
+roc_nix_lf_get_reg_count(struct roc_nix *roc_nix)
+{
+ struct nix *nix;
+ int reg_count;
+
+ if (roc_nix == NULL)
+ return NIX_ERR_PARAM;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+
+ reg_count = PLT_DIM(nix_lf_reg);
+ /* NIX_LF_TX_STATX */
+ reg_count += nix->lf_tx_stats;
+ /* NIX_LF_RX_STATX */
+ reg_count += nix->lf_rx_stats;
+ /* NIX_LF_QINTX_CNT*/
+ reg_count += nix->qints;
+ /* NIX_LF_QINTX_INT */
+ reg_count += nix->qints;
+ /* NIX_LF_QINTX_ENA_W1S */
+ reg_count += nix->qints;
+ /* NIX_LF_QINTX_ENA_W1C */
+ reg_count += nix->qints;
+ /* NIX_LF_CINTX_CNT */
+ reg_count += nix->cints;
+ /* NIX_LF_CINTX_WAIT */
+ reg_count += nix->cints;
+ /* NIX_LF_CINTX_INT */
+ reg_count += nix->cints;
+ /* NIX_LF_CINTX_INT_W1S */
+ reg_count += nix->cints;
+ /* NIX_LF_CINTX_ENA_W1S */
+ reg_count += nix->cints;
+ /* NIX_LF_CINTX_ENA_W1C */
+ reg_count += nix->cints;
+
+ return reg_count;
+}
+
+int
+roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+{
+ struct nix *nix;
+ uintptr_t nix_lf_base;
+ bool dump_stdout;
+ uint64_t reg;
+ uint32_t i;
+
+ if (roc_nix == NULL)
+ return NIX_ERR_PARAM;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+ nix_lf_base = nix->base;
+
+ dump_stdout = data ? 0 : 1;
+
+ for (i = 0; i < PLT_DIM(nix_lf_reg); i++) {
+ reg = plt_read64(nix_lf_base + nix_lf_reg[i].offset);
+ if (dump_stdout && reg)
+ nix_dump("%32s = 0x%" PRIx64, nix_lf_reg[i].name, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_TX_STATX */
+ for (i = 0; i < nix->lf_tx_stats; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_TX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_TX_STATX", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_RX_STATX */
+ for (i = 0; i < nix->lf_rx_stats; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_RX_STATX(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_RX_STATX", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_CNT*/
+ for (i = 0; i < nix->qints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_CNT", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_INT */
+ for (i = 0; i < nix->qints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_INT", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1S */
+ for (i = 0; i < nix->qints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1S",
+ i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_QINTX_ENA_W1C */
+ for (i = 0; i < nix->qints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1C",
+ i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_CNT */
+ for (i = 0; i < nix->cints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_CNT", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_WAIT */
+ for (i = 0; i < nix->cints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_WAIT", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT */
+ for (i = 0; i < nix->cints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT", i,
+ reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_INT_W1S */
+ for (i = 0; i < nix->cints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT_W1S",
+ i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1S */
+ for (i = 0; i < nix->cints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1S",
+ i, reg);
+ if (data)
+ *data++ = reg;
+ }
+
+ /* NIX_LF_CINTX_ENA_W1C */
+ for (i = 0; i < nix->cints; i++) {
+ reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
+ if (dump_stdout && reg)
+ nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1C",
+ i, reg);
+ if (data)
+ *data++ = reg;
+ }
+ return 0;
+}
+
+static int
+nix_q_ctx_get(struct mbox *mbox, uint8_t ctype, uint16_t qid, __io void **ctx_p)
+{
+ int rc;
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_rsp *rsp;
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = ctype;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ if (ctype == NIX_AQ_CTYPE_RQ)
+ *ctx_p = &rsp->rq;
+ else if (ctype == NIX_AQ_CTYPE_SQ)
+ *ctx_p = &rsp->sq;
+ else
+ *ctx_p = &rsp->cq;
+ } else {
+ struct nix_cn10k_aq_enq_rsp *rsp;
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = ctype;
+ aq->op = NIX_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (ctype == NIX_AQ_CTYPE_RQ)
+ *ctx_p = &rsp->rq;
+ else if (ctype == NIX_AQ_CTYPE_SQ)
+ *ctx_p = &rsp->sq;
+ else
+ *ctx_p = &rsp->cq;
+ }
+ return 0;
+}
+
+static inline void
+nix_cn9k_lf_sq_dump(__io struct nix_sq_ctx_s *ctx, uint32_t *sqb_aura_p)
+{
+ nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
+ ctx->sqe_way_mask, ctx->cq);
+ nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->sdp_mcast, ctx->substream);
+ nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n", ctx->qint_idx,
+ ctx->ena);
+
+ nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
+ ctx->sqb_count, ctx->default_chan);
+ nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
+ ctx->smq_rr_quantum, ctx->sso_ena);
+ nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
+ ctx->xoff, ctx->cq_ena, ctx->smq);
+
+ nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
+ ctx->sqe_stype, ctx->sq_int_ena);
+ nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d", ctx->sq_int,
+ ctx->sqb_aura);
+ nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
+
+ nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
+ ctx->smq_next_sq_vld, ctx->smq_pend);
+ nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
+ ctx->smenq_next_sqb_vld, ctx->head_offset);
+ nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
+ ctx->smenq_offset, ctx->tail_offset);
+ nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
+ ctx->smq_lso_segnum, ctx->smq_next_sq);
+ nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d", ctx->mnq_dis,
+ ctx->lmt_dis);
+ nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
+ ctx->cq_limit, ctx->max_sqe_size);
+
+ nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
+ nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
+ nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
+ nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
+ nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
+
+ nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
+ ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
+ nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
+ ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
+ nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
+ ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
+ nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
+
+ nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->scm_lso_rem);
+ nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_octs);
+ nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_pkts);
+
+ *sqb_aura_p = ctx->sqb_aura;
+}
+
+static inline void
+nix_lf_sq_dump(__io struct nix_cn10k_sq_ctx_s *ctx, uint32_t *sqb_aura_p)
+{
+ nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
+ ctx->sqe_way_mask, ctx->cq);
+ nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->sdp_mcast, ctx->substream);
+ nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n", ctx->qint_idx,
+ ctx->ena);
+
+ nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
+ ctx->sqb_count, ctx->default_chan);
+ nix_dump("W1: smq_rr_weight \t\t%d\nW1: sso_ena \t\t\t%d",
+ ctx->smq_rr_weight, ctx->sso_ena);
+ nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
+ ctx->xoff, ctx->cq_ena, ctx->smq);
+
+ nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
+ ctx->sqe_stype, ctx->sq_int_ena);
+ nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d", ctx->sq_int,
+ ctx->sqb_aura);
+ nix_dump("W2: smq_rr_count[ub:lb] \t\t%x:%x\n", ctx->smq_rr_count_ub,
+ ctx->smq_rr_count_lb);
+
+ nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
+ ctx->smq_next_sq_vld, ctx->smq_pend);
+ nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
+ ctx->smenq_next_sqb_vld, ctx->head_offset);
+ nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
+ ctx->smenq_offset, ctx->tail_offset);
+ nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
+ ctx->smq_lso_segnum, ctx->smq_next_sq);
+ nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d", ctx->mnq_dis,
+ ctx->lmt_dis);
+ nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
+ ctx->cq_limit, ctx->max_sqe_size);
+
+ nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
+ nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
+ nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
+ nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
+ nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
+
+ nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
+ ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
+ nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
+ ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
+ nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
+ ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
+ nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
+
+ nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->scm_lso_rem);
+ nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_octs);
+ nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
+ (uint64_t)ctx->drop_pkts);
+
+ *sqb_aura_p = ctx->sqb_aura;
+}
+
+static inline void
+nix_cn9k_lf_rq_dump(__io struct nix_rq_ctx_s *ctx)
+{
+ nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
+ ctx->wqe_aura, ctx->substream);
+ nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d", ctx->cq,
+ ctx->ena_wqwd);
+ nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
+ ctx->ipsech_ena, ctx->sso_ena);
+ nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
+
+ nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
+ ctx->lpb_drop_ena, ctx->spb_drop_ena);
+ nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
+ ctx->xqe_drop_ena, ctx->wqe_caching);
+ nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
+ ctx->pb_caching, ctx->sso_tt);
+ nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d", ctx->sso_grp,
+ ctx->lpb_aura);
+ nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
+
+ nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
+ ctx->xqe_hdr_split, ctx->xqe_imm_copy);
+ nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
+ ctx->xqe_imm_size, ctx->later_skip);
+ nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
+ ctx->first_skip, ctx->lpb_sizem1);
+ nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d", ctx->spb_ena,
+ ctx->wqe_skip);
+ nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
+
+ nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
+ ctx->spb_pool_pass, ctx->spb_pool_drop);
+ nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
+ ctx->spb_aura_pass, ctx->spb_aura_drop);
+ nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
+ ctx->wqe_pool_pass, ctx->wqe_pool_drop);
+ nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
+ ctx->xqe_pass, ctx->xqe_drop);
+
+ nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
+ ctx->qint_idx, ctx->rq_int_ena);
+ nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d", ctx->rq_int,
+ ctx->lpb_pool_pass);
+ nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
+ ctx->lpb_pool_drop, ctx->lpb_aura_pass);
+ nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
+
+ nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
+ ctx->flow_tagw, ctx->bad_utag);
+ nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n", ctx->good_utag,
+ ctx->ltag);
+
+ nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
+ nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
+ nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
+}
+
+static inline void
+nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx)
+{
+ nix_dump("W0: wqe_aura \t\t\t%d\nW0: len_ol3_dis \t\t\t%d",
+ ctx->wqe_aura, ctx->len_ol3_dis);
+ nix_dump("W0: len_ol4_dis \t\t\t%d\nW0: len_il3_dis \t\t\t%d",
+ ctx->len_ol4_dis, ctx->len_il3_dis);
+ nix_dump("W0: len_il4_dis \t\t\t%d\nW0: csum_ol4_dis \t\t\t%d",
+ ctx->len_il4_dis, ctx->csum_ol4_dis);
+ nix_dump("W0: csum_ol3_dis \t\t\t%d\nW0: lenerr_dis \t\t\t%d",
+ ctx->csum_ol3_dis, ctx->lenerr_dis);
+ nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d", ctx->cq,
+ ctx->ena_wqwd);
+ nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
+ ctx->ipsech_ena, ctx->sso_ena);
+ nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
+
+ nix_dump("W1: chi_ena \t\t%d\nW1: ipsecd_drop_en \t\t%d", ctx->chi_ena,
+ ctx->ipsecd_drop_en);
+ nix_dump("W1: pb_stashing \t\t\t%d", ctx->pb_stashing);
+ nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
+ ctx->lpb_drop_ena, ctx->spb_drop_ena);
+ nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
+ ctx->xqe_drop_ena, ctx->wqe_caching);
+ nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
+ ctx->pb_caching, ctx->sso_tt);
+ nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d", ctx->sso_grp,
+ ctx->lpb_aura);
+ nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
+
+ nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
+ ctx->xqe_hdr_split, ctx->xqe_imm_copy);
+ nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
+ ctx->xqe_imm_size, ctx->later_skip);
+ nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
+ ctx->first_skip, ctx->lpb_sizem1);
+ nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d", ctx->spb_ena,
+ ctx->wqe_skip);
+ nix_dump("W2: spb_sizem1 \t\t\t%d\nW2: policer_ena \t\t\t%d",
+ ctx->spb_sizem1, ctx->policer_ena);
+ nix_dump("W2: band_prof_id \t\t\t%d", ctx->band_prof_id);
+
+ nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
+ ctx->spb_pool_pass, ctx->spb_pool_drop);
+ nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
+ ctx->spb_aura_pass, ctx->spb_aura_drop);
+ nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
+ ctx->wqe_pool_pass, ctx->wqe_pool_drop);
+ nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
+ ctx->xqe_pass, ctx->xqe_drop);
+
+ nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
+ ctx->qint_idx, ctx->rq_int_ena);
+ nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d", ctx->rq_int,
+ ctx->lpb_pool_pass);
+ nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
+ ctx->lpb_pool_drop, ctx->lpb_aura_pass);
+ nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
+
+ nix_dump("W5: vwqe_skip \t\t\t%d\nW5: max_vsize_exp \t\t\t%d",
+ ctx->vwqe_skip, ctx->max_vsize_exp);
+ nix_dump("W5: vtime_wait \t\t\t%d\nW5: vwqe_ena \t\t\t%d",
+ ctx->vtime_wait, ctx->vwqe_ena);
+ nix_dump("W5: ipsec_vwqe \t\t\t%d", ctx->ipsec_vwqe);
+ nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
+ ctx->flow_tagw, ctx->bad_utag);
+ nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n", ctx->good_utag,
+ ctx->ltag);
+
+ nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
+ nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
+ nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
+ nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
+ nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
+}
+
+static inline void
+nix_lf_cq_dump(__io struct nix_cq_ctx_s *ctx)
+{
+ nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
+
+ nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
+ nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d", ctx->avg_con,
+ ctx->cint_idx);
+ nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d", ctx->cq_err,
+ ctx->qint_idx);
+ nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n", ctx->bpid,
+ ctx->bp_ena);
+
+ nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
+ ctx->update_time, ctx->avg_level);
+ nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n", ctx->head,
+ ctx->tail);
+
+ nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
+ ctx->cq_err_int_ena, ctx->cq_err_int);
+ nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d", ctx->qsize,
+ ctx->caching);
+ nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d", ctx->substream,
+ ctx->ena);
+ nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d", ctx->drop_ena,
+ ctx->drop);
+ nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
+}
+
+int
+roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ int rc = -1, q, rq = nix->nb_rx_queues;
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct npa_aq_enq_rsp *npa_rsp;
+ struct npa_aq_enq_req *npa_aq;
+ volatile void *ctx;
+ int sq = nix->nb_tx_queues;
+ struct npa_lf *npa_lf;
+ uint32_t sqb_aura;
+
+ npa_lf = idev_npa_obj_get();
+ if (npa_lf == NULL)
+ return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+ for (q = 0; q < rq; q++) {
+ rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_CQ, q, &ctx);
+ if (rc) {
+ plt_err("Failed to get cq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d cq=%d ===============",
+ roc_nix->port_id, q);
+ nix_lf_cq_dump(ctx);
+ }
+
+ for (q = 0; q < rq; q++) {
+ rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_RQ, q, &ctx);
+ if (rc) {
+ plt_err("Failed to get rq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d rq=%d ===============",
+ roc_nix->port_id, q);
+ if (roc_model_is_cn9k())
+ nix_cn9k_lf_rq_dump(ctx);
+ else
+ nix_lf_rq_dump(ctx);
+ }
+
+ for (q = 0; q < sq; q++) {
+ rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_SQ, q, &ctx);
+ if (rc) {
+ plt_err("Failed to get sq context");
+ goto fail;
+ }
+ nix_dump("============== port=%d sq=%d ===============",
+ roc_nix->port_id, q);
+ if (roc_model_is_cn9k())
+ nix_cn9k_lf_sq_dump(ctx, &sqb_aura);
+ else
+ nix_lf_sq_dump(ctx, &sqb_aura);
+
+ /* Dump SQB Aura minimal info */
+ npa_aq = mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
+ if (npa_aq == NULL)
+ return -ENOSPC;
+ npa_aq->aura_id = sqb_aura;
+ npa_aq->ctype = NPA_AQ_CTYPE_AURA;
+ npa_aq->op = NPA_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
+ if (rc) {
+ plt_err("Failed to get sq's sqb_aura context");
+ continue;
+ }
+
+ nix_dump("\nSQB Aura W0: Pool addr\t\t0x%" PRIx64 "",
+ npa_rsp->aura.pool_addr);
+ nix_dump("SQB Aura W1: ena\t\t\t%d", npa_rsp->aura.ena);
+ nix_dump("SQB Aura W2: count\t\t%" PRIx64 "",
+ (uint64_t)npa_rsp->aura.count);
+ nix_dump("SQB Aura W3: limit\t\t%" PRIx64 "",
+ (uint64_t)npa_rsp->aura.limit);
+ nix_dump("SQB Aura W3: fc_ena\t\t%d", npa_rsp->aura.fc_ena);
+ nix_dump("SQB Aura W4: fc_addr\t\t0x%" PRIx64 "\n",
+ npa_rsp->aura.fc_addr);
+ }
+
+fail:
+ return rc;
+}
+
+/* Dumps struct nix_cqe_hdr_s and union nix_rx_parse_u */
+void
+roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
+{
+ const union nix_rx_parse_u *rx =
+ (const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+
+ nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
+ cq->tag, cq->q, cq->node, cq->cqe_type);
+
+ nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", rx->chan,
+ rx->desc_sizem1);
+ nix_dump("W0: imm_copy \t%d\t\texpress \t%d", rx->imm_copy,
+ rx->express);
+ nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d", rx->wqwd,
+ rx->errlev, rx->errcode);
+ nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
+ rx->latype, rx->lbtype, rx->lctype);
+ nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
+ rx->ldtype, rx->letype, rx->lftype);
+ nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d", rx->lgtype, rx->lhtype);
+
+ nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
+ nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
+ rx->l2m, rx->l2b, rx->l3m, rx->l3b);
+ nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d", rx->vtag0_valid,
+ rx->vtag0_gone);
+ nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d", rx->vtag1_valid,
+ rx->vtag1_gone);
+ nix_dump("W1: pkind \t%d", rx->pkind);
+ nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d", rx->vtag0_tci,
+ rx->vtag1_tci);
+
+ nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
+ rx->laflags, rx->lbflags, rx->lcflags);
+ nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
+ rx->ldflags, rx->leflags, rx->lfflags);
+ nix_dump("W2: lgflags \t%d\t\tlhflags \t%d", rx->lgflags, rx->lhflags);
+
+ nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
+ rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
+ nix_dump("W3: match_id \t%d", rx->match_id);
+
+ nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d", rx->laptr,
+ rx->lbptr, rx->lcptr);
+ nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d", rx->ldptr,
+ rx->leptr, rx->lfptr);
+ nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
+
+ nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
+ rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+}
+
+void
+roc_nix_rq_dump(struct roc_nix_rq *rq)
+{
+ nix_dump("nix_rq@%p", rq);
+ nix_dump(" qid = %d", rq->qid);
+ nix_dump(" aura_handle = 0x%" PRIx64 "", rq->aura_handle);
+ nix_dump(" ipsech_ena = %d", rq->ipsech_ena);
+ nix_dump(" first_skip = %d", rq->first_skip);
+ nix_dump(" later_skip = %d", rq->later_skip);
+ nix_dump(" lpb_size = %d", rq->lpb_size);
+ nix_dump(" sso_ena = %d", rq->sso_ena);
+ nix_dump(" tag_mask = %d", rq->tag_mask);
+ nix_dump(" flow_tag_width = %d", rq->flow_tag_width);
+ nix_dump(" tt = %d", rq->tt);
+ nix_dump(" hwgrp = %d", rq->hwgrp);
+ nix_dump(" vwqe_ena = %d", rq->vwqe_ena);
+ nix_dump(" vwqe_first_skip = %d", rq->vwqe_first_skip);
+ nix_dump(" vwqe_max_sz_exp = %d", rq->vwqe_max_sz_exp);
+ nix_dump(" vwqe_wait_tmo = 0x%" PRIx64 "", rq->vwqe_wait_tmo);
+ nix_dump(" vwqe_aura_handle = 0x%" PRIx64 "", rq->vwqe_aura_handle);
+ nix_dump(" roc_nix = %p", rq->roc_nix);
+}
+
+void
+roc_nix_cq_dump(struct roc_nix_cq *cq)
+{
+ nix_dump("nix_cq@%p", cq);
+ nix_dump(" qid = %d", cq->qid);
+ nix_dump(" nb_desc = %d", cq->nb_desc);
+ nix_dump(" roc_nix = %p", cq->roc_nix);
+ nix_dump(" door = 0x%" PRIx64 "", cq->door);
+ nix_dump(" status = %p", cq->status);
+ nix_dump(" wdata = 0x%" PRIx64 "", cq->wdata);
+ nix_dump(" desc_base = %p", cq->desc_base);
+ nix_dump(" qmask = 0x%" PRIx32 "", cq->qmask);
+}
+
+void
+roc_nix_sq_dump(struct roc_nix_sq *sq)
+{
+ nix_dump("nix_sq@%p", sq);
+ nix_dump(" qid = %d", sq->qid);
+ nix_dump(" max_sqe_sz = %d", sq->max_sqe_sz);
+ nix_dump(" nb_desc = %d", sq->nb_desc);
+ nix_dump(" sqes_per_sqb_log2 = %d", sq->sqes_per_sqb_log2);
+ nix_dump(" roc_nix = %p", sq->roc_nix);
+ nix_dump(" aura_handle = 0x%" PRIx64 "", sq->aura_handle);
+ nix_dump(" nb_sqb_bufs_adj = %d", sq->nb_sqb_bufs_adj);
+ nix_dump(" nb_sqb_bufs = %d", sq->nb_sqb_bufs);
+ nix_dump(" io_addr = 0x%" PRIx64 "", sq->io_addr);
+ nix_dump(" lmt_addr = %p", sq->lmt_addr);
+ nix_dump(" sqe_mem = %p", sq->sqe_mem);
+ nix_dump(" fc = %p", sq->fc);
+}
+
+void
+roc_nix_dump(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ nix_dump("nix@%p", nix);
+ nix_dump(" pf = %d", dev_get_pf(dev->pf_func));
+ nix_dump(" vf = %d", dev_get_vf(dev->pf_func));
+ nix_dump(" bar2 = 0x%" PRIx64, dev->bar2);
+ nix_dump(" bar4 = 0x%" PRIx64, dev->bar4);
+ nix_dump(" port_id = %d", roc_nix->port_id);
+ nix_dump(" rss_tag_as_xor = %d", roc_nix->rss_tag_as_xor);
+ nix_dump(" max_sqb_count = %d", roc_nix->max_sqb_count);
+
+ nix_dump(" \tpci_dev = %p", nix->pci_dev);
+ nix_dump(" \tbase = 0x%" PRIxPTR "", nix->base);
+ nix_dump(" \tlmt_base = 0x%" PRIxPTR "", nix->lmt_base);
+ nix_dump(" \treta_size = %d", nix->reta_sz);
+ nix_dump(" \ttx_chan_base = %d", nix->tx_chan_base);
+ nix_dump(" \trx_chan_base = %d", nix->rx_chan_base);
+ nix_dump(" \tnb_rx_queues = %d", nix->nb_rx_queues);
+ nix_dump(" \tnb_tx_queues = %d", nix->nb_tx_queues);
+ nix_dump(" \tlso_tsov6_idx = %d", nix->lso_tsov6_idx);
+ nix_dump(" \tlso_tsov4_idx = %d", nix->lso_tsov4_idx);
+ nix_dump(" \tlso_base_idx = %d", nix->lso_base_idx);
+ nix_dump(" \tlf_rx_stats = %d", nix->lf_rx_stats);
+ nix_dump(" \tlf_tx_stats = %d", nix->lf_tx_stats);
+ nix_dump(" \trx_chan_cnt = %d", nix->rx_chan_cnt);
+ nix_dump(" \ttx_chan_cnt = %d", nix->tx_chan_cnt);
+ nix_dump(" \tcgx_links = %d", nix->cgx_links);
+ nix_dump(" \tlbk_links = %d", nix->lbk_links);
+ nix_dump(" \tsdp_links = %d", nix->sdp_links);
+ nix_dump(" \ttx_link = %d", nix->tx_link);
+ nix_dump(" \tsqb_size = %d", nix->sqb_size);
+ nix_dump(" \tmsixoff = %d", nix->msixoff);
+ nix_dump(" \tcints = %d", nix->cints);
+ nix_dump(" \tqints = %d", nix->qints);
+ nix_dump(" \tsdp_link = %d", nix->sdp_link);
+ nix_dump(" \tptp_en = %d", nix->ptp_en);
+ nix_dump(" \trss_alg_idx = %d", nix->rss_alg_idx);
+ nix_dump(" \ttx_pause = %d", nix->tx_pause);
+}
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index d7390d4..ee62267 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -74,6 +74,9 @@ nix_lf_err_irq(void *param)
/* Clear interrupt */
plt_write64(intr, nix->base + NIX_LF_ERR_INT);
+ /* Dump registers to stdout */
+ roc_nix_lf_reg_dump(nix_priv_to_roc_nix(nix), NULL);
+ roc_nix_queues_ctx_dump(nix_priv_to_roc_nix(nix));
}
static int
@@ -119,6 +122,10 @@ nix_lf_ras_irq(void *param)
plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
/* Clear interrupt */
plt_write64(intr, nix->base + NIX_LF_RAS);
+
+ /* Dump registers to stdout */
+ roc_nix_lf_reg_dump(nix_priv_to_roc_nix(nix), NULL);
+ roc_nix_queues_ctx_dump(nix_priv_to_roc_nix(nix));
}
static int
@@ -279,6 +286,10 @@ nix_lf_q_irq(void *param)
/* Clear interrupt */
plt_write64(intr, nix->base + NIX_LF_QINTX_INT(qintx));
+
+ /* Dump registers to stdout */
+ roc_nix_lf_reg_dump(nix_priv_to_roc_nix(nix), NULL);
+ roc_nix_queues_ctx_dump(nix_priv_to_roc_nix(nix));
}
int
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 1b65477..454e91c 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -14,10 +14,13 @@ INTERNAL {
roc_idev_npa_nix_get;
roc_idev_num_lmtlines_get;
roc_model;
+ roc_nix_cq_dump;
roc_nix_cq_fini;
roc_nix_cq_init;
+ roc_nix_cqe_dump;
roc_nix_dev_fini;
roc_nix_dev_init;
+ roc_nix_dump;
roc_nix_err_intr_ena_dis;
roc_nix_get_base_chan;
roc_nix_get_pf;
@@ -31,6 +34,8 @@ INTERNAL {
roc_nix_lf_alloc;
roc_nix_lf_inl_ipsec_cfg;
roc_nix_lf_free;
+ roc_nix_lf_get_reg_count;
+ roc_nix_lf_reg_dump;
roc_nix_mac_addr_add;
roc_nix_mac_addr_del;
roc_nix_mac_addr_set;
@@ -62,9 +67,11 @@ INTERNAL {
roc_nix_ptp_rx_ena_dis;
roc_nix_ptp_sync_time_adjust;
roc_nix_ptp_tx_ena_dis;
+ roc_nix_queues_ctx_dump;
roc_nix_ras_intr_ena_dis;
roc_nix_register_cq_irqs;
roc_nix_register_queue_irqs;
+ roc_nix_rq_dump;
roc_nix_rq_ena_dis;
roc_nix_rq_fini;
roc_nix_rq_init;
@@ -78,6 +85,7 @@ INTERNAL {
roc_nix_rss_reta_set;
roc_nix_rx_queue_intr_disable;
roc_nix_rx_queue_intr_enable;
+ roc_nix_sq_dump;
roc_nix_sq_fini;
roc_nix_sq_init;
roc_nix_stats_get;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 29/52] common/cnxk: add VLAN filter support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (27 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 28/52] common/cnxk: add nix debug dump support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 30/52] common/cnxk: add nix flow control support Nithin Dabilpuram
` (26 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Sunil Kumar Kori <skori@marvell.com>
Add helper APIs to support VLAN filtering and stripping
on Rx and VLAN insertion on Tx.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 45 ++++++++
drivers/common/cnxk/roc_nix_vlan.c | 205 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 8 ++
4 files changed, 259 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_vlan.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 4f00b1d..6ec6f4f 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -26,6 +26,7 @@ sources = files('roc_dev.c',
'roc_nix_queue.c',
'roc_nix_rss.c',
'roc_nix_stats.c',
+ 'roc_nix_vlan.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 441b752..8cd2f26 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -17,6 +17,26 @@ enum roc_nix_sq_max_sqe_sz {
roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8,
};
+enum roc_nix_vlan_type {
+ ROC_NIX_VLAN_TYPE_INNER = 0x01,
+ ROC_NIX_VLAN_TYPE_OUTER = 0x02,
+};
+
+struct roc_nix_vlan_config {
+ uint32_t type;
+ union {
+ struct {
+ uint32_t vtag_inner;
+ uint32_t vtag_outer;
+ } vlan;
+
+ struct {
+ int idx_inner;
+ int idx_outer;
+ } mcam;
+ };
+};
+
/* NIX LF RX offload configuration flags.
* These are input flags to roc_nix_lf_alloc:rx_cfg
*/
@@ -336,6 +356,31 @@ int __roc_api roc_nix_ptp_info_cb_register(struct roc_nix *roc_nix,
ptp_info_update_t ptp_update);
void __roc_api roc_nix_ptp_info_cb_unregister(struct roc_nix *roc_nix);
+/* VLAN */
+int __roc_api
+roc_nix_vlan_mcam_entry_read(struct roc_nix *roc_nix, uint32_t index,
+ struct npc_mcam_read_entry_rsp **rsp);
+int __roc_api roc_nix_vlan_mcam_entry_write(struct roc_nix *roc_nix,
+ uint32_t index,
+ struct mcam_entry *entry,
+ uint8_t intf, uint8_t enable);
+int __roc_api roc_nix_vlan_mcam_entry_alloc_and_write(struct roc_nix *roc_nix,
+ struct mcam_entry *entry,
+ uint8_t intf,
+ uint8_t priority,
+ uint8_t ref_entry);
+int __roc_api roc_nix_vlan_mcam_entry_free(struct roc_nix *roc_nix,
+ uint32_t index);
+int __roc_api roc_nix_vlan_mcam_entry_ena_dis(struct roc_nix *roc_nix,
+ uint32_t index, const int enable);
+int __roc_api roc_nix_vlan_strip_vtag_ena_dis(struct roc_nix *roc_nix,
+ bool enable);
+int __roc_api roc_nix_vlan_insert_ena_dis(struct roc_nix *roc_nix,
+ struct roc_nix_vlan_config *vlan_cfg,
+ uint64_t *mcam_index, bool enable);
+int __roc_api roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type,
+ uint16_t tpid);
+
/* MCAST*/
int __roc_api roc_nix_mcast_mcam_entry_alloc(struct roc_nix *roc_nix,
uint16_t nb_entries,
diff --git a/drivers/common/cnxk/roc_nix_vlan.c b/drivers/common/cnxk/roc_nix_vlan.c
new file mode 100644
index 0000000..293af82
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_vlan.c
@@ -0,0 +1,205 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline struct mbox *
+get_mbox(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+int
+roc_nix_vlan_mcam_entry_read(struct roc_nix *roc_nix, uint32_t index,
+ struct npc_mcam_read_entry_rsp **rsp)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_mcam_read_entry_req *req;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_read_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = index;
+
+ return mbox_process_msg(mbox, (void **)rsp);
+}
+
+int
+roc_nix_vlan_mcam_entry_write(struct roc_nix *roc_nix, uint32_t index,
+ struct mcam_entry *entry, uint8_t intf,
+ uint8_t enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_mcam_write_entry_req *req;
+ struct msghdr *rsp;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = index;
+ req->intf = intf;
+ req->enable_entry = enable;
+ mbox_memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ return mbox_process_msg(mbox, (void *)&rsp);
+}
+
+int
+roc_nix_vlan_mcam_entry_alloc_and_write(struct roc_nix *roc_nix,
+ struct mcam_entry *entry, uint8_t intf,
+ uint8_t priority, uint8_t ref_entry)
+{
+ struct npc_mcam_alloc_and_write_entry_req *req;
+ struct npc_mcam_alloc_and_write_entry_rsp *rsp;
+ struct mbox *mbox = get_mbox(roc_nix);
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->priority = priority;
+ req->ref_entry = ref_entry;
+ req->intf = intf;
+ req->enable_entry = true;
+ mbox_memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ return rsp->entry;
+}
+
+int
+roc_nix_vlan_mcam_entry_free(struct roc_nix *roc_nix, uint32_t index)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_mcam_free_entry_req *req;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = index;
+
+ return mbox_process_msg(mbox, NULL);
+}
+
+int
+roc_nix_vlan_mcam_entry_ena_dis(struct roc_nix *roc_nix, uint32_t index,
+ const int enable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ struct mbox *mbox = get_mbox(roc_nix);
+ int rc = -ENOSPC;
+
+ if (enable) {
+ req = mbox_alloc_msg_npc_mcam_ena_entry(mbox);
+ if (req == NULL)
+ return rc;
+ } else {
+ req = mbox_alloc_msg_npc_mcam_dis_entry(mbox);
+ if (req == NULL)
+ return rc;
+ }
+
+ req->entry = index;
+ return mbox_process_msg(mbox, NULL);
+}
+
+int
+roc_nix_vlan_strip_vtag_ena_dis(struct roc_nix *roc_nix, bool enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_vtag_config *vtag_cfg;
+ int rc = -ENOSPC;
+
+ vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox);
+ if (vtag_cfg == NULL)
+ return rc;
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+ vtag_cfg->cfg_type = 1; /* Rx VLAN configuration */
+ vtag_cfg->rx.capture_vtag = 1; /* Always capture */
+ vtag_cfg->rx.vtag_type = 0; /* Use index 0 */
+
+ if (enable)
+ vtag_cfg->rx.strip_vtag = 1;
+ else
+ vtag_cfg->rx.strip_vtag = 0;
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_vlan_insert_ena_dis(struct roc_nix *roc_nix,
+ struct roc_nix_vlan_config *vlan_cfg,
+ uint64_t *mcam_index, bool enable)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_vtag_config *vtag_cfg;
+ struct nix_vtag_config_rsp *rsp;
+ int rc = -ENOSPC;
+
+ vtag_cfg = mbox_alloc_msg_nix_vtag_cfg(mbox);
+ if (vtag_cfg == NULL)
+ return rc;
+ vtag_cfg->cfg_type = 0; /* Tx VLAN configuration */
+ vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
+
+ if (enable) {
+ if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_INNER) {
+ vtag_cfg->tx.vtag0 = vlan_cfg->vlan.vtag_inner;
+ vtag_cfg->tx.cfg_vtag0 = true;
+ }
+ if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_OUTER) {
+ vtag_cfg->tx.vtag1 = vlan_cfg->vlan.vtag_outer;
+ vtag_cfg->tx.cfg_vtag1 = true;
+ }
+ } else {
+ if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_INNER) {
+ vtag_cfg->tx.vtag0_idx = vlan_cfg->mcam.idx_inner;
+ vtag_cfg->tx.free_vtag0 = true;
+ }
+ if (vlan_cfg->type & ROC_NIX_VLAN_TYPE_OUTER) {
+ vtag_cfg->tx.vtag1_idx = vlan_cfg->mcam.idx_outer;
+ vtag_cfg->tx.free_vtag1 = true;
+ }
+ }
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (enable)
+ *mcam_index =
+ (((uint64_t)rsp->vtag1_idx << 32) | rsp->vtag0_idx);
+
+ return 0;
+}
+
+int
+roc_nix_vlan_tpid_set(struct roc_nix *roc_nix, uint32_t type, uint16_t tpid)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_set_vlan_tpid *tpid_cfg;
+ int rc = -ENOSPC;
+
+ tpid_cfg = mbox_alloc_msg_nix_set_vlan_tpid(mbox);
+ if (tpid_cfg == NULL)
+ return rc;
+ tpid_cfg->tpid = tpid;
+
+ if (type & ROC_NIX_VLAN_TYPE_OUTER)
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
+ else
+ tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
+
+ return mbox_process(mbox);
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 454e91c..2bac424 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -97,6 +97,14 @@ INTERNAL {
roc_nix_xstats_names_get;
roc_nix_unregister_cq_irqs;
roc_nix_unregister_queue_irqs;
+ roc_nix_vlan_insert_ena_dis;
+ roc_nix_vlan_mcam_entry_alloc_and_write;
+ roc_nix_vlan_mcam_entry_ena_dis;
+ roc_nix_vlan_mcam_entry_free;
+ roc_nix_vlan_mcam_entry_read;
+ roc_nix_vlan_mcam_entry_write;
+ roc_nix_vlan_strip_vtag_ena_dis;
+ roc_nix_vlan_tpid_set;
roc_npa_aura_limit_modify;
roc_npa_aura_op_range_set;
roc_npa_ctx_dump;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 30/52] common/cnxk: add nix flow control support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (28 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 29/52] common/cnxk: add VLAN filter support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 31/52] common/cnxk: add nix LSO support and misc utils Nithin Dabilpuram
` (25 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Sunil Kumar Kori <skori@marvell.com>
Add support to enable/disable Rx/Tx flow control and to configure
pause frames on NIX.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 34 ++++++
drivers/common/cnxk/roc_nix_fc.c | 251 +++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 4 +
4 files changed, 290 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_fc.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 6ec6f4f..e33d3f5 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -18,6 +18,7 @@ sources = files('roc_dev.c',
'roc_model.c',
'roc_nix.c',
'roc_nix_debug.c',
+ 'roc_nix_fc.c',
'roc_nix_irq.c',
'roc_nix_mac.c',
'roc_nix_mcast.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 8cd2f26..e60a35d 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -17,6 +17,13 @@ enum roc_nix_sq_max_sqe_sz {
roc_nix_maxsqesz_w8 = NIX_MAXSQESZ_W8,
};
+enum roc_nix_fc_mode {
+ ROC_NIX_FC_NONE = 0,
+ ROC_NIX_FC_RX,
+ ROC_NIX_FC_TX,
+ ROC_NIX_FC_FULL
+};
+
enum roc_nix_vlan_type {
ROC_NIX_VLAN_TYPE_INNER = 0x01,
ROC_NIX_VLAN_TYPE_OUTER = 0x02,
@@ -37,6 +44,21 @@ struct roc_nix_vlan_config {
};
};
+struct roc_nix_fc_cfg {
+ bool cq_cfg_valid;
+ union {
+ struct {
+ bool enable;
+ } rxchan_cfg;
+
+ struct {
+ uint32_t rq;
+ uint16_t cq_drop;
+ bool enable;
+ } cq_cfg;
+ };
+};
+
/* NIX LF RX offload configuration flags.
* These are input flags to roc_nix_lf_alloc:rx_cfg
*/
@@ -288,6 +310,18 @@ int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix,
link_status_t link_update);
void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix);
+/* Flow control */
+int __roc_api roc_nix_fc_config_set(struct roc_nix *roc_nix,
+ struct roc_nix_fc_cfg *fc_cfg);
+
+int __roc_api roc_nix_fc_config_get(struct roc_nix *roc_nix,
+ struct roc_nix_fc_cfg *fc_cfg);
+
+int __roc_api roc_nix_fc_mode_set(struct roc_nix *roc_nix,
+ enum roc_nix_fc_mode mode);
+
+enum roc_nix_fc_mode __roc_api roc_nix_fc_mode_get(struct roc_nix *roc_nix);
+
/* NPC */
int __roc_api roc_nix_npc_promisc_ena_dis(struct roc_nix *roc_nix, int enable);
diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
new file mode 100644
index 0000000..a71a758
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -0,0 +1,251 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline struct mbox *
+get_mbox(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+static int
+nix_fc_rxchan_bpid_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ if (nix->chan_cnt != 0)
+ fc_cfg->rxchan_cfg.enable = true;
+ else
+ fc_cfg->rxchan_cfg.enable = false;
+
+ fc_cfg->cq_cfg_valid = false;
+
+ return 0;
+}
+
+static int
+nix_fc_rxchan_bpid_set(struct roc_nix *roc_nix, bool enable)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_bp_cfg_req *req;
+ struct nix_bp_cfg_rsp *rsp;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_sdp(roc_nix))
+ return 0;
+
+ if (enable) {
+ req = mbox_alloc_msg_nix_bp_enable(mbox);
+ if (req == NULL)
+ return rc;
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+ req->bpid_per_chan = 0;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc || (req->chan_cnt != rsp->chan_cnt))
+ goto exit;
+
+ nix->bpid[0] = rsp->chan_bpid[0];
+ nix->chan_cnt = rsp->chan_cnt;
+ } else {
+ req = mbox_alloc_msg_nix_bp_disable(mbox);
+ if (req == NULL)
+ return rc;
+ req->chan_base = 0;
+ req->chan_cnt = 1;
+
+ rc = mbox_process(mbox);
+ if (rc)
+ goto exit;
+
+ memset(nix->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
+ nix->chan_cnt = 0;
+ }
+
+exit:
+ return rc;
+}
+
+static int
+nix_fc_cq_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_aq_enq_rsp *rsp;
+ int rc;
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = fc_cfg->cq_cfg.rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = fc_cfg->cq_cfg.rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_READ;
+ }
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto exit;
+
+ fc_cfg->cq_cfg.cq_drop = rsp->cq.bp;
+ fc_cfg->cq_cfg.enable = rsp->cq.bp_ena;
+ fc_cfg->cq_cfg_valid = true;
+
+exit:
+ return rc;
+}
+
+static int
+nix_fc_cq_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = get_mbox(roc_nix);
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = fc_cfg->cq_cfg.rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ if (fc_cfg->cq_cfg.enable) {
+ aq->cq.bpid = nix->bpid[0];
+ aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
+ aq->cq.bp = fc_cfg->cq_cfg.cq_drop;
+ aq->cq_mask.bp = ~(aq->cq_mask.bp);
+ }
+
+ aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable);
+ aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = fc_cfg->cq_cfg.rq;
+ aq->ctype = NIX_AQ_CTYPE_CQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ if (fc_cfg->cq_cfg.enable) {
+ aq->cq.bpid = nix->bpid[0];
+ aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
+ aq->cq.bp = fc_cfg->cq_cfg.cq_drop;
+ aq->cq_mask.bp = ~(aq->cq_mask.bp);
+ }
+
+ aq->cq.bp_ena = !!(fc_cfg->cq_cfg.enable);
+ aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
+ }
+
+ return mbox_process(mbox);
+}
+
+int
+roc_nix_fc_config_get(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return 0;
+
+ if (fc_cfg->cq_cfg_valid)
+ return nix_fc_cq_config_get(roc_nix, fc_cfg);
+ else
+ return nix_fc_rxchan_bpid_get(roc_nix, fc_cfg);
+}
+
+int
+roc_nix_fc_config_set(struct roc_nix *roc_nix, struct roc_nix_fc_cfg *fc_cfg)
+{
+ if (roc_nix_is_vf_or_sdp(roc_nix))
+ return 0;
+
+ if (fc_cfg->cq_cfg_valid)
+ return nix_fc_cq_config_set(roc_nix, fc_cfg);
+ else
+ return nix_fc_rxchan_bpid_set(roc_nix,
+ fc_cfg->rxchan_cfg.enable);
+}
+
+enum roc_nix_fc_mode
+roc_nix_fc_mode_get(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct cgx_pause_frm_cfg *req, *rsp;
+ enum roc_nix_fc_mode mode;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_lbk(roc_nix))
+ return ROC_NIX_FC_NONE;
+
+ req = mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ if (req == NULL)
+ goto exit;
+ req->set = 0;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto exit;
+
+ if (rsp->rx_pause && rsp->tx_pause)
+ mode = ROC_NIX_FC_FULL;
+ else if (rsp->rx_pause)
+ mode = ROC_NIX_FC_RX;
+ else if (rsp->tx_pause)
+ mode = ROC_NIX_FC_TX;
+ else
+ mode = ROC_NIX_FC_NONE;
+
+ nix->rx_pause = rsp->rx_pause;
+ nix->tx_pause = rsp->tx_pause;
+ return mode;
+
+exit:
+ return ROC_NIX_FC_NONE;
+}
+
+int
+roc_nix_fc_mode_set(struct roc_nix *roc_nix, enum roc_nix_fc_mode mode)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct cgx_pause_frm_cfg *req;
+ uint8_t tx_pause, rx_pause;
+ int rc = -ENOSPC;
+
+ if (roc_nix_is_lbk(roc_nix))
+ return NIX_ERR_OP_NOTSUP;
+
+ rx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_RX);
+ tx_pause = (mode == ROC_NIX_FC_FULL) || (mode == ROC_NIX_FC_TX);
+
+ req = mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
+ if (req == NULL)
+ return rc;
+ req->set = 1;
+ req->rx_pause = rx_pause;
+ req->tx_pause = tx_pause;
+
+ rc = mbox_process(mbox);
+ if (rc)
+ goto exit;
+
+ nix->rx_pause = rx_pause;
+ nix->tx_pause = tx_pause;
+
+exit:
+ return rc;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 2bac424..e4d47a1 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -22,6 +22,10 @@ INTERNAL {
roc_nix_dev_init;
roc_nix_dump;
roc_nix_err_intr_ena_dis;
+ roc_nix_fc_config_get;
+ roc_nix_fc_config_set;
+ roc_nix_fc_mode_get;
+ roc_nix_fc_mode_set;
roc_nix_get_base_chan;
roc_nix_get_pf;
roc_nix_get_pf_func;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 31/52] common/cnxk: add nix LSO support and misc utils
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (29 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 30/52] common/cnxk: add nix flow control support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 32/52] common/cnxk: add nix traffic management base support Nithin Dabilpuram
` (24 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Sunil Kumar Kori <skori@marvell.com>
Add support to create LSO formats for TCP segmentation offload
over IPv4/IPv6, for both tunnel and non-tunnel traffic. Tunnel
support covers GRE and UDP-based tunnel protocols.
This patch also adds helper APIs to retrieve EEPROM info
and to configure Rx for different switch headers.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_nix.h | 14 ++
drivers/common/cnxk/roc_nix_ops.c | 416 ++++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 3 +
4 files changed, 434 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_ops.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index e33d3f5..5eedb39 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -23,6 +23,7 @@ sources = files('roc_dev.c',
'roc_nix_mac.c',
'roc_nix_mcast.c',
'roc_nix_npc.c',
+ 'roc_nix_ops.c',
'roc_nix_ptp.c',
'roc_nix_queue.c',
'roc_nix_rss.c',
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index e60a35d..01f8a9f 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -59,6 +59,12 @@ struct roc_nix_fc_cfg {
};
};
+struct roc_nix_eeprom_info {
+#define ROC_NIX_EEPROM_SIZE 256
+ uint16_t sff_id;
+ uint8_t buf[ROC_NIX_EEPROM_SIZE];
+};
+
/* NIX LF RX offload configuration flags.
* These are input flags to roc_nix_lf_alloc:rx_cfg
*/
@@ -310,6 +316,14 @@ int __roc_api roc_nix_mac_link_cb_register(struct roc_nix *roc_nix,
link_status_t link_update);
void __roc_api roc_nix_mac_link_cb_unregister(struct roc_nix *roc_nix);
+/* Ops */
+int __roc_api roc_nix_switch_hdr_set(struct roc_nix *roc_nix,
+ uint64_t switch_header_type);
+int __roc_api roc_nix_lso_fmt_setup(struct roc_nix *roc_nix);
+
+int __roc_api roc_nix_eeprom_info_get(struct roc_nix *roc_nix,
+ struct roc_nix_eeprom_info *info);
+
/* Flow control */
int __roc_api roc_nix_fc_config_set(struct roc_nix *roc_nix,
struct roc_nix_fc_cfg *fc_cfg);
diff --git a/drivers/common/cnxk/roc_nix_ops.c b/drivers/common/cnxk/roc_nix_ops.c
new file mode 100644
index 0000000..501b957
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_ops.c
@@ -0,0 +1,416 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static inline struct mbox *
+get_mbox(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+
+ return dev->mbox;
+}
+
+static void
+nix_lso_tcp(struct nix_lso_format_cfg *req, bool v4)
+{
+ __io struct nix_lso_format *field;
+
+ /* Format works only with TCP packet marked by OL3/OL4 */
+ field = (__io struct nix_lso_format *)&req->fields[0];
+ req->field_mask = NIX_LSO_FIELD_MASK;
+ /* Outer IPv4/IPv6 */
+ field->layer = NIX_TXLAYER_OL3;
+ field->offset = v4 ? 2 : 4;
+ field->sizem1 = 1; /* 2B */
+ field->alg = NIX_LSOALG_ADD_PAYLEN;
+ field++;
+ if (v4) {
+ /* IPID field */
+ field->layer = NIX_TXLAYER_OL3;
+ field->offset = 4;
+ field->sizem1 = 1;
+ /* Incremented linearly per segment */
+ field->alg = NIX_LSOALG_ADD_SEGNUM;
+ field++;
+ }
+
+ /* TCP sequence number update */
+ field->layer = NIX_TXLAYER_OL4;
+ field->offset = 4;
+ field->sizem1 = 3; /* 4 bytes */
+ field->alg = NIX_LSOALG_ADD_OFFSET;
+ field++;
+ /* TCP flags field */
+ field->layer = NIX_TXLAYER_OL4;
+ field->offset = 12;
+ field->sizem1 = 1;
+ field->alg = NIX_LSOALG_TCP_FLAGS;
+ field++;
+}
+
+static void
+nix_lso_udp_tun_tcp(struct nix_lso_format_cfg *req, bool outer_v4,
+ bool inner_v4)
+{
+ __io struct nix_lso_format *field;
+
+ field = (__io struct nix_lso_format *)&req->fields[0];
+ req->field_mask = NIX_LSO_FIELD_MASK;
+ /* Outer IPv4/IPv6 len */
+ field->layer = NIX_TXLAYER_OL3;
+ field->offset = outer_v4 ? 2 : 4;
+ field->sizem1 = 1; /* 2B */
+ field->alg = NIX_LSOALG_ADD_PAYLEN;
+ field++;
+ if (outer_v4) {
+ /* IPID */
+ field->layer = NIX_TXLAYER_OL3;
+ field->offset = 4;
+ field->sizem1 = 1;
+ /* Incremented linearly per segment */
+ field->alg = NIX_LSOALG_ADD_SEGNUM;
+ field++;
+ }
+
+ /* Outer UDP length */
+ field->layer = NIX_TXLAYER_OL4;
+ field->offset = 4;
+ field->sizem1 = 1;
+ field->alg = NIX_LSOALG_ADD_PAYLEN;
+ field++;
+
+ /* Inner IPv4/IPv6 */
+ field->layer = NIX_TXLAYER_IL3;
+ field->offset = inner_v4 ? 2 : 4;
+ field->sizem1 = 1; /* 2B */
+ field->alg = NIX_LSOALG_ADD_PAYLEN;
+ field++;
+ if (inner_v4) {
+ /* IPID field */
+ field->layer = NIX_TXLAYER_IL3;
+ field->offset = 4;
+ field->sizem1 = 1;
+ /* Incremented linearly per segment */
+ field->alg = NIX_LSOALG_ADD_SEGNUM;
+ field++;
+ }
+
+ /* TCP sequence number update */
+ field->layer = NIX_TXLAYER_IL4;
+ field->offset = 4;
+ field->sizem1 = 3; /* 4 bytes */
+ field->alg = NIX_LSOALG_ADD_OFFSET;
+ field++;
+
+ /* TCP flags field */
+ field->layer = NIX_TXLAYER_IL4;
+ field->offset = 12;
+ field->sizem1 = 1;
+ field->alg = NIX_LSOALG_TCP_FLAGS;
+ field++;
+}
+
+static void
+nix_lso_tun_tcp(struct nix_lso_format_cfg *req, bool outer_v4, bool inner_v4)
+{
+ __io struct nix_lso_format *field;
+
+ field = (__io struct nix_lso_format *)&req->fields[0];
+ req->field_mask = NIX_LSO_FIELD_MASK;
+ /* Outer IPv4/IPv6 len */
+ field->layer = NIX_TXLAYER_OL3;
+ field->offset = outer_v4 ? 2 : 4;
+ field->sizem1 = 1; /* 2B */
+ field->alg = NIX_LSOALG_ADD_PAYLEN;
+ field++;
+ if (outer_v4) {
+ /* IPID */
+ field->layer = NIX_TXLAYER_OL3;
+ field->offset = 4;
+ field->sizem1 = 1;
+ /* Incremented linearly per segment */
+ field->alg = NIX_LSOALG_ADD_SEGNUM;
+ field++;
+ }
+
+ /* Inner IPv4/IPv6 */
+ field->layer = NIX_TXLAYER_IL3;
+ field->offset = inner_v4 ? 2 : 4;
+ field->sizem1 = 1; /* 2B */
+ field->alg = NIX_LSOALG_ADD_PAYLEN;
+ field++;
+ if (inner_v4) {
+ /* IPID field */
+ field->layer = NIX_TXLAYER_IL3;
+ field->offset = 4;
+ field->sizem1 = 1;
+ /* Incremented linearly per segment */
+ field->alg = NIX_LSOALG_ADD_SEGNUM;
+ field++;
+ }
+
+ /* TCP sequence number update */
+ field->layer = NIX_TXLAYER_IL4;
+ field->offset = 4;
+ field->sizem1 = 3; /* 4 bytes */
+ field->alg = NIX_LSOALG_ADD_OFFSET;
+ field++;
+
+ /* TCP flags field */
+ field->layer = NIX_TXLAYER_IL4;
+ field->offset = 12;
+ field->sizem1 = 1;
+ field->alg = NIX_LSOALG_TCP_FLAGS;
+ field++;
+}
+
+int
+roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct nix_lso_format_cfg_rsp *rsp;
+ struct nix_lso_format_cfg *req;
+ uint8_t base;
+ int rc = -ENOSPC;
+
+ /*
+ * IPv4/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return rc;
+ nix_lso_tcp(req, true);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ base = rsp->lso_format_idx;
+ if (base != NIX_LSO_FORMAT_IDX_TSOV4)
+ return NIX_ERR_INTERNAL;
+
+ nix->lso_base_idx = base;
+ plt_nix_dbg("tcpv4 lso fmt=%u\n", base);
+
+ /*
+ * IPv6/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_tcp(req, false);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 1)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("tcpv6 lso fmt=%u\n", base + 1);
+
+ /*
+ * IPv4/UDP/TUN HDR/IPv4/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_udp_tun_tcp(req, true, true);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 2)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("udp tun v4v4 fmt=%u\n", base + 2);
+
+ /*
+ * IPv4/UDP/TUN HDR/IPv6/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_udp_tun_tcp(req, true, false);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 3)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("udp tun v4v6 fmt=%u\n", base + 3);
+
+ /*
+ * IPv6/UDP/TUN HDR/IPv4/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_udp_tun_tcp(req, false, true);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 4)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("udp tun v6v4 fmt=%u\n", base + 4);
+
+ /*
+ * IPv6/UDP/TUN HDR/IPv6/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_udp_tun_tcp(req, false, false);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ if (rsp->lso_format_idx != base + 5)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("udp tun v6v6 fmt=%u\n", base + 5);
+
+ /*
+ * IPv4/TUN HDR/IPv4/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_tun_tcp(req, true, true);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 6)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("tun v4v4 fmt=%u\n", base + 6);
+
+ /*
+ * IPv4/TUN HDR/IPv6/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_tun_tcp(req, true, false);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 7)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("tun v4v6 fmt=%u\n", base + 7);
+
+ /*
+ * IPv6/TUN HDR/IPv4/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_tun_tcp(req, false, true);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 8)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("tun v6v4 fmt=%u\n", base + 8);
+
+ /*
+ * IPv6/TUN HDR/IPv6/TCP LSO
+ */
+ req = mbox_alloc_msg_nix_lso_format_cfg(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ nix_lso_tun_tcp(req, false, false);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ if (rsp->lso_format_idx != base + 9)
+ return NIX_ERR_INTERNAL;
+
+ plt_nix_dbg("tun v6v6 fmt=%u\n", base + 9);
+ return 0;
+}
+
+int
+roc_nix_switch_hdr_set(struct roc_nix *roc_nix, uint64_t switch_header_type)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct npc_set_pkind *req;
+ struct msg_resp *rsp;
+ int rc = -ENOSPC;
+
+ if (switch_header_type == 0)
+ switch_header_type = ROC_PRIV_FLAGS_DEFAULT;
+
+ if (switch_header_type != ROC_PRIV_FLAGS_DEFAULT &&
+ switch_header_type != ROC_PRIV_FLAGS_EDSA &&
+ switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
+ switch_header_type != ROC_PRIV_FLAGS_LEN_90B &&
+ switch_header_type != ROC_PRIV_FLAGS_CUSTOM) {
+ plt_err("switch header type is not supported");
+ return NIX_ERR_PARAM;
+ }
+
+ if (switch_header_type == ROC_PRIV_FLAGS_LEN_90B &&
+ !roc_nix_is_sdp(roc_nix)) {
+ plt_err("chlen90b is not supported on non-SDP device");
+ return NIX_ERR_PARAM;
+ }
+
+ if (switch_header_type == ROC_PRIV_FLAGS_HIGIG &&
+ roc_nix_is_vf_or_sdp(roc_nix)) {
+ plt_err("higig2 is supported on PF devices only");
+ return NIX_ERR_PARAM;
+ }
+
+ req = mbox_alloc_msg_npc_set_pkind(mbox);
+ if (req == NULL)
+ return rc;
+ req->mode = switch_header_type;
+ req->dir = PKIND_RX;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ req = mbox_alloc_msg_npc_set_pkind(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->mode = switch_header_type;
+ req->dir = PKIND_TX;
+ return mbox_process_msg(mbox, (void *)&rsp);
+}
+
+int
+roc_nix_eeprom_info_get(struct roc_nix *roc_nix,
+ struct roc_nix_eeprom_info *info)
+{
+ struct mbox *mbox = get_mbox(roc_nix);
+ struct cgx_fw_data *rsp = NULL;
+ int rc;
+
+ if (!info) {
+ plt_err("Input buffer is NULL");
+ return NIX_ERR_PARAM;
+ }
+
+ mbox_alloc_msg_cgx_get_aux_link_info(mbox);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc) {
+ plt_err("Failed to get fw data: %d", rc);
+ return rc;
+ }
+
+ info->sff_id = rsp->fwdata.sfp_eeprom.sff_id;
+ mbox_memcpy(info->buf, rsp->fwdata.sfp_eeprom.buf, SFP_EEPROM_SIZE);
+ return 0;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index e4d47a1..7ee657c 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -40,6 +40,7 @@ INTERNAL {
roc_nix_lf_free;
roc_nix_lf_get_reg_count;
roc_nix_lf_reg_dump;
+ roc_nix_lso_fmt_setup;
roc_nix_mac_addr_add;
roc_nix_mac_addr_del;
roc_nix_mac_addr_set;
@@ -99,6 +100,8 @@ INTERNAL {
roc_nix_num_xstats_get;
roc_nix_xstats_get;
roc_nix_xstats_names_get;
+ roc_nix_switch_hdr_set;
+ roc_nix_eeprom_info_get;
roc_nix_unregister_cq_irqs;
roc_nix_unregister_queue_irqs;
roc_nix_vlan_insert_ena_dis;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 32/52] common/cnxk: add nix traffic management base support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (30 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 31/52] common/cnxk: add nix LSO support and misc utils Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:38 ` [dpdk-dev] [PATCH 33/52] common/cnxk: add nix tm support to add/delete node Nithin Dabilpuram
` (23 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add NIX traffic management base support to init/fini nodes, shaper
profiles and topology, and to set up an SQ for a given user hierarchy
or the default internal hierarchy.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/meson.build | 3 +
drivers/common/cnxk/roc_nix.c | 7 +
drivers/common/cnxk/roc_nix.h | 26 +++
drivers/common/cnxk/roc_nix_priv.h | 219 ++++++++++++++++++
drivers/common/cnxk/roc_nix_queue.c | 9 +
drivers/common/cnxk/roc_nix_tm.c | 397 +++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_tm_ops.c | 67 ++++++
drivers/common/cnxk/roc_nix_tm_utils.c | 62 +++++
drivers/common/cnxk/roc_platform.c | 1 +
drivers/common/cnxk/roc_platform.h | 2 +
drivers/common/cnxk/version.map | 3 +
11 files changed, 796 insertions(+)
create mode 100644 drivers/common/cnxk/roc_nix_tm.c
create mode 100644 drivers/common/cnxk/roc_nix_tm_ops.c
create mode 100644 drivers/common/cnxk/roc_nix_tm_utils.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 5eedb39..a686763 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,9 @@ sources = files('roc_dev.c',
'roc_nix_queue.c',
'roc_nix_rss.c',
'roc_nix_stats.c',
+ 'roc_nix_tm.c',
+ 'roc_nix_tm_ops.c',
+ 'roc_nix_tm_utils.c',
'roc_nix_vlan.c',
'roc_npa.c',
'roc_npa_debug.c',
diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index 7662e59..32ebd77 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -396,11 +396,17 @@ roc_nix_dev_init(struct roc_nix *roc_nix)
if (rc)
goto lf_detach;
+ rc = nix_tm_conf_init(roc_nix);
+ if (rc)
+ goto unregister_irqs;
+
/* Get NIX HW info */
roc_nix_get_hw_info(roc_nix);
nix->dev.drv_inited = true;
return 0;
+unregister_irqs:
+ nix_unregister_irqs(nix);
lf_detach:
nix_lf_detach(nix);
dev_fini:
@@ -421,6 +427,7 @@ roc_nix_dev_fini(struct roc_nix *roc_nix)
if (!nix->dev.drv_inited)
goto fini;
+ nix_tm_conf_fini(roc_nix);
nix_unregister_irqs(nix);
rc = nix_lf_detach(nix);
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 01f8a9f..b89851f 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -292,6 +292,32 @@ void __roc_api roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix);
int __roc_api roc_nix_register_cq_irqs(struct roc_nix *roc_nix);
void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix);
+/* Traffic Management */
+#define ROC_NIX_TM_MAX_SCHED_WT ((uint8_t)~0)
+
+enum roc_nix_tm_tree {
+ ROC_NIX_TM_DEFAULT = 0,
+ ROC_NIX_TM_RLIMIT,
+ ROC_NIX_TM_USER,
+ ROC_NIX_TM_TREE_MAX,
+};
+
+enum roc_tm_node_level {
+ ROC_TM_LVL_ROOT = 0,
+ ROC_TM_LVL_SCH1,
+ ROC_TM_LVL_SCH2,
+ ROC_TM_LVL_SCH3,
+ ROC_TM_LVL_SCH4,
+ ROC_TM_LVL_QUEUE,
+ ROC_TM_LVL_MAX,
+};
+
+/*
+ * TM runtime hierarchy init API.
+ */
+int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable);
+int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq);
+
/* MAC */
int __roc_api roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start);
int __roc_api roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix,
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 6e50a1b..bd4a4e9 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -28,6 +28,77 @@ struct nix_qint {
uint8_t qintx;
};
+/* Traffic Manager */
+#define NIX_TM_MAX_HW_TXSCHQ 512
+#define NIX_TM_HW_ID_INVALID UINT32_MAX
+
+/* TM flags */
+#define NIX_TM_HIERARCHY_ENA BIT_ULL(0)
+#define NIX_TM_TL1_NO_SP BIT_ULL(1)
+#define NIX_TM_TL1_ACCESS BIT_ULL(2)
+
+struct nix_tm_tb {
+ /** Token bucket rate (bytes per second) */
+ uint64_t rate;
+
+ /** Token bucket size (bytes), a.k.a. max burst size */
+ uint64_t size;
+};
+
+struct nix_tm_node {
+ TAILQ_ENTRY(nix_tm_node) node;
+
+ /* Input params */
+ enum roc_nix_tm_tree tree;
+ uint32_t id;
+ uint32_t priority;
+ uint32_t weight;
+ uint16_t lvl;
+ uint32_t parent_id;
+ uint32_t shaper_profile_id;
+ void (*free_fn)(void *node);
+
+ /* Derived params */
+ uint32_t hw_id;
+ uint16_t hw_lvl;
+ uint32_t rr_prio;
+ uint32_t rr_num;
+ uint32_t max_prio;
+ uint32_t parent_hw_id;
+ uint32_t flags : 16;
+#define NIX_TM_NODE_HWRES BIT_ULL(0)
+#define NIX_TM_NODE_ENABLED BIT_ULL(1)
+ /* Shaper algorithm for RED state @NIX_REDALG_E */
+ uint32_t red_algo : 2;
+ uint32_t pkt_mode : 1;
+ uint32_t pkt_mode_set : 1;
+
+ bool child_realloc;
+ struct nix_tm_node *parent;
+
+ /* Non-leaf node sp count */
+ uint32_t n_sp_priorities;
+
+ /* Last stats */
+ uint64_t last_pkts;
+ uint64_t last_bytes;
+};
+
+struct nix_tm_shaper_profile {
+ TAILQ_ENTRY(nix_tm_shaper_profile) shaper;
+ struct nix_tm_tb commit;
+ struct nix_tm_tb peak;
+ int32_t pkt_len_adj;
+ bool pkt_mode;
+ uint32_t id;
+ void (*free_fn)(void *profile);
+
+ uint32_t ref_cnt;
+};
+
+TAILQ_HEAD(nix_tm_node_list, nix_tm_node);
+TAILQ_HEAD(nix_tm_shaper_profile_list, nix_tm_shaper_profile);
+
struct nix {
uint16_t reta[ROC_NIX_RSS_GRPS][ROC_NIX_RSS_RETA_MAX];
enum roc_nix_rss_reta_sz reta_sz;
@@ -72,6 +143,23 @@ struct nix {
bool ptp_en;
bool is_nix1;
+ /* Traffic manager info */
+
+ /* Contiguous resources per lvl */
+ struct plt_bitmap *schq_contig_bmp[NIX_TXSCH_LVL_CNT];
+ /* Discontiguous resources per lvl */
+ struct plt_bitmap *schq_bmp[NIX_TXSCH_LVL_CNT];
+ void *schq_bmp_mem;
+
+ struct nix_tm_shaper_profile_list shaper_profile_list;
+ struct nix_tm_node_list trees[ROC_NIX_TM_TREE_MAX];
+ enum roc_nix_tm_tree tm_tree;
+ uint64_t tm_rate_min;
+ uint16_t tm_root_lvl;
+ uint16_t tm_flags;
+ uint16_t tm_link_cfg_lvl;
+ uint16_t contig_rsvd[NIX_TXSCH_LVL_CNT];
+ uint16_t discontig_rsvd[NIX_TXSCH_LVL_CNT];
} __plt_cache_aligned;
enum nix_err_status {
@@ -83,6 +171,29 @@ enum nix_err_status {
NIX_ERR_QUEUE_INVALID_RANGE,
NIX_ERR_AQ_READ_FAILED,
NIX_ERR_AQ_WRITE_FAILED,
+ NIX_ERR_TM_LEAF_NODE_GET,
+ NIX_ERR_TM_INVALID_LVL,
+ NIX_ERR_TM_INVALID_PRIO,
+ NIX_ERR_TM_INVALID_PARENT,
+ NIX_ERR_TM_NODE_EXISTS,
+ NIX_ERR_TM_INVALID_NODE,
+ NIX_ERR_TM_INVALID_SHAPER_PROFILE,
+ NIX_ERR_TM_PKT_MODE_MISMATCH,
+ NIX_ERR_TM_WEIGHT_EXCEED,
+ NIX_ERR_TM_CHILD_EXISTS,
+ NIX_ERR_TM_INVALID_PEAK_SZ,
+ NIX_ERR_TM_INVALID_PEAK_RATE,
+ NIX_ERR_TM_INVALID_COMMIT_SZ,
+ NIX_ERR_TM_INVALID_COMMIT_RATE,
+ NIX_ERR_TM_SHAPER_PROFILE_IN_USE,
+ NIX_ERR_TM_SHAPER_PROFILE_EXISTS,
+ NIX_ERR_TM_SHAPER_PKT_LEN_ADJUST,
+ NIX_ERR_TM_INVALID_TREE,
+ NIX_ERR_TM_PARENT_PRIO_UPDATE,
+ NIX_ERR_TM_PRIO_EXCEEDED,
+ NIX_ERR_TM_PRIO_ORDER,
+ NIX_ERR_TM_MULTIPLE_RR_GROUPS,
+ NIX_ERR_TM_SQ_UPDATE_FAIL,
NIX_ERR_NDC_SYNC,
};
@@ -116,4 +227,112 @@ nix_priv_to_roc_nix(struct nix *nix)
int nix_register_irqs(struct nix *nix);
void nix_unregister_irqs(struct nix *nix);
+/* TM */
+#define NIX_TM_TREE_MASK_ALL \
+ (BIT(ROC_NIX_TM_DEFAULT) | BIT(ROC_NIX_TM_RLIMIT) | \
+ BIT(ROC_NIX_TM_USER))
+
+/* NIX_MAX_HW_FRS ==
+ * NIX_TM_DFLT_RR_WT * NIX_TM_RR_QUANTUM_MAX / ROC_NIX_TM_MAX_SCHED_WT
+ */
+#define NIX_TM_DFLT_RR_WT 71
+
+/* Default TL1 priority and Quantum from AF */
+#define NIX_TM_TL1_DFLT_RR_QTM ((1 << 24) - 1)
+#define NIX_TM_TL1_DFLT_RR_PRIO 1
+
+struct nix_tm_shaper_data {
+ uint64_t burst_exponent;
+ uint64_t burst_mantissa;
+ uint64_t div_exp;
+ uint64_t exponent;
+ uint64_t mantissa;
+ uint64_t burst;
+ uint64_t rate;
+};
+
+static inline uint64_t
+nix_tm_weight_to_rr_quantum(uint64_t weight)
+{
+ uint64_t max = (roc_model_is_cn9k() ? NIX_CN9K_TM_RR_QUANTUM_MAX :
+ NIX_TM_RR_QUANTUM_MAX);
+
+ weight &= (uint64_t)ROC_NIX_TM_MAX_SCHED_WT;
+ return (weight * max) / ROC_NIX_TM_MAX_SCHED_WT;
+}
+
+static inline bool
+nix_tm_have_tl1_access(struct nix *nix)
+{
+ return !!(nix->tm_flags & NIX_TM_TL1_ACCESS);
+}
+
+static inline bool
+nix_tm_is_leaf(struct nix *nix, int lvl)
+{
+ if (nix_tm_have_tl1_access(nix))
+ return (lvl == ROC_TM_LVL_QUEUE);
+ return (lvl == ROC_TM_LVL_SCH4);
+}
+
+static inline struct nix_tm_node_list *
+nix_tm_node_list(struct nix *nix, enum roc_nix_tm_tree tree)
+{
+ return &nix->trees[tree];
+}
+
+static inline const char *
+nix_tm_hwlvl2str(uint32_t hw_lvl)
+{
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_MDQ:
+ return "SMQ/MDQ";
+ case NIX_TXSCH_LVL_TL4:
+ return "TL4";
+ case NIX_TXSCH_LVL_TL3:
+ return "TL3";
+ case NIX_TXSCH_LVL_TL2:
+ return "TL2";
+ case NIX_TXSCH_LVL_TL1:
+ return "TL1";
+ default:
+ break;
+ }
+
+ return "???";
+}
+
+static inline const char *
+nix_tm_tree2str(enum roc_nix_tm_tree tree)
+{
+ if (tree == ROC_NIX_TM_DEFAULT)
+ return "Default Tree";
+ else if (tree == ROC_NIX_TM_RLIMIT)
+ return "Rate Limit Tree";
+ else if (tree == ROC_NIX_TM_USER)
+ return "User Tree";
+ return "???";
+}
+
+/*
+ * TM priv ops.
+ */
+
+int nix_tm_conf_init(struct roc_nix *roc_nix);
+void nix_tm_conf_fini(struct roc_nix *roc_nix);
+int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum,
+ uint16_t *smq);
+int nix_tm_sq_flush_pre(struct roc_nix_sq *sq);
+int nix_tm_sq_flush_post(struct roc_nix_sq *sq);
+int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable);
+int nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node);
+
+/*
+ * TM priv utils.
+ */
+struct nix_tm_node *nix_tm_node_search(struct nix *nix, uint32_t node_id,
+ enum roc_nix_tm_tree tree);
+uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
+ volatile uint64_t *reg, volatile uint64_t *regval);
+
#endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index bb410d5..30c9034 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -788,6 +788,12 @@ roc_nix_sq_init(struct roc_nix *roc_nix, struct roc_nix_sq *sq)
if (rc)
goto nomem;
+ rc = nix_tm_leaf_data_get(nix, sq->qid, &rr_quantum, &smq);
+ if (rc) {
+ rc = NIX_ERR_TM_LEAF_NODE_GET;
+ goto nomem;
+ }
+
/* Init SQ context */
if (roc_model_is_cn9k())
sq_cn9k_init(nix, sq, rr_quantum, smq);
@@ -831,6 +837,8 @@ roc_nix_sq_fini(struct roc_nix_sq *sq)
qid = sq->qid;
+ rc = nix_tm_sq_flush_pre(sq);
+
/* Release SQ context */
if (roc_model_is_cn9k())
rc |= sq_cn9k_fini(roc_nix_to_nix_priv(sq->roc_nix), sq);
@@ -845,6 +853,7 @@ roc_nix_sq_fini(struct roc_nix_sq *sq)
if (mbox_process(mbox))
rc |= NIX_ERR_NDC_SYNC;
+ rc |= nix_tm_sq_flush_post(sq);
rc |= roc_npa_pool_destroy(sq->aura_handle);
plt_free(sq->fc);
plt_free(sq->sqe_mem);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
new file mode 100644
index 0000000..8b26bc7
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -0,0 +1,397 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+
+int
+nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txschq_config *req;
+ struct nix_tm_node *p;
+ int rc;
+
+ /* Enable nodes in path for flush to succeed */
+ if (!nix_tm_is_leaf(nix, node->lvl))
+ p = node;
+ else
+ p = node->parent;
+ while (p) {
+ if (!(p->flags & NIX_TM_NODE_ENABLED) &&
+ (p->flags & NIX_TM_NODE_HWRES)) {
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = p->hw_lvl;
+ req->num_regs = nix_tm_sw_xoff_prep(p, false, req->reg,
+ req->regval);
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ p->flags |= NIX_TM_NODE_ENABLED;
+ }
+ p = p->parent;
+ }
+
+ return 0;
+}
+
+int
+nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txschq_config *req;
+ uint16_t smq;
+ int rc;
+
+ smq = node->hw_id;
+ plt_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
+ enable ? "enable" : "disable");
+
+ rc = nix_tm_clear_path_xoff(nix, node);
+ if (rc)
+ return rc;
+
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_SMQ;
+ req->num_regs = 1;
+
+ req->reg[0] = NIX_AF_SMQX_CFG(smq);
+ req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
+ req->regval_mask[0] =
+ enable ? ~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
+
+ return mbox_process(mbox);
+}
+
+int
+nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum,
+ uint16_t *smq)
+{
+ struct nix_tm_node *node;
+ int rc;
+
+ node = nix_tm_node_search(nix, sq, nix->tm_tree);
+
+ /* Check if we found a valid leaf node */
+ if (!node || !nix_tm_is_leaf(nix, node->lvl) || !node->parent ||
+ node->parent->hw_id == NIX_TM_HW_ID_INVALID) {
+ return -EIO;
+ }
+
+ /* Get SMQ Id of leaf node's parent */
+ *smq = node->parent->hw_id;
+ *rr_quantum = nix_tm_weight_to_rr_quantum(node->weight);
+
+ rc = nix_tm_smq_xoff(nix, node->parent, false);
+ if (rc)
+ return rc;
+ node->flags |= NIX_TM_NODE_ENABLED;
+ return 0;
+}
+
+int
+roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq)
+{
+ struct nix *nix = roc_nix_to_nix_priv(sq->roc_nix);
+ uint16_t sqb_cnt, head_off, tail_off;
+ uint64_t wdata, val, prev;
+ uint16_t qid = sq->qid;
+ int64_t *regaddr;
+ uint64_t timeout; /* 10's of usec */
+
+ /* Wait for enough time based on shaper min rate */
+ timeout = (sq->nb_desc * roc_nix_max_pkt_len(sq->roc_nix) * 8 * 1E5);
+ /* Wait for the worst case scenario of this SQ being last in
+ * priority and thus having to wait for all other SQs to drain
+ * out on their own.
+ */
+ timeout = timeout * nix->nb_tx_queues;
+ timeout = timeout / nix->tm_rate_min;
+ if (!timeout)
+ timeout = 10000;
+
+ wdata = ((uint64_t)qid << 32);
+ regaddr = (int64_t *)(nix->base + NIX_LF_SQ_OP_STATUS);
+ val = roc_atomic64_add_nosync(wdata, regaddr);
+
+ /* Spin multiple iterations as "sq->fc_cache_pkts" can still
+ * have space to send pkts even though fc_mem is disabled
+ */
+
+ while (true) {
+ prev = val;
+ plt_delay_us(10);
+ val = roc_atomic64_add_nosync(wdata, regaddr);
+ /* Continue on error */
+ if (val & BIT_ULL(63))
+ continue;
+
+ if (prev != val)
+ continue;
+
+ sqb_cnt = val & 0xFFFF;
+ head_off = (val >> 20) & 0x3F;
+ tail_off = (val >> 28) & 0x3F;
+
+ /* SQ reached quiescent state */
+ if (sqb_cnt <= 1 && head_off == tail_off &&
+ (*(volatile uint64_t *)sq->fc == sq->nb_sqb_bufs)) {
+ break;
+ }
+
+ /* Timeout */
+ if (!timeout)
+ goto exit;
+ timeout--;
+ }
+
+ return 0;
+exit:
+ roc_nix_queues_ctx_dump(sq->roc_nix);
+ return -EFAULT;
+}
+
+/* Flush and disable tx queue and its parent SMQ */
+int
+nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
+{
+ struct roc_nix *roc_nix = sq->roc_nix;
+ struct nix_tm_node *node, *sibling;
+ struct nix_tm_node_list *list;
+ enum roc_nix_tm_tree tree;
+ struct mbox *mbox;
+ struct nix *nix;
+ uint16_t qid;
+ int rc;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+
+ /* Need not do anything if tree is in disabled state */
+ if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+ return 0;
+
+ mbox = (&nix->dev)->mbox;
+ qid = sq->qid;
+
+ tree = nix->tm_tree;
+ list = nix_tm_node_list(nix, tree);
+
+ /* Find the node for this SQ */
+ node = nix_tm_node_search(nix, qid, tree);
+ if (!node || !(node->flags & NIX_TM_NODE_ENABLED)) {
+ plt_err("Invalid node/state for sq %u", qid);
+ return -EFAULT;
+ }
+
+ /* Enable CGX RXTX to drain pkts */
+ if (!roc_nix->io_enabled) {
+ /* Though this enables both RX MCAM entries and the CGX link,
+ * we assume all Rx queues were already stopped earlier.
+ */
+ mbox_alloc_msg_nix_lf_start_rx(mbox);
+ rc = mbox_process(mbox);
+ if (rc) {
+ plt_err("cgx start failed, rc=%d", rc);
+ return rc;
+ }
+ }
+
+ /* Disable SMQ xoff in case it was enabled earlier */
+ rc = nix_tm_smq_xoff(nix, node->parent, false);
+ if (rc) {
+ plt_err("Failed to enable smq %u, rc=%d", node->parent->hw_id,
+ rc);
+ return rc;
+ }
+
+ /* As per HRM, to disable an SQ, all other SQ's
+ * that feed to same SMQ must be paused before SMQ flush.
+ */
+ TAILQ_FOREACH(sibling, list, node) {
+ if (sibling->parent != node->parent)
+ continue;
+ if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+ continue;
+
+ qid = sibling->id;
+ sq = nix->sqs[qid];
+ if (!sq)
+ continue;
+
+ rc = roc_nix_tm_sq_aura_fc(sq, false);
+ if (rc) {
+ plt_err("Failed to disable sqb aura fc, rc=%d", rc);
+ goto cleanup;
+ }
+
+ /* Wait for sq entries to be flushed */
+ rc = roc_nix_tm_sq_flush_spin(sq);
+ if (rc) {
+ plt_err("Failed to drain sq %u, rc=%d\n", sq->qid, rc);
+ return rc;
+ }
+ }
+
+ node->flags &= ~NIX_TM_NODE_ENABLED;
+
+ /* Disable and flush */
+ rc = nix_tm_smq_xoff(nix, node->parent, true);
+ if (rc) {
+ plt_err("Failed to disable smq %u, rc=%d", node->parent->hw_id,
+ rc);
+ goto cleanup;
+ }
+cleanup:
+ /* Restore cgx state */
+ if (!roc_nix->io_enabled) {
+ mbox_alloc_msg_nix_lf_stop_rx(mbox);
+ rc |= mbox_process(mbox);
+ }
+
+ return rc;
+}
+
+int
+nix_tm_sq_flush_post(struct roc_nix_sq *sq)
+{
+ struct roc_nix *roc_nix = sq->roc_nix;
+ struct nix_tm_node *node, *sibling;
+ struct nix_tm_node_list *list;
+ enum roc_nix_tm_tree tree;
+ struct roc_nix_sq *s_sq;
+ bool once = false;
+ uint16_t qid, s_qid;
+ struct nix *nix;
+ int rc;
+
+ nix = roc_nix_to_nix_priv(roc_nix);
+
+ /* Need not do anything if tree is in disabled state */
+ if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+ return 0;
+
+ qid = sq->qid;
+ tree = nix->tm_tree;
+ list = nix_tm_node_list(nix, tree);
+
+ /* Find the node for this SQ */
+ node = nix_tm_node_search(nix, qid, tree);
+ if (!node) {
+ plt_err("Invalid node for sq %u", qid);
+ return -EFAULT;
+ }
+
+ /* Enable all the siblings back */
+ TAILQ_FOREACH(sibling, list, node) {
+ if (sibling->parent != node->parent)
+ continue;
+
+ if (sibling->id == qid)
+ continue;
+
+ if (!(sibling->flags & NIX_TM_NODE_ENABLED))
+ continue;
+
+ s_qid = sibling->id;
+ s_sq = nix->sqs[s_qid];
+ if (!s_sq)
+ continue;
+
+ if (!once) {
+ /* Enable back if any SQ is still present */
+ rc = nix_tm_smq_xoff(nix, node->parent, false);
+ if (rc) {
+ plt_err("Failed to enable smq %u, rc=%d",
+ node->parent->hw_id, rc);
+ return rc;
+ }
+ once = true;
+ }
+
+ rc = roc_nix_tm_sq_aura_fc(s_sq, true);
+ if (rc) {
+ plt_err("Failed to enable sqb aura fc, rc=%d", rc);
+ return rc;
+ }
+ }
+
+ return 0;
+}
+
+int
+nix_tm_conf_init(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint32_t bmp_sz, hw_lvl;
+ void *bmp_mem;
+ int rc, i;
+
+ nix->tm_flags = 0;
+ for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++)
+ TAILQ_INIT(&nix->trees[i]);
+
+ TAILQ_INIT(&nix->shaper_profile_list);
+ nix->tm_rate_min = 1E9; /* 1Gbps */
+
+ rc = -ENOMEM;
+ bmp_sz = plt_bitmap_get_memory_footprint(NIX_TM_MAX_HW_TXSCHQ);
+ bmp_mem = plt_zmalloc(bmp_sz * NIX_TXSCH_LVL_CNT * 2, 0);
+ if (!bmp_mem)
+ return rc;
+ nix->schq_bmp_mem = bmp_mem;
+
+ /* Init contiguous and discontiguous bitmap per lvl */
+ rc = -EIO;
+ for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+ /* Bitmap for discontiguous resource */
+ nix->schq_bmp[hw_lvl] =
+ plt_bitmap_init(NIX_TM_MAX_HW_TXSCHQ, bmp_mem, bmp_sz);
+ if (!nix->schq_bmp[hw_lvl])
+ goto exit;
+
+ bmp_mem = PLT_PTR_ADD(bmp_mem, bmp_sz);
+
+ /* Bitmap for contiguous resource */
+ nix->schq_contig_bmp[hw_lvl] =
+ plt_bitmap_init(NIX_TM_MAX_HW_TXSCHQ, bmp_mem, bmp_sz);
+ if (!nix->schq_contig_bmp[hw_lvl])
+ goto exit;
+
+ bmp_mem = PLT_PTR_ADD(bmp_mem, bmp_sz);
+ }
+
+ /* Disable TL1 static priority when VFs are enabled,
+ * as otherwise the VFs' TL2 would need to be reallocated
+ * at runtime to support a specific PF topology.
+ */
+ if (nix->pci_dev->max_vfs)
+ nix->tm_flags |= NIX_TM_TL1_NO_SP;
+
+ /* TL1 access is only for PFs */
+ if (roc_nix_is_pf(roc_nix)) {
+ nix->tm_flags |= NIX_TM_TL1_ACCESS;
+ nix->tm_root_lvl = NIX_TXSCH_LVL_TL1;
+ } else {
+ nix->tm_root_lvl = NIX_TXSCH_LVL_TL2;
+ }
+
+ return 0;
+exit:
+ nix_tm_conf_fini(roc_nix);
+ return rc;
+}
+
+void
+nix_tm_conf_fini(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint16_t hw_lvl;
+
+ for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+ plt_bitmap_free(nix->schq_bmp[hw_lvl]);
+ plt_bitmap_free(nix->schq_contig_bmp[hw_lvl]);
+ }
+ plt_free(nix->schq_bmp_mem);
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
new file mode 100644
index 0000000..a3f6de0
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+int
+roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable)
+{
+ struct npa_aq_enq_req *req;
+ struct npa_aq_enq_rsp *rsp;
+ uint64_t aura_handle;
+ struct npa_lf *lf;
+ struct mbox *mbox;
+ int rc = -ENOSPC;
+
+ plt_tm_dbg("Setting SQ %u SQB aura FC to %s", sq->qid,
+ enable ? "enable" : "disable");
+
+ lf = idev_npa_obj_get();
+ if (!lf)
+ return NPA_ERR_DEVICE_NOT_BOUNDED;
+
+ mbox = lf->mbox;
+ /* Set/clear sqb aura fc_ena */
+ aura_handle = sq->aura_handle;
+ req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (req == NULL)
+ return rc;
+
+ req->aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_WRITE;
+ /* Below is not needed for aura writes but AF driver needs it */
+ /* AF will translate to associated poolctx */
+ req->aura.pool_addr = req->aura_id;
+
+ req->aura.fc_ena = enable;
+ req->aura_mask.fc_ena = 1;
+
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Read back npa aura ctx */
+ req = mbox_alloc_msg_npa_aq_enq(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+
+ req->aura_id = roc_npa_aura_handle_to_aura(aura_handle);
+ req->ctype = NPA_AQ_CTYPE_AURA;
+ req->op = NPA_AQ_INSTOP_READ;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Init when enabled as there might be no triggers */
+ if (enable)
+ *(volatile uint64_t *)sq->fc = rsp->aura.count;
+ else
+ *(volatile uint64_t *)sq->fc = sq->nb_sqb_bufs;
+ /* Sync write barrier */
+ plt_wmb();
+ return 0;
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
new file mode 100644
index 0000000..86f66cf
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+struct nix_tm_node *
+nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree)
+{
+ struct nix_tm_node_list *list;
+ struct nix_tm_node *node;
+
+ list = nix_tm_node_list(nix, tree);
+ TAILQ_FOREACH(node, list, node) {
+ if (node->id == node_id)
+ return node;
+ }
+ return NULL;
+}
+
+uint8_t
+nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
+ volatile uint64_t *reg, volatile uint64_t *regval)
+{
+ uint32_t hw_lvl = node->hw_lvl;
+ uint32_t schq = node->hw_id;
+ uint8_t k = 0;
+
+ plt_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
+ nix_tm_hwlvl2str(hw_lvl), schq, node->lvl, node->id, enable,
+ node);
+
+ regval[k] = enable;
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_MDQ:
+ reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+ k++;
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+ k++;
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+ k++;
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+ k++;
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+ k++;
+ break;
+ default:
+ break;
+ }
+
+ return k;
+}
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index f65033d..dd33e58 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -32,3 +32,4 @@ RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE);
+RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 466c71a..8f46fb3 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -128,6 +128,7 @@ extern int cnxk_logtype_base;
extern int cnxk_logtype_mbox;
extern int cnxk_logtype_npa;
extern int cnxk_logtype_nix;
+extern int cnxk_logtype_tm;
#define plt_err(fmt, args...) \
RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -147,6 +148,7 @@ extern int cnxk_logtype_nix;
#define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__)
#define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__)
#define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__)
+#define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 7ee657c..c146137 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -5,6 +5,7 @@ INTERNAL {
cnxk_logtype_mbox;
cnxk_logtype_nix;
cnxk_logtype_npa;
+ cnxk_logtype_tm;
plt_init;
roc_clk_freq_get;
roc_error_msg_get;
@@ -102,6 +103,8 @@ INTERNAL {
roc_nix_xstats_names_get;
roc_nix_switch_hdr_set;
roc_nix_eeprom_info_get;
+ roc_nix_tm_sq_aura_fc;
+ roc_nix_tm_sq_flush_spin;
roc_nix_unregister_cq_irqs;
roc_nix_unregister_queue_irqs;
roc_nix_vlan_insert_ena_dis;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 33/52] common/cnxk: add nix tm support to add/delete node
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (31 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 32/52] common/cnxk: add nix traffic management base support Nithin Dabilpuram
@ 2021-03-05 13:38 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 34/52] common/cnxk: add nix tm shaper profile add support Nithin Dabilpuram
` (22 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:38 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add support to add/delete nodes in a hierarchy.
This patch also adds misc utilities to get a node's name,
walk through nodes, etc.
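The node get/next walk added here is a thin traversal of a per-tree TAILQ. A minimal standalone sketch of that contract (the `tm_node` type and helper names below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

/* Illustrative stand-in for the driver's per-tree node list. */
struct tm_node {
	unsigned int id;
	TAILQ_ENTRY(tm_node) link;
};
TAILQ_HEAD(tm_node_list, tm_node);

/* Linear search by id, as nix_tm_node_search() does per tree. */
static struct tm_node *
node_search(struct tm_node_list *list, unsigned int id)
{
	struct tm_node *n;

	TAILQ_FOREACH(n, list, link)
		if (n->id == id)
			return n;
	return NULL;
}

/* NULL prev returns the head, otherwise the following entry --
 * the same walk contract as roc_nix_tm_node_next().
 */
static struct tm_node *
node_next(struct tm_node_list *list, struct tm_node *prev)
{
	return prev ? TAILQ_NEXT(prev, link) : TAILQ_FIRST(list);
}
```

Callers iterate by passing the previously returned node back in until NULL, so no iterator state lives in the driver.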
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 42 +++++++
drivers/common/cnxk/roc_nix_priv.h | 14 +++
drivers/common/cnxk/roc_nix_tm.c | 205 +++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_tm_ops.c | 88 ++++++++++++++
drivers/common/cnxk/roc_nix_tm_utils.c | 212 +++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 7 ++
6 files changed, 568 insertions(+)
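The add path below rejects priority holes among siblings and more than one round-robin group per parent (nix_tm_validate_prio). A self-contained sketch of that rule, with illustrative constants and error codes in place of the driver's:

```c
#include <assert.h>

#define SP_PRIO_MAX 10 /* illustrative, not the driver's limit */

/* Count siblings at each priority; reject holes and more than one
 * RR group, mirroring the nix_tm_validate_prio() logic in this patch.
 */
static int
validate_prio(const unsigned int *sibling_prios, int n, unsigned int prio)
{
	unsigned int counts[SP_PRIO_MAX] = {0};
	int rr_groups = 0;
	int i;

	if (prio >= SP_PRIO_MAX)
		return -1; /* priority exceeds max */

	counts[prio] = 1; /* account for the node being added */
	for (i = 0; i < n; i++)
		counts[sibling_prios[i]]++;

	for (i = 0; i < SP_PRIO_MAX; i++)
		if (counts[i] > 1)
			rr_groups++; /* >1 node at one prio forms an RR group */

	if (rr_groups > 1)
		return -2; /* at most one RR group per parent */
	if (prio && !counts[prio - 1])
		return -3; /* hole in the priority sequence */
	return 0;
}
```

With siblings at priorities {0, 1, 1}, adding at priority 2 passes, adding at priority 3 fails on the hole, and adding a second node at a new duplicated priority fails on the second RR group.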
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index b89851f..fbbea1b 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -294,6 +294,8 @@ void __roc_api roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix);
/* Traffic Management */
#define ROC_NIX_TM_MAX_SCHED_WT ((uint8_t)~0)
+#define ROC_NIX_TM_SHAPER_PROFILE_NONE UINT32_MAX
+#define ROC_NIX_TM_NODE_ID_INVALID UINT32_MAX
enum roc_nix_tm_tree {
ROC_NIX_TM_DEFAULT = 0,
@@ -318,6 +320,46 @@ enum roc_tm_node_level {
int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable);
int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq);
+/*
+ * TM User hierarchy API.
+ */
+
+struct roc_nix_tm_node {
+#define ROC_NIX_TM_NODE_SZ (128)
+ uint8_t reserved[ROC_NIX_TM_NODE_SZ];
+
+ uint32_t id;
+ uint32_t parent_id;
+ uint32_t priority;
+ uint32_t weight;
+ uint32_t shaper_profile_id;
+ uint16_t lvl;
+ bool pkt_mode;
+ bool pkt_mode_set;
+ /* Function to free this memory */
+ void (*free_fn)(void *node);
+};
+
+int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix,
+ struct roc_nix_tm_node *roc_node);
+int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
+ bool free);
+int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix,
+ uint32_t node_id, bool pkt_mode);
+
+struct roc_nix_tm_node *__roc_api roc_nix_tm_node_get(struct roc_nix *roc_nix,
+ uint32_t node_id);
+struct roc_nix_tm_node *__roc_api
+roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev);
+
+/*
+ * TM utilities API.
+ */
+int __roc_api roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id);
+int __roc_api roc_nix_tm_node_name_get(struct roc_nix *roc_nix,
+ uint32_t node_id, char *buf,
+ size_t buflen);
+
/* MAC */
int __roc_api roc_nix_mac_rxtx_start_stop(struct roc_nix *roc_nix, bool start);
int __roc_api roc_nix_mac_link_event_start_stop(struct roc_nix *roc_nix,
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index bd4a4e9..2f7c20e 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -325,14 +325,28 @@ int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum,
int nix_tm_sq_flush_pre(struct roc_nix_sq *sq);
int nix_tm_sq_flush_post(struct roc_nix_sq *sq);
int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable);
+int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node);
+int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
+ enum roc_nix_tm_tree tree, bool free);
+int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node);
int nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node);
/*
* TM priv utils.
*/
+uint16_t nix_tm_lvl2nix(struct nix *nix, uint32_t lvl);
+uint16_t nix_tm_lvl2nix_tl1_root(uint32_t lvl);
+uint16_t nix_tm_lvl2nix_tl2_root(uint32_t lvl);
+uint16_t nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig);
+int nix_tm_validate_prio(struct nix *nix, uint32_t lvl, uint32_t parent_id,
+ uint32_t priority, enum roc_nix_tm_tree tree);
struct nix_tm_node *nix_tm_node_search(struct nix *nix, uint32_t node_id,
enum roc_nix_tm_tree tree);
+struct nix_tm_shaper_profile *nix_tm_shaper_profile_search(struct nix *nix,
+ uint32_t id);
uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
volatile uint64_t *reg, volatile uint64_t *regval);
+struct nix_tm_node *nix_tm_node_alloc(void);
+void nix_tm_node_free(struct nix_tm_node *node);
#endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 8b26bc7..776d6b4 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -5,6 +5,100 @@
#include "roc_api.h"
#include "roc_priv.h"
+int
+nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile *profile;
+ uint32_t node_id, parent_id, lvl;
+ struct nix_tm_node *parent_node;
+ uint32_t priority, profile_id;
+ uint8_t hw_lvl, exp_next_lvl;
+ enum roc_nix_tm_tree tree;
+ int rc;
+
+ node_id = node->id;
+ priority = node->priority;
+ parent_id = node->parent_id;
+ profile_id = node->shaper_profile_id;
+ lvl = node->lvl;
+ tree = node->tree;
+
+ plt_tm_dbg("Add node %s lvl %u id %u, prio 0x%x weight 0x%x "
+ "parent %u profile 0x%x tree %u",
+ nix_tm_hwlvl2str(nix_tm_lvl2nix(nix, lvl)), lvl, node_id,
+ priority, node->weight, parent_id, profile_id, tree);
+
+ if (tree >= ROC_NIX_TM_TREE_MAX)
+ return NIX_ERR_PARAM;
+
+ /* Translate SW level IDs to NIX HW level IDs */
+ hw_lvl = nix_tm_lvl2nix(nix, lvl);
+ if (hw_lvl == NIX_TXSCH_LVL_CNT && !nix_tm_is_leaf(nix, lvl))
+ return NIX_ERR_TM_INVALID_LVL;
+
+ /* Leaf nodes must have priority 0 */
+ if (nix_tm_is_leaf(nix, lvl) && priority != 0)
+ return NIX_ERR_TM_INVALID_PRIO;
+
+ parent_node = nix_tm_node_search(nix, parent_id, tree);
+
+ if (node_id < nix->nb_tx_queues)
+ exp_next_lvl = NIX_TXSCH_LVL_SMQ;
+ else
+ exp_next_lvl = hw_lvl + 1;
+
+ /* Non-root nodes must have a parent at the expected level */
+ if (hw_lvl != nix->tm_root_lvl &&
+ (!parent_node || parent_node->hw_lvl != exp_next_lvl))
+ return NIX_ERR_TM_INVALID_PARENT;
+
+ /* Check if a node already exists */
+ if (nix_tm_node_search(nix, node_id, tree))
+ return NIX_ERR_TM_NODE_EXISTS;
+
+ profile = nix_tm_shaper_profile_search(nix, profile_id);
+ if (!nix_tm_is_leaf(nix, lvl)) {
+ /* Check if shaper profile exists for non leaf node */
+ if (!profile && profile_id != ROC_NIX_TM_SHAPER_PROFILE_NONE)
+ return NIX_ERR_TM_INVALID_SHAPER_PROFILE;
+
+ /* Packet mode in profile should match with that of tm node */
+ if (profile && profile->pkt_mode != node->pkt_mode)
+ return NIX_ERR_TM_PKT_MODE_MISMATCH;
+ }
+
+ /* Check for a second DWRR group among siblings or holes in prio */
+ rc = nix_tm_validate_prio(nix, lvl, parent_id, priority, tree);
+ if (rc)
+ return rc;
+
+ if (node->weight > ROC_NIX_TM_MAX_SCHED_WT)
+ return NIX_ERR_TM_WEIGHT_EXCEED;
+
+ /* Maintain minimum weight */
+ if (!node->weight)
+ node->weight = 1;
+
+ node->hw_lvl = nix_tm_lvl2nix(nix, lvl);
+ node->rr_prio = 0xF;
+ node->max_prio = UINT32_MAX;
+ node->hw_id = NIX_TM_HW_ID_INVALID;
+ node->flags = 0;
+
+ if (profile)
+ profile->ref_cnt++;
+
+ node->parent = parent_node;
+ if (parent_node)
+ parent_node->child_realloc = true;
+ node->parent_hw_id = NIX_TM_HW_ID_INVALID;
+
+ TAILQ_INSERT_TAIL(&nix->trees[tree], node, node);
+ plt_tm_dbg("Added node %s lvl %u id %u (%p)",
+ nix_tm_hwlvl2str(node->hw_lvl), lvl, node_id, node);
+ return 0;
+}
int
nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node)
@@ -321,6 +415,115 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq)
}
int
+nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txsch_free_req *req;
+ struct plt_bitmap *bmp;
+ uint16_t avail, hw_id;
+ uint8_t hw_lvl;
+ int rc = -ENOSPC;
+
+ hw_lvl = node->hw_lvl;
+ hw_id = node->hw_id;
+ bmp = nix->schq_bmp[hw_lvl];
+ /* Free specific HW resource */
+ plt_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
+ nix_tm_hwlvl2str(node->hw_lvl), hw_id, node->lvl, node->id,
+ node);
+
+ avail = nix_tm_resource_avail(nix, hw_lvl, false);
+ /* For now, always free to the discontiguous pool when the
+ * available count is insufficient.
+ */
+ if (nix->discontig_rsvd[hw_lvl] &&
+ avail < nix->discontig_rsvd[hw_lvl]) {
+ PLT_ASSERT(hw_id < NIX_TM_MAX_HW_TXSCHQ);
+ PLT_ASSERT(plt_bitmap_get(bmp, hw_id) == 0);
+ plt_bitmap_set(bmp, hw_id);
+ node->hw_id = NIX_TM_HW_ID_INVALID;
+ node->flags &= ~NIX_TM_NODE_HWRES;
+ return 0;
+ }
+
+ /* Free to AF */
+ req = mbox_alloc_msg_nix_txsch_free(mbox);
+ if (req == NULL)
+ return rc;
+ req->flags = 0;
+ req->schq_lvl = node->hw_lvl;
+ req->schq = hw_id;
+ rc = mbox_process(mbox);
+ if (rc) {
+ plt_err("failed to release hwres %s(%u) rc %d",
+ nix_tm_hwlvl2str(node->hw_lvl), hw_id, rc);
+ return rc;
+ }
+
+ /* Mark parent as dirty for reallocating its children */
+ if (node->parent)
+ node->parent->child_realloc = true;
+
+ node->hw_id = NIX_TM_HW_ID_INVALID;
+ node->flags &= ~NIX_TM_NODE_HWRES;
+ plt_tm_dbg("Released hwres %s(%u) to af",
+ nix_tm_hwlvl2str(node->hw_lvl), hw_id);
+ return 0;
+}
+
+int
+nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
+ enum roc_nix_tm_tree tree, bool free)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile *profile;
+ struct nix_tm_node *node, *child;
+ struct nix_tm_node_list *list;
+ uint32_t profile_id;
+ int rc;
+
+ plt_tm_dbg("Delete node id %u tree %u", node_id, tree);
+
+ node = nix_tm_node_search(nix, node_id, tree);
+ if (!node)
+ return NIX_ERR_TM_INVALID_NODE;
+
+ list = nix_tm_node_list(nix, tree);
+ /* Check for any existing children */
+ TAILQ_FOREACH(child, list, node) {
+ if (child->parent == node)
+ return NIX_ERR_TM_CHILD_EXISTS;
+ }
+
+ /* Remove shaper profile reference */
+ profile_id = node->shaper_profile_id;
+ profile = nix_tm_shaper_profile_search(nix, profile_id);
+
+ /* Free hw resource locally */
+ if (node->flags & NIX_TM_NODE_HWRES) {
+ rc = nix_tm_free_node_resource(nix, node);
+ if (rc)
+ return rc;
+ }
+
+ if (profile)
+ profile->ref_cnt--;
+
+ TAILQ_REMOVE(list, node, node);
+
+ plt_tm_dbg("Deleted node %s lvl %u id %u, prio 0x%x weight 0x%x "
+ "parent %u profile 0x%x tree %u (%p)",
+ nix_tm_hwlvl2str(node->hw_lvl), node->lvl, node->id,
+ node->priority, node->weight,
+ node->parent ? node->parent->id : UINT32_MAX,
+ node->shaper_profile_id, tree, node);
+ /* Free only if requested */
+ if (free)
+ nix_tm_node_free(node);
+ return 0;
+}
+
+int
nix_tm_conf_init(struct roc_nix *roc_nix)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
@@ -328,6 +531,8 @@ nix_tm_conf_init(struct roc_nix *roc_nix)
void *bmp_mem;
int rc, i;
+ PLT_STATIC_ASSERT(sizeof(struct nix_tm_node) <= ROC_NIX_TM_NODE_SZ);
+
nix->tm_flags = 0;
for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++)
TAILQ_INIT(&nix->trees[i]);
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index a3f6de0..ede30fa 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -65,3 +65,91 @@ roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable)
plt_wmb();
return 0;
}
+
+int
+roc_nix_tm_node_add(struct roc_nix *roc_nix, struct roc_nix_tm_node *roc_node)
+{
+ struct nix_tm_node *node;
+
+ node = (struct nix_tm_node *)&roc_node->reserved;
+ node->id = roc_node->id;
+ node->priority = roc_node->priority;
+ node->weight = roc_node->weight;
+ node->lvl = roc_node->lvl;
+ node->parent_id = roc_node->parent_id;
+ node->shaper_profile_id = roc_node->shaper_profile_id;
+ node->pkt_mode = roc_node->pkt_mode;
+ node->pkt_mode_set = roc_node->pkt_mode_set;
+ node->free_fn = roc_node->free_fn;
+ node->tree = ROC_NIX_TM_USER;
+
+ return nix_tm_node_add(roc_nix, node);
+}
+
+int
+roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix, uint32_t node_id,
+ bool pkt_mode)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node *node, *child;
+ struct nix_tm_node_list *list;
+ int num_children = 0;
+
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ if (!node)
+ return NIX_ERR_TM_INVALID_NODE;
+
+ if (node->pkt_mode == pkt_mode) {
+ node->pkt_mode_set = true;
+ return 0;
+ }
+
+ /* Check for any existing children; if there are any,
+ * the pkt mode cannot be updated as the children's quanta
+ * have already been accounted for.
+ */
+ list = nix_tm_node_list(nix, ROC_NIX_TM_USER);
+ TAILQ_FOREACH(child, list, node) {
+ if (child->parent == node)
+ num_children++;
+ }
+
+ /* Cannot update mode if the tree is enabled and it has children */
+ if ((nix->tm_flags & NIX_TM_HIERARCHY_ENA) && num_children)
+ return -EBUSY;
+
+ if (node->pkt_mode_set && num_children)
+ return NIX_ERR_TM_PKT_MODE_MISMATCH;
+
+ node->pkt_mode = pkt_mode;
+ node->pkt_mode_set = true;
+
+ return 0;
+}
+
+int
+roc_nix_tm_node_name_get(struct roc_nix *roc_nix, uint32_t node_id, char *buf,
+ size_t buflen)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node *node;
+
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ if (!node) {
+ plt_strlcpy(buf, "???", buflen);
+ return NIX_ERR_TM_INVALID_NODE;
+ }
+
+ if (node->hw_lvl == NIX_TXSCH_LVL_CNT)
+ snprintf(buf, buflen, "SQ_%d", node->id);
+ else
+ snprintf(buf, buflen, "%s_%d", nix_tm_hwlvl2str(node->hw_lvl),
+ node->hw_id);
+ return 0;
+}
+
+int
+roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free)
+{
+ return nix_tm_node_delete(roc_nix, node_id, ROC_NIX_TM_USER, free);
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index 86f66cf..09cb693 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -5,6 +5,64 @@
#include "roc_api.h"
#include "roc_priv.h"
+uint16_t
+nix_tm_lvl2nix_tl1_root(uint32_t lvl)
+{
+ switch (lvl) {
+ case ROC_TM_LVL_ROOT:
+ return NIX_TXSCH_LVL_TL1;
+ case ROC_TM_LVL_SCH1:
+ return NIX_TXSCH_LVL_TL2;
+ case ROC_TM_LVL_SCH2:
+ return NIX_TXSCH_LVL_TL3;
+ case ROC_TM_LVL_SCH3:
+ return NIX_TXSCH_LVL_TL4;
+ case ROC_TM_LVL_SCH4:
+ return NIX_TXSCH_LVL_SMQ;
+ default:
+ return NIX_TXSCH_LVL_CNT;
+ }
+}
+
+uint16_t
+nix_tm_lvl2nix_tl2_root(uint32_t lvl)
+{
+ switch (lvl) {
+ case ROC_TM_LVL_ROOT:
+ return NIX_TXSCH_LVL_TL2;
+ case ROC_TM_LVL_SCH1:
+ return NIX_TXSCH_LVL_TL3;
+ case ROC_TM_LVL_SCH2:
+ return NIX_TXSCH_LVL_TL4;
+ case ROC_TM_LVL_SCH3:
+ return NIX_TXSCH_LVL_SMQ;
+ default:
+ return NIX_TXSCH_LVL_CNT;
+ }
+}
+
+uint16_t
+nix_tm_lvl2nix(struct nix *nix, uint32_t lvl)
+{
+ if (nix_tm_have_tl1_access(nix))
+ return nix_tm_lvl2nix_tl1_root(lvl);
+ else
+ return nix_tm_lvl2nix_tl2_root(lvl);
+}
+
+
+struct nix_tm_shaper_profile *
+nix_tm_shaper_profile_search(struct nix *nix, uint32_t id)
+{
+ struct nix_tm_shaper_profile *profile;
+
+ TAILQ_FOREACH(profile, &nix->shaper_profile_list, shaper) {
+ if (profile->id == id)
+ return profile;
+ }
+ return NULL;
+}
+
struct nix_tm_node *
nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree)
{
@@ -19,6 +77,70 @@ nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree)
return NULL;
}
+static uint16_t
+nix_tm_max_prio(struct nix *nix, uint16_t hw_lvl)
+{
+ if (hw_lvl >= NIX_TXSCH_LVL_CNT)
+ return 0;
+
+ /* MDQ does not support SP */
+ if (hw_lvl == NIX_TXSCH_LVL_MDQ)
+ return 0;
+
+ /* PF's TL1 with VF's enabled does not support SP */
+ if (hw_lvl == NIX_TXSCH_LVL_TL1 && (!nix_tm_have_tl1_access(nix) ||
+ (nix->tm_flags & NIX_TM_TL1_NO_SP)))
+ return 0;
+
+ return NIX_TM_TLX_SP_PRIO_MAX - 1;
+}
+
+int
+nix_tm_validate_prio(struct nix *nix, uint32_t lvl, uint32_t parent_id,
+ uint32_t priority, enum roc_nix_tm_tree tree)
+{
+ uint8_t priorities[NIX_TM_TLX_SP_PRIO_MAX];
+ struct nix_tm_node_list *list;
+ struct nix_tm_node *node;
+ uint32_t rr_num = 0;
+ int i;
+
+ list = nix_tm_node_list(nix, tree);
+ /* Validate priority against max */
+ if (priority > nix_tm_max_prio(nix, nix_tm_lvl2nix(nix, lvl - 1)))
+ return NIX_ERR_TM_PRIO_EXCEEDED;
+
+ if (parent_id == ROC_NIX_TM_NODE_ID_INVALID)
+ return 0;
+
+ memset(priorities, 0, sizeof(priorities));
+ priorities[priority] = 1;
+
+ TAILQ_FOREACH(node, list, node) {
+ if (!node->parent)
+ continue;
+
+ if (node->parent->id != parent_id)
+ continue;
+
+ priorities[node->priority]++;
+ }
+
+ for (i = 0; i < NIX_TM_TLX_SP_PRIO_MAX; i++)
+ if (priorities[i] > 1)
+ rr_num++;
+
+ /* At most one RR group per parent */
+ if (rr_num > 1)
+ return NIX_ERR_TM_MULTIPLE_RR_GROUPS;
+
+ /* Check for previous priority to avoid holes in priorities */
+ if (priority && !priorities[priority - 1])
+ return NIX_ERR_TM_PRIO_ORDER;
+
+ return 0;
+}
+
uint8_t
nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
volatile uint64_t *reg, volatile uint64_t *regval)
@@ -60,3 +182,93 @@ nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
return k;
}
+
+uint16_t
+nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig)
+{
+ uint32_t pos = 0, start_pos = 0;
+ struct plt_bitmap *bmp;
+ uint16_t count = 0;
+ uint64_t slab = 0;
+
+ bmp = contig ? nix->schq_contig_bmp[hw_lvl] : nix->schq_bmp[hw_lvl];
+ plt_bitmap_scan_init(bmp);
+
+ if (!plt_bitmap_scan(bmp, &pos, &slab))
+ return count;
+
+ /* Count bits set */
+ start_pos = pos;
+ do {
+ count += __builtin_popcountll(slab);
+ if (!plt_bitmap_scan(bmp, &pos, &slab))
+ break;
+ } while (pos != start_pos);
+
+ return count;
+}
+
+int
+roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node *node;
+
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ if (!node)
+ return NIX_ERR_TM_INVALID_NODE;
+
+ return node->lvl;
+}
+
+struct roc_nix_tm_node *
+roc_nix_tm_node_get(struct roc_nix *roc_nix, uint32_t node_id)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node *node;
+
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ return (struct roc_nix_tm_node *)node;
+}
+
+struct roc_nix_tm_node *
+roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev)
+{
+ struct nix_tm_node *prev = (struct nix_tm_node *)__prev;
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node_list *list;
+
+ list = nix_tm_node_list(nix, ROC_NIX_TM_USER);
+
+ /* HEAD of the list */
+ if (!prev)
+ return (struct roc_nix_tm_node *)TAILQ_FIRST(list);
+
+ /* Next entry */
+ if (prev->tree != ROC_NIX_TM_USER)
+ return NULL;
+
+ return (struct roc_nix_tm_node *)TAILQ_NEXT(prev, node);
+}
+
+struct nix_tm_node *
+nix_tm_node_alloc(void)
+{
+ struct nix_tm_node *node;
+
+ node = plt_zmalloc(sizeof(struct nix_tm_node), 0);
+ if (!node)
+ return NULL;
+
+ node->free_fn = plt_free;
+ return node;
+}
+
+void
+nix_tm_node_free(struct nix_tm_node *node)
+{
+ if (!node || node->free_fn == NULL)
+ return;
+
+ (node->free_fn)(node);
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index c146137..0f70fff 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -103,6 +103,13 @@ INTERNAL {
roc_nix_xstats_names_get;
roc_nix_switch_hdr_set;
roc_nix_eeprom_info_get;
+ roc_nix_tm_node_add;
+ roc_nix_tm_node_delete;
+ roc_nix_tm_node_get;
+ roc_nix_tm_node_lvl;
+ roc_nix_tm_node_name_get;
+ roc_nix_tm_node_next;
+ roc_nix_tm_node_pkt_mode_update;
roc_nix_tm_sq_aura_fc;
roc_nix_tm_sq_flush_spin;
roc_nix_unregister_cq_irqs;
--
2.8.4
* [dpdk-dev] [PATCH 34/52] common/cnxk: add nix tm shaper profile add support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (32 preceding siblings ...)
2021-03-05 13:38 ` [dpdk-dev] [PATCH 33/52] common/cnxk: add nix tm support to add/delete node Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 35/52] common/cnxk: add nix tm helper to alloc and free resource Nithin Dabilpuram
` (21 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
From: Satha Rao <skoteshwar@marvell.com>
Add support to add/delete/update shaper profiles for
a given NIX. Also add support to walk through existing
shaper profiles.
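The shaper rates added here are stored as an (exponent, mantissa) pair searched downward from the maximum, per the formula in nix_tm_shaper_rate_conv(). A standalone sketch of the upper-range conversion; the constants (2E6 base, max exponent 12, max mantissa 255) are taken from the formula comments and may differ from the driver's actual NIX_TM_* limits:

```c
#include <assert.h>
#include <stdint.h>

#define RATE_CONST   2000000ULL /* 2E6 base rate; illustrative */
#define MAX_EXPONENT 12         /* illustrative limit */
#define MAX_MANTISSA 255        /* illustrative limit */

/* value = (RATE_CONST * ((256 + mantissa) << exponent)) / 256 */
static uint64_t
rate_of(uint64_t exponent, uint64_t mantissa)
{
	return (RATE_CONST * ((256 + mantissa) << exponent)) / 256;
}

/* Find the largest representable rate <= value by decrementing from
 * the maxima, mirroring the non-div_exp branch of the patch's
 * nix_tm_shaper_rate_conv(); returns 0 when out of range.
 */
static uint64_t
rate_conv(uint64_t value, uint64_t *exp_p, uint64_t *man_p)
{
	uint64_t exponent = MAX_EXPONENT, mantissa = MAX_MANTISSA;

	if (value < RATE_CONST)
		return 0; /* low range uses div_exp in the driver */

	while (value < (RATE_CONST << exponent))
		exponent--;
	while (value < rate_of(exponent, mantissa))
		mantissa--;

	if (exp_p)
		*exp_p = exponent;
	if (man_p)
		*man_p = mantissa;
	return rate_of(exponent, mantissa);
}
```

Representable rates round-trip exactly; anything in between is clamped down to the nearest programmable value, which is why the driver returns the "real" rate to the caller.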
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Satha Rao <skoteshwar@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 25 +++++
drivers/common/cnxk/roc_nix_priv.h | 8 ++
drivers/common/cnxk/roc_nix_tm.c | 18 ++++
drivers/common/cnxk/roc_nix_tm_ops.c | 145 ++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_tm_utils.c | 167 +++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 5 +
6 files changed, 368 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index fbbea1b..cadf4ea 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -340,17 +340,42 @@ struct roc_nix_tm_node {
void (*free_fn)(void *node);
};
+struct roc_nix_tm_shaper_profile {
+#define ROC_NIX_TM_SHAPER_PROFILE_SZ (128)
+ uint8_t reserved[ROC_NIX_TM_SHAPER_PROFILE_SZ];
+
+ uint32_t id;
+ uint64_t commit_rate;
+ uint64_t commit_sz;
+ uint64_t peak_rate;
+ uint64_t peak_sz;
+ int32_t pkt_len_adj;
+ bool pkt_mode;
+ /* Function to free this memory */
+ void (*free_fn)(void *profile);
+};
+
int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix,
struct roc_nix_tm_node *roc_node);
int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
bool free);
int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix,
uint32_t node_id, bool pkt_mode);
+int __roc_api roc_nix_tm_shaper_profile_add(
+ struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *profile);
+int __roc_api roc_nix_tm_shaper_profile_update(
+ struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *profile);
+int __roc_api roc_nix_tm_shaper_profile_delete(struct roc_nix *roc_nix,
+ uint32_t id);
struct roc_nix_tm_node *__roc_api roc_nix_tm_node_get(struct roc_nix *roc_nix,
uint32_t node_id);
struct roc_nix_tm_node *__roc_api
roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev);
+struct roc_nix_tm_shaper_profile *__roc_api
+roc_nix_tm_shaper_profile_get(struct roc_nix *roc_nix, uint32_t profile_id);
+struct roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next(
+ struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev);
/*
* TM utilities API.
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 2f7c20e..278e7df 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -330,6 +330,7 @@ int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
enum roc_nix_tm_tree tree, bool free);
int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node);
int nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node);
+void nix_tm_clear_shaper_profiles(struct nix *nix);
/*
* TM priv utils.
@@ -346,7 +347,14 @@ struct nix_tm_shaper_profile *nix_tm_shaper_profile_search(struct nix *nix,
uint32_t id);
uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
volatile uint64_t *reg, volatile uint64_t *regval);
+uint64_t nix_tm_shaper_profile_rate_min(struct nix *nix);
+uint64_t nix_tm_shaper_rate_conv(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p, uint64_t *div_exp_p);
+uint64_t nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p);
struct nix_tm_node *nix_tm_node_alloc(void);
void nix_tm_node_free(struct nix_tm_node *node);
+struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void);
+void nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile);
#endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 776d6b4..1bfdbf9 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -5,6 +5,22 @@
#include "roc_api.h"
#include "roc_priv.h"
+void
+nix_tm_clear_shaper_profiles(struct nix *nix)
+{
+ struct nix_tm_shaper_profile *shaper_profile;
+
+ shaper_profile = TAILQ_FIRST(&nix->shaper_profile_list);
+ while (shaper_profile != NULL) {
+ if (shaper_profile->ref_cnt)
+ plt_warn("Shaper profile %u has non zero references",
+ shaper_profile->id);
+ TAILQ_REMOVE(&nix->shaper_profile_list, shaper_profile, shaper);
+ nix_tm_shaper_profile_free(shaper_profile);
+ shaper_profile = TAILQ_FIRST(&nix->shaper_profile_list);
+ }
+}
+
int
nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node)
{
@@ -532,6 +548,8 @@ nix_tm_conf_init(struct roc_nix *roc_nix)
int rc, i;
PLT_STATIC_ASSERT(sizeof(struct nix_tm_node) <= ROC_NIX_TM_NODE_SZ);
+ PLT_STATIC_ASSERT(sizeof(struct nix_tm_shaper_profile) <=
+ ROC_NIX_TM_SHAPER_PROFILE_SZ);
nix->tm_flags = 0;
for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++)
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index ede30fa..015f20e 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -66,6 +66,151 @@ roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable)
return 0;
}
+static int
+nix_tm_shaper_profile_add(struct roc_nix *roc_nix,
+ struct nix_tm_shaper_profile *profile, int skip_ins)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint64_t commit_rate, commit_sz;
+ uint64_t peak_rate, peak_sz;
+ uint32_t id;
+
+ id = profile->id;
+ commit_rate = profile->commit.rate;
+ commit_sz = profile->commit.size;
+ peak_rate = profile->peak.rate;
+ peak_sz = profile->peak.size;
+
+ if (nix_tm_shaper_profile_search(nix, id) && !skip_ins)
+ return NIX_ERR_TM_SHAPER_PROFILE_EXISTS;
+
+ if (profile->pkt_len_adj < NIX_TM_LENGTH_ADJUST_MIN ||
+ profile->pkt_len_adj > NIX_TM_LENGTH_ADJUST_MAX)
+ return NIX_ERR_TM_SHAPER_PKT_LEN_ADJUST;
+
+ /* We cannot support both pkt length adjust and pkt mode */
+ if (profile->pkt_mode && profile->pkt_len_adj)
+ return NIX_ERR_TM_SHAPER_PKT_LEN_ADJUST;
+
+ /* Commit rate and burst size can be enabled/disabled */
+ if (commit_rate || commit_sz) {
+ if (commit_sz < NIX_TM_MIN_SHAPER_BURST ||
+ commit_sz > NIX_TM_MAX_SHAPER_BURST)
+ return NIX_ERR_TM_INVALID_COMMIT_SZ;
+ else if (!nix_tm_shaper_rate_conv(commit_rate, NULL, NULL,
+ NULL))
+ return NIX_ERR_TM_INVALID_COMMIT_RATE;
+ }
+
+ /* Peak rate and burst size can be enabled/disabled */
+ if (peak_sz || peak_rate) {
+ if (peak_sz < NIX_TM_MIN_SHAPER_BURST ||
+ peak_sz > NIX_TM_MAX_SHAPER_BURST)
+ return NIX_ERR_TM_INVALID_PEAK_SZ;
+ else if (!nix_tm_shaper_rate_conv(peak_rate, NULL, NULL, NULL))
+ return NIX_ERR_TM_INVALID_PEAK_RATE;
+ }
+
+ if (!skip_ins)
+ TAILQ_INSERT_TAIL(&nix->shaper_profile_list, profile, shaper);
+
+ plt_tm_dbg("Added TM shaper profile %u, "
+ " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
+ ", cbs %" PRIu64 " , adj %u, pkt_mode %u",
+ id, profile->peak.rate, profile->peak.size,
+ profile->commit.rate, profile->commit.size,
+ profile->pkt_len_adj, profile->pkt_mode);
+
+ /* Always use PIR for single rate shaping */
+ if (!peak_rate && commit_rate) {
+ profile->peak.rate = profile->commit.rate;
+ profile->peak.size = profile->commit.size;
+ profile->commit.rate = 0;
+ profile->commit.size = 0;
+ }
+
+ /* update min rate */
+ nix->tm_rate_min = nix_tm_shaper_profile_rate_min(nix);
+ return 0;
+}
+
+int
+roc_nix_tm_shaper_profile_add(struct roc_nix *roc_nix,
+ struct roc_nix_tm_shaper_profile *roc_profile)
+{
+ struct nix_tm_shaper_profile *profile;
+
+ profile = (struct nix_tm_shaper_profile *)roc_profile->reserved;
+
+ profile->ref_cnt = 0;
+ profile->id = roc_profile->id;
+ if (roc_profile->pkt_mode) {
+ /* Each packet accumulates a single count, whereas HW
+ * considers each unit as a byte, so we need to convert
+ * user pps to bps
+ */
+ profile->commit.rate = roc_profile->commit_rate * 8;
+ profile->peak.rate = roc_profile->peak_rate * 8;
+ } else {
+ profile->commit.rate = roc_profile->commit_rate;
+ profile->peak.rate = roc_profile->peak_rate;
+ }
+ profile->commit.size = roc_profile->commit_sz;
+ profile->peak.size = roc_profile->peak_sz;
+ profile->pkt_len_adj = roc_profile->pkt_len_adj;
+ profile->pkt_mode = roc_profile->pkt_mode;
+ profile->free_fn = roc_profile->free_fn;
+
+ return nix_tm_shaper_profile_add(roc_nix, profile, 0);
+}
+
+int
+roc_nix_tm_shaper_profile_update(struct roc_nix *roc_nix,
+ struct roc_nix_tm_shaper_profile *roc_profile)
+{
+ struct nix_tm_shaper_profile *profile;
+
+ profile = (struct nix_tm_shaper_profile *)roc_profile->reserved;
+
+ if (roc_profile->pkt_mode) {
+ /* Each packet accumulates a single count, whereas HW
+ * considers each unit as a byte, so we need to convert
+ * user pps to bps
+ */
+ profile->commit.rate = roc_profile->commit_rate * 8;
+ profile->peak.rate = roc_profile->peak_rate * 8;
+ } else {
+ profile->commit.rate = roc_profile->commit_rate;
+ profile->peak.rate = roc_profile->peak_rate;
+ }
+ profile->commit.size = roc_profile->commit_sz;
+ profile->peak.size = roc_profile->peak_sz;
+
+ return nix_tm_shaper_profile_add(roc_nix, profile, 1);
+}
+
+int
+roc_nix_tm_shaper_profile_delete(struct roc_nix *roc_nix, uint32_t id)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile *profile;
+
+ profile = nix_tm_shaper_profile_search(nix, id);
+ if (!profile)
+ return NIX_ERR_TM_INVALID_SHAPER_PROFILE;
+
+ if (profile->ref_cnt)
+ return NIX_ERR_TM_SHAPER_PROFILE_IN_USE;
+
+ plt_tm_dbg("Removing TM shaper profile %u", id);
+ TAILQ_REMOVE(&nix->shaper_profile_list, profile, shaper);
+ nix_tm_shaper_profile_free(profile);
+
+ /* update min rate */
+ nix->tm_rate_min = nix_tm_shaper_profile_rate_min(nix);
+ return 0;
+}
+
int
roc_nix_tm_node_add(struct roc_nix *roc_nix, struct roc_nix_tm_node *roc_node)
{
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index 09cb693..5eb93a7 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -77,6 +77,106 @@ nix_tm_node_search(struct nix *nix, uint32_t node_id, enum roc_nix_tm_tree tree)
return NULL;
}
+uint64_t
+nix_tm_shaper_rate_conv(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p, uint64_t *div_exp_p)
+{
+ uint64_t div_exp, exponent, mantissa;
+
+ /* Boundary checks */
+ if (value < NIX_TM_MIN_SHAPER_RATE || value > NIX_TM_MAX_SHAPER_RATE)
+ return 0;
+
+ if (value <= NIX_TM_SHAPER_RATE(0, 0, 0)) {
+ /* Calculate rate div_exp and mantissa using
+ * the following formula:
+ *
+ * value = (2E6 * (256 + mantissa)
+ * / ((1 << div_exp) * 256))
+ */
+ div_exp = 0;
+ exponent = 0;
+ mantissa = NIX_TM_MAX_RATE_MANTISSA;
+
+ while (value < (NIX_TM_SHAPER_RATE_CONST / (1 << div_exp)))
+ div_exp += 1;
+
+ while (value < ((NIX_TM_SHAPER_RATE_CONST * (256 + mantissa)) /
+ ((1 << div_exp) * 256)))
+ mantissa -= 1;
+ } else {
+ /* Calculate rate exponent and mantissa using
+ * the following formula:
+ *
+ * value = (2E6 * ((256 + mantissa) << exponent)) / 256
+ *
+ */
+ div_exp = 0;
+ exponent = NIX_TM_MAX_RATE_EXPONENT;
+ mantissa = NIX_TM_MAX_RATE_MANTISSA;
+
+ while (value < (NIX_TM_SHAPER_RATE_CONST * (1 << exponent)))
+ exponent -= 1;
+
+ while (value < ((NIX_TM_SHAPER_RATE_CONST *
+ ((256 + mantissa) << exponent)) /
+ 256))
+ mantissa -= 1;
+ }
+
+ if (div_exp > NIX_TM_MAX_RATE_DIV_EXP ||
+ exponent > NIX_TM_MAX_RATE_EXPONENT ||
+ mantissa > NIX_TM_MAX_RATE_MANTISSA)
+ return 0;
+
+ if (div_exp_p)
+ *div_exp_p = div_exp;
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ /* Calculate real rate value */
+ return NIX_TM_SHAPER_RATE(exponent, mantissa, div_exp);
+}
+
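For reference, the high-rate branch of this conversion can be exercised standalone. The constants below are illustrative stand-ins for the `NIX_TM_*` definitions in roc_nix_priv.h (the real values may differ), and the low-rate div_exp branch is omitted:

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed stand-ins for the roc_nix_priv.h constants. */
#define RATE_CONST        2000000ull /* NIX_TM_SHAPER_RATE_CONST (2E6) */
#define MAX_RATE_EXPONENT 0xfull
#define MAX_RATE_MANTISSA 0xffull

/* Descending search for (exponent, mantissa) such that
 * RATE_CONST * ((256 + mantissa) << exponent) / 256 <= value,
 * mirroring the exponent/mantissa branch of nix_tm_shaper_rate_conv().
 * Returns the rate actually representable, or 0 for low rates that
 * would need the div_exp branch. */
static uint64_t
rate_conv(uint64_t value, uint64_t *exp_p, uint64_t *man_p)
{
	uint64_t exponent = MAX_RATE_EXPONENT;
	uint64_t mantissa = MAX_RATE_MANTISSA;

	if (value < RATE_CONST)
		return 0; /* handled by the div_exp branch in the driver */

	while (value < (RATE_CONST * (1ull << exponent)))
		exponent -= 1;

	while (value < (RATE_CONST * ((256 + mantissa) << exponent)) / 256)
		mantissa -= 1;

	if (exp_p)
		*exp_p = exponent;
	if (man_p)
		*man_p = mantissa;

	/* Reconstruct the rate that the HW fields encode */
	return (RATE_CONST * ((256 + mantissa) << exponent)) / 256;
}
```

For a 1 Gbps request the search settles on exponent 8 and mantissa 244, which encodes the rate exactly.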
+uint64_t
+nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p,
+ uint64_t *mantissa_p)
+{
+ uint64_t exponent, mantissa;
+
+ if (value < NIX_TM_MIN_SHAPER_BURST || value > NIX_TM_MAX_SHAPER_BURST)
+ return 0;
+
+ /* Calculate burst exponent and mantissa using
+ * the following formula:
+ *
+ * value = ((256 + mantissa) << (exponent + 1)) / 256
+ *
+ */
+ exponent = NIX_TM_MAX_BURST_EXPONENT;
+ mantissa = NIX_TM_MAX_BURST_MANTISSA;
+
+ while (value < (1ull << (exponent + 1)))
+ exponent -= 1;
+
+ while (value < ((256 + mantissa) << (exponent + 1)) / 256)
+ mantissa -= 1;
+
+ if (exponent > NIX_TM_MAX_BURST_EXPONENT ||
+ mantissa > NIX_TM_MAX_BURST_MANTISSA)
+ return 0;
+
+ if (exponent_p)
+ *exponent_p = exponent;
+ if (mantissa_p)
+ *mantissa_p = mantissa;
+
+ return NIX_TM_SHAPER_BURST(exponent, mantissa);
+}
+
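The burst conversion above uses the same descending search with one fewer dimension. A minimal standalone sketch (limits are assumed stand-ins for NIX_TM_MAX_BURST_EXPONENT/MANTISSA, and the minimum-burst boundary check is reduced to the smallest representable value):

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_BURST_EXPONENT 0xfull  /* assumed stand-in */
#define MAX_BURST_MANTISSA 0xffull /* assumed stand-in */

/* Recover (exponent, mantissa) so that
 * ((256 + mantissa) << (exponent + 1)) / 256 <= value,
 * following the descending search in nix_tm_shaper_burst_conv(). */
static uint64_t
burst_conv(uint64_t value, uint64_t *exp_p, uint64_t *man_p)
{
	uint64_t exponent = MAX_BURST_EXPONENT;
	uint64_t mantissa = MAX_BURST_MANTISSA;

	/* Smallest representable burst is (256 << 1) / 256 = 2;
	 * the driver checks NIX_TM_MIN_SHAPER_BURST instead. */
	if (value < 2)
		return 0;

	while (value < (1ull << (exponent + 1)))
		exponent -= 1;

	while (value < ((256 + mantissa) << (exponent + 1)) / 256)
		mantissa -= 1;

	if (exp_p)
		*exp_p = exponent;
	if (man_p)
		*man_p = mantissa;

	/* Burst value the HW fields encode (rounds down) */
	return ((256 + mantissa) << (exponent + 1)) / 256;
}
```

Requests that are not exactly representable are rounded down, e.g. 5000 bytes encodes as 4992.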
static uint16_t
nix_tm_max_prio(struct nix *nix, uint16_t hw_lvl)
{
@@ -183,6 +283,23 @@ nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
return k;
}
+/* Search for min rate in topology */
+uint64_t
+nix_tm_shaper_profile_rate_min(struct nix *nix)
+{
+ struct nix_tm_shaper_profile *profile;
+ uint64_t rate_min = 1E9; /* 1 Gbps */
+
+ TAILQ_FOREACH(profile, &nix->shaper_profile_list, shaper) {
+ if (profile->peak.rate && profile->peak.rate < rate_min)
+ rate_min = profile->peak.rate;
+
+ if (profile->commit.rate && profile->commit.rate < rate_min)
+ rate_min = profile->commit.rate;
+ }
+ return rate_min;
+}
+
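The zero-skip semantics of the minimum-rate scan (a zero rate means the shaper rate is unconfigured and is ignored) can be shown over a plain array instead of the TAILQ; the 1 Gbps default matches the patch:

```c
#include <stdint.h>

/* Hypothetical flattened view of the commit/peak pair each
 * nix_tm_shaper_profile carries. */
struct rate_pair {
	uint64_t commit;
	uint64_t peak;
};

/* Minimum non-zero rate across profiles, as
 * nix_tm_shaper_profile_rate_min() computes it. */
static uint64_t
rate_min_scan(const struct rate_pair *p, int n)
{
	uint64_t rate_min = 1000000000ull; /* 1 Gbps default */
	int i;

	for (i = 0; i < n; i++) {
		if (p[i].peak && p[i].peak < rate_min)
			rate_min = p[i].peak;
		if (p[i].commit && p[i].commit < rate_min)
			rate_min = p[i].commit;
	}
	return rate_min;
}
```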
uint16_t
nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig)
{
@@ -251,6 +368,34 @@ roc_nix_tm_node_next(struct roc_nix *roc_nix, struct roc_nix_tm_node *__prev)
return (struct roc_nix_tm_node *)TAILQ_NEXT(prev, node);
}
+struct roc_nix_tm_shaper_profile *
+roc_nix_tm_shaper_profile_get(struct roc_nix *roc_nix, uint32_t profile_id)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile *profile;
+
+ profile = nix_tm_shaper_profile_search(nix, profile_id);
+ return (struct roc_nix_tm_shaper_profile *)profile;
+}
+
+struct roc_nix_tm_shaper_profile *
+roc_nix_tm_shaper_profile_next(struct roc_nix *roc_nix,
+ struct roc_nix_tm_shaper_profile *__prev)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile_list *list;
+ struct nix_tm_shaper_profile *prev;
+
+ prev = (struct nix_tm_shaper_profile *)__prev;
+ list = &nix->shaper_profile_list;
+
+ /* HEAD of the list */
+ if (!prev)
+ return (struct roc_nix_tm_shaper_profile *)TAILQ_FIRST(list);
+
+ return (struct roc_nix_tm_shaper_profile *)TAILQ_NEXT(prev, shaper);
+}
+
struct nix_tm_node *
nix_tm_node_alloc(void)
{
@@ -272,3 +417,25 @@ nix_tm_node_free(struct nix_tm_node *node)
(node->free_fn)(node);
}
+
+struct nix_tm_shaper_profile *
+nix_tm_shaper_profile_alloc(void)
+{
+ struct nix_tm_shaper_profile *profile;
+
+ profile = plt_zmalloc(sizeof(struct nix_tm_shaper_profile), 0);
+ if (!profile)
+ return NULL;
+
+ profile->free_fn = plt_free;
+ return profile;
+}
+
+void
+nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile)
+{
+ if (!profile || !profile->free_fn)
+ return;
+
+ (profile->free_fn)(profile);
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 0f70fff..5acbf4c 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -110,6 +110,11 @@ INTERNAL {
roc_nix_tm_node_name_get;
roc_nix_tm_node_next;
roc_nix_tm_node_pkt_mode_update;
+ roc_nix_tm_shaper_profile_add;
+ roc_nix_tm_shaper_profile_delete;
+ roc_nix_tm_shaper_profile_get;
+ roc_nix_tm_shaper_profile_next;
+ roc_nix_tm_shaper_profile_update;
roc_nix_tm_sq_aura_fc;
roc_nix_tm_sq_flush_spin;
roc_nix_unregister_cq_irqs;
--
2.8.4

* [dpdk-dev] [PATCH 35/52] common/cnxk: add nix tm helper to alloc and free resource
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (33 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 34/52] common/cnxk: add nix tm shaper profile add support Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 36/52] common/cnxk: add nix tm hierarchy enable/disable Nithin Dabilpuram
` (20 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add TM helper API to estimate, alloc, assign, and free resources
for a NIX LF / ethdev.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 1 +
drivers/common/cnxk/roc_nix_priv.h | 16 ++
drivers/common/cnxk/roc_nix_tm.c | 461 +++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_tm_ops.c | 11 +
drivers/common/cnxk/roc_nix_tm_utils.c | 133 ++++++++++
drivers/common/cnxk/version.map | 1 +
6 files changed, 623 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index cadf4ea..0f3b85c 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -359,6 +359,7 @@ int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix,
struct roc_nix_tm_node *roc_node);
int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
bool free);
+int __roc_api roc_nix_tm_free_resources(struct roc_nix *roc_nix, bool hw_only);
int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix,
uint32_t node_id, bool pkt_mode);
int __roc_api roc_nix_tm_shaper_profile_add(
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 278e7df..6e86681 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -329,8 +329,17 @@ int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node);
int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
enum roc_nix_tm_tree tree, bool free);
int nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node);
+int nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask,
+ bool hw_only);
int nix_tm_clear_path_xoff(struct nix *nix, struct nix_tm_node *node);
void nix_tm_clear_shaper_profiles(struct nix *nix);
+int nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree);
+int nix_tm_assign_resources(struct nix *nix, enum roc_nix_tm_tree tree);
+int nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig,
+ bool above_thresh);
+void nix_tm_copy_rsp_to_nix(struct nix *nix, struct nix_txsch_alloc_rsp *rsp);
+
+int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree);
/*
* TM priv utils.
@@ -347,11 +356,18 @@ struct nix_tm_shaper_profile *nix_tm_shaper_profile_search(struct nix *nix,
uint32_t id);
uint8_t nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
volatile uint64_t *reg, volatile uint64_t *regval);
+uint32_t nix_tm_check_rr(struct nix *nix, uint32_t parent_id,
+ enum roc_nix_tm_tree tree, uint32_t *rr_prio,
+ uint32_t *max_prio);
uint64_t nix_tm_shaper_profile_rate_min(struct nix *nix);
uint64_t nix_tm_shaper_rate_conv(uint64_t value, uint64_t *exponent_p,
uint64_t *mantissa_p, uint64_t *div_exp_p);
uint64_t nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p,
uint64_t *mantissa_p);
+bool nix_tm_child_res_valid(struct nix_tm_node_list *list,
+ struct nix_tm_node *parent);
+uint16_t nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig,
+ uint16_t *schq, enum roc_nix_tm_tree tree);
struct nix_tm_node *nix_tm_node_alloc(void);
void nix_tm_node_free(struct nix_tm_node *node);
struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 1bfdbf9..9adeab9 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -5,6 +5,15 @@
#include "roc_api.h"
#include "roc_priv.h"
+static inline int
+bitmap_ctzll(uint64_t slab)
+{
+ if (slab == 0)
+ return 0;
+
+ return __builtin_ctzll(slab);
+}
+
void
nix_tm_clear_shaper_profiles(struct nix *nix)
{
@@ -22,6 +31,44 @@ nix_tm_clear_shaper_profiles(struct nix *nix)
}
int
+nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree)
+{
+ struct nix_tm_node *child, *parent;
+ struct nix_tm_node_list *list;
+ uint32_t rr_prio, max_prio;
+ uint32_t rr_num = 0;
+
+ list = nix_tm_node_list(nix, tree);
+
+ /* Release all the node HW resources locally
+ * if the parent is marked dirty and the resource exists.
+ */
+ TAILQ_FOREACH(child, list, node) {
+ /* Release resource only if parent direct hierarchy changed */
+ if (child->flags & NIX_TM_NODE_HWRES && child->parent &&
+ child->parent->child_realloc) {
+ nix_tm_free_node_resource(nix, child);
+ }
+ child->max_prio = UINT32_MAX;
+ }
+
+ TAILQ_FOREACH(parent, list, node) {
+ /* Count the group of children sharing the same priority, i.e. RR */
+ rr_num = nix_tm_check_rr(nix, parent->id, tree, &rr_prio,
+ &max_prio);
+
+ /* Assuming that multiple RR groups are
+ * not configured based on capability.
+ */
+ parent->rr_prio = rr_prio;
+ parent->rr_num = rr_num;
+ parent->max_prio = max_prio;
+ }
+
+ return 0;
+}
+
+int
nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
@@ -431,6 +478,71 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq)
}
int
+nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig,
+ bool above_thresh)
+{
+ uint16_t avail, thresh, to_free = 0, schq;
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txsch_free_req *req;
+ struct plt_bitmap *bmp;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ int rc = -ENOSPC;
+
+ bmp = contig ? nix->schq_contig_bmp[hw_lvl] : nix->schq_bmp[hw_lvl];
+ thresh =
+ contig ? nix->contig_rsvd[hw_lvl] : nix->discontig_rsvd[hw_lvl];
+ plt_bitmap_scan_init(bmp);
+
+ avail = nix_tm_resource_avail(nix, hw_lvl, contig);
+
+ if (above_thresh) {
+ /* Release only above threshold */
+ if (avail > thresh)
+ to_free = avail - thresh;
+ } else {
+ /* Release everything */
+ to_free = avail;
+ }
+
+ /* Now release resources to AF */
+ while (to_free) {
+ if (!slab && !plt_bitmap_scan(bmp, &pos, &slab))
+ break;
+
+ schq = bitmap_ctzll(slab);
+ slab &= ~(1ULL << schq);
+ schq += pos;
+
+ /* Free to AF */
+ req = mbox_alloc_msg_nix_txsch_free(mbox);
+ if (req == NULL)
+ return rc;
+ req->flags = 0;
+ req->schq_lvl = hw_lvl;
+ req->schq = schq;
+ rc = mbox_process(mbox);
+ if (rc) {
+ plt_err("failed to release hwres %s(%u) rc %d",
+ nix_tm_hwlvl2str(hw_lvl), schq, rc);
+ return rc;
+ }
+
+ plt_tm_dbg("Released hwres %s(%u)", nix_tm_hwlvl2str(hw_lvl),
+ schq);
+ plt_bitmap_clear(bmp, schq);
+ to_free--;
+ }
+
+ if (to_free) {
+ plt_err("resource inconsistency for %s(%u)",
+ nix_tm_hwlvl2str(hw_lvl), contig);
+ return -EFAULT;
+ }
+ return 0;
+}
+
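The free loop above relies on the standard count-trailing-zeros idiom to pull scheduler queue ids out of a `plt_bitmap_scan()` slab; the pattern can be demonstrated on a bare 64-bit word (`pos` is the bit offset of the slab within the full bitmap):

```c
#include <stdint.h>

/* Walk the set bits of a 64-bit bitmap word ("slab"), as
 * nix_tm_release_resources() does with bitmap_ctzll():
 * find the lowest set bit, consume it, translate to an
 * absolute id by adding the slab's offset. */
static int
slab_to_ids(uint64_t slab, uint32_t pos, uint16_t *ids, int max)
{
	int n = 0;

	while (slab && n < max) {
		uint16_t bit = (uint16_t)__builtin_ctzll(slab);

		slab &= ~(1ull << bit); /* consume this bit */
		ids[n++] = bit + pos;   /* absolute schq id */
	}
	return n;
}
```

Note `__builtin_ctzll(0)` is undefined, which is why the driver's `bitmap_ctzll()` wrapper special-cases a zero slab.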
+int
nix_tm_free_node_resource(struct nix *nix, struct nix_tm_node *node)
{
struct mbox *mbox = (&nix->dev)->mbox;
@@ -539,6 +651,355 @@ nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
return 0;
}
+static int
+nix_tm_assign_hw_id(struct nix *nix, struct nix_tm_node *parent,
+ uint16_t *contig_id, int *contig_cnt,
+ struct nix_tm_node_list *list)
+{
+ struct nix_tm_node *child;
+ struct plt_bitmap *bmp;
+ uint8_t child_hw_lvl;
+ int spare_schq = -1;
+ uint32_t pos = 0;
+ uint64_t slab;
+ uint16_t schq;
+
+ child_hw_lvl = parent->hw_lvl - 1;
+ bmp = nix->schq_bmp[child_hw_lvl];
+ plt_bitmap_scan_init(bmp);
+ slab = 0;
+
+ /* Save spare schq if it is case of RR + SP */
+ if (parent->rr_prio != 0xf && *contig_cnt > 1)
+ spare_schq = *contig_id + parent->rr_prio;
+
+ TAILQ_FOREACH(child, list, node) {
+ if (!child->parent)
+ continue;
+ if (child->parent->id != parent->id)
+ continue;
+
+ /* Resource never expected to be present */
+ if (child->flags & NIX_TM_NODE_HWRES) {
+ plt_err("Resource exists for child (%s)%u, id %u (%p)",
+ nix_tm_hwlvl2str(child->hw_lvl), child->hw_id,
+ child->id, child);
+ return -EFAULT;
+ }
+
+ if (!slab)
+ plt_bitmap_scan(bmp, &pos, &slab);
+
+ if (child->priority == parent->rr_prio && spare_schq != -1) {
+ /* Use spare schq first if present */
+ schq = spare_schq;
+ spare_schq = -1;
+ *contig_cnt = *contig_cnt - 1;
+
+ } else if (child->priority == parent->rr_prio) {
+ /* Assign a discontiguous queue */
+ if (!slab) {
+ plt_err("Schq not found for Child %u "
+ "lvl %u (%p)",
+ child->id, child->lvl, child);
+ return -ENOENT;
+ }
+
+ schq = bitmap_ctzll(slab);
+ slab &= ~(1ULL << schq);
+ schq += pos;
+ plt_bitmap_clear(bmp, schq);
+ } else {
+ /* Assign a contiguous queue */
+ schq = *contig_id + child->priority;
+ *contig_cnt = *contig_cnt - 1;
+ }
+
+ plt_tm_dbg("Resource %s(%u), for lvl %u id %u(%p)",
+ nix_tm_hwlvl2str(child->hw_lvl), schq, child->lvl,
+ child->id, child);
+
+ child->hw_id = schq;
+ child->parent_hw_id = parent->hw_id;
+ child->flags |= NIX_TM_NODE_HWRES;
+ }
+
+ return 0;
+}
+
+int
+nix_tm_assign_resources(struct nix *nix, enum roc_nix_tm_tree tree)
+{
+ struct nix_tm_node *parent, *root = NULL;
+ struct plt_bitmap *bmp, *bmp_contig;
+ struct nix_tm_node_list *list;
+ uint8_t child_hw_lvl, hw_lvl;
+ uint16_t contig_id, j;
+ uint64_t slab = 0;
+ uint32_t pos = 0;
+ int cnt, rc;
+
+ list = nix_tm_node_list(nix, tree);
+ /* Walk from TL1 to TL4 parents */
+ for (hw_lvl = NIX_TXSCH_LVL_TL1; hw_lvl > 0; hw_lvl--) {
+ TAILQ_FOREACH(parent, list, node) {
+ child_hw_lvl = parent->hw_lvl - 1;
+ if (parent->hw_lvl != hw_lvl)
+ continue;
+
+ /* Remember root for future */
+ if (parent->hw_lvl == nix->tm_root_lvl)
+ root = parent;
+
+ if (!parent->child_realloc) {
+ /* Skip when parent is not dirty */
+ if (nix_tm_child_res_valid(list, parent))
+ continue;
+ plt_err("Parent not dirty but invalid "
+ "child res parent id %u(lvl %u)",
+ parent->id, parent->lvl);
+ return -EFAULT;
+ }
+
+ bmp_contig = nix->schq_contig_bmp[child_hw_lvl];
+
+ /* Prealloc contiguous indices for a parent */
+ contig_id = NIX_TM_MAX_HW_TXSCHQ;
+ cnt = (int)parent->max_prio + 1;
+ if (cnt > 0) {
+ plt_bitmap_scan_init(bmp_contig);
+ if (!plt_bitmap_scan(bmp_contig, &pos, &slab)) {
+ plt_err("Contig schq not found");
+ return -ENOENT;
+ }
+ contig_id = pos + bitmap_ctzll(slab);
+
+ /* Check if we have enough */
+ for (j = contig_id; j < contig_id + cnt; j++) {
+ if (!plt_bitmap_get(bmp_contig, j))
+ break;
+ }
+
+ if (j != contig_id + cnt) {
+ plt_err("Contig schq not sufficient");
+ return -ENOENT;
+ }
+
+ for (j = contig_id; j < contig_id + cnt; j++)
+ plt_bitmap_clear(bmp_contig, j);
+ }
+
+ /* Assign hw id to all children */
+ rc = nix_tm_assign_hw_id(nix, parent, &contig_id, &cnt,
+ list);
+ if (cnt || rc) {
+ plt_err("Unexpected err, contig res alloc, "
+ "parent %u, of %s, rc=%d, cnt=%d",
+ parent->id, nix_tm_hwlvl2str(hw_lvl),
+ rc, cnt);
+ return -EFAULT;
+ }
+
+ /* Clear the dirty bit as children's
+ * resources are reallocated.
+ */
+ parent->child_realloc = false;
+ }
+ }
+
+ /* Root is always expected to be there */
+ if (!root)
+ return -EFAULT;
+
+ if (root->flags & NIX_TM_NODE_HWRES)
+ return 0;
+
+ /* Process root node */
+ bmp = nix->schq_bmp[nix->tm_root_lvl];
+ plt_bitmap_scan_init(bmp);
+ if (!plt_bitmap_scan(bmp, &pos, &slab)) {
+ plt_err("Resource not allocated for root");
+ return -EIO;
+ }
+
+ root->hw_id = pos + bitmap_ctzll(slab);
+ root->flags |= NIX_TM_NODE_HWRES;
+ plt_bitmap_clear(bmp, root->hw_id);
+
+ /* Get TL1 id as well when root is not TL1 */
+ if (!nix_tm_have_tl1_access(nix)) {
+ bmp = nix->schq_bmp[NIX_TXSCH_LVL_TL1];
+
+ plt_bitmap_scan_init(bmp);
+ if (!plt_bitmap_scan(bmp, &pos, &slab)) {
+ plt_err("Resource not found for TL1");
+ return -EIO;
+ }
+ root->parent_hw_id = pos + bitmap_ctzll(slab);
+ plt_bitmap_clear(bmp, root->parent_hw_id);
+ }
+
+ plt_tm_dbg("Resource %s(%u) for root(id %u) (%p)",
+ nix_tm_hwlvl2str(root->hw_lvl), root->hw_id, root->id, root);
+
+ return 0;
+}
+
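The "check if we have enough" step in nix_tm_assign_resources() verifies that a run of consecutive ids is free before claiming it. Reduced to a single 64-bit bitmap word for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

/* Verify 'cnt' consecutive ids starting at 'start' are all free
 * (set) in the bitmap, mirroring the contiguous-prealloc check in
 * nix_tm_assign_resources(). Bitmap here is a plain uint64_t. */
static bool
contig_run_free(uint64_t bmp, uint32_t start, uint32_t cnt)
{
	uint32_t j;

	for (j = start; j < start + cnt; j++) {
		if (j >= 64 || !((bmp >> j) & 1))
			return false;
	}
	return true;
}
```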
+void
+nix_tm_copy_rsp_to_nix(struct nix *nix, struct nix_txsch_alloc_rsp *rsp)
+{
+ uint8_t lvl;
+ uint16_t i;
+
+ for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+ for (i = 0; i < rsp->schq[lvl]; i++)
+ plt_bitmap_set(nix->schq_bmp[lvl],
+ rsp->schq_list[lvl][i]);
+
+ for (i = 0; i < rsp->schq_contig[lvl]; i++)
+ plt_bitmap_set(nix->schq_contig_bmp[lvl],
+ rsp->schq_contig_list[lvl][i]);
+ }
+}
+
+int
+nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree)
+{
+ uint16_t schq_contig[NIX_TXSCH_LVL_CNT];
+ struct mbox *mbox = (&nix->dev)->mbox;
+ uint16_t schq[NIX_TXSCH_LVL_CNT];
+ struct nix_txsch_alloc_req *req;
+ struct nix_txsch_alloc_rsp *rsp;
+ uint8_t hw_lvl, i;
+ bool pend;
+ int rc;
+
+ memset(schq, 0, sizeof(schq));
+ memset(schq_contig, 0, sizeof(schq_contig));
+
+ /* Estimate requirement */
+ rc = nix_tm_resource_estimate(nix, schq_contig, schq, tree);
+ if (!rc)
+ return 0;
+
+ /* Release existing contiguous resources when realloc requested
+ * as there is no way to guarantee continuity of old with new.
+ */
+ for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+ if (schq_contig[hw_lvl])
+ nix_tm_release_resources(nix, hw_lvl, true, false);
+ }
+
+ /* Alloc as needed */
+ do {
+ pend = false;
+ req = mbox_alloc_msg_nix_txsch_alloc(mbox);
+ if (!req) {
+ rc = -ENOMEM;
+ goto alloc_err;
+ }
+ mbox_memcpy(req->schq, schq, sizeof(req->schq));
+ mbox_memcpy(req->schq_contig, schq_contig,
+ sizeof(req->schq_contig));
+
+ /* Each alloc can be at max of MAX_TXSCHQ_PER_FUNC per level.
+ * So split alloc to multiple requests.
+ */
+ for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
+ if (req->schq[i] > MAX_TXSCHQ_PER_FUNC)
+ req->schq[i] = MAX_TXSCHQ_PER_FUNC;
+ schq[i] -= req->schq[i];
+
+ if (req->schq_contig[i] > MAX_TXSCHQ_PER_FUNC)
+ req->schq_contig[i] = MAX_TXSCHQ_PER_FUNC;
+ schq_contig[i] -= req->schq_contig[i];
+
+ if (schq[i] || schq_contig[i])
+ pend = true;
+ }
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ goto alloc_err;
+
+ nix_tm_copy_rsp_to_nix(nix, rsp);
+ } while (pend);
+
+ nix->tm_link_cfg_lvl = rsp->link_cfg_lvl;
+ return 0;
+alloc_err:
+ for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
+ if (nix_tm_release_resources(nix, i, true, false))
+ plt_err("Failed to release contig resources of "
+ "lvl %d on error",
+ i);
+ if (nix_tm_release_resources(nix, i, false, false))
+ plt_err("Failed to release discontig resources of "
+ "lvl %d on error",
+ i);
+ }
+ return rc;
+}
+
+int
+nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask, bool hw_only)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile *profile;
+ struct nix_tm_node *node, *next_node;
+ struct nix_tm_node_list *list;
+ enum roc_nix_tm_tree tree;
+ uint32_t profile_id;
+ int rc = 0;
+
+ for (tree = 0; tree < ROC_NIX_TM_TREE_MAX; tree++) {
+ if (!(tree_mask & BIT(tree)))
+ continue;
+
+ plt_tm_dbg("Freeing resources of tree %u", tree);
+
+ list = nix_tm_node_list(nix, tree);
+ next_node = TAILQ_FIRST(list);
+ while (next_node) {
+ node = next_node;
+ next_node = TAILQ_NEXT(node, node);
+
+ if (!nix_tm_is_leaf(nix, node->lvl) &&
+ node->flags & NIX_TM_NODE_HWRES) {
+ /* Clear xoff in path for flush to succeed */
+ rc = nix_tm_clear_path_xoff(nix, node);
+ if (rc)
+ return rc;
+ rc = nix_tm_free_node_resource(nix, node);
+ if (rc)
+ return rc;
+ }
+ }
+
+ /* Leave software elements if needed */
+ if (hw_only)
+ continue;
+
+ next_node = TAILQ_FIRST(list);
+ while (next_node) {
+ node = next_node;
+ next_node = TAILQ_NEXT(node, node);
+
+ plt_tm_dbg("Free node lvl %u id %u (%p)", node->lvl,
+ node->id, node);
+
+ profile_id = node->shaper_profile_id;
+ profile = nix_tm_shaper_profile_search(nix, profile_id);
+ if (profile)
+ profile->ref_cnt--;
+
+ TAILQ_REMOVE(list, node, node);
+ nix_tm_node_free(node);
+ }
+ }
+ return rc;
+}
+
int
nix_tm_conf_init(struct roc_nix *roc_nix)
{
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 015f20e..958cceb 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -66,6 +66,17 @@ roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable)
return 0;
}
+int
+roc_nix_tm_free_resources(struct roc_nix *roc_nix, bool hw_only)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ if (nix->tm_flags & NIX_TM_HIERARCHY_ENA)
+ return -EBUSY;
+
+ return nix_tm_free_resources(roc_nix, BIT(ROC_NIX_TM_USER), hw_only);
+}
+
static int
nix_tm_shaper_profile_add(struct roc_nix *roc_nix,
struct nix_tm_shaper_profile *profile, int skip_ins)
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index 5eb93a7..e385787 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -177,6 +177,58 @@ nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p,
return NIX_TM_SHAPER_BURST(exponent, mantissa);
}
+uint32_t
+nix_tm_check_rr(struct nix *nix, uint32_t parent_id, enum roc_nix_tm_tree tree,
+ uint32_t *rr_prio, uint32_t *max_prio)
+{
+ uint32_t node_cnt[NIX_TM_TLX_SP_PRIO_MAX];
+ struct nix_tm_node_list *list;
+ struct nix_tm_node *node;
+ uint32_t rr_num = 0, i;
+ uint32_t children = 0;
+ uint32_t priority;
+
+ memset(node_cnt, 0, sizeof(node_cnt));
+ *rr_prio = 0xF;
+ *max_prio = UINT32_MAX;
+
+ list = nix_tm_node_list(nix, tree);
+ TAILQ_FOREACH(node, list, node) {
+ if (!node->parent)
+ continue;
+
+ if (!(node->parent->id == parent_id))
+ continue;
+
+ priority = node->priority;
+ node_cnt[priority]++;
+ children++;
+ }
+
+ for (i = 0; i < NIX_TM_TLX_SP_PRIO_MAX; i++) {
+ if (!node_cnt[i])
+ break;
+
+ if (node_cnt[i] > rr_num) {
+ *rr_prio = i;
+ rr_num = node_cnt[i];
+ }
+ }
+
+ /* An RR group with a single child is treated as SP */
+ if (rr_num == 1) {
+ *rr_prio = 0xF;
+ rr_num = 0;
+ }
+
+ /* Max prio is returned only when there is a non-zero prio
+ * or the parent has a single child.
+ */
+ if (i > 1 || (children == 1))
+ *max_prio = i - 1;
+ return rr_num;
+}
+
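The RR-detection part of nix_tm_check_rr() can be isolated over a plain priority array: the priority with the most children becomes the RR group, and a "group" of one is demoted back to strict priority (rr_prio 0xF means none). A sketch, with the priority-count bound as an assumed stand-in:

```c
#include <stdint.h>
#include <string.h>

#define SP_PRIO_MAX 10 /* assumed stand-in for NIX_TM_TLX_SP_PRIO_MAX */

/* Find the round-robin group among a parent's children the way
 * nix_tm_check_rr() does. Returns the RR group size; *rr_prio is
 * the group's priority, or 0xF when there is no RR group. */
static uint32_t
check_rr(const uint32_t *prio, int nb_children, uint32_t *rr_prio)
{
	uint32_t node_cnt[SP_PRIO_MAX];
	uint32_t rr_num = 0;
	int i;

	memset(node_cnt, 0, sizeof(node_cnt));
	*rr_prio = 0xF;

	for (i = 0; i < nb_children; i++)
		node_cnt[prio[i]]++;

	for (i = 0; i < SP_PRIO_MAX; i++) {
		if (!node_cnt[i])
			break; /* priorities are contiguous from 0 */

		if (node_cnt[i] > rr_num) {
			*rr_prio = (uint32_t)i;
			rr_num = node_cnt[i];
		}
	}

	/* A group of one child is plain SP, not RR */
	if (rr_num == 1) {
		*rr_prio = 0xF;
		rr_num = 0;
	}
	return rr_num;
}
```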
static uint16_t
nix_tm_max_prio(struct nix *nix, uint16_t hw_lvl)
{
@@ -241,6 +293,21 @@ nix_tm_validate_prio(struct nix *nix, uint32_t lvl, uint32_t parent_id,
return 0;
}
+bool
+nix_tm_child_res_valid(struct nix_tm_node_list *list,
+ struct nix_tm_node *parent)
+{
+ struct nix_tm_node *child;
+
+ TAILQ_FOREACH(child, list, node) {
+ if (child->parent != parent)
+ continue;
+ if (!(child->flags & NIX_TM_NODE_HWRES))
+ return false;
+ }
+ return true;
+}
+
uint8_t
nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
volatile uint64_t *reg, volatile uint64_t *regval)
@@ -325,6 +392,72 @@ nix_tm_resource_avail(struct nix *nix, uint8_t hw_lvl, bool contig)
return count;
}
+uint16_t
+nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig, uint16_t *schq,
+ enum roc_nix_tm_tree tree)
+{
+ struct nix_tm_node_list *list;
+ uint8_t contig_cnt, hw_lvl;
+ struct nix_tm_node *parent;
+ uint16_t cnt = 0, avail;
+
+ list = nix_tm_node_list(nix, tree);
+ /* Walk through parents from TL1..TL4 */
+ for (hw_lvl = NIX_TXSCH_LVL_TL1; hw_lvl > 0; hw_lvl--) {
+ TAILQ_FOREACH(parent, list, node) {
+ if (hw_lvl != parent->hw_lvl)
+ continue;
+
+ /* Skip accounting for children whose
+ * parent does not indicate so.
+ */
+ if (!parent->child_realloc)
+ continue;
+
+ /* Count children needed */
+ schq[hw_lvl - 1] += parent->rr_num;
+ if (parent->max_prio != UINT32_MAX) {
+ contig_cnt = parent->max_prio + 1;
+ schq_contig[hw_lvl - 1] += contig_cnt;
+ /* When we have SP + DWRR at a parent,
+ * we will always have a spare schq at rr prio
+ * location in contiguous queues. Hence reduce
+ * discontiguous count by 1.
+ */
+ if (parent->max_prio > 0 && parent->rr_num)
+ schq[hw_lvl - 1] -= 1;
+ }
+ }
+ }
+
+ schq[nix->tm_root_lvl] = 1;
+ if (!nix_tm_have_tl1_access(nix))
+ schq[NIX_TXSCH_LVL_TL1] = 1;
+
+ /* Now check for existing resources */
+ for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+ avail = nix_tm_resource_avail(nix, hw_lvl, false);
+ if (schq[hw_lvl] <= avail)
+ schq[hw_lvl] = 0;
+ else
+ schq[hw_lvl] -= avail;
+
+ /* For contiguous queues, realloc everything */
+ avail = nix_tm_resource_avail(nix, hw_lvl, true);
+ if (schq_contig[hw_lvl] <= avail)
+ schq_contig[hw_lvl] = 0;
+
+ cnt += schq[hw_lvl];
+ cnt += schq_contig[hw_lvl];
+
+ plt_tm_dbg("Estimate resources needed for %s: dis %u cont %u",
+ nix_tm_hwlvl2str(hw_lvl), schq[hw_lvl],
+ schq_contig[hw_lvl]);
+ }
+
+ return cnt;
+}
+
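The per-parent accounting inside nix_tm_resource_estimate() can be stated on its own: RR children come from the discontiguous pool, SP children need a contiguous run of `max_prio + 1` ids, and when SP and RR coexist the contiguous run already reserves a slot at the RR priority, so one discontiguous queue is saved. A sketch of just that rule (`UINT32_MAX` max_prio means no SP children):

```c
#include <stdint.h>

/* Per-parent child schq accounting, as in nix_tm_resource_estimate(). */
static void
estimate_children(uint32_t rr_num, uint32_t max_prio,
		  uint16_t *discontig, uint16_t *contig)
{
	*discontig = (uint16_t)rr_num;
	*contig = 0;

	if (max_prio != UINT32_MAX) {
		*contig = (uint16_t)(max_prio + 1);
		/* Spare schq at rr_prio inside the contiguous run */
		if (max_prio > 0 && rr_num)
			*discontig -= 1;
	}
}
```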
int
roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id)
{
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 5acbf4c..7b940c1 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -103,6 +103,7 @@ INTERNAL {
roc_nix_xstats_names_get;
roc_nix_switch_hdr_set;
roc_nix_eeprom_info_get;
+ roc_nix_tm_free_resources;
roc_nix_tm_node_add;
roc_nix_tm_node_delete;
roc_nix_tm_node_get;
--
2.8.4
* [dpdk-dev] [PATCH 36/52] common/cnxk: add nix tm hierarchy enable/disable
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (34 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 35/52] common/cnxk: add nix tm helper to alloc and free resource Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 37/52] common/cnxk: add nix tm support for internal hierarchy Nithin Dabilpuram
` (19 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add support to enable or disable the TM hierarchy, along with
allocating node HW resources such as shapers and schedulers
and configuring them to match the user-created or default
hierarchy.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 8 +
drivers/common/cnxk/roc_nix_priv.h | 16 ++
drivers/common/cnxk/roc_nix_tm.c | 147 ++++++++++++
drivers/common/cnxk/roc_nix_tm_ops.c | 234 +++++++++++++++++++
drivers/common/cnxk/roc_nix_tm_utils.c | 410 +++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 2 +
6 files changed, 817 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 0f3b85c..55850ee 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -379,6 +379,14 @@ struct roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next(
struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev);
/*
+ * TM hierarchy enable/disable API.
+ */
+int __roc_api roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix);
+int __roc_api roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix,
+ enum roc_nix_tm_tree tree,
+ bool xmit_enable);
+
+/*
* TM utilities API.
*/
int __roc_api roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id);
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 6e86681..56da341 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -339,7 +339,10 @@ int nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig,
bool above_thresh);
void nix_tm_copy_rsp_to_nix(struct nix *nix, struct nix_txsch_alloc_rsp *rsp);
+int nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree);
int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree);
+int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
+ bool rr_quantum_only);
/*
* TM priv utils.
@@ -368,6 +371,19 @@ bool nix_tm_child_res_valid(struct nix_tm_node_list *list,
struct nix_tm_node *parent);
uint16_t nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig,
uint16_t *schq, enum roc_nix_tm_tree tree);
+uint8_t nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg,
+ volatile uint64_t *regval);
+uint8_t nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
+ volatile uint64_t *reg,
+ volatile uint64_t *regval,
+ volatile uint64_t *regval_mask);
+uint8_t nix_tm_sched_reg_prep(struct nix *nix, struct nix_tm_node *node,
+ volatile uint64_t *reg,
+ volatile uint64_t *regval);
+uint8_t nix_tm_shaper_reg_prep(struct nix_tm_node *node,
+ struct nix_tm_shaper_profile *profile,
+ volatile uint64_t *reg,
+ volatile uint64_t *regval);
struct nix_tm_node *nix_tm_node_alloc(void);
void nix_tm_node_free(struct nix_tm_node *node);
struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 9adeab9..973a271 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -30,6 +30,93 @@ nix_tm_clear_shaper_profiles(struct nix *nix)
}
}
+static int
+nix_tm_node_reg_conf(struct nix *nix, struct nix_tm_node *node)
+{
+ uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
+ uint64_t regval[MAX_REGS_PER_MBOX_MSG];
+ struct nix_tm_shaper_profile *profile;
+ uint64_t reg[MAX_REGS_PER_MBOX_MSG];
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txschq_config *req;
+ int rc = -EFAULT;
+ uint32_t hw_lvl;
+ uint8_t k = 0;
+
+ memset(regval, 0, sizeof(regval));
+ memset(regval_mask, 0, sizeof(regval_mask));
+
+ profile = nix_tm_shaper_profile_search(nix, node->shaper_profile_id);
+ hw_lvl = node->hw_lvl;
+
+ /* Need this trigger to configure TL1 */
+ if (!nix_tm_have_tl1_access(nix) && hw_lvl == NIX_TXSCH_LVL_TL2) {
+ /* Prepare default conf for TL1 */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_TL1;
+
+ k = nix_tm_tl1_default_prep(node->parent_hw_id, req->reg,
+ req->regval);
+ req->num_regs = k;
+ rc = mbox_process(mbox);
+ if (rc)
+ goto error;
+ }
+
+ /* Prepare topology config */
+ k = nix_tm_topology_reg_prep(nix, node, reg, regval, regval_mask);
+
+ /* Prepare schedule config */
+ k += nix_tm_sched_reg_prep(nix, node, &reg[k], &regval[k]);
+
+ /* Prepare shaping config */
+ k += nix_tm_shaper_reg_prep(node, profile, &reg[k], &regval[k]);
+
+ if (!k)
+ return 0;
+
+ /* Copy and send config mbox */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = hw_lvl;
+ req->num_regs = k;
+
+ mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+ mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
+ mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
+
+ rc = mbox_process(mbox);
+ if (rc)
+ goto error;
+
+ return 0;
+error:
+ plt_err("Txschq conf failed for node %p, rc=%d", node, rc);
+ return rc;
+}
+
+int
+nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree)
+{
+ struct nix_tm_node_list *list;
+ struct nix_tm_node *node;
+ uint32_t hw_lvl;
+ int rc = 0;
+
+ list = nix_tm_node_list(nix, tree);
+
+ for (hw_lvl = 0; hw_lvl <= nix->tm_root_lvl; hw_lvl++) {
+ TAILQ_FOREACH(node, list, node) {
+ if (node->hw_lvl != hw_lvl)
+ continue;
+ rc = nix_tm_node_reg_conf(nix, node);
+ if (rc)
+ goto exit;
+ }
+ }
+exit:
+ return rc;
+}
+
int
nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree)
{
@@ -478,6 +565,66 @@ nix_tm_sq_flush_post(struct roc_nix_sq *sq)
}
int
+nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
+ bool rr_quantum_only)
+{
+ struct mbox *mbox = (&nix->dev)->mbox;
+ uint16_t qid = node->id, smq;
+ uint64_t rr_quantum;
+ int rc;
+
+ smq = node->parent->hw_id;
+ rr_quantum = nix_tm_weight_to_rr_quantum(node->weight);
+
+ if (rr_quantum_only)
+ plt_tm_dbg("Update sq(%u) rr_quantum 0x%" PRIx64, qid,
+ rr_quantum);
+ else
+ plt_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%" PRIx64,
+ qid, smq, rr_quantum);
+
+ if (qid > nix->nb_tx_queues)
+ return -EFAULT;
+
+ if (roc_model_is_cn9k()) {
+ struct nix_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ /* smq update only when needed */
+ if (!rr_quantum_only) {
+ aq->sq.smq = smq;
+ aq->sq_mask.smq = ~aq->sq_mask.smq;
+ }
+ aq->sq.smq_rr_quantum = rr_quantum;
+ aq->sq_mask.smq_rr_quantum = ~aq->sq_mask.smq_rr_quantum;
+ } else {
+ struct nix_cn10k_aq_enq_req *aq;
+
+ aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+ aq->qidx = qid;
+ aq->ctype = NIX_AQ_CTYPE_SQ;
+ aq->op = NIX_AQ_INSTOP_WRITE;
+
+ /* smq update only when needed */
+ if (!rr_quantum_only) {
+ aq->sq.smq = smq;
+ aq->sq_mask.smq = ~aq->sq_mask.smq;
+ }
+ aq->sq.smq_rr_weight = rr_quantum;
+ aq->sq_mask.smq_rr_weight = ~aq->sq_mask.smq_rr_weight;
+ }
+
+ rc = mbox_process(mbox);
+ if (rc)
+ plt_err("Failed to set smq, rc=%d", rc);
+ return rc;
+}
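The paths above stage a register write together with a mask (`regval_mask` for txschq config, `sq_mask` for the AQ write) so that only the targeted fields change when the AF applies the request. A minimal, self-contained sketch of those read-modify-write semantics, assuming the AF keeps bits that are set in the mask and overwrites bits that are cleared (`masked_reg_update` is an illustrative name, not a cnxk API):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper (not a cnxk API): applies a staged register write
 * the way the regval/regval_mask pair implies -- bits cleared in 'mask'
 * take their new value from 'regval', bits set in 'mask' keep the old
 * value. */
uint64_t masked_reg_update(uint64_t old, uint64_t regval, uint64_t mask)
{
	return (old & mask) | regval;
}
```

This is why, for example, the SMQ config above clears `BIT_ULL(50)` and the MTU field in `regval_mask`: everything else in `NIX_AF_SMQX_CFG` is left untouched.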
+
+int
nix_tm_release_resources(struct nix *nix, uint8_t hw_lvl, bool contig,
bool above_thresh)
{
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 958cceb..23911eb 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -309,3 +309,237 @@ roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id, bool free)
{
return nix_tm_node_delete(roc_nix, node_id, ROC_NIX_TM_USER, free);
}
+
+int
+roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint16_t sqb_cnt, head_off, tail_off;
+ uint16_t sq_cnt = nix->nb_tx_queues;
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_tm_node_list *list;
+ enum roc_nix_tm_tree tree;
+ struct nix_tm_node *node;
+ struct roc_nix_sq *sq;
+ uint64_t wdata, val;
+ uintptr_t regaddr;
+ int rc = -1, i;
+
+ if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+ return 0;
+
+ plt_tm_dbg("Disabling hierarchy on %s", nix->pci_dev->name);
+
+ tree = nix->tm_tree;
+ list = nix_tm_node_list(nix, tree);
+
+ /* Enable CGX RXTX to drain pkts */
+ if (!roc_nix->io_enabled) {
+ /* Though this enables both RX MCAM entries and the CGX
+ * link, we assume all the Rx queues were stopped well
+ * before this point.
+ */
+ mbox_alloc_msg_nix_lf_start_rx(mbox);
+ rc = mbox_process(mbox);
+ if (rc) {
+ plt_err("cgx start failed, rc=%d", rc);
+ return rc;
+ }
+ }
+
+ /* XON all SMQs */
+ TAILQ_FOREACH(node, list, node) {
+ if (node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+ continue;
+ if (!(node->flags & NIX_TM_NODE_HWRES))
+ continue;
+
+ rc = nix_tm_smq_xoff(nix, node, false);
+ if (rc) {
+ plt_err("Failed to enable smq %u, rc=%d", node->hw_id,
+ rc);
+ goto cleanup;
+ }
+ }
+
+ /* Flush all tx queues */
+ for (i = 0; i < sq_cnt; i++) {
+ sq = nix->sqs[i];
+ if (!sq)
+ continue;
+
+ rc = roc_nix_tm_sq_aura_fc(sq, false);
+ if (rc) {
+ plt_err("Failed to disable sqb aura fc, rc=%d", rc);
+ goto cleanup;
+ }
+
+ /* Wait for sq entries to be flushed */
+ rc = roc_nix_tm_sq_flush_spin(sq);
+ if (rc) {
+ plt_err("Failed to drain sq, rc=%d", rc);
+ goto cleanup;
+ }
+ }
+
+ /* XOFF & flush all SMQs. The HRM mandates that all
+ * SQs be empty before an SMQ flush is issued.
+ */
+ TAILQ_FOREACH(node, list, node) {
+ if (node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+ continue;
+ if (!(node->flags & NIX_TM_NODE_HWRES))
+ continue;
+
+ rc = nix_tm_smq_xoff(nix, node, true);
+ if (rc) {
+ plt_err("Failed to disable smq %u, rc=%d",
+ node->hw_id, rc);
+ goto cleanup;
+ }
+
+ node->flags &= ~NIX_TM_NODE_ENABLED;
+ }
+
+ /* Verify sanity of all tx queues */
+ for (i = 0; i < sq_cnt; i++) {
+ sq = nix->sqs[i];
+ if (!sq)
+ continue;
+
+ wdata = ((uint64_t)sq->qid << 32);
+ regaddr = nix->base + NIX_LF_SQ_OP_STATUS;
+ val = roc_atomic64_add_nosync(wdata, (int64_t *)regaddr);
+
+ sqb_cnt = val & 0xFFFF;
+ head_off = (val >> 20) & 0x3F;
+ tail_off = (val >> 28) & 0x3F;
+
+ if (sqb_cnt > 1 || head_off != tail_off ||
+ (*(uint64_t *)sq->fc != sq->nb_sqb_bufs))
+ plt_err("Failed to gracefully flush sq %u", sq->qid);
+ }
+
+ nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA;
+cleanup:
+ /* Restore cgx state */
+ if (!roc_nix->io_enabled) {
+ mbox_alloc_msg_nix_lf_stop_rx(mbox);
+ rc |= mbox_process(mbox);
+ }
+ return rc;
+}
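The sanity loop in `roc_nix_tm_hierarchy_disable()` decodes the word returned from `NIX_LF_SQ_OP_STATUS` to decide whether an SQ drained cleanly. The field extraction can be exercised standalone; `sq_is_drained` is an illustrative name, and this sketch ignores the additional `sq->fc` check the real code performs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Field layout used by the flush-sanity check above: SQB count in bits
 * 15:0, head offset in bits 25:20, tail offset in bits 33:28. */
bool sq_is_drained(uint64_t status)
{
	uint16_t sqb_cnt = status & 0xFFFF;
	uint16_t head_off = (status >> 20) & 0x3F;
	uint16_t tail_off = (status >> 28) & 0x3F;

	/* A drained SQ holds at most one SQB with head == tail */
	return sqb_cnt <= 1 && head_off == tail_off;
}
```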
+
+int
+roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree,
+ bool xmit_enable)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node_list *list;
+ struct nix_tm_node *node;
+ struct roc_nix_sq *sq;
+ uint32_t tree_mask;
+ uint16_t sq_id;
+ int rc;
+
+ if (tree >= ROC_NIX_TM_TREE_MAX)
+ return NIX_ERR_PARAM;
+
+ if (nix->tm_flags & NIX_TM_HIERARCHY_ENA) {
+ if (nix->tm_tree != tree)
+ return -EBUSY;
+ return 0;
+ }
+
+ plt_tm_dbg("Enabling hierarchy on %s, xmit_ena %u, tree %u",
+ nix->pci_dev->name, xmit_enable, tree);
+
+ /* Free hw resources of other trees */
+ tree_mask = NIX_TM_TREE_MASK_ALL;
+ tree_mask &= ~BIT(tree);
+
+ rc = nix_tm_free_resources(roc_nix, tree_mask, true);
+ if (rc) {
+ plt_err("failed to free resources of other trees, rc=%d", rc);
+ return rc;
+ }
+
+ /* Update active tree before starting to do anything */
+ nix->tm_tree = tree;
+
+ nix_tm_update_parent_info(nix, tree);
+
+ rc = nix_tm_alloc_txschq(nix, tree);
+ if (rc) {
+ plt_err("TM failed to alloc tm resources=%d", rc);
+ return rc;
+ }
+
+ rc = nix_tm_assign_resources(nix, tree);
+ if (rc) {
+ plt_err("TM failed to assign tm resources=%d", rc);
+ return rc;
+ }
+
+ rc = nix_tm_txsch_reg_config(nix, tree);
+ if (rc) {
+ plt_err("TM failed to configure sched registers=%d", rc);
+ return rc;
+ }
+
+ list = nix_tm_node_list(nix, tree);
+ /* Mark all non-leaf nodes as enabled */
+ TAILQ_FOREACH(node, list, node) {
+ if (!nix_tm_is_leaf(nix, node->lvl))
+ node->flags |= NIX_TM_NODE_ENABLED;
+ }
+
+ if (!xmit_enable)
+ goto skip_sq_update;
+
+ /* Update SQ Sched Data while SQ is idle */
+ TAILQ_FOREACH(node, list, node) {
+ if (!nix_tm_is_leaf(nix, node->lvl))
+ continue;
+
+ rc = nix_tm_sq_sched_conf(nix, node, false);
+ if (rc) {
+ plt_err("SQ %u sched update failed, rc=%d", node->id,
+ rc);
+ return rc;
+ }
+ }
+
+ /* Finally XON all SMQs */
+ TAILQ_FOREACH(node, list, node) {
+ if (node->hw_lvl != NIX_TXSCH_LVL_SMQ)
+ continue;
+
+ rc = nix_tm_smq_xoff(nix, node, false);
+ if (rc) {
+ plt_err("Failed to enable smq %u, rc=%d", node->hw_id,
+ rc);
+ return rc;
+ }
+ }
+
+ /* Enable xmit as all the topology is ready */
+ TAILQ_FOREACH(node, list, node) {
+ if (!nix_tm_is_leaf(nix, node->lvl))
+ continue;
+
+ sq_id = node->id;
+ sq = nix->sqs[sq_id];
+
+ rc = roc_nix_tm_sq_aura_fc(sq, true);
+ if (rc) {
+ plt_err("TM sw xon failed on SQ %u, rc=%d", node->id,
+ rc);
+ return rc;
+ }
+ node->flags |= NIX_TM_NODE_ENABLED;
+ }
+
+skip_sq_update:
+ nix->tm_flags |= NIX_TM_HIERARCHY_ENA;
+ return 0;
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index e385787..cb130e0 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -5,6 +5,14 @@
#include "roc_api.h"
#include "roc_priv.h"
+static inline uint64_t
+nix_tm_shaper2regval(struct nix_tm_shaper_data *shaper)
+{
+ return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
+ (shaper->div_exp << 13) | (shaper->exponent << 9) |
+ (shaper->mantissa << 1);
+}
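The bit layout packed by `nix_tm_shaper2regval()` (burst exponent at bit 37, burst mantissa at bit 29, division exponent at bit 13, rate exponent at bit 9, rate mantissa at bit 1) can be checked in isolation. Bit 0 is deliberately left clear so callers can OR in the shaper-enable bit, as the `| 1` in `nix_tm_shaper_reg_prep()` does. The struct below mirrors only the fields this packing consumes:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the nix_tm_shaper_data fields nix_tm_shaper2regval() uses */
struct shaper_fields {
	uint64_t burst_exponent;
	uint64_t burst_mantissa;
	uint64_t div_exp;
	uint64_t exponent;
	uint64_t mantissa;
};

/* Same packing as nix_tm_shaper2regval() above; bit 0 stays clear for
 * the shaper-enable bit that callers OR in. */
uint64_t shaper_fields_pack(const struct shaper_fields *s)
{
	return (s->burst_exponent << 37) | (s->burst_mantissa << 29) |
	       (s->div_exp << 13) | (s->exponent << 9) | (s->mantissa << 1);
}
```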
+
uint16_t
nix_tm_lvl2nix_tl1_root(uint32_t lvl)
{
@@ -50,6 +58,32 @@ nix_tm_lvl2nix(struct nix *nix, uint32_t lvl)
return nix_tm_lvl2nix_tl2_root(lvl);
}
+static uint8_t
+nix_tm_relchan_get(struct nix *nix)
+{
+ return nix->tx_chan_base & 0xff;
+}
+
+static int
+nix_tm_find_prio_anchor(struct nix *nix, uint32_t node_id,
+ enum roc_nix_tm_tree tree)
+{
+ struct nix_tm_node *child_node;
+ struct nix_tm_node_list *list;
+
+ list = nix_tm_node_list(nix, tree);
+
+ TAILQ_FOREACH(child_node, list, node) {
+ if (!child_node->parent)
+ continue;
+ if (!(child_node->parent->id == node_id))
+ continue;
+ if (child_node->priority == child_node->parent->rr_prio)
+ continue;
+ return child_node->hw_id - child_node->priority;
+ }
+ return 0;
+}
struct nix_tm_shaper_profile *
nix_tm_shaper_profile_search(struct nix *nix, uint32_t id)
@@ -177,6 +211,39 @@ nix_tm_shaper_burst_conv(uint64_t value, uint64_t *exponent_p,
return NIX_TM_SHAPER_BURST(exponent, mantissa);
}
+static void
+nix_tm_shaper_conf_get(struct nix_tm_shaper_profile *profile,
+ struct nix_tm_shaper_data *cir,
+ struct nix_tm_shaper_data *pir)
+{
+ if (!profile)
+ return;
+
+ /* Calculate CIR exponent and mantissa */
+ if (profile->commit.rate)
+ cir->rate = nix_tm_shaper_rate_conv(
+ profile->commit.rate, &cir->exponent, &cir->mantissa,
+ &cir->div_exp);
+
+ /* Calculate PIR exponent and mantissa */
+ if (profile->peak.rate)
+ pir->rate = nix_tm_shaper_rate_conv(
+ profile->peak.rate, &pir->exponent, &pir->mantissa,
+ &pir->div_exp);
+
+ /* Calculate CIR burst exponent and mantissa */
+ if (profile->commit.size)
+ cir->burst = nix_tm_shaper_burst_conv(profile->commit.size,
+ &cir->burst_exponent,
+ &cir->burst_mantissa);
+
+ /* Calculate PIR burst exponent and mantissa */
+ if (profile->peak.size)
+ pir->burst = nix_tm_shaper_burst_conv(profile->peak.size,
+ &pir->burst_exponent,
+ &pir->burst_mantissa);
+}
+
uint32_t
nix_tm_check_rr(struct nix *nix, uint32_t parent_id, enum roc_nix_tm_tree tree,
uint32_t *rr_prio, uint32_t *max_prio)
@@ -309,6 +376,349 @@ nix_tm_child_res_valid(struct nix_tm_node_list *list,
}
uint8_t
+nix_tm_tl1_default_prep(uint32_t schq, volatile uint64_t *reg,
+ volatile uint64_t *regval)
+{
+ uint8_t k = 0;
+
+ /*
+ * Default config for TL1.
+ * For VF this is always ignored.
+ */
+ plt_tm_dbg("Default config for main root %s(%u)",
+ nix_tm_hwlvl2str(NIX_TXSCH_LVL_TL1), schq);
+
+ /* Set DWRR quantum */
+ reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+ regval[k] = NIX_TM_TL1_DFLT_RR_QTM;
+ k++;
+
+ reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+ regval[k] = (NIX_TM_TL1_DFLT_RR_PRIO << 1);
+ k++;
+
+ reg[k] = NIX_AF_TL1X_CIR(schq);
+ regval[k] = 0;
+ k++;
+
+ return k;
+}
+
+uint8_t
+nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
+ volatile uint64_t *reg, volatile uint64_t *regval,
+ volatile uint64_t *regval_mask)
+{
+ uint8_t k = 0, hw_lvl, parent_lvl;
+ uint64_t parent = 0, child = 0;
+ enum roc_nix_tm_tree tree;
+ uint32_t rr_prio, schq;
+ uint16_t link, relchan;
+
+ tree = node->tree;
+ schq = node->hw_id;
+ hw_lvl = node->hw_lvl;
+ parent_lvl = hw_lvl + 1;
+ rr_prio = node->rr_prio;
+
+ /* Root node will not have a parent node */
+ if (hw_lvl == nix->tm_root_lvl)
+ parent = node->parent_hw_id;
+ else
+ parent = node->parent->hw_id;
+
+ link = nix->tx_link;
+ relchan = nix_tm_relchan_get(nix);
+
+ if (hw_lvl != NIX_TXSCH_LVL_SMQ)
+ child = nix_tm_find_prio_anchor(nix, node->id, tree);
+
+ /* Override default rr_prio when TL1
+ * Static Priority is disabled
+ */
+ if (hw_lvl == NIX_TXSCH_LVL_TL1 && nix->tm_flags & NIX_TM_TL1_NO_SP) {
+ rr_prio = NIX_TM_TL1_DFLT_RR_PRIO;
+ child = 0;
+ }
+
+ plt_tm_dbg("Topology config node %s(%u)->%s(%" PRIu64 ") lvl %u, id %u"
+ " prio_anchor %" PRIu64 " rr_prio %u (%p)",
+ nix_tm_hwlvl2str(hw_lvl), schq, nix_tm_hwlvl2str(parent_lvl),
+ parent, node->lvl, node->id, child, rr_prio, node);
+
+ /* Prepare Topology and Link config */
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+
+ /* Set xoff which will be cleared later */
+ reg[k] = NIX_AF_SMQX_CFG(schq);
+ regval[k] = (BIT_ULL(50) | NIX_MIN_HW_FRS |
+ ((nix->mtu & 0xFFFF) << 8));
+ regval_mask[k] =
+ ~(BIT_ULL(50) | GENMASK_ULL(6, 0) | GENMASK_ULL(23, 8));
+ k++;
+
+ /* Parent and schedule conf */
+ reg[k] = NIX_AF_MDQX_PARENT(schq);
+ regval[k] = parent << 16;
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ /* Parent and schedule conf */
+ reg[k] = NIX_AF_TL4X_PARENT(schq);
+ regval[k] = parent << 16;
+ k++;
+
+ reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+ regval[k] = (child << 32) | (rr_prio << 1);
+ k++;
+
+ /* Configure TL4 to send to SDP channel instead of CGX/LBK */
+ if (nix->sdp_link) {
+ reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+ regval[k] = BIT_ULL(12);
+ k++;
+ }
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ /* Parent and schedule conf */
+ reg[k] = NIX_AF_TL3X_PARENT(schq);
+ regval[k] = parent << 16;
+ k++;
+
+ reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+ regval[k] = (child << 32) | (rr_prio << 1);
+ k++;
+
+ /* Link configuration */
+ if (!nix->sdp_link &&
+ nix->tm_link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
+ reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+ regval[k] = BIT_ULL(12) | relchan;
+ k++;
+ }
+
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ /* Parent and schedule conf */
+ reg[k] = NIX_AF_TL2X_PARENT(schq);
+ regval[k] = parent << 16;
+ k++;
+
+ reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+ regval[k] = (child << 32) | (rr_prio << 1);
+ k++;
+
+ /* Link configuration */
+ if (!nix->sdp_link &&
+ nix->tm_link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
+ reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+ regval[k] = BIT_ULL(12) | relchan;
+ k++;
+ }
+
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+ regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
+ k++;
+
+ break;
+ }
+
+ return k;
+}
+
+uint8_t
+nix_tm_sched_reg_prep(struct nix *nix, struct nix_tm_node *node,
+ volatile uint64_t *reg, volatile uint64_t *regval)
+{
+ uint64_t strict_prio = node->priority;
+ uint32_t hw_lvl = node->hw_lvl;
+ uint32_t schq = node->hw_id;
+ uint64_t rr_quantum;
+ uint8_t k = 0;
+
+ rr_quantum = nix_tm_weight_to_rr_quantum(node->weight);
+
+ /* For children of the root, strict priority is the default
+ * when either the device root is TL2 or TL1 static priority
+ * is disabled.
+ */
+ if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
+ (!nix_tm_have_tl1_access(nix) || nix->tm_flags & NIX_TM_TL1_NO_SP))
+ strict_prio = NIX_TM_TL1_DFLT_RR_PRIO;
+
+ plt_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
+ "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
+ nix_tm_hwlvl2str(node->hw_lvl), schq, node->lvl, node->id,
+ strict_prio, rr_quantum, node);
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+ regval[k] = (strict_prio << 24) | rr_quantum;
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+ regval[k] = (strict_prio << 24) | rr_quantum;
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+ regval[k] = (strict_prio << 24) | rr_quantum;
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+ regval[k] = (strict_prio << 24) | rr_quantum;
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+ regval[k] = rr_quantum;
+ k++;
+
+ break;
+ }
+
+ return k;
+}
+
+uint8_t
+nix_tm_shaper_reg_prep(struct nix_tm_node *node,
+ struct nix_tm_shaper_profile *profile,
+ volatile uint64_t *reg, volatile uint64_t *regval)
+{
+ struct nix_tm_shaper_data cir, pir;
+ uint32_t schq = node->hw_id;
+ uint64_t adjust = 0;
+ uint8_t k = 0;
+
+ memset(&cir, 0, sizeof(cir));
+ memset(&pir, 0, sizeof(pir));
+ nix_tm_shaper_conf_get(profile, &cir, &pir);
+
+ if (node->pkt_mode)
+ adjust = 1;
+ else if (profile)
+ adjust = profile->pkt_len_adj;
+
+ plt_tm_dbg("Shaper config node %s(%u) lvl %u id %u, "
+ "pir %" PRIu64 "(%" PRIu64 "B),"
+ " cir %" PRIu64 "(%" PRIu64 "B)"
+ "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
+ nix_tm_hwlvl2str(node->hw_lvl), schq, node->lvl, node->id,
+ pir.rate, pir.burst, cir.rate, cir.burst, adjust,
+ node->pkt_mode, node);
+
+ switch (node->hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ /* Configure PIR, CIR */
+ reg[k] = NIX_AF_MDQX_PIR(schq);
+ regval[k] = (pir.rate && pir.burst) ?
+ (nix_tm_shaper2regval(&pir) | 1) :
+ 0;
+ k++;
+
+ reg[k] = NIX_AF_MDQX_CIR(schq);
+ regval[k] = (cir.rate && cir.burst) ?
+ (nix_tm_shaper2regval(&cir) | 1) :
+ 0;
+ k++;
+
+ /* Configure RED ALG */
+ reg[k] = NIX_AF_MDQX_SHAPE(schq);
+ regval[k] = (adjust | (uint64_t)node->red_algo << 9 |
+ (uint64_t)node->pkt_mode << 24);
+ k++;
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ /* Configure PIR, CIR */
+ reg[k] = NIX_AF_TL4X_PIR(schq);
+ regval[k] = (pir.rate && pir.burst) ?
+ (nix_tm_shaper2regval(&pir) | 1) :
+ 0;
+ k++;
+
+ reg[k] = NIX_AF_TL4X_CIR(schq);
+ regval[k] = (cir.rate && cir.burst) ?
+ (nix_tm_shaper2regval(&cir) | 1) :
+ 0;
+ k++;
+
+ /* Configure RED algo */
+ reg[k] = NIX_AF_TL4X_SHAPE(schq);
+ regval[k] = (adjust | (uint64_t)node->red_algo << 9 |
+ (uint64_t)node->pkt_mode << 24);
+ k++;
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ /* Configure PIR, CIR */
+ reg[k] = NIX_AF_TL3X_PIR(schq);
+ regval[k] = (pir.rate && pir.burst) ?
+ (nix_tm_shaper2regval(&pir) | 1) :
+ 0;
+ k++;
+
+ reg[k] = NIX_AF_TL3X_CIR(schq);
+ regval[k] = (cir.rate && cir.burst) ?
+ (nix_tm_shaper2regval(&cir) | 1) :
+ 0;
+ k++;
+
+ /* Configure RED algo */
+ reg[k] = NIX_AF_TL3X_SHAPE(schq);
+ regval[k] = (adjust | (uint64_t)node->red_algo << 9 |
+ (uint64_t)node->pkt_mode << 24);
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ /* Configure PIR, CIR */
+ reg[k] = NIX_AF_TL2X_PIR(schq);
+ regval[k] = (pir.rate && pir.burst) ?
+ (nix_tm_shaper2regval(&pir) | 1) :
+ 0;
+ k++;
+
+ reg[k] = NIX_AF_TL2X_CIR(schq);
+ regval[k] = (cir.rate && cir.burst) ?
+ (nix_tm_shaper2regval(&cir) | 1) :
+ 0;
+ k++;
+
+ /* Configure RED algo */
+ reg[k] = NIX_AF_TL2X_SHAPE(schq);
+ regval[k] = (adjust | (uint64_t)node->red_algo << 9 |
+ (uint64_t)node->pkt_mode << 24);
+ k++;
+
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ /* Configure CIR */
+ reg[k] = NIX_AF_TL1X_CIR(schq);
+ regval[k] = (cir.rate && cir.burst) ?
+ (nix_tm_shaper2regval(&cir) | 1) :
+ 0;
+ k++;
+
+ /* Configure length disable and adjust */
+ reg[k] = NIX_AF_TL1X_SHAPE(schq);
+ regval[k] = (adjust | (uint64_t)node->pkt_mode << 24);
+ k++;
+ break;
+ }
+
+ return k;
+}
+
+uint8_t
nix_tm_sw_xoff_prep(struct nix_tm_node *node, bool enable,
volatile uint64_t *reg, volatile uint64_t *regval)
{
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 7b940c1..cf47576 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -104,6 +104,8 @@ INTERNAL {
roc_nix_switch_hdr_set;
roc_nix_eeprom_info_get;
roc_nix_tm_free_resources;
+ roc_nix_tm_hierarchy_disable;
+ roc_nix_tm_hierarchy_enable;
roc_nix_tm_node_add;
roc_nix_tm_node_delete;
roc_nix_tm_node_get;
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 37/52] common/cnxk: add nix tm support for internal hierarchy
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (35 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 36/52] common/cnxk: add nix tm hierarchy enable/disable Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 38/52] common/cnxk: add nix tm dynamic update support Nithin Dabilpuram
` (18 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add support to create the internal TM default hierarchy and a
rate-limit hierarchy, along with an API to rate limit an SQ to a
given rate. This will be used by the cnxk ethdev driver's Tx queue
rate-limit op.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 7 ++
drivers/common/cnxk/roc_nix_priv.h | 2 +
drivers/common/cnxk/roc_nix_tm.c | 156 +++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_tm_ops.c | 141 +++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 3 +
5 files changed, 309 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 55850ee..859995b 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -317,6 +317,8 @@ enum roc_tm_node_level {
/*
* TM runtime hierarchy init API.
*/
+int __roc_api roc_nix_tm_init(struct roc_nix *roc_nix);
+void __roc_api roc_nix_tm_fini(struct roc_nix *roc_nix);
int __roc_api roc_nix_tm_sq_aura_fc(struct roc_nix_sq *sq, bool enable);
int __roc_api roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq);
@@ -379,6 +381,11 @@ struct roc_nix_tm_shaper_profile *__roc_api roc_nix_tm_shaper_profile_next(
struct roc_nix *roc_nix, struct roc_nix_tm_shaper_profile *__prev);
/*
+ * TM ratelimit tree API.
+ */
+int __roc_api roc_nix_tm_rlimit_sq(struct roc_nix *roc_nix, uint16_t qid,
+ uint64_t rate);
+/*
* TM hierarchy enable/disable API.
*/
int __roc_api roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 56da341..6d9fa2a 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -325,6 +325,7 @@ int nix_tm_leaf_data_get(struct nix *nix, uint16_t sq, uint32_t *rr_quantum,
int nix_tm_sq_flush_pre(struct roc_nix_sq *sq);
int nix_tm_sq_flush_post(struct roc_nix_sq *sq);
int nix_tm_smq_xoff(struct nix *nix, struct nix_tm_node *node, bool enable);
+int nix_tm_prepare_default_tree(struct roc_nix *roc_nix);
int nix_tm_node_add(struct roc_nix *roc_nix, struct nix_tm_node *node);
int nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
enum roc_nix_tm_tree tree, bool free);
@@ -343,6 +344,7 @@ int nix_tm_txsch_reg_config(struct nix *nix, enum roc_nix_tm_tree tree);
int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree);
int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
bool rr_quantum_only);
+int nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix);
/*
* TM priv utils.
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 973a271..90bf8de 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -1089,6 +1089,162 @@ nix_tm_alloc_txschq(struct nix *nix, enum roc_nix_tm_tree tree)
}
int
+nix_tm_prepare_default_tree(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint32_t nonleaf_id = nix->nb_tx_queues;
+ struct nix_tm_node *node = NULL;
+ uint8_t leaf_lvl, lvl, lvl_end;
+ uint32_t parent, i;
+ int rc = 0;
+
+ /* Add ROOT, SCH1, SCH2, SCH3, [SCH4] nodes */
+ parent = ROC_NIX_TM_NODE_ID_INVALID;
+ /* With TL1 access we have an extra level */
+ lvl_end = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH4 :
+ ROC_TM_LVL_SCH3);
+
+ for (lvl = ROC_TM_LVL_ROOT; lvl <= lvl_end; lvl++) {
+ rc = -ENOMEM;
+ node = nix_tm_node_alloc();
+ if (!node)
+ goto error;
+
+ node->id = nonleaf_id;
+ node->parent_id = parent;
+ node->priority = 0;
+ node->weight = NIX_TM_DFLT_RR_WT;
+ node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+ node->lvl = lvl;
+ node->tree = ROC_NIX_TM_DEFAULT;
+
+ rc = nix_tm_node_add(roc_nix, node);
+ if (rc)
+ goto error;
+ parent = nonleaf_id;
+ nonleaf_id++;
+ }
+
+ parent = nonleaf_id - 1;
+ leaf_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_QUEUE :
+ ROC_TM_LVL_SCH4);
+
+ /* Add leaf nodes */
+ for (i = 0; i < nix->nb_tx_queues; i++) {
+ rc = -ENOMEM;
+ node = nix_tm_node_alloc();
+ if (!node)
+ goto error;
+
+ node->id = i;
+ node->parent_id = parent;
+ node->priority = 0;
+ node->weight = NIX_TM_DFLT_RR_WT;
+ node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+ node->lvl = leaf_lvl;
+ node->tree = ROC_NIX_TM_DEFAULT;
+
+ rc = nix_tm_node_add(roc_nix, node);
+ if (rc)
+ goto error;
+ }
+
+ return 0;
+error:
+ nix_tm_node_free(node);
+ return rc;
+}
+
+int
+nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint32_t nonleaf_id = nix->nb_tx_queues;
+ struct nix_tm_node *node = NULL;
+ uint8_t leaf_lvl, lvl, lvl_end;
+ uint32_t parent, i;
+ int rc = 0;
+
+ /* Add ROOT, SCH1, SCH2 nodes */
+ parent = ROC_NIX_TM_NODE_ID_INVALID;
+ lvl_end = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH3 :
+ ROC_TM_LVL_SCH2);
+
+ for (lvl = ROC_TM_LVL_ROOT; lvl <= lvl_end; lvl++) {
+ rc = -ENOMEM;
+ node = nix_tm_node_alloc();
+ if (!node)
+ goto error;
+
+ node->id = nonleaf_id;
+ node->parent_id = parent;
+ node->priority = 0;
+ node->weight = NIX_TM_DFLT_RR_WT;
+ node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+ node->lvl = lvl;
+ node->tree = ROC_NIX_TM_RLIMIT;
+
+ rc = nix_tm_node_add(roc_nix, node);
+ if (rc)
+ goto error;
+ parent = nonleaf_id;
+ nonleaf_id++;
+ }
+
+ /* SMQ is mapped to SCH4 when we have TL1 access and SCH3 otherwise */
+ lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_SCH4 : ROC_TM_LVL_SCH3);
+
+ /* Add per queue SMQ nodes i.e SCH4 / SCH3 */
+ for (i = 0; i < nix->nb_tx_queues; i++) {
+ rc = -ENOMEM;
+ node = nix_tm_node_alloc();
+ if (!node)
+ goto error;
+
+ node->id = nonleaf_id + i;
+ node->parent_id = parent;
+ node->priority = 0;
+ node->weight = NIX_TM_DFLT_RR_WT;
+ node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+ node->lvl = lvl;
+ node->tree = ROC_NIX_TM_RLIMIT;
+
+ rc = nix_tm_node_add(roc_nix, node);
+ if (rc)
+ goto error;
+ }
+
+ parent = nonleaf_id;
+ leaf_lvl = (nix_tm_have_tl1_access(nix) ? ROC_TM_LVL_QUEUE :
+ ROC_TM_LVL_SCH4);
+
+ /* Add leaf nodes */
+ for (i = 0; i < nix->nb_tx_queues; i++) {
+ rc = -ENOMEM;
+ node = nix_tm_node_alloc();
+ if (!node)
+ goto error;
+
+ node->id = i;
+ node->parent_id = parent + i;
+ node->priority = 0;
+ node->weight = NIX_TM_DFLT_RR_WT;
+ node->shaper_profile_id = ROC_NIX_TM_SHAPER_PROFILE_NONE;
+ node->lvl = leaf_lvl;
+ node->tree = ROC_NIX_TM_RLIMIT;
+
+ rc = nix_tm_node_add(roc_nix, node);
+ if (rc)
+ goto error;
+ }
+
+ return 0;
+error:
+ nix_tm_node_free(node);
+ return rc;
+}
+
+int
nix_tm_free_resources(struct roc_nix *roc_nix, uint32_t tree_mask, bool hw_only)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 23911eb..6274f32 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -543,3 +543,144 @@ roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree,
nix->tm_flags |= NIX_TM_HIERARCHY_ENA;
return 0;
}
+
+int
+roc_nix_tm_init(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ uint32_t tree_mask;
+ int rc;
+
+ if (nix->tm_flags & NIX_TM_HIERARCHY_ENA) {
+ plt_err("Cannot init while existing hierarchy is enabled");
+ return -EBUSY;
+ }
+
+ /* Free up all user resources already held */
+ tree_mask = NIX_TM_TREE_MASK_ALL;
+ rc = nix_tm_free_resources(roc_nix, tree_mask, false);
+ if (rc) {
+ plt_err("Failed to freeup all nodes and resources, rc=%d", rc);
+ return rc;
+ }
+
+ /* Prepare default tree */
+ rc = nix_tm_prepare_default_tree(roc_nix);
+ if (rc) {
+ plt_err("failed to prepare default tm tree, rc=%d", rc);
+ return rc;
+ }
+
+ /* Prepare rlimit tree */
+ rc = nix_tm_prepare_rate_limited_tree(roc_nix);
+ if (rc) {
+ plt_err("failed to prepare rlimit tm tree, rc=%d", rc);
+ return rc;
+ }
+
+ return rc;
+}
+
+int
+roc_nix_tm_rlimit_sq(struct roc_nix *roc_nix, uint16_t qid, uint64_t rate)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile profile;
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_tm_node *node, *parent;
+
+ volatile uint64_t *reg, *regval;
+ struct nix_txschq_config *req;
+ uint16_t flags;
+ uint8_t k = 0;
+ int rc;
+
+ if (nix->tm_tree != ROC_NIX_TM_RLIMIT ||
+ !(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+ return NIX_ERR_TM_INVALID_TREE;
+
+ node = nix_tm_node_search(nix, qid, ROC_NIX_TM_RLIMIT);
+
+ /* check if we found a valid leaf node */
+ if (!node || !nix_tm_is_leaf(nix, node->lvl) || !node->parent ||
+ node->parent->hw_id == NIX_TM_HW_ID_INVALID)
+ return NIX_ERR_TM_INVALID_NODE;
+
+ parent = node->parent;
+ flags = parent->flags;
+
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = NIX_TXSCH_LVL_MDQ;
+ reg = req->reg;
+ regval = req->regval;
+
+ if (rate == 0) {
+ k += nix_tm_sw_xoff_prep(parent, true, &reg[k], &regval[k]);
+ flags &= ~NIX_TM_NODE_ENABLED;
+ goto exit;
+ }
+
+ if (!(flags & NIX_TM_NODE_ENABLED)) {
+ k += nix_tm_sw_xoff_prep(parent, false, &reg[k], &regval[k]);
+ flags |= NIX_TM_NODE_ENABLED;
+ }
+
+ /* Use only PIR for rate limit */
+ memset(&profile, 0, sizeof(profile));
+ profile.peak.rate = rate;
+ /* Minimum burst of ~4 us worth of Tx bytes */
+ profile.peak.size = PLT_MAX((uint64_t)roc_nix_max_pkt_len(roc_nix),
+ (4ul * rate) / ((uint64_t)1E6 * 8));
+ if (!nix->tm_rate_min || nix->tm_rate_min > rate)
+ nix->tm_rate_min = rate;
+
+ k += nix_tm_shaper_reg_prep(parent, &profile, &reg[k], &regval[k]);
+exit:
+ req->num_regs = k;
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ parent->flags = flags;
+ return 0;
+}
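`roc_nix_tm_rlimit_sq()` above sizes the PIR burst as roughly 4 microseconds' worth of bytes at the requested rate, floored at the maximum frame length. That arithmetic can be checked in isolation; `min_burst_bytes` is an illustrative name for the `PLT_MAX()` expression above, assuming `rate` is in bits per second:

```c
#include <assert.h>
#include <stdint.h>

/* ~4 us worth of bytes at 'rate_bps' bits/sec, but never smaller than
 * the maximum frame length -- mirrors the PLT_MAX() expression above. */
uint64_t min_burst_bytes(uint64_t rate_bps, uint64_t max_pkt_len)
{
	uint64_t four_us_bytes = (4ull * rate_bps) / (1000000ull * 8);

	return four_us_bytes > max_pkt_len ? four_us_bytes : max_pkt_len;
}
```

For example, at 10 Gbit/s this yields a 5000-byte burst, while at low rates the burst never drops below one maximum-sized frame.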
+
+void
+roc_nix_tm_fini(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txsch_free_req *req;
+ uint32_t tree_mask;
+ uint8_t hw_lvl;
+ int rc;
+
+ /* Xmit is assumed to be disabled */
+ /* Free up resources already held */
+ tree_mask = NIX_TM_TREE_MASK_ALL;
+ rc = nix_tm_free_resources(roc_nix, tree_mask, false);
+ if (rc)
+ plt_err("Failed to freeup existing nodes or rsrcs, rc=%d", rc);
+
+ /* Free all other hw resources */
+ req = mbox_alloc_msg_nix_txsch_free(mbox);
+ if (req == NULL)
+ return;
+
+ req->flags = TXSCHQ_FREE_ALL;
+ rc = mbox_process(mbox);
+ if (rc)
+ plt_err("Failed to freeup all res, rc=%d", rc);
+
+ for (hw_lvl = 0; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+ plt_bitmap_reset(nix->schq_bmp[hw_lvl]);
+ plt_bitmap_reset(nix->schq_contig_bmp[hw_lvl]);
+ nix->contig_rsvd[hw_lvl] = 0;
+ nix->discontig_rsvd[hw_lvl] = 0;
+ }
+
+ /* Clear shaper profiles */
+ nix_tm_clear_shaper_profiles(nix);
+ nix->tm_tree = 0;
+ nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA;
+}
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index cf47576..4d8f092 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -103,9 +103,11 @@ INTERNAL {
roc_nix_xstats_names_get;
roc_nix_switch_hdr_set;
roc_nix_eeprom_info_get;
+ roc_nix_tm_fini;
roc_nix_tm_free_resources;
roc_nix_tm_hierarchy_disable;
roc_nix_tm_hierarchy_enable;
+ roc_nix_tm_init;
roc_nix_tm_node_add;
roc_nix_tm_node_delete;
roc_nix_tm_node_get;
@@ -113,6 +115,7 @@ INTERNAL {
roc_nix_tm_node_name_get;
roc_nix_tm_node_next;
roc_nix_tm_node_pkt_mode_update;
+ roc_nix_tm_rlimit_sq;
roc_nix_tm_shaper_profile_add;
roc_nix_tm_shaper_profile_delete;
roc_nix_tm_shaper_profile_get;
--
2.8.4
* [dpdk-dev] [PATCH 38/52] common/cnxk: add nix tm dynamic update support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (36 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 37/52] common/cnxk: add nix tm support for internal hierarchy Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 39/52] common/cnxk: add nix tm debug support and misc utils Nithin Dabilpuram
` (17 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add support for dynamically updating a node's shaper profile and
RR quantum, as well as support to suspend or resume an active
TM node.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 10 ++
drivers/common/cnxk/roc_nix_tm_ops.c | 220 +++++++++++++++++++++++++++++++++++
drivers/common/cnxk/version.map | 3 +
3 files changed, 233 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 859995b..84ec09f 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -362,6 +362,16 @@ int __roc_api roc_nix_tm_node_add(struct roc_nix *roc_nix,
int __roc_api roc_nix_tm_node_delete(struct roc_nix *roc_nix, uint32_t node_id,
bool free);
int __roc_api roc_nix_tm_free_resources(struct roc_nix *roc_nix, bool hw_only);
+int __roc_api roc_nix_tm_node_suspend_resume(struct roc_nix *roc_nix,
+ uint32_t node_id, bool suspend);
+int __roc_api roc_nix_tm_node_parent_update(struct roc_nix *roc_nix,
+ uint32_t node_id,
+ uint32_t new_parent_id,
+ uint32_t priority, uint32_t weight);
+int __roc_api roc_nix_tm_node_shaper_update(struct roc_nix *roc_nix,
+ uint32_t node_id,
+ uint32_t profile_id,
+ bool force_update);
int __roc_api roc_nix_tm_node_pkt_mode_update(struct roc_nix *roc_nix,
uint32_t node_id, bool pkt_mode);
int __roc_api roc_nix_tm_shaper_profile_add(
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index 6274f32..b383ef2 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -545,6 +545,226 @@ roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix, enum roc_nix_tm_tree tree,
}
int
+roc_nix_tm_node_suspend_resume(struct roc_nix *roc_nix, uint32_t node_id,
+ bool suspend)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txschq_config *req;
+ struct nix_tm_node *node;
+ uint16_t flags;
+ int rc;
+
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ if (!node)
+ return NIX_ERR_TM_INVALID_NODE;
+
+ flags = node->flags;
+ flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
+ (flags | NIX_TM_NODE_ENABLED);
+
+ if (node->flags == flags)
+ return 0;
+
+ /* send mbox for state change */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+
+ req->lvl = node->hw_lvl;
+ req->num_regs =
+ nix_tm_sw_xoff_prep(node, suspend, req->reg, req->regval);
+ rc = mbox_process(mbox);
+ if (!rc)
+ node->flags = flags;
+ return rc;
+}
+
+int
+roc_nix_tm_node_shaper_update(struct roc_nix *roc_nix, uint32_t node_id,
+ uint32_t profile_id, bool force_update)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_shaper_profile *profile = NULL;
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txschq_config *req;
+ struct nix_tm_node *node;
+ uint8_t k;
+ int rc;
+
+ /* Shaper updates valid only for user nodes */
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ if (!node || nix_tm_is_leaf(nix, node->lvl))
+ return NIX_ERR_TM_INVALID_NODE;
+
+ if (profile_id != ROC_NIX_TM_SHAPER_PROFILE_NONE) {
+ profile = nix_tm_shaper_profile_search(nix, profile_id);
+ if (!profile)
+ return NIX_ERR_TM_INVALID_SHAPER_PROFILE;
+ }
+
+ /* Pkt mode should match existing node's pkt mode */
+ if (profile && profile->pkt_mode != node->pkt_mode)
+ return NIX_ERR_TM_PKT_MODE_MISMATCH;
+
+ if ((profile_id == node->shaper_profile_id) && !force_update) {
+ return 0;
+ } else if (profile_id != node->shaper_profile_id) {
+ struct nix_tm_shaper_profile *old;
+
+ /* Find old shaper profile and reduce ref count */
+ old = nix_tm_shaper_profile_search(nix,
+ node->shaper_profile_id);
+ if (old)
+ old->ref_cnt--;
+
+ if (profile)
+ profile->ref_cnt++;
+
+ /* Attach the new shaper profile to the node */
+ node->shaper_profile_id = profile_id;
+ }
+
+ /* Nothing to do if hierarchy not yet enabled */
+ if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+ return 0;
+
+ node->flags &= ~NIX_TM_NODE_ENABLED;
+
+ /* Flush the specific node with SW_XOFF */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->hw_lvl;
+ k = nix_tm_sw_xoff_prep(node, true, req->reg, req->regval);
+ req->num_regs = k;
+
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Update the PIR/CIR and clear SW XOFF */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->hw_lvl;
+
+ k = nix_tm_shaper_reg_prep(node, profile, req->reg, req->regval);
+
+ k += nix_tm_sw_xoff_prep(node, false, &req->reg[k], &req->regval[k]);
+
+ req->num_regs = k;
+ rc = mbox_process(mbox);
+ if (!rc)
+ node->flags |= NIX_TM_NODE_ENABLED;
+ return rc;
+}
+
+int
+roc_nix_tm_node_parent_update(struct roc_nix *roc_nix, uint32_t node_id,
+ uint32_t new_parent_id, uint32_t priority,
+ uint32_t weight)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_tm_node *node, *sibling;
+ struct nix_tm_node *new_parent;
+ struct nix_txschq_config *req;
+ struct nix_tm_node_list *list;
+ uint8_t k;
+ int rc;
+
+ node = nix_tm_node_search(nix, node_id, ROC_NIX_TM_USER);
+ if (!node)
+ return NIX_ERR_TM_INVALID_NODE;
+
+ /* Parent id is valid only for non-root nodes */
+ if (node->hw_lvl != nix->tm_root_lvl) {
+ new_parent =
+ nix_tm_node_search(nix, new_parent_id, ROC_NIX_TM_USER);
+ if (!new_parent)
+ return NIX_ERR_TM_INVALID_PARENT;
+
+ /* Current support is only for dynamic weight update */
+ if (node->parent != new_parent || node->priority != priority)
+ return NIX_ERR_TM_PARENT_PRIO_UPDATE;
+ }
+
+ list = nix_tm_node_list(nix, ROC_NIX_TM_USER);
+ /* Skip if no change */
+ if (node->weight == weight)
+ return 0;
+
+ node->weight = weight;
+
+ /* Nothing to do if hierarchy not yet enabled */
+ if (!(nix->tm_flags & NIX_TM_HIERARCHY_ENA))
+ return 0;
+
+ /* For leaf nodes, SQ CTX needs update */
+ if (nix_tm_is_leaf(nix, node->lvl)) {
+ /* Update SQ quantum data on the fly */
+ rc = nix_tm_sq_sched_conf(nix, node, true);
+ if (rc)
+ return NIX_ERR_TM_SQ_UPDATE_FAIL;
+ } else {
+ /* XOFF Parent node */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->parent->hw_lvl;
+ req->num_regs = nix_tm_sw_xoff_prep(node->parent, true,
+ req->reg, req->regval);
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* XOFF this node and all other siblings */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->hw_lvl;
+
+ k = 0;
+ TAILQ_FOREACH(sibling, list, node) {
+ if (sibling->parent != node->parent)
+ continue;
+ k += nix_tm_sw_xoff_prep(sibling, true, &req->reg[k],
+ &req->regval[k]);
+ }
+ req->num_regs = k;
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* Update new weight for current node */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->hw_lvl;
+ req->num_regs =
+ nix_tm_sched_reg_prep(nix, node, req->reg, req->regval);
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* XON this node and all other siblings */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->hw_lvl;
+
+ k = 0;
+ TAILQ_FOREACH(sibling, list, node) {
+ if (sibling->parent != node->parent)
+ continue;
+ k += nix_tm_sw_xoff_prep(sibling, false, &req->reg[k],
+ &req->regval[k]);
+ }
+ req->num_regs = k;
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+
+ /* XON Parent node */
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->lvl = node->parent->hw_lvl;
+ req->num_regs = nix_tm_sw_xoff_prep(node->parent, false,
+ req->reg, req->regval);
+ rc = mbox_process(mbox);
+ if (rc)
+ return rc;
+ }
+ return 0;
+}
+
+int
roc_nix_tm_init(struct roc_nix *roc_nix)
{
struct nix *nix = roc_nix_to_nix_priv(roc_nix);
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 4d8f092..22955fe 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -114,7 +114,10 @@ INTERNAL {
roc_nix_tm_node_lvl;
roc_nix_tm_node_name_get;
roc_nix_tm_node_next;
+ roc_nix_tm_node_parent_update;
roc_nix_tm_node_pkt_mode_update;
+ roc_nix_tm_node_shaper_update;
+ roc_nix_tm_node_suspend_resume;
roc_nix_tm_rlimit_sq;
roc_nix_tm_shaper_profile_add;
roc_nix_tm_shaper_profile_delete;
--
2.8.4
* [dpdk-dev] [PATCH 39/52] common/cnxk: add nix tm debug support and misc utils
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (37 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 38/52] common/cnxk: add nix tm dynamic update support Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 40/52] common/cnxk: add npc support Nithin Dabilpuram
` (16 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh,
asekhar, Nithin Dabilpuram
Add support to dump TM HW registers and the hierarchy on error.
This patch also adds miscellaneous utilities such as APIs to query
TM HW resource availability, pre-allocate resources, and check
static priority support on the root node.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix.h | 9 +
drivers/common/cnxk/roc_nix_debug.c | 330 +++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_nix_tm.c | 1 +
drivers/common/cnxk/roc_nix_tm_ops.c | 125 +++++++++++++
drivers/common/cnxk/roc_nix_tm_utils.c | 18 ++
drivers/common/cnxk/roc_utils.c | 108 +++++++++++
drivers/common/cnxk/version.map | 6 +
7 files changed, 597 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 84ec09f..587feae 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -278,6 +278,7 @@ void __roc_api roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
void __roc_api roc_nix_rq_dump(struct roc_nix_rq *rq);
void __roc_api roc_nix_cq_dump(struct roc_nix_cq *cq);
void __roc_api roc_nix_sq_dump(struct roc_nix_sq *sq);
+void __roc_api roc_nix_tm_dump(struct roc_nix *roc_nix);
void __roc_api roc_nix_dump(struct roc_nix *roc_nix);
/* IRQ */
@@ -381,6 +382,10 @@ int __roc_api roc_nix_tm_shaper_profile_update(
int __roc_api roc_nix_tm_shaper_profile_delete(struct roc_nix *roc_nix,
uint32_t id);
+int __roc_api roc_nix_tm_prealloc_res(struct roc_nix *roc_nix, uint8_t lvl,
+ uint16_t discontig, uint16_t contig);
+uint16_t __roc_api roc_nix_tm_leaf_cnt(struct roc_nix *roc_nix);
+
struct roc_nix_tm_node *__roc_api roc_nix_tm_node_get(struct roc_nix *roc_nix,
uint32_t node_id);
struct roc_nix_tm_node *__roc_api
@@ -407,6 +412,10 @@ int __roc_api roc_nix_tm_hierarchy_enable(struct roc_nix *roc_nix,
* TM utilities API.
*/
int __roc_api roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id);
+bool __roc_api roc_nix_tm_root_has_sp(struct roc_nix *roc_nix);
+void __roc_api roc_nix_tm_rsrc_max(bool pf, uint16_t schq[ROC_TM_LVL_MAX]);
+int __roc_api roc_nix_tm_rsrc_count(struct roc_nix *roc_nix,
+ uint16_t schq[ROC_TM_LVL_MAX]);
int __roc_api roc_nix_tm_node_name_get(struct roc_nix *roc_nix,
uint32_t node_id, char *buf,
size_t buflen);
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 9794f2c..6c66d48 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -44,6 +44,33 @@ static const struct nix_lf_reg_info nix_lf_reg[] = {
NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
};
+static void
+nix_bitmap_dump(struct plt_bitmap *bmp)
+{
+ uint32_t pos = 0, start_pos;
+ uint64_t slab = 0;
+ int i;
+
+ plt_bitmap_scan_init(bmp);
+ plt_bitmap_scan(bmp, &pos, &slab);
+ start_pos = pos;
+
+ nix_dump_no_nl(" \t\t[");
+ do {
+ if (!slab)
+ break;
+
+ for (i = 0; i < 64; i++)
+ if (slab & (1ULL << i))
+ nix_dump_no_nl("%d, ", i);
+
+ if (!plt_bitmap_scan(bmp, &pos, &slab))
+ break;
+ } while (start_pos != pos);
+ nix_dump_no_nl(" ]");
+}
+
int
roc_nix_lf_get_reg_count(struct roc_nix *roc_nix)
{
@@ -761,6 +788,309 @@ roc_nix_sq_dump(struct roc_nix_sq *sq)
nix_dump(" fc = %p", sq->fc);
};
+static uint8_t
+nix_tm_reg_dump_prep(uint16_t hw_lvl, uint16_t schq, uint16_t link,
+ uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
+{
+ uint8_t k = 0;
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ reg[k] = NIX_AF_SMQX_CFG(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_SMQ[%u]_CFG",
+ schq);
+
+ reg[k] = NIX_AF_MDQX_PARENT(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_PARENT",
+ schq);
+
+ reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_MDQ[%u]_SCHEDULE", schq);
+
+ reg[k] = NIX_AF_MDQX_PIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_PIR",
+ schq);
+
+ reg[k] = NIX_AF_MDQX_CIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_CIR",
+ schq);
+
+ reg[k] = NIX_AF_MDQX_SHAPE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_SHAPE",
+ schq);
+
+ reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_MDQ[%u]_SW_XOFF",
+ schq);
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ reg[k] = NIX_AF_TL4X_PARENT(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_PARENT",
+ schq);
+
+ reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL4[%u]_TOPOLOGY", schq);
+
+ reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
+
+ reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL4[%u]_SCHEDULE", schq);
+
+ reg[k] = NIX_AF_TL4X_PIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_PIR",
+ schq);
+
+ reg[k] = NIX_AF_TL4X_CIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_CIR",
+ schq);
+
+ reg[k] = NIX_AF_TL4X_SHAPE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_SHAPE",
+ schq);
+
+ reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL4[%u]_SW_XOFF",
+ schq);
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ reg[k] = NIX_AF_TL3X_PARENT(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_PARENT",
+ schq);
+
+ reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL3[%u]_TOPOLOGY", schq);
+
+ reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+ reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL3[%u]_SCHEDULE", schq);
+
+ reg[k] = NIX_AF_TL3X_PIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_PIR",
+ schq);
+
+ reg[k] = NIX_AF_TL3X_CIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_CIR",
+ schq);
+
+ reg[k] = NIX_AF_TL3X_SHAPE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_SHAPE",
+ schq);
+
+ reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL3[%u]_SW_XOFF",
+ schq);
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ reg[k] = NIX_AF_TL2X_PARENT(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_PARENT",
+ schq);
+
+ reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL2[%u]_TOPOLOGY", schq);
+
+ reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
+
+ reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL2[%u]_SCHEDULE", schq);
+
+ reg[k] = NIX_AF_TL2X_PIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_PIR",
+ schq);
+
+ reg[k] = NIX_AF_TL2X_CIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_CIR",
+ schq);
+
+ reg[k] = NIX_AF_TL2X_SHAPE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_SHAPE",
+ schq);
+
+ reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL2[%u]_SW_XOFF",
+ schq);
+ break;
+ case NIX_TXSCH_LVL_TL1:
+
+ reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL1[%u]_TOPOLOGY", schq);
+
+ reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL1[%u]_SCHEDULE", schq);
+
+ reg[k] = NIX_AF_TL1X_CIR(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL1[%u]_CIR",
+ schq);
+
+ reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ, "NIX_AF_TL1[%u]_SW_XOFF",
+ schq);
+
+ reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
+ snprintf(regstr[k++], NIX_REG_NAME_SZ,
+ "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
+ break;
+ default:
+ break;
+ }
+
+ if (k > MAX_REGS_PER_MBOX_MSG) {
+ nix_dump("\t!!!NIX TM Registers request overflow!!!");
+ return 0;
+ }
+ return k;
+}
+
+static void
+nix_tm_dump_lvl(struct nix *nix, struct nix_tm_node_list *list, uint8_t hw_lvl)
+{
+ char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
+ uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txschq_config *req, *rsp;
+ const char *lvlstr, *parent_lvlstr;
+ struct nix_tm_node *node, *parent;
+ struct nix_tm_node *root = NULL;
+ uint32_t schq, parent_schq;
+ bool found = false;
+ uint8_t j, k;
+ int rc;
+
+ TAILQ_FOREACH(node, list, node) {
+ if (node->hw_lvl != hw_lvl)
+ continue;
+
+ found = true;
+ parent = node->parent;
+ if (hw_lvl == NIX_TXSCH_LVL_CNT) {
+ lvlstr = "SQ";
+ schq = node->id;
+ } else {
+ lvlstr = nix_tm_hwlvl2str(node->hw_lvl);
+ schq = node->hw_id;
+ }
+
+ if (parent) {
+ parent_schq = parent->hw_id;
+ parent_lvlstr = nix_tm_hwlvl2str(parent->hw_lvl);
+ } else if (node->hw_lvl == NIX_TXSCH_LVL_TL1) {
+ parent_schq = nix->tx_link;
+ parent_lvlstr = "LINK";
+ } else {
+ parent_schq = node->parent_hw_id;
+ parent_lvlstr = nix_tm_hwlvl2str(node->hw_lvl + 1);
+ }
+
+ nix_dump("\t(%p%s) %s_%d->%s_%d", node,
+ node->child_realloc ? "[CR]" : "", lvlstr, schq,
+ parent_lvlstr, parent_schq);
+
+ if (!(node->flags & NIX_TM_NODE_HWRES))
+ continue;
+
+ /* Need to dump TL1 when root is TL2 */
+ if (node->hw_lvl == nix->tm_root_lvl)
+ root = node;
+
+ /* Dump registers only when HWRES is present */
+ k = nix_tm_reg_dump_prep(node->hw_lvl, schq, nix->tx_link, reg,
+ regstr);
+ if (!k)
+ continue;
+
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->read = 1;
+ req->lvl = node->hw_lvl;
+ req->num_regs = k;
+ mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+ rc = mbox_process_msg(mbox, (void **)&rsp);
+ if (!rc) {
+ for (j = 0; j < k; j++)
+ nix_dump("\t\t%s=0x%016" PRIx64, regstr[j],
+ rsp->regval[j]);
+ } else {
+ nix_dump("\t!!!Failed to dump registers!!!");
+ }
+ }
+
+ if (found)
+ nix_dump("\n");
+
+ /* Dump TL1 node data when root level is TL2 */
+ if (root && root->hw_lvl == NIX_TXSCH_LVL_TL2) {
+ k = nix_tm_reg_dump_prep(NIX_TXSCH_LVL_TL1, root->parent_hw_id,
+ nix->tx_link, reg, regstr);
+ if (!k)
+ return;
+
+ req = mbox_alloc_msg_nix_txschq_cfg(mbox);
+ req->read = 1;
+ req->lvl = NIX_TXSCH_LVL_TL1;
+ req->num_regs = k;
+ mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
+ rc = mbox_process_msg(mbox, (void **)&rsp);
+ if (!rc) {
+ for (j = 0; j < k; j++)
+ nix_dump("\t\t%s=0x%016" PRIx64, regstr[j],
+ rsp->regval[j]);
+ } else {
+ nix_dump("\t!!!Failed to dump registers!!!");
+ }
+ nix_dump("\n");
+ }
+}
+
+void
+roc_nix_tm_dump(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct dev *dev = &nix->dev;
+ uint8_t hw_lvl, i;
+
+ nix_dump("===TM hierarchy and registers dump of %s (pf:vf) (%d:%d)===",
+ nix->pci_dev->name, dev_get_pf(dev->pf_func),
+ dev_get_vf(dev->pf_func));
+
+ /* Dump all trees */
+ for (i = 0; i < ROC_NIX_TM_TREE_MAX; i++) {
+ nix_dump("\tTM %s:", nix_tm_tree2str(i));
+ for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++)
+ nix_tm_dump_lvl(nix, &nix->trees[i], hw_lvl);
+ }
+
+ /* Dump unused resources */
+ nix_dump("\tTM unused resources:");
+ hw_lvl = NIX_TXSCH_LVL_SMQ;
+ for (; hw_lvl < NIX_TXSCH_LVL_CNT; hw_lvl++) {
+ nix_dump("\t\ttxschq %7s num = %d",
+ nix_tm_hwlvl2str(hw_lvl),
+ nix_tm_resource_avail(nix, hw_lvl, false));
+
+ nix_bitmap_dump(nix->schq_bmp[hw_lvl]);
+ nix_dump("\n");
+
+ nix_dump("\t\ttxschq_contig %7s num = %d",
+ nix_tm_hwlvl2str(hw_lvl),
+ nix_tm_resource_avail(nix, hw_lvl, true));
+ nix_bitmap_dump(nix->schq_contig_bmp[hw_lvl]);
+ nix_dump("\n");
+ }
+}
+
void
roc_nix_dump(struct roc_nix *roc_nix)
{
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 90bf8de..fa5b9e4 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -393,6 +393,7 @@ roc_nix_tm_sq_flush_spin(struct roc_nix_sq *sq)
return 0;
exit:
+ roc_nix_tm_dump(sq->roc_nix);
roc_nix_queues_ctx_dump(sq->roc_nix);
return -EFAULT;
}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index b383ef2..fb02239 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -579,6 +579,58 @@ roc_nix_tm_node_suspend_resume(struct roc_nix *roc_nix, uint32_t node_id,
}
int
+roc_nix_tm_prealloc_res(struct roc_nix *roc_nix, uint8_t lvl,
+ uint16_t discontig, uint16_t contig)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct nix_txsch_alloc_req *req;
+ struct nix_txsch_alloc_rsp *rsp;
+ uint8_t hw_lvl;
+ int rc = -ENOSPC;
+
+ hw_lvl = nix_tm_lvl2nix(nix, lvl);
+ if (hw_lvl == NIX_TXSCH_LVL_CNT)
+ return -EINVAL;
+
+ /* Preallocate contiguous */
+ if (nix->contig_rsvd[hw_lvl] < contig) {
+ req = mbox_alloc_msg_nix_txsch_alloc(mbox);
+ if (req == NULL)
+ return rc;
+ req->schq_contig[hw_lvl] = contig - nix->contig_rsvd[hw_lvl];
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ nix_tm_copy_rsp_to_nix(nix, rsp);
+ }
+
+ /* Preallocate discontiguous */
+ if (nix->discontig_rsvd[hw_lvl] < discontig) {
+ req = mbox_alloc_msg_nix_txsch_alloc(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->schq[hw_lvl] = discontig - nix->discontig_rsvd[hw_lvl];
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ nix_tm_copy_rsp_to_nix(nix, rsp);
+ }
+
+ /* Save thresholds */
+ nix->contig_rsvd[hw_lvl] = contig;
+ nix->discontig_rsvd[hw_lvl] = discontig;
+ /* Release anything present above thresholds */
+ nix_tm_release_resources(nix, hw_lvl, true, true);
+ nix_tm_release_resources(nix, hw_lvl, false, true);
+ return 0;
+}
+
+int
roc_nix_tm_node_shaper_update(struct roc_nix *roc_nix, uint32_t node_id,
uint32_t profile_id, bool force_update)
{
@@ -904,3 +956,76 @@ roc_nix_tm_fini(struct roc_nix *roc_nix)
nix->tm_tree = 0;
nix->tm_flags &= ~NIX_TM_HIERARCHY_ENA;
}
+
+int
+roc_nix_tm_rsrc_count(struct roc_nix *roc_nix, uint16_t schq[ROC_TM_LVL_MAX])
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct mbox *mbox = (&nix->dev)->mbox;
+ struct free_rsrcs_rsp *rsp;
+ uint8_t hw_lvl;
+ int rc, i;
+
+ /* Get the current free resources */
+ mbox_alloc_msg_free_rsrc_cnt(mbox);
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ for (i = 0; i < ROC_TM_LVL_MAX; i++) {
+ hw_lvl = nix_tm_lvl2nix(nix, i);
+ if (hw_lvl == NIX_TXSCH_LVL_CNT)
+ continue;
+
+ schq[i] = (nix->is_nix1 ? rsp->schq_nix1[hw_lvl] :
+ rsp->schq[hw_lvl]);
+ }
+
+ return 0;
+}
+
+void
+roc_nix_tm_rsrc_max(bool pf, uint16_t schq[ROC_TM_LVL_MAX])
+{
+ uint8_t hw_lvl, i;
+ uint16_t max;
+
+ for (i = 0; i < ROC_TM_LVL_MAX; i++) {
+ hw_lvl = pf ? nix_tm_lvl2nix_tl1_root(i) :
+ nix_tm_lvl2nix_tl2_root(i);
+
+ switch (hw_lvl) {
+ case NIX_TXSCH_LVL_SMQ:
+ max = (roc_model_is_cn9k() ?
+ NIX_CN9K_TXSCH_LVL_SMQ_MAX :
+ NIX_TXSCH_LVL_SMQ_MAX);
+ break;
+ case NIX_TXSCH_LVL_TL4:
+ max = NIX_TXSCH_LVL_TL4_MAX;
+ break;
+ case NIX_TXSCH_LVL_TL3:
+ max = NIX_TXSCH_LVL_TL3_MAX;
+ break;
+ case NIX_TXSCH_LVL_TL2:
+ max = pf ? NIX_TXSCH_LVL_TL2_MAX : 1;
+ break;
+ case NIX_TXSCH_LVL_TL1:
+ max = pf ? 1 : 0;
+ break;
+ default:
+ max = 0;
+ break;
+ }
+ schq[i] = max;
+ }
+}
+
+bool
+roc_nix_tm_root_has_sp(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+ if (nix->tm_flags & NIX_TM_TL1_NO_SP)
+ return false;
+ return true;
+}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index cb130e0..6d3a580 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -868,6 +868,24 @@ nix_tm_resource_estimate(struct nix *nix, uint16_t *schq_contig, uint16_t *schq,
return cnt;
}
+uint16_t
+roc_nix_tm_leaf_cnt(struct roc_nix *roc_nix)
+{
+ struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+ struct nix_tm_node_list *list;
+ struct nix_tm_node *node;
+ uint16_t leaf_cnt = 0;
+
+ /* Count leaf nodes only in the user list */
+ list = nix_tm_node_list(nix, ROC_NIX_TM_USER);
+ TAILQ_FOREACH(node, list, node) {
+ if (node->id < nix->nb_tx_queues)
+ leaf_cnt++;
+ }
+
+ return leaf_cnt;
+}
+
int
roc_nix_tm_node_lvl(struct roc_nix *roc_nix, uint32_t node_id)
{
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
index 8ccbe2a..43ba177 100644
--- a/drivers/common/cnxk/roc_utils.c
+++ b/drivers/common/cnxk/roc_utils.c
@@ -38,6 +38,69 @@ roc_error_msg_get(int errorcode)
case NIX_ERR_AQ_WRITE_FAILED:
err_msg = "AQ write failed";
break;
+ case NIX_ERR_TM_LEAF_NODE_GET:
+ err_msg = "TM leaf node get failed";
+ break;
+ case NIX_ERR_TM_INVALID_LVL:
+ err_msg = "TM node level invalid";
+ break;
+ case NIX_ERR_TM_INVALID_PRIO:
+ err_msg = "TM node priority invalid";
+ break;
+ case NIX_ERR_TM_INVALID_PARENT:
+ err_msg = "TM parent id invalid";
+ break;
+ case NIX_ERR_TM_NODE_EXISTS:
+ err_msg = "TM node exists";
+ break;
+ case NIX_ERR_TM_INVALID_NODE:
+ err_msg = "TM node id invalid";
+ break;
+ case NIX_ERR_TM_INVALID_SHAPER_PROFILE:
+ err_msg = "TM shaper profile invalid";
+ break;
+ case NIX_ERR_TM_WEIGHT_EXCEED:
+ err_msg = "TM DWRR weight exceeded";
+ break;
+ case NIX_ERR_TM_CHILD_EXISTS:
+ err_msg = "TM node children exist";
+ break;
+ case NIX_ERR_TM_INVALID_PEAK_SZ:
+ err_msg = "TM peak size invalid";
+ break;
+ case NIX_ERR_TM_INVALID_PEAK_RATE:
+ err_msg = "TM peak rate invalid";
+ break;
+ case NIX_ERR_TM_INVALID_COMMIT_SZ:
+ err_msg = "TM commit size invalid";
+ break;
+ case NIX_ERR_TM_INVALID_COMMIT_RATE:
+ err_msg = "TM commit rate invalid";
+ break;
+ case NIX_ERR_TM_SHAPER_PROFILE_IN_USE:
+ err_msg = "TM shaper profile in use";
+ break;
+ case NIX_ERR_TM_SHAPER_PROFILE_EXISTS:
+ err_msg = "TM shaper profile exists";
+ break;
+ case NIX_ERR_TM_INVALID_TREE:
+ err_msg = "TM tree invalid";
+ break;
+ case NIX_ERR_TM_PARENT_PRIO_UPDATE:
+ err_msg = "TM node parent and prio update failed";
+ break;
+ case NIX_ERR_TM_PRIO_EXCEEDED:
+ err_msg = "TM node priority exceeded";
+ break;
+ case NIX_ERR_TM_PRIO_ORDER:
+ err_msg = "TM node priority not in order";
+ break;
+ case NIX_ERR_TM_MULTIPLE_RR_GROUPS:
+ err_msg = "TM multiple RR groups";
+ break;
+ case NIX_ERR_TM_SQ_UPDATE_FAIL:
+ err_msg = "TM SQ update failed";
+ break;
case NIX_ERR_NDC_SYNC:
err_msg = "NDC Sync failed";
break;
@@ -75,9 +138,54 @@ roc_error_msg_get(int errorcode)
case NIX_AF_ERR_AF_LF_ALLOC:
err_msg = "NIX LF alloc failed";
break;
+ case NIX_AF_ERR_TLX_INVALID:
+ err_msg = "Invalid NIX TLX";
+ break;
+ case NIX_AF_ERR_TLX_ALLOC_FAIL:
+ err_msg = "NIX TLX alloc failed";
+ break;
+ case NIX_AF_ERR_RSS_SIZE_INVALID:
+ err_msg = "Invalid RSS size";
+ break;
+ case NIX_AF_ERR_RSS_GRPS_INVALID:
+ err_msg = "Invalid RSS groups";
+ break;
+ case NIX_AF_ERR_FRS_INVALID:
+ err_msg = "Invalid frame size";
+ break;
+ case NIX_AF_ERR_RX_LINK_INVALID:
+ err_msg = "Invalid Rx link";
+ break;
+ case NIX_AF_INVAL_TXSCHQ_CFG:
+ err_msg = "Invalid Tx scheduling config";
+ break;
+ case NIX_AF_SMQ_FLUSH_FAILED:
+ err_msg = "SMQ flush failed";
+ break;
case NIX_AF_ERR_LF_RESET:
err_msg = "NIX LF reset failed";
break;
+ case NIX_AF_ERR_MARK_CFG_FAIL:
+ err_msg = "Marking config failed";
+ break;
+ case NIX_AF_ERR_LSO_CFG_FAIL:
+ err_msg = "LSO config failed";
+ break;
+ case NIX_AF_INVAL_NPA_PF_FUNC:
+ err_msg = "Invalid NPA pf_func";
+ break;
+ case NIX_AF_INVAL_SSO_PF_FUNC:
+ err_msg = "Invalid SSO pf_func";
+ break;
+ case NIX_AF_ERR_TX_VTAG_NOSPC:
+ err_msg = "No space for Tx VTAG";
+ break;
+ case NIX_AF_ERR_RX_VTAG_INUSE:
+ err_msg = "Rx VTAG is in use";
+ break;
+ case NIX_AF_ERR_PTP_CONFIG_FAIL:
+ err_msg = "PTP config failed";
+ break;
case UTIL_ERR_FS:
err_msg = "file operation failed";
break;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 22955fe..829d471 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -103,11 +103,13 @@ INTERNAL {
roc_nix_xstats_names_get;
roc_nix_switch_hdr_set;
roc_nix_eeprom_info_get;
+ roc_nix_tm_dump;
roc_nix_tm_fini;
roc_nix_tm_free_resources;
roc_nix_tm_hierarchy_disable;
roc_nix_tm_hierarchy_enable;
roc_nix_tm_init;
+ roc_nix_tm_leaf_cnt;
roc_nix_tm_node_add;
roc_nix_tm_node_delete;
roc_nix_tm_node_get;
@@ -118,7 +120,11 @@ INTERNAL {
roc_nix_tm_node_pkt_mode_update;
roc_nix_tm_node_shaper_update;
roc_nix_tm_node_suspend_resume;
+ roc_nix_tm_prealloc_res;
roc_nix_tm_rlimit_sq;
+ roc_nix_tm_root_has_sp;
+ roc_nix_tm_rsrc_count;
+ roc_nix_tm_rsrc_max;
roc_nix_tm_shaper_profile_add;
roc_nix_tm_shaper_profile_delete;
roc_nix_tm_shaper_profile_get;
--
2.8.4
* [dpdk-dev] [PATCH 40/52] common/cnxk: add npc support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (38 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 39/52] common/cnxk: add nix tm debug support and misc utils Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-26 14:23 ` Jerin Jacob
2021-03-05 13:39 ` [dpdk-dev] [PATCH 41/52] common/cnxk: add npc helper API Nithin Dabilpuram
` (15 subsequent siblings)
55 siblings, 1 reply; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Kiran Kumar K <kirankumark@marvell.com>
Add initial NPC support.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/common/cnxk/roc_api.h | 3 +
drivers/common/cnxk/roc_npc.h | 129 +++++++++++++
drivers/common/cnxk/roc_npc_priv.h | 381 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_platform.c | 1 +
drivers/common/cnxk/roc_platform.h | 2 +
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/roc_utils.c | 24 +++
drivers/common/cnxk/version.map | 1 +
8 files changed, 544 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npc.h
create mode 100644 drivers/common/cnxk/roc_npc_priv.h
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index b805425..44bed9a 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -82,6 +82,9 @@
/* NPA */
#include "roc_npa.h"
+/* NPC */
+#include "roc_npc.h"
+
/* NIX */
#include "roc_nix.h"
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
new file mode 100644
index 0000000..f273976
--- /dev/null
+++ b/drivers/common/cnxk/roc_npc.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_NPC_H_
+#define _ROC_NPC_H_
+
+#include <sys/queue.h>
+
+enum roc_npc_item_type {
+ ROC_NPC_ITEM_TYPE_VOID,
+ ROC_NPC_ITEM_TYPE_ANY,
+ ROC_NPC_ITEM_TYPE_ETH,
+ ROC_NPC_ITEM_TYPE_VLAN,
+ ROC_NPC_ITEM_TYPE_E_TAG,
+ ROC_NPC_ITEM_TYPE_IPV4,
+ ROC_NPC_ITEM_TYPE_IPV6,
+ ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4,
+ ROC_NPC_ITEM_TYPE_MPLS,
+ ROC_NPC_ITEM_TYPE_ICMP,
+ ROC_NPC_ITEM_TYPE_IGMP,
+ ROC_NPC_ITEM_TYPE_UDP,
+ ROC_NPC_ITEM_TYPE_TCP,
+ ROC_NPC_ITEM_TYPE_SCTP,
+ ROC_NPC_ITEM_TYPE_ESP,
+ ROC_NPC_ITEM_TYPE_GRE,
+ ROC_NPC_ITEM_TYPE_NVGRE,
+ ROC_NPC_ITEM_TYPE_VXLAN,
+ ROC_NPC_ITEM_TYPE_GTPC,
+ ROC_NPC_ITEM_TYPE_GTPU,
+ ROC_NPC_ITEM_TYPE_GENEVE,
+ ROC_NPC_ITEM_TYPE_VXLAN_GPE,
+ ROC_NPC_ITEM_TYPE_IPV6_EXT,
+ ROC_NPC_ITEM_TYPE_GRE_KEY,
+ ROC_NPC_ITEM_TYPE_HIGIG2,
+ ROC_NPC_ITEM_TYPE_CPT_HDR,
+ ROC_NPC_ITEM_TYPE_L3_CUSTOM,
+ ROC_NPC_ITEM_TYPE_QINQ,
+ ROC_NPC_ITEM_TYPE_END,
+};
+
+struct roc_npc_item_info {
+ enum roc_npc_item_type type; /* Item type */
+ uint32_t size; /* item size */
+ const void *spec; /**< Pointer to item specification structure. */
+ const void *mask; /**< Bit-mask applied to spec and last. */
+ const void *last; /* For range */
+};
+
+#define ROC_NPC_MAX_ACTION_COUNT 12
+
+enum roc_npc_action_type {
+ ROC_NPC_ACTION_TYPE_END = (1 << 0),
+ ROC_NPC_ACTION_TYPE_VOID = (1 << 1),
+ ROC_NPC_ACTION_TYPE_MARK = (1 << 2),
+ ROC_NPC_ACTION_TYPE_FLAG = (1 << 3),
+ ROC_NPC_ACTION_TYPE_DROP = (1 << 4),
+ ROC_NPC_ACTION_TYPE_QUEUE = (1 << 5),
+ ROC_NPC_ACTION_TYPE_RSS = (1 << 6),
+ ROC_NPC_ACTION_TYPE_DUP = (1 << 7),
+ ROC_NPC_ACTION_TYPE_SEC = (1 << 8),
+ ROC_NPC_ACTION_TYPE_COUNT = (1 << 9),
+ ROC_NPC_ACTION_TYPE_PF = (1 << 10),
+ ROC_NPC_ACTION_TYPE_VF = (1 << 11),
+};
+
+struct roc_npc_action {
+ enum roc_npc_action_type type; /**< Action type. */
+ const void *conf; /**< Pointer to action configuration object. */
+};
+
+struct roc_npc_action_mark {
+ uint32_t id; /**< Integer value to return with packets. */
+};
+
+struct roc_npc_action_vf {
+ uint32_t original : 1; /**< Use original VF ID if possible. */
+ uint32_t reserved : 31; /**< Reserved, must be zero. */
+ uint32_t id; /**< VF ID. */
+};
+
+struct roc_npc_action_queue {
+ uint16_t index; /**< Queue index to use. */
+};
+
+struct roc_npc_attr {
+ uint32_t priority; /**< Rule priority level within group. */
+ uint32_t ingress : 1; /**< Rule applies to ingress traffic. */
+ uint32_t egress : 1; /**< Rule applies to egress traffic. */
+ uint32_t reserved : 30; /**< Reserved, must be zero. */
+};
+
+struct roc_npc_flow {
+ uint8_t nix_intf;
+ uint8_t enable;
+ uint32_t mcam_id;
+ int32_t ctr_id;
+ uint32_t priority;
+#define ROC_NPC_MAX_MCAM_WIDTH_DWORDS 7
+ /* Contiguous match string */
+ uint64_t mcam_data[ROC_NPC_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t mcam_mask[ROC_NPC_MAX_MCAM_WIDTH_DWORDS];
+ uint64_t npc_action;
+ uint64_t vtag_action;
+
+ TAILQ_ENTRY(roc_npc_flow) next;
+};
+
+enum roc_npc_intf {
+ ROC_NPC_INTF_RX = 0,
+ ROC_NPC_INTF_TX = 1,
+ ROC_NPC_INTF_MAX = 2,
+};
+
+struct roc_npc {
+ struct roc_nix *roc_nix;
+ uint8_t switch_header_type;
+ uint16_t flow_prealloc_size;
+ uint16_t flow_max_priority;
+ uint16_t channel;
+ uint16_t pf_func;
+ uint64_t kex_capability;
+ uint64_t rx_parse_nibble;
+
+#define ROC_NPC_MEM_SZ (5 * 1024)
+ uint8_t reserved[ROC_NPC_MEM_SZ];
+} __plt_cache_aligned;
+
+#endif /* _ROC_NPC_H_ */
diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h
new file mode 100644
index 0000000..0c3947a
--- /dev/null
+++ b/drivers/common/cnxk/roc_npc_priv.h
@@ -0,0 +1,381 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+
+#ifndef _ROC_NPC_PRIV_H_
+#define _ROC_NPC_PRIV_H_
+
+#define NPC_IH_LENGTH 8
+#define NPC_TPID_LENGTH 2
+#define NPC_HIGIG2_LENGTH 16
+#define NPC_COUNTER_NONE (-1)
+
+#define NPC_RSS_GRPS 8
+
+#define NPC_ACTION_FLAG_DEFAULT 0xffff
+
+#define NPC_PFVF_FUNC_MASK 0x3FF
+
+/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
+#define NPC_MAX_EXTRACT_DATA_LEN (64)
+#define NPC_MAX_EXTRACT_HW_LEN (4 * NPC_MAX_EXTRACT_DATA_LEN)
+#define NPC_LDATA_LFLAG_LEN (16)
+#define NPC_MAX_KEY_NIBBLES (31)
+
+/* Nibble offsets */
+#define NPC_LAYER_KEYX_SZ (3)
+#define NPC_PARSE_KEX_S_LA_OFFSET (7)
+#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
+ ((((lid) - (NPC_LID_LA)) * NPC_LAYER_KEYX_SZ) + \
+ NPC_PARSE_KEX_S_LA_OFFSET)
+
+/* This mark value indicates flag action */
+#define NPC_FLOW_FLAG_VAL (0xffff)
+
+#define NPC_RX_ACT_MATCH_OFFSET (40)
+#define NPC_RX_ACT_MATCH_MASK (0xFFFF)
+
+#define NPC_RSS_ACT_GRP_OFFSET (20)
+#define NPC_RSS_ACT_ALG_OFFSET (56)
+#define NPC_RSS_ACT_GRP_MASK (0xFFFFF)
+#define NPC_RSS_ACT_ALG_MASK (0x1F)
+
+#define NPC_MCAM_KEX_FIELD_MAX 23
+#define NPC_MCAM_MAX_PROTO_FIELDS (NPC_MCAM_KEX_FIELD_MAX + 1)
+#define NPC_MCAM_KEY_X4_WORDS 7 /* Number of 64-bit words */
+
+#define NPC_RVUPF_MAX_9XXX 0x10 /* HRM: RVU_PRIV_CONST */
+#define NPC_RVUPF_MAX_10XX 0x20 /* HRM: RVU_PRIV_CONST */
+#define NPC_NIXLF_MAX 0x80 /* HRM: NIX_AF_CONST2 */
+#define NPC_MCAME_PER_PF 2 /* DRV: RSVD_MCAM_ENTRIES_PER_PF */
+#define NPC_MCAME_PER_LF 1 /* DRV: RSVD_MCAM_ENTRIES_PER_NIXLF */
+#define NPC_MCAME_RESVD_9XXX \
+ (NPC_NIXLF_MAX * NPC_MCAME_PER_LF + \
+ (NPC_RVUPF_MAX_9XXX - 1) * NPC_MCAME_PER_PF)
+
+#define NPC_MCAME_RESVD_10XX \
+ (NPC_NIXLF_MAX * NPC_MCAME_PER_LF + \
+ (NPC_RVUPF_MAX_10XX - 1) * NPC_MCAME_PER_PF)
+
+enum npc_err_status {
+ NPC_ERR_PARAM = -1024,
+ NPC_ERR_NO_MEM,
+ NPC_ERR_INVALID_SPEC,
+ NPC_ERR_INVALID_MASK,
+ NPC_ERR_INVALID_RANGE,
+ NPC_ERR_INVALID_KEX,
+ NPC_ERR_INVALID_SIZE,
+ NPC_ERR_INTERNAL,
+ NPC_ERR_MCAM_ALLOC,
+ NPC_ERR_ACTION_NOTSUP,
+ NPC_ERR_PATTERN_NOTSUP,
+};
+
+enum npc_mcam_intf { NPC_MCAM_RX, NPC_MCAM_TX };
+
+typedef union npc_kex_cap_terms_t {
+ struct {
+ /** Total length of received packet */
+ uint64_t len : 1;
+ /** Initial (outer) Ethertype only */
+ uint64_t ethtype_0 : 1;
+ /** Ethertype of the innermost VLAN tag */
+ uint64_t ethtype_x : 1;
+ /** First VLAN ID (outer) */
+ uint64_t vlan_id_0 : 1;
+ /** Last VLAN ID (inner) */
+ uint64_t vlan_id_x : 1;
+ /** Destination MAC address */
+ uint64_t dmac : 1;
+ /** IP Protocol or IPv6 Next Header */
+ uint64_t ip_proto : 1;
+ /** Destination UDP port, implies IPPROTO=17 */
+ uint64_t udp_dport : 1;
+ /** Destination TCP port, implies IPPROTO=6 */
+ uint64_t tcp_dport : 1;
+ /** Source UDP Port */
+ uint64_t udp_sport : 1;
+ /** Source TCP port */
+ uint64_t tcp_sport : 1;
+ /** Source IP address */
+ uint64_t sip_addr : 1;
+ /** Destination IP address */
+ uint64_t dip_addr : 1;
+ /** Source IPv6 address */
+ uint64_t sip6_addr : 1;
+ /** Destination IPv6 address */
+ uint64_t dip6_addr : 1;
+ /** IPsec session identifier */
+ uint64_t ipsec_spi : 1;
+ /** NVGRE/VXLAN network identifier */
+ uint64_t ld_vni : 1;
+ /** Custom frame match rule. PMR offset is counted from
+ * the start of the packet.
+ */
+ uint64_t custom_frame : 1;
+ /** Custom layer 3 match rule. PMR offset is counted from
+ * the start of layer 3 in the packet.
+ */
+ uint64_t custom_l3 : 1;
+ /** IGMP Group address */
+ uint64_t igmp_grp_addr : 1;
+ /** ICMP identifier */
+ uint64_t icmp_id : 1;
+ /** ICMP type */
+ uint64_t icmp_type : 1;
+ /** ICMP code */
+ uint64_t icmp_code : 1;
+ /** Source SCTP port */
+ uint64_t sctp_sport : 1;
+ /** Destination SCTP port */
+ uint64_t sctp_dport : 1;
+ /** GTPU Tunnel endpoint identifier */
+ uint64_t gtpu_teid : 1;
+
+ } bit;
+ /** All bits of the bit field structure */
+ uint64_t all_bits;
+} npc_kex_cap_terms_t;
+
+struct npc_parse_item_info {
+ const void *def_mask; /* default mask */
+ void *hw_mask; /* hardware supported mask */
+ int len; /* length of item */
+ const void *spec; /* spec to use, NULL implies match any */
+ const void *mask; /* mask to use */
+ uint8_t hw_hdr_len; /* Extra data len at each layer */
+};
+
+struct npc_parse_state {
+ struct npc *npc;
+ const struct roc_npc_item_info *pattern;
+ const struct roc_npc_item_info *last_pattern;
+ struct roc_npc_flow *flow;
+ uint8_t nix_intf;
+ uint8_t tunnel;
+ uint8_t terminate;
+ uint8_t layer_mask;
+ uint8_t lt[NPC_MAX_LID];
+ uint8_t flags[NPC_MAX_LID];
+ uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
+ uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
+ bool is_vf;
+};
+
+enum npc_kpu_parser_flag {
+ NPC_F_NA = 0,
+ NPC_F_PKI,
+ NPC_F_PKI_VLAN,
+ NPC_F_PKI_ETAG,
+ NPC_F_PKI_ITAG,
+ NPC_F_PKI_MPLS,
+ NPC_F_PKI_NSH,
+ NPC_F_ETYPE_UNK,
+ NPC_F_ETHER_VLAN,
+ NPC_F_ETHER_ETAG,
+ NPC_F_ETHER_ITAG,
+ NPC_F_ETHER_MPLS,
+ NPC_F_ETHER_NSH,
+ NPC_F_STAG_CTAG,
+ NPC_F_STAG_CTAG_UNK,
+ NPC_F_STAG_STAG_CTAG,
+ NPC_F_STAG_STAG_STAG,
+ NPC_F_QINQ_CTAG,
+ NPC_F_QINQ_CTAG_UNK,
+ NPC_F_QINQ_QINQ_CTAG,
+ NPC_F_QINQ_QINQ_QINQ,
+ NPC_F_BTAG_ITAG,
+ NPC_F_BTAG_ITAG_STAG,
+ NPC_F_BTAG_ITAG_CTAG,
+ NPC_F_BTAG_ITAG_UNK,
+ NPC_F_ETAG_CTAG,
+ NPC_F_ETAG_BTAG_ITAG,
+ NPC_F_ETAG_STAG,
+ NPC_F_ETAG_QINQ,
+ NPC_F_ETAG_ITAG,
+ NPC_F_ETAG_ITAG_STAG,
+ NPC_F_ETAG_ITAG_CTAG,
+ NPC_F_ETAG_ITAG_UNK,
+ NPC_F_ITAG_STAG_CTAG,
+ NPC_F_ITAG_STAG,
+ NPC_F_ITAG_CTAG,
+ NPC_F_MPLS_4_LABELS,
+ NPC_F_MPLS_3_LABELS,
+ NPC_F_MPLS_2_LABELS,
+ NPC_F_IP_HAS_OPTIONS,
+ NPC_F_IP_IP_IN_IP,
+ NPC_F_IP_6TO4,
+ NPC_F_IP_MPLS_IN_IP,
+ NPC_F_IP_UNK_PROTO,
+ NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_6TO4_HAS_OPTIONS,
+ NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
+ NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
+ NPC_F_IP6_HAS_EXT,
+ NPC_F_IP6_TUN_IP6,
+ NPC_F_IP6_MPLS_IN_IP,
+ NPC_F_TCP_HAS_OPTIONS,
+ NPC_F_TCP_HTTP,
+ NPC_F_TCP_HTTPS,
+ NPC_F_TCP_PPTP,
+ NPC_F_TCP_UNK_PORT,
+ NPC_F_TCP_HTTP_HAS_OPTIONS,
+ NPC_F_TCP_HTTPS_HAS_OPTIONS,
+ NPC_F_TCP_PPTP_HAS_OPTIONS,
+ NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
+ NPC_F_UDP_VXLAN,
+ NPC_F_UDP_VXLAN_NOVNI,
+ NPC_F_UDP_VXLAN_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE,
+ NPC_F_UDP_VXLANGPE_NSH,
+ NPC_F_UDP_VXLANGPE_MPLS,
+ NPC_F_UDP_VXLANGPE_NOVNI,
+ NPC_F_UDP_VXLANGPE_NOVNI_NSH,
+ NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
+ NPC_F_UDP_VXLANGPE_UNK,
+ NPC_F_UDP_VXLANGPE_NONP,
+ NPC_F_UDP_GTP_GTPC,
+ NPC_F_UDP_GTP_GTPU_G_PDU,
+ NPC_F_UDP_GTP_GTPU_UNK,
+ NPC_F_UDP_UNK_PORT,
+ NPC_F_UDP_GENEVE,
+ NPC_F_UDP_GENEVE_OAM,
+ NPC_F_UDP_GENEVE_CRI_OPT,
+ NPC_F_UDP_GENEVE_OAM_CRI_OPT,
+ NPC_F_GRE_NVGRE,
+ NPC_F_GRE_HAS_SRE,
+ NPC_F_GRE_HAS_CSUM,
+ NPC_F_GRE_HAS_KEY,
+ NPC_F_GRE_HAS_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY,
+ NPC_F_GRE_HAS_CSUM_SEQ,
+ NPC_F_GRE_HAS_KEY_SEQ,
+ NPC_F_GRE_HAS_CSUM_KEY_SEQ,
+ NPC_F_GRE_HAS_ROUTE,
+ NPC_F_GRE_UNK_PROTO,
+ NPC_F_GRE_VER1,
+ NPC_F_GRE_VER1_HAS_SEQ,
+ NPC_F_GRE_VER1_HAS_ACK,
+ NPC_F_GRE_VER1_HAS_SEQ_ACK,
+ NPC_F_GRE_VER1_UNK_PROTO,
+ NPC_F_TU_ETHER_UNK,
+ NPC_F_TU_ETHER_CTAG,
+ NPC_F_TU_ETHER_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG_CTAG,
+ NPC_F_TU_ETHER_STAG_CTAG_UNK,
+ NPC_F_TU_ETHER_STAG,
+ NPC_F_TU_ETHER_STAG_UNK,
+ NPC_F_TU_ETHER_QINQ_CTAG,
+ NPC_F_TU_ETHER_QINQ_CTAG_UNK,
+ NPC_F_TU_ETHER_QINQ,
+ NPC_F_TU_ETHER_QINQ_UNK,
+ NPC_F_LAST /* has to be the last item */
+};
+
+#define NPC_ACTION_TERM \
+ (ROC_NPC_ACTION_TYPE_DROP | ROC_NPC_ACTION_TYPE_QUEUE | \
+ ROC_NPC_ACTION_TYPE_RSS | ROC_NPC_ACTION_TYPE_DUP | \
+ ROC_NPC_ACTION_TYPE_SEC)
+
+struct npc_xtract_info {
+ /* Length in bytes of pkt data extracted. len = 0
+ * indicates that extraction is disabled.
+ */
+ uint8_t len;
+ uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
+ uint8_t key_off; /* Byte offset in MCAM key where data is placed */
+ uint8_t enable; /* Extraction enabled or disabled */
+ uint8_t flags_enable; /* Flags extraction enabled */
+};
+
+/* Information for a given {LAYER, LTYPE} */
+struct npc_lid_lt_xtract_info {
+ /* Info derived from parser configuration */
+ uint16_t npc_proto; /* Network protocol identified */
+ uint8_t valid_flags_mask; /* Flags applicable */
+ uint8_t is_terminating : 1; /* No more parsing */
+ struct npc_xtract_info xtract[NPC_MAX_LD];
+};
+
+union npc_kex_ldata_flags_cfg {
+ struct {
+ uint64_t lid : 3;
+ uint64_t rvsd_62_1 : 61;
+ } s;
+
+ uint64_t i;
+};
+
+typedef struct npc_lid_lt_xtract_info npc_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID]
+ [NPC_MAX_LT];
+typedef struct npc_lid_lt_xtract_info npc_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD]
+ [NPC_MAX_LFL];
+typedef union npc_kex_ldata_flags_cfg npc_ld_flags_t[NPC_MAX_LD];
+
+/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
+struct npc_get_datax_cfg {
+ /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
+ union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
+ /* Extract information indexed with [LID][LTYPE] */
+ struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
+ /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
+ * Fields flags_ena_ld0, flags_ena_ld1 in
+ * struct npc_lid_lt_xtract_info indicate if this is applicable
+ * for a given {LAYER, LTYPE}
+ */
+ struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
+};
+
+TAILQ_HEAD(npc_flow_list, roc_npc_flow);
+
+struct npc_mcam_ents_info {
+ /* Current max & min values of mcam index */
+ uint32_t max_id;
+ uint32_t min_id;
+ uint32_t free_ent;
+ uint32_t live_ent;
+};
+
+struct npc {
+ struct mbox *mbox; /* Mbox */
+ uint32_t keyx_supp_nmask[NPC_MAX_INTF]; /* nibble mask */
+ uint8_t profile_name[MKEX_NAME_LEN]; /* KEX profile name */
+ uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
+ uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
+ uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
+ uint32_t mcam_entries; /* mcam entries supported */
+ uint16_t channel; /* RX Channel number */
+ uint32_t rss_grps; /* rss groups supported */
+ uint16_t flow_prealloc_size; /* Preallocated mcam size */
+ uint16_t flow_max_priority; /* Max priority for flow */
+ uint16_t switch_header_type; /* Supported switch header type */
+ uint32_t mark_actions; /* Number of mark actions */
+ uint16_t pf_func; /* pf_func of device */
+ npc_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
+ npc_fxcfg_t prx_fxcfg; /* Flag extract */
+ npc_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
+ /* mcam entry info per priority level: both free & in-use */
+ struct npc_mcam_ents_info *flow_entry_info;
+ /* Bitmap of free preallocated entries in ascending index &
+ * descending priority
+ */
+ struct plt_bitmap **free_entries;
+ /* Bitmap of free preallocated entries in descending index &
+ * ascending priority
+ */
+ struct plt_bitmap **free_entries_rev;
+ /* Bitmap of live entries in ascending index & descending priority */
+ struct plt_bitmap **live_entries;
+ /* Bitmap of live entries in descending index & ascending priority */
+ struct plt_bitmap **live_entries_rev;
+ /* Priority bucket wise tail queue of all npc_flow resources */
+ struct npc_flow_list *flow_list;
+ struct plt_bitmap *rss_grp_entries;
+};
+
+static inline struct npc *
+roc_npc_to_npc_priv(struct roc_npc *npc)
+{
+ return (struct npc *)npc->reserved;
+}
+#endif /* _ROC_NPC_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index dd33e58..11ff0f8 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -32,4 +32,5 @@ RTE_LOG_REGISTER(cnxk_logtype_base, pmd.cnxk.base, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_mbox, pmd.cnxk.mbox, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_npa, pmd.mempool.cnxk, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_nix, pmd.net.cnxk, NOTICE);
+RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE);
RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 8f46fb3..d585d53 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -128,6 +128,7 @@ extern int cnxk_logtype_base;
extern int cnxk_logtype_mbox;
extern int cnxk_logtype_npa;
extern int cnxk_logtype_nix;
+extern int cnxk_logtype_npc;
extern int cnxk_logtype_tm;
#define plt_err(fmt, args...) \
@@ -148,6 +149,7 @@ extern int cnxk_logtype_tm;
#define plt_mbox_dbg(fmt, ...) plt_dbg(mbox, fmt, ##__VA_ARGS__)
#define plt_npa_dbg(fmt, ...) plt_dbg(npa, fmt, ##__VA_ARGS__)
#define plt_nix_dbg(fmt, ...) plt_dbg(nix, fmt, ##__VA_ARGS__)
+#define plt_npc_dbg(fmt, ...) plt_dbg(npc, fmt, ##__VA_ARGS__)
#define plt_tm_dbg(fmt, ...) plt_dbg(tm, fmt, ##__VA_ARGS__)
#ifdef __cplusplus
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index 4a1eb9c..b1756fe 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -23,4 +23,7 @@
/* NIX */
#include "roc_nix_priv.h"
+/* NPC */
+#include "roc_npc_priv.h"
+
#endif /* _ROC_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_utils.c b/drivers/common/cnxk/roc_utils.c
index 43ba177..c5c5962 100644
--- a/drivers/common/cnxk/roc_utils.c
+++ b/drivers/common/cnxk/roc_utils.c
@@ -14,16 +14,20 @@ roc_error_msg_get(int errorcode)
case NIX_AF_ERR_PARAM:
case NIX_ERR_PARAM:
case NPA_ERR_PARAM:
+ case NPC_ERR_PARAM:
case UTIL_ERR_PARAM:
err_msg = "Invalid parameter";
break;
case NIX_ERR_NO_MEM:
+ case NPC_ERR_NO_MEM:
err_msg = "Out of memory";
break;
case NIX_ERR_INVALID_RANGE:
+ case NPC_ERR_INVALID_RANGE:
err_msg = "Range is not supported";
break;
case NIX_ERR_INTERNAL:
+ case NPC_ERR_INTERNAL:
err_msg = "Internal error";
break;
case NIX_ERR_OP_NOTSUP:
@@ -104,6 +108,26 @@ roc_error_msg_get(int errorcode)
case NIX_ERR_NDC_SYNC:
err_msg = "NDC Sync failed";
break;
+ case NPC_ERR_INVALID_SPEC:
+ err_msg = "NPC invalid spec";
+ break;
+ case NPC_ERR_INVALID_MASK:
+ err_msg = "NPC invalid mask";
+ break;
+ case NPC_ERR_INVALID_KEX:
+ err_msg = "NPC invalid key";
+ break;
+ case NPC_ERR_INVALID_SIZE:
+ err_msg = "NPC invalid key size";
+ break;
+ case NPC_ERR_ACTION_NOTSUP:
+ err_msg = "NPC action not supported";
+ break;
+ case NPC_ERR_PATTERN_NOTSUP:
+ err_msg = "NPC pattern not supported";
+ break;
+ case NPC_ERR_MCAM_ALLOC:
+ err_msg = "MCAM entry alloc failed";
+ break;
case NPA_ERR_ALLOC:
err_msg = "NPA alloc failed";
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 829d471..5e2ec93 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -5,6 +5,7 @@ INTERNAL {
cnxk_logtype_mbox;
cnxk_logtype_nix;
cnxk_logtype_npa;
+ cnxk_logtype_npc;
cnxk_logtype_tm;
plt_init;
roc_clk_freq_get;
--
2.8.4
* [dpdk-dev] [PATCH 41/52] common/cnxk: add npc helper API
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (39 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 40/52] common/cnxk: add npc support Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 42/52] common/cnxk: add mcam utility API Nithin Dabilpuram
` (14 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Kiran Kumar K <kirankumark@marvell.com>
Add NPC helper APIs to manage the MCAM: preallocate MCAM entries,
configure rules, shift MCAM rules, and prepare the data for the
MCAM key based on the KEX profile.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_npc_priv.h | 11 +
drivers/common/cnxk/roc_npc_utils.c | 631 ++++++++++++++++++++++++++++++++++++
3 files changed, 643 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npc_utils.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index a686763..c4ec988 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -35,6 +35,7 @@ sources = files('roc_dev.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
+ 'roc_npc_utils.c',
'roc_platform.c',
'roc_utils.c')
includes += include_directories('../../bus/pci')
diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h
index 0c3947a..185c60e 100644
--- a/drivers/common/cnxk/roc_npc_priv.h
+++ b/drivers/common/cnxk/roc_npc_priv.h
@@ -378,4 +378,15 @@ roc_npc_to_npc_priv(struct roc_npc *npc)
{
return (struct npc *)npc->reserved;
}
+
+int npc_update_parse_state(struct npc_parse_state *pst,
+ struct npc_parse_item_info *info, int lid, int lt,
+ uint8_t flags);
+void npc_get_hw_supp_mask(struct npc_parse_state *pst,
+ struct npc_parse_item_info *info, int lid, int lt);
+int npc_parse_item_basic(const struct roc_npc_item_info *item,
+ struct npc_parse_item_info *info);
+int npc_check_preallocated_entry_cache(struct mbox *mbox,
+ struct roc_npc_flow *flow,
+ struct npc *npc);
#endif /* _ROC_NPC_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_npc_utils.c b/drivers/common/cnxk/roc_npc_utils.c
new file mode 100644
index 0000000..5453e1a
--- /dev/null
+++ b/drivers/common/cnxk/roc_npc_utils.c
@@ -0,0 +1,631 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+npc_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
+{
+ int idx;
+
+ for (idx = 0; idx < len; idx++)
+ ptr[idx] = data[len - 1 - idx];
+}
+
+static int
+npc_check_copysz(size_t size, size_t len)
+{
+ if (len <= size)
+ return len;
+ return NPC_ERR_PARAM;
+}
+
+static inline int
+npc_mem_is_zero(const void *mem, int len)
+{
+ const char *m = mem;
+ int i;
+
+ for (i = 0; i < len; i++) {
+ if (m[i] != 0)
+ return 0;
+ }
+ return 1;
+}
+
+static void
+npc_set_hw_mask(struct npc_parse_item_info *info, struct npc_xtract_info *xinfo,
+ char *hw_mask)
+{
+ int max_off, offset;
+ int j;
+
+ if (xinfo->enable == 0)
+ return;
+
+ if (xinfo->hdr_off < info->hw_hdr_len)
+ return;
+
+ max_off = xinfo->hdr_off + xinfo->len - info->hw_hdr_len;
+
+ if (max_off > info->len)
+ max_off = info->len;
+
+ offset = xinfo->hdr_off - info->hw_hdr_len;
+ for (j = offset; j < max_off; j++)
+ hw_mask[j] = 0xff;
+}
+
+void
+npc_get_hw_supp_mask(struct npc_parse_state *pst,
+ struct npc_parse_item_info *info, int lid, int lt)
+{
+ struct npc_xtract_info *xinfo, *lfinfo;
+ char *hw_mask = info->hw_mask;
+ int lf_cfg = 0;
+ int i, j;
+ int intf;
+
+ intf = pst->nix_intf;
+ xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
+ memset(hw_mask, 0, info->len);
+
+ for (i = 0; i < NPC_MAX_LD; i++)
+ npc_set_hw_mask(info, &xinfo[i], hw_mask);
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ if (xinfo[i].flags_enable == 0)
+ continue;
+
+ lf_cfg = pst->npc->prx_lfcfg[i].i;
+ if (lf_cfg == lid) {
+ for (j = 0; j < NPC_MAX_LFL; j++) {
+ lfinfo = pst->npc->prx_fxcfg[intf][i][j].xtract;
+ npc_set_hw_mask(info, &lfinfo[0], hw_mask);
+ }
+ }
+ }
+}
+
+static inline int
+npc_mask_is_supported(const char *mask, const char *hw_mask, int len)
+{
+ /*
+ * If there is no hw_mask, nothing can be extracted, so only an
+ * all-zero mask is supported. mask is never NULL.
+ */
+ if (hw_mask == NULL)
+ return npc_mem_is_zero(mask, len);
+
+ while (len--) {
+ if ((mask[len] | hw_mask[len]) != hw_mask[len])
+ return 0; /* False */
+ }
+ return 1;
+}
+
+int
+npc_parse_item_basic(const struct roc_npc_item_info *item,
+ struct npc_parse_item_info *info)
+{
+ /* Item must not be NULL */
+ if (item == NULL)
+ return NPC_ERR_PARAM;
+
+ /* Don't support ranges */
+ if (item->last != NULL)
+ return NPC_ERR_INVALID_RANGE;
+
+ /* If spec is NULL, both mask and last must also be NULL; this
+ * makes the item match ANY value (equivalent to mask = 0).
+ * Setting either mask or last without spec is an error.
+ */
+ if (item->spec == NULL) {
+ if (item->last == NULL && item->mask == NULL) {
+ info->spec = NULL;
+ return 0;
+ }
+ return NPC_ERR_INVALID_SPEC;
+ }
+
+ /* We have valid spec */
+ info->spec = item->spec;
+
+ /* If mask is not set, use default mask, err if default mask is
+ * also NULL.
+ */
+ if (item->mask == NULL) {
+ if (info->def_mask == NULL)
+ return NPC_ERR_PARAM;
+ info->mask = info->def_mask;
+ } else {
+ info->mask = item->mask;
+ }
+
+ /* mask specified must be subset of hw supported mask
+ * mask | hw_mask == hw_mask
+ */
+ if (!npc_mask_is_supported(info->mask, info->hw_mask, info->len))
+ return NPC_ERR_INVALID_MASK;
+
+ return 0;
+}
+
+static int
+npc_update_extraction_data(struct npc_parse_state *pst,
+ struct npc_parse_item_info *info,
+ struct npc_xtract_info *xinfo)
+{
+ uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
+ uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
+ struct npc_xtract_info *x;
+ int hdr_off;
+ int len = 0;
+
+ x = xinfo;
+ len = x->len;
+ hdr_off = x->hdr_off;
+
+ if (hdr_off < info->hw_hdr_len)
+ return 0;
+
+ if (x->enable == 0)
+ return 0;
+
+ hdr_off -= info->hw_hdr_len;
+
+ if (hdr_off >= info->len)
+ return 0;
+
+ if (hdr_off + len > info->len)
+ len = info->len - hdr_off;
+
+ len = npc_check_copysz((ROC_NPC_MAX_MCAM_WIDTH_DWORDS * 8) - x->key_off,
+ len);
+ if (len < 0)
+ return NPC_ERR_INVALID_SIZE;
+
+ /* Need to reverse complete structure so that dest addr is at
+ * MSB so as to program the MCAM using mcam_data & mcam_mask
+ * arrays
+ */
+ npc_prep_mcam_ldata(int_info, (const uint8_t *)info->spec + hdr_off,
+ x->len);
+ npc_prep_mcam_ldata(int_info_mask,
+ (const uint8_t *)info->mask + hdr_off, x->len);
+
+ memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
+ memcpy(pst->mcam_data + x->key_off, int_info, len);
+ return 0;
+}
+
+int
+npc_update_parse_state(struct npc_parse_state *pst,
+ struct npc_parse_item_info *info, int lid, int lt,
+ uint8_t flags)
+{
+ struct npc_lid_lt_xtract_info *xinfo;
+ struct npc_xtract_info *lfinfo;
+ int intf, lf_cfg;
+ int i, j, rc = 0;
+
+ pst->layer_mask |= lid;
+ pst->lt[lid] = lt;
+ pst->flags[lid] = flags;
+
+ intf = pst->nix_intf;
+ xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
+ if (xinfo->is_terminating)
+ pst->terminate = 1;
+
+ if (info->spec == NULL)
+ goto done;
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ rc = npc_update_extraction_data(pst, info, &xinfo->xtract[i]);
+ if (rc != 0)
+ return rc;
+ }
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ if (xinfo->xtract[i].flags_enable == 0)
+ continue;
+
+ lf_cfg = pst->npc->prx_lfcfg[i].i;
+ if (lf_cfg == lid) {
+ for (j = 0; j < NPC_MAX_LFL; j++) {
+ lfinfo = pst->npc->prx_fxcfg[intf][i][j].xtract;
+ rc = npc_update_extraction_data(pst, info,
+ &lfinfo[0]);
+ if (rc != 0)
+ return rc;
+
+ if (lfinfo[0].enable)
+ pst->flags[lid] = j;
+ }
+ }
+ }
+
+done:
+ pst->pattern++;
+ return 0;
+}
+
+static int
+npc_first_set_bit(uint64_t slab)
+{
+ int num = 0;
+
+ if ((slab & 0xffffffff) == 0) {
+ num += 32;
+ slab >>= 32;
+ }
+ if ((slab & 0xffff) == 0) {
+ num += 16;
+ slab >>= 16;
+ }
+ if ((slab & 0xff) == 0) {
+ num += 8;
+ slab >>= 8;
+ }
+ if ((slab & 0xf) == 0) {
+ num += 4;
+ slab >>= 4;
+ }
+ if ((slab & 0x3) == 0) {
+ num += 2;
+ slab >>= 2;
+ }
+ if ((slab & 0x1) == 0)
+ num += 1;
+
+ return num;
+}
+
+static int
+npc_shift_lv_ent(struct mbox *mbox, struct roc_npc_flow *flow, struct npc *npc,
+ uint32_t old_ent, uint32_t new_ent)
+{
+ struct npc_mcam_shift_entry_req *req;
+ struct npc_mcam_shift_entry_rsp *rsp;
+ struct npc_flow_list *list;
+ struct roc_npc_flow *flow_iter;
+ int rc = -ENOSPC;
+
+ list = &npc->flow_list[flow->priority];
+
+ /* The old entry is disabled and its contents are moved to new_ent;
+ * the new entry is enabled at the end.
+ */
+ req = mbox_alloc_msg_npc_mcam_shift_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->curr_entry[0] = old_ent;
+ req->new_entry[0] = new_ent;
+ req->shift_count = 1;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+
+ /* Remove old node from list */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id == old_ent)
+ TAILQ_REMOVE(list, flow_iter, next);
+ }
+
+ /* Insert node with new mcam id at right place */
+ TAILQ_FOREACH(flow_iter, list, next) {
+ if (flow_iter->mcam_id > new_ent)
+ TAILQ_INSERT_BEFORE(flow_iter, flow, next);
+ }
+ return rc;
+}
+
+/* Exchange all required entries with a given priority level */
+static int
+npc_shift_ent(struct mbox *mbox, struct roc_npc_flow *flow, struct npc *npc,
+ struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
+{
+ struct plt_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
+ uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
+ /* Bit position within the slab */
+ uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
+ /* Overall bit position of the start of the slab */
+ uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
+ /* free & live entry index */
+ int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
+ struct npc_mcam_ents_info *ent_info;
+ /* free & live bitmap slab */
+ uint64_t sl_fr = 0, sl_lv = 0, *sl;
+
+ fr_bmp = npc->free_entries[prio_lvl];
+ fr_bmp_rev = npc->free_entries_rev[prio_lvl];
+ lv_bmp = npc->live_entries[prio_lvl];
+ lv_bmp_rev = npc->live_entries_rev[prio_lvl];
+ ent_info = &npc->flow_entry_info[prio_lvl];
+ mcam_entries = npc->mcam_entries;
+
+ /* Newly allocated entries are always contiguous, but older entries
+ * already present in the free/live bitmaps can be non-contiguous, so
+ * the shifted entries are returned in non-contiguous format.
+ */
+ while (idx <= rsp->count) {
+ if (!sl_fr && !sl_lv) {
+ /* Lower index elements to be exchanged */
+ if (dir < 0) {
+ rc_fr = plt_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
+ rc_lv = plt_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
+ } else {
+ rc_fr = plt_bitmap_scan(fr_bmp_rev,
+ &sl_fr_bit_off, &sl_fr);
+ rc_lv = plt_bitmap_scan(lv_bmp_rev,
+ &sl_lv_bit_off, &sl_lv);
+ }
+ }
+
+ if (rc_fr) {
+ fr_bit_pos = npc_first_set_bit(sl_fr);
+ e_fr = sl_fr_bit_off + fr_bit_pos;
+ } else {
+ e_fr = ~(0);
+ }
+
+ if (rc_lv) {
+ lv_bit_pos = npc_first_set_bit(sl_lv);
+ e_lv = sl_lv_bit_off + lv_bit_pos;
+ } else {
+ e_lv = ~(0);
+ }
+
+ /* First entry is from free_bmap */
+ if (e_fr < e_lv) {
+ bmp = fr_bmp;
+ e = e_fr;
+ sl = &sl_fr;
+ bit_pos = fr_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+ } else {
+ bmp = lv_bmp;
+ e = e_lv;
+ sl = &sl_lv;
+ bit_pos = lv_bit_pos;
+ if (dir > 0)
+ e_id = mcam_entries - e - 1;
+ else
+ e_id = e;
+
+ if (idx < rsp->count)
+ rc = npc_shift_lv_ent(mbox, flow, npc, e_id,
+ rsp->entry + idx);
+ }
+
+ plt_bitmap_clear(bmp, e);
+ plt_bitmap_set(bmp, rsp->entry + idx);
+ /* Update entry list, use non-contiguous
+ * list now.
+ */
+ rsp->entry_list[idx] = e_id;
+ *sl &= ~(1UL << bit_pos);
+
+ /* Update min & max entry identifiers in current
+ * priority level.
+ */
+ if (dir < 0) {
+ ent_info->max_id = rsp->entry + idx;
+ ent_info->min_id = e_id;
+ } else {
+ ent_info->max_id = e_id;
+ ent_info->min_id = rsp->entry;
+ }
+
+ idx++;
+ }
+ return rc;
+}
+
+/* Validate whether newly allocated entries lie in the correct priority zone,
+ * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
+ * If not properly aligned, shift entries until they are.
+ */
+static int
+npc_validate_and_shift_prio_ent(struct mbox *mbox, struct roc_npc_flow *flow,
+ struct npc *npc,
+ struct npc_mcam_alloc_entry_rsp *rsp,
+ int req_prio)
+{
+ int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
+ struct npc_mcam_ents_info *info = npc->flow_entry_info;
+ int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
+ uint32_t tot_ent = 0;
+
+ if (dir < 0)
+ prio_idx = npc->flow_max_priority - 1;
+
+ /* Only live entries need to be shifted; free entries can simply be
+ * moved by bit manipulation.
+ */
+
+ /* For dir = -1 (NPC_MCAM_LOWER_PRIO), when shifting,
+ * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining
+ * higher priority level's entries (lower indexes).
+ *
+ * For dir = +1 (NPC_MCAM_HIGHER_PRIO), during the shift,
+ * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining
+ * lower priority level's entries (higher indexes).
+ */
+ do {
+ tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
+
+ if (dir < 0 && prio_idx != prio &&
+ rsp->entry > info[prio_idx].max_id && tot_ent) {
+ needs_shift = 1;
+ } else if ((dir > 0) && (prio_idx != prio) &&
+ (rsp->entry < info[prio_idx].min_id) && tot_ent) {
+ needs_shift = 1;
+ }
+
+ if (needs_shift) {
+ needs_shift = 0;
+ rc = npc_shift_ent(mbox, flow, npc, rsp, dir, prio_idx);
+ } else {
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+ } while ((prio_idx != prio) && (prio_idx += dir));
+
+ return rc;
+}
+
+static int
+npc_find_ref_entry(struct npc *npc, int *prio, int prio_lvl)
+{
+ struct npc_mcam_ents_info *info = npc->flow_entry_info;
+ int step = 1;
+
+ while (step < npc->flow_max_priority) {
+ if (((prio_lvl + step) < npc->flow_max_priority) &&
+ info[prio_lvl + step].live_ent) {
+ *prio = NPC_MCAM_HIGHER_PRIO;
+ return info[prio_lvl + step].min_id;
+ }
+
+ if (((prio_lvl - step) >= 0) &&
+ info[prio_lvl - step].live_ent) {
+ *prio = NPC_MCAM_LOWER_PRIO;
+ return info[prio_lvl - step].max_id;
+ }
+ step++;
+ }
+ *prio = NPC_MCAM_ANY_PRIO;
+ return 0;
+}
+
+static int
+npc_fill_entry_cache(struct mbox *mbox, struct roc_npc_flow *flow,
+ struct npc *npc, uint32_t *free_ent)
+{
+ struct plt_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
+ struct npc_mcam_alloc_entry_rsp rsp_local;
+ struct npc_mcam_alloc_entry_rsp *rsp_cmd;
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ struct npc_mcam_ents_info *info;
+ int rc = -ENOSPC, prio;
+ uint16_t ref_ent, idx;
+
+ info = &npc->flow_entry_info[flow->priority];
+ free_bmp = npc->free_entries[flow->priority];
+ free_bmp_rev = npc->free_entries_rev[flow->priority];
+ live_bmp = npc->live_entries[flow->priority];
+ live_bmp_rev = npc->live_entries_rev[flow->priority];
+
+ ref_ent = npc_find_ref_entry(npc, &prio, flow->priority);
+
+ req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->contig = 1;
+ req->count = npc->flow_prealloc_size;
+ req->priority = prio;
+ req->ref_entry = ref_ent;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp_cmd);
+ if (rc)
+ return rc;
+
+ rsp = &rsp_local;
+ memcpy(rsp, rsp_cmd, sizeof(*rsp));
+
+ /* Non-first ent cache fill */
+ if (prio != NPC_MCAM_ANY_PRIO) {
+ npc_validate_and_shift_prio_ent(mbox, flow, npc, rsp, prio);
+ } else {
+ /* Copy into response entry list */
+ for (idx = 0; idx < rsp->count; idx++)
+ rsp->entry_list[idx] = rsp->entry + idx;
+ }
+
+ /* Update free entries, reverse free entries list,
+ * min & max entry ids.
+ */
+ for (idx = 0; idx < rsp->count; idx++) {
+ if (unlikely(rsp->entry_list[idx] < info->min_id))
+ info->min_id = rsp->entry_list[idx];
+
+ if (unlikely(rsp->entry_list[idx] > info->max_id))
+ info->max_id = rsp->entry_list[idx];
+
+ /* Skip the entry to be returned; it is not part of the
+ * free list.
+ */
+ if (prio == NPC_MCAM_HIGHER_PRIO) {
+ if (unlikely(idx == (rsp->count - 1))) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ } else {
+ if (unlikely(!idx)) {
+ *free_ent = rsp->entry_list[idx];
+ continue;
+ }
+ }
+ info->free_ent++;
+ plt_bitmap_set(free_bmp, rsp->entry_list[idx]);
+ plt_bitmap_set(free_bmp_rev,
+ npc->mcam_entries - rsp->entry_list[idx] - 1);
+ }
+
+ info->live_ent++;
+ plt_bitmap_set(live_bmp, *free_ent);
+ plt_bitmap_set(live_bmp_rev, npc->mcam_entries - *free_ent - 1);
+
+ return 0;
+}
+
+int
+npc_check_preallocated_entry_cache(struct mbox *mbox, struct roc_npc_flow *flow,
+ struct npc *npc)
+{
+ struct plt_bitmap *free, *free_rev, *live, *live_rev;
+ uint32_t pos = 0, free_ent = 0, mcam_entries;
+ struct npc_mcam_ents_info *info;
+ uint64_t slab = 0;
+ int rc;
+
+ info = &npc->flow_entry_info[flow->priority];
+
+ free_rev = npc->free_entries_rev[flow->priority];
+ free = npc->free_entries[flow->priority];
+ live_rev = npc->live_entries_rev[flow->priority];
+ live = npc->live_entries[flow->priority];
+ mcam_entries = npc->mcam_entries;
+
+ if (info->free_ent) {
+ rc = plt_bitmap_scan(free, &pos, &slab);
+ if (rc) {
+ /* Get free_ent from free entry bitmap */
+ free_ent = pos + __builtin_ctzll(slab);
+ /* Remove from free bitmaps and add to live ones */
+ plt_bitmap_clear(free, free_ent);
+ plt_bitmap_set(live, free_ent);
+ plt_bitmap_clear(free_rev, mcam_entries - free_ent - 1);
+ plt_bitmap_set(live_rev, mcam_entries - free_ent - 1);
+
+ info->free_ent--;
+ info->live_ent++;
+ return free_ent;
+ }
+ return NPC_ERR_INTERNAL;
+ }
+
+ rc = npc_fill_entry_cache(mbox, flow, npc, &free_ent);
+ if (rc)
+ return rc;
+
+ return free_ent;
+}
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 42/52] common/cnxk: add mcam utility API
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (40 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 41/52] common/cnxk: add npc helper API Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 43/52] common/cnxk: add npc parsing API Nithin Dabilpuram
` (13 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Kiran Kumar K <kirankumark@marvell.com>
Add mcam utility functions such as reading the KEX configuration, and
reserving and writing mcam rules.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_npc_mcam.c | 708 +++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npc_priv.h | 21 ++
3 files changed, 730 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npc_mcam.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index c4ec988..35dd3b9 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -35,6 +35,7 @@ sources = files('roc_dev.c',
'roc_npa.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
+ 'roc_npc_mcam.c',
'roc_npc_utils.c',
'roc_platform.c',
'roc_utils.c')
diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c
new file mode 100644
index 0000000..1cd7035
--- /dev/null
+++ b/drivers/common/cnxk/roc_npc_mcam.c
@@ -0,0 +1,708 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static int
+npc_mcam_alloc_counter(struct npc *npc, uint16_t *ctr)
+{
+ struct npc_mcam_alloc_counter_req *req;
+ struct npc_mcam_alloc_counter_rsp *rsp;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
+ if (req == NULL)
+ return rc;
+ req->count = 1;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ *ctr = rsp->cntr_list[0];
+ return rc;
+}
+
+int
+npc_mcam_free_counter(struct npc *npc, uint16_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_free_counter(mbox);
+ if (req == NULL)
+ return rc;
+ req->cntr = ctr_id;
+ return mbox_process(mbox);
+}
+
+int
+npc_mcam_read_counter(struct npc *npc, uint32_t ctr_id, uint64_t *count)
+{
+ struct npc_mcam_oper_counter_req *req;
+ struct npc_mcam_oper_counter_rsp *rsp;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_counter_stats(mbox);
+ if (req == NULL)
+ return rc;
+ req->cntr = ctr_id;
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ *count = rsp->stat;
+ return rc;
+}
+
+int
+npc_mcam_clear_counter(struct npc *npc, uint32_t ctr_id)
+{
+ struct npc_mcam_oper_counter_req *req;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_clear_counter(mbox);
+ if (req == NULL)
+ return rc;
+ req->cntr = ctr_id;
+ return mbox_process(mbox);
+}
+
+int
+npc_mcam_free_entry(struct npc *npc, uint32_t entry)
+{
+ struct npc_mcam_free_entry_req *req;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = entry;
+ return mbox_process(mbox);
+}
+
+int
+npc_mcam_free_all_entries(struct npc *npc)
+{
+ struct npc_mcam_free_entry_req *req;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_free_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->all = 1;
+ return mbox_process(mbox);
+}
+
+static int
+npc_supp_key_len(uint32_t supp_mask)
+{
+ int nib_count = 0;
+
+ while (supp_mask) {
+ nib_count++;
+ supp_mask &= (supp_mask - 1);
+ }
+ return nib_count * 4;
+}
+
+/**
+ * Returns true if any LDATA bits are extracted for the given LID+LTYPE.
+ *
+ * LFLAG extraction is not taken into account.
+ */
+static int
+npc_lid_lt_in_kex(struct npc *npc, uint8_t lid, uint8_t lt)
+{
+ struct npc_xtract_info *x_info;
+ int i;
+
+ for (i = 0; i < NPC_MAX_LD; i++) {
+ x_info = &npc->prx_dxcfg[NIX_INTF_RX][lid][lt].xtract[i];
+ /* Check for LDATA */
+ if (x_info->enable && x_info->len > 0)
+ return true;
+ }
+
+ return false;
+}
+
+static void
+npc_construct_ldata_mask(struct npc *npc, struct plt_bitmap *bmap, uint8_t lid,
+ uint8_t lt, uint8_t ld)
+{
+ struct npc_xtract_info *x_info, *infoflag;
+ int hdr_off, keylen;
+ npc_dxcfg_t *p;
+ npc_fxcfg_t *q;
+ int i, j;
+
+ p = &npc->prx_dxcfg;
+ x_info = &(*p)[0][lid][lt].xtract[ld];
+
+ if (x_info->enable == 0)
+ return;
+
+ hdr_off = x_info->hdr_off * 8;
+ keylen = x_info->len * 8;
+ for (i = hdr_off; i < (hdr_off + keylen); i++)
+ plt_bitmap_set(bmap, i);
+
+ if (x_info->flags_enable == 0)
+ return;
+
+ if ((npc->prx_lfcfg[0].i & 0x7) != lid)
+ return;
+
+ q = &npc->prx_fxcfg;
+ for (j = 0; j < NPC_MAX_LFL; j++) {
+ infoflag = &(*q)[0][ld][j].xtract[0];
+ if (infoflag->enable) {
+ hdr_off = infoflag->hdr_off * 8;
+ keylen = infoflag->len * 8;
+ for (i = hdr_off; i < (hdr_off + keylen); i++)
+ plt_bitmap_set(bmap, i);
+ }
+ }
+}
+
+/**
+ * Check if the given LID+LTYPE combination is present in KEX
+ *
+ * If len is non-zero, this function returns true if KEX extracts len bytes
+ * at the given offset. Otherwise it returns true if any bytes are extracted
+ * specifically for the given LID+LTYPE combination (meaning not LFLAG based).
+ * The second case increases flexibility for custom frames whose extracted
+ * bits may change depending on the KEX profile loaded.
+ *
+ * @param npc NPC context structure
+ * @param lid Layer ID to check for
+ * @param lt Layer Type to check for
+ * @param offset offset into the layer header to match
+ * @param len length of the match
+ */
+static bool
+npc_is_kex_enabled(struct npc *npc, uint8_t lid, uint8_t lt, int offset,
+ int len)
+{
+ struct plt_bitmap *bmap;
+ uint32_t bmap_sz;
+ uint8_t *mem;
+ int i;
+
+ if (!len)
+ return npc_lid_lt_in_kex(npc, lid, lt);
+
+ bmap_sz = plt_bitmap_get_memory_footprint(300 * 8);
+ mem = plt_zmalloc(bmap_sz, 0);
+ if (mem == NULL) {
+ plt_err("mem alloc failed");
+ return false;
+ }
+ bmap = plt_bitmap_init(300 * 8, mem, bmap_sz);
+ if (bmap == NULL) {
+ plt_err("mem alloc failed");
+ plt_free(mem);
+ return false;
+ }
+
+ npc_construct_ldata_mask(npc, bmap, lid, lt, 0);
+ npc_construct_ldata_mask(npc, bmap, lid, lt, 1);
+
+ for (i = offset; i < (offset + len); i++) {
+ if (plt_bitmap_get(bmap, i) != 0x1) {
+ plt_free(mem);
+ return false;
+ }
+ }
+
+ plt_free(mem);
+ return true;
+}
+
+uint64_t
+npc_get_kex_capability(struct npc *npc)
+{
+ npc_kex_cap_terms_t kex_cap;
+
+ memset(&kex_cap, 0, sizeof(kex_cap));
+
+ /* Ethtype: Offset 12B, len 2B */
+ kex_cap.bit.ethtype_0 = npc_is_kex_enabled(
+ npc, NPC_LID_LA, NPC_LT_LA_ETHER, 12 * 8, 2 * 8);
+ /* QINQ VLAN Ethtype: offset 8B, len 2B */
+ kex_cap.bit.ethtype_x = npc_is_kex_enabled(
+ npc, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 8 * 8, 2 * 8);
+ /* VLAN ID0 : Outer VLAN: Offset 2B, len 2B */
+ kex_cap.bit.vlan_id_0 = npc_is_kex_enabled(
+ npc, NPC_LID_LB, NPC_LT_LB_CTAG, 2 * 8, 2 * 8);
+ /* VLAN ID0 : Inner VLAN: offset 6B, len 2B */
+ kex_cap.bit.vlan_id_x = npc_is_kex_enabled(
+ npc, NPC_LID_LB, NPC_LT_LB_STAG_QINQ, 6 * 8, 2 * 8);
+ /* DMAC: offset 0B, len 6B */
+ kex_cap.bit.dmac = npc_is_kex_enabled(npc, NPC_LID_LA, NPC_LT_LA_ETHER,
+ 0 * 8, 6 * 8);
+ /* IP proto: offset 9B, len 1B */
+ kex_cap.bit.ip_proto =
+ npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_IP, 9 * 8, 1 * 8);
+ /* UDP dport: offset 2B, len 2B */
+ kex_cap.bit.udp_dport = npc_is_kex_enabled(npc, NPC_LID_LD,
+ NPC_LT_LD_UDP, 2 * 8, 2 * 8);
+ /* UDP sport: offset 0B, len 2B */
+ kex_cap.bit.udp_sport = npc_is_kex_enabled(npc, NPC_LID_LD,
+ NPC_LT_LD_UDP, 0 * 8, 2 * 8);
+ /* TCP dport: offset 2B, len 2B */
+ kex_cap.bit.tcp_dport = npc_is_kex_enabled(npc, NPC_LID_LD,
+ NPC_LT_LD_TCP, 2 * 8, 2 * 8);
+ /* TCP sport: offset 0B, len 2B */
+ kex_cap.bit.tcp_sport = npc_is_kex_enabled(npc, NPC_LID_LD,
+ NPC_LT_LD_TCP, 0 * 8, 2 * 8);
+ /* IP SIP: offset 12B, len 4B */
+ kex_cap.bit.sip_addr = npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_IP,
+ 12 * 8, 4 * 8);
+ /* IP DIP: offset 14B, len 4B */
+ kex_cap.bit.dip_addr = npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_IP,
+ 14 * 8, 4 * 8);
+ /* IP6 SIP: offset 8B, len 16B */
+ kex_cap.bit.sip6_addr = npc_is_kex_enabled(
+ npc, NPC_LID_LC, NPC_LT_LC_IP6, 8 * 8, 16 * 8);
+ /* IP6 DIP: offset 24B, len 16B */
+ kex_cap.bit.dip6_addr = npc_is_kex_enabled(
+ npc, NPC_LID_LC, NPC_LT_LC_IP6, 24 * 8, 16 * 8);
+ /* ESP SPI: offset 0B, len 4B */
+ kex_cap.bit.ipsec_spi = npc_is_kex_enabled(npc, NPC_LID_LE,
+ NPC_LT_LE_ESP, 0 * 8, 4 * 8);
+ /* VXLAN VNI: offset 4B, len 3B */
+ kex_cap.bit.ld_vni = npc_is_kex_enabled(npc, NPC_LID_LE,
+ NPC_LT_LE_VXLAN, 4 * 8, 3 * 8);
+
+ /* Custom L3 frame: varied offset and lengths */
+ kex_cap.bit.custom_l3 =
+ npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_CUSTOM0, 0, 0);
+ kex_cap.bit.custom_l3 |=
+ npc_is_kex_enabled(npc, NPC_LID_LC, NPC_LT_LC_CUSTOM1, 0, 0);
+ /* SCTP sport : offset 0B, len 2B */
+ kex_cap.bit.sctp_sport = npc_is_kex_enabled(
+ npc, NPC_LID_LD, NPC_LT_LD_SCTP, 0 * 8, 2 * 8);
+ /* SCTP dport : offset 2B, len 2B */
+ kex_cap.bit.sctp_dport = npc_is_kex_enabled(
+ npc, NPC_LID_LD, NPC_LT_LD_SCTP, 2 * 8, 2 * 8);
+ /* ICMP type : offset 0B, len 1B */
+ kex_cap.bit.icmp_type = npc_is_kex_enabled(
+ npc, NPC_LID_LD, NPC_LT_LD_ICMP, 0 * 8, 1 * 8);
+ /* ICMP code : offset 1B, len 1B */
+ kex_cap.bit.icmp_code = npc_is_kex_enabled(
+ npc, NPC_LID_LD, NPC_LT_LD_ICMP, 1 * 8, 1 * 8);
+ /* ICMP id : offset 4B, len 2B */
+ kex_cap.bit.icmp_id = npc_is_kex_enabled(npc, NPC_LID_LD,
+ NPC_LT_LD_ICMP, 4 * 8, 2 * 8);
+ /* IGMP grp_addr : offset 4B, len 4B */
+ kex_cap.bit.igmp_grp_addr = npc_is_kex_enabled(
+ npc, NPC_LID_LD, NPC_LT_LD_IGMP, 4 * 8, 4 * 8);
+ /* GTPU teid : offset 4B, len 4B */
+ kex_cap.bit.gtpu_teid = npc_is_kex_enabled(
+ npc, NPC_LID_LE, NPC_LT_LE_GTPU, 4 * 8, 4 * 8);
+ return kex_cap.all_bits;
+}
+
+#define BYTESM1_SHIFT 16
+#define HDR_OFF_SHIFT 8
+static void
+npc_update_kex_info(struct npc_xtract_info *xtract_info, uint64_t val)
+{
+ xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
+ xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
+ xtract_info->key_off = val & 0x3f;
+ xtract_info->enable = ((val >> 7) & 0x1);
+ xtract_info->flags_enable = ((val >> 6) & 0x1);
+}
+
+int
+npc_mcam_alloc_entries(struct npc *npc, int ref_mcam, int *alloc_entry,
+ int req_count, int prio, int *resp_count)
+{
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+ int i;
+
+ req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->contig = 0;
+ req->count = req_count;
+ req->priority = prio;
+ req->ref_entry = ref_mcam;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ for (i = 0; i < rsp->count; i++)
+ alloc_entry[i] = rsp->entry_list[i];
+ *resp_count = rsp->count;
+ return 0;
+}
+
+int
+npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam,
+ struct roc_npc_flow *ref_mcam, int prio, int *resp_count)
+{
+ struct npc_mcam_alloc_entry_req *req;
+ struct npc_mcam_alloc_entry_rsp *rsp;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ req = mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->contig = 1;
+ req->count = 1;
+ req->priority = prio;
+ req->ref_entry = ref_mcam->mcam_id;
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc)
+ return rc;
+ memset(mcam, 0, sizeof(struct roc_npc_flow));
+ mcam->mcam_id = rsp->entry;
+ mcam->nix_intf = ref_mcam->nix_intf;
+ *resp_count = rsp->count;
+ return 0;
+}
+
+int
+npc_mcam_ena_dis_entry(struct npc *npc, struct roc_npc_flow *mcam, bool enable)
+{
+ struct npc_mcam_ena_dis_entry_req *req;
+ struct mbox *mbox = npc->mbox;
+ int rc = -ENOSPC;
+
+ if (enable)
+ req = mbox_alloc_msg_npc_mcam_ena_entry(mbox);
+ else
+ req = mbox_alloc_msg_npc_mcam_dis_entry(mbox);
+
+ if (req == NULL)
+ return rc;
+ req->entry = mcam->mcam_id;
+ mcam->enable = enable;
+ return mbox_process(mbox);
+}
+
+int
+npc_mcam_write_entry(struct npc *npc, struct roc_npc_flow *mcam)
+{
+ struct npc_mcam_write_entry_req *req;
+ struct mbox *mbox = npc->mbox;
+ struct mbox_msghdr *rsp;
+ int rc = -ENOSPC;
+ int i;
+
+ req = mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ if (req == NULL)
+ return rc;
+ req->entry = mcam->mcam_id;
+ req->intf = mcam->nix_intf;
+ req->enable_entry = mcam->enable;
+ req->entry_data.action = mcam->npc_action;
+ req->entry_data.vtag_action = mcam->vtag_action;
+ for (i = 0; i < NPC_MCAM_KEY_X4_WORDS; i++) {
+ req->entry_data.kw[i] = mcam->mcam_data[i];
+ req->entry_data.kw_mask[i] = mcam->mcam_mask[i];
+ }
+ return mbox_process_msg(mbox, (void *)&rsp);
+}
+
+static void
+npc_mcam_process_mkex_cfg(struct npc *npc, struct npc_get_kex_cfg_rsp *kex_rsp)
+{
+ volatile uint64_t(
+ *q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD];
+ struct npc_xtract_info *x_info = NULL;
+ int lid, lt, ld, fl, ix;
+ npc_dxcfg_t *p;
+ uint64_t keyw;
+ uint64_t val;
+
+ npc->keyx_supp_nmask[NPC_MCAM_RX] =
+ kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_supp_nmask[NPC_MCAM_TX] =
+ kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
+ npc->keyx_len[NPC_MCAM_RX] =
+ npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
+ npc->keyx_len[NPC_MCAM_TX] =
+ npc_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
+
+ keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_RX] = keyw;
+ keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
+ npc->keyw[NPC_MCAM_TX] = keyw;
+
+ /* Update KEX_LD_FLAG */
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ for (fl = 0; fl < NPC_MAX_LFL; fl++) {
+ x_info = &npc->prx_fxcfg[ix][ld][fl].xtract[0];
+ val = kex_rsp->intf_ld_flags[ix][ld][fl];
+ npc_update_kex_info(x_info, val);
+ }
+ }
+ }
+
+ /* Update LID, LT and LDATA cfg */
+ p = &npc->prx_dxcfg;
+ q = (volatile uint64_t(*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])(
+ &kex_rsp->intf_lid_lt_ld);
+ for (ix = 0; ix < NPC_MAX_INTF; ix++) {
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ for (lt = 0; lt < NPC_MAX_LT; lt++) {
+ for (ld = 0; ld < NPC_MAX_LD; ld++) {
+ x_info = &(*p)[ix][lid][lt].xtract[ld];
+ val = (*q)[ix][lid][lt][ld];
+ npc_update_kex_info(x_info, val);
+ }
+ }
+ }
+ }
+ /* Update LDATA Flags cfg */
+ npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
+ npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
+}
+
+int
+npc_mcam_fetch_kex_cfg(struct npc *npc)
+{
+ struct npc_get_kex_cfg_rsp *kex_rsp;
+ struct mbox *mbox = npc->mbox;
+ int rc = 0;
+
+ mbox_alloc_msg_npc_get_kex_cfg(mbox);
+ rc = mbox_process_msg(mbox, (void *)&kex_rsp);
+ if (rc) {
+ plt_err("Failed to fetch NPC KEX config");
+ goto done;
+ }
+
+ mbox_memcpy((char *)npc->profile_name, kex_rsp->mkex_pfl_name,
+ MKEX_NAME_LEN);
+
+ npc_mcam_process_mkex_cfg(npc, kex_rsp);
+
+done:
+ return rc;
+}
+
+int
+npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
+ struct npc_parse_state *pst)
+{
+ int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
+ struct npc_mcam_write_entry_req *req;
+ struct mbox *mbox = npc->mbox;
+ struct mbox_msghdr *rsp;
+ uint16_t ctr = ~(0);
+ int rc, idx;
+ int entry;
+
+ PLT_SET_USED(pst);
+
+ if (use_ctr) {
+ rc = npc_mcam_alloc_counter(npc, &ctr);
+ if (rc)
+ return rc;
+ }
+
+ entry = npc_check_preallocated_entry_cache(mbox, flow, npc);
+ if (entry < 0) {
+ npc_mcam_free_counter(npc, ctr);
+ return NPC_ERR_MCAM_ALLOC;
+ }
+
+ req = mbox_alloc_msg_npc_mcam_write_entry(mbox);
+ if (req == NULL)
+ return -ENOSPC;
+ req->set_cntr = use_ctr;
+ req->cntr = ctr;
+ req->entry = entry;
+
+ req->intf = (flow->nix_intf == NIX_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
+ req->enable_entry = 1;
+ req->entry_data.action = flow->npc_action;
+
+ /*
+ * The driver sets the vtag action on a per-interface basis,
+ * not per flow. It is a matter of how we decide to support
+ * this PMD-specific behavior. There are two ways:
+ * 1. Inherit the vtag action from the one configured
+ * for this interface. This can be read from the
+ * vtag_action configured for the default mcam entry of
+ * this pf_func.
+ * 2. Do not support vtag action with npc_flow.
+ *
+ * The second approach is used for now.
+ */
+ req->entry_data.vtag_action = 0ULL;
+
+ for (idx = 0; idx < ROC_NPC_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ req->entry_data.kw[idx] = flow->mcam_data[idx];
+ req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
+ }
+
+ if (flow->nix_intf == NIX_INTF_RX) {
+ req->entry_data.kw[0] |= (uint64_t)npc->channel;
+ req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+ } else {
+ uint16_t pf_func = (flow->npc_action >> 4) & 0xffff;
+
+ pf_func = plt_cpu_to_be_16(pf_func);
+ req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
+ req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
+ }
+
+ rc = mbox_process_msg(mbox, (void *)&rsp);
+ if (rc != 0)
+ return rc;
+
+ flow->mcam_id = entry;
+ if (use_ctr)
+ flow->ctr_id = ctr;
+ return 0;
+}
+
+int
+npc_program_mcam(struct npc *npc, struct npc_parse_state *pst, bool mcam_alloc)
+{
+ struct npc_mcam_read_base_rule_rsp *base_rule_rsp;
+ /* This is non-LDATA part in search key */
+ uint64_t key_data[2] = {0ULL, 0ULL};
+ uint64_t key_mask[2] = {0ULL, 0ULL};
+ int key_len, bit = 0, index, rc = 0;
+ int intf = pst->flow->nix_intf;
+ struct mcam_entry *base_entry;
+ int off, idx, data_off = 0;
+ uint8_t lid, mask, data;
+ uint16_t layer_info;
+ uint64_t lt, flags;
+
+ /* Skip till Layer A data start */
+ while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
+ if (npc->keyx_supp_nmask[intf] & (1 << bit))
+ data_off++;
+ bit++;
+ }
+
+ /* Each bit represents 1 nibble */
+ data_off *= 4;
+
+ index = 0;
+ for (lid = 0; lid < NPC_MAX_LID; lid++) {
+ /* Offset in key */
+ off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
+ lt = pst->lt[lid] & 0xf;
+ flags = pst->flags[lid] & 0xff;
+
+ /* NPC_LAYER_KEX_S */
+ layer_info = ((npc->keyx_supp_nmask[intf] >> off) & 0x7);
+
+ if (layer_info) {
+ for (idx = 0; idx <= 2; idx++) {
+ if (layer_info & (1 << idx)) {
+ if (idx == 2)
+ data = lt;
+ else if (idx == 1)
+ data = ((flags >> 4) & 0xf);
+ else
+ data = (flags & 0xf);
+
+ if (data_off >= 64) {
+ data_off = 0;
+ index++;
+ }
+ key_data[index] |=
+ ((uint64_t)data << data_off);
+ mask = 0xf;
+ if (lt == 0)
+ mask = 0;
+ key_mask[index] |=
+ ((uint64_t)mask << data_off);
+ data_off += 4;
+ }
+ }
+ }
+ }
+
+ /* Copy this into mcam string */
+ key_len = (pst->npc->keyx_len[intf] + 7) / 8;
+ memcpy(pst->flow->mcam_data, key_data, key_len);
+ memcpy(pst->flow->mcam_mask, key_mask, key_len);
+
+ if (pst->is_vf) {
+ (void)mbox_alloc_msg_npc_read_base_steer_rule(npc->mbox);
+ rc = mbox_process_msg(npc->mbox, (void *)&base_rule_rsp);
+ if (rc) {
+ plt_err("Failed to fetch VF's base MCAM entry");
+ return rc;
+ }
+ base_entry = &base_rule_rsp->entry_data;
+ for (idx = 0; idx < ROC_NPC_MAX_MCAM_WIDTH_DWORDS; idx++) {
+ pst->flow->mcam_data[idx] |= base_entry->kw[idx];
+ pst->flow->mcam_mask[idx] |= base_entry->kw_mask[idx];
+ }
+ }
+
+ /*
+ * Now we have mcam data and mask formatted as
+ * [Key_len/4 nibbles][0 or 1 nibble hole][data]
+ * a hole is present if key_len is an odd number of nibbles.
+ * mcam data must be split into 64-bit + 48-bit segments
+ * for each bank W0, W1.
+ */
+
+ if (mcam_alloc)
+ return npc_mcam_alloc_and_write(npc, pst->flow, pst);
+ else
+ return 0;
+}
+
+int
+npc_flow_free_all_resources(struct npc *npc)
+{
+ struct npc_mcam_ents_info *info;
+ struct roc_npc_flow *flow;
+ struct plt_bitmap *bmap;
+ int entry_count = 0;
+ int rc, idx;
+
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ info = &npc->flow_entry_info[idx];
+ entry_count += info->live_ent;
+ }
+
+ if (entry_count == 0)
+ return 0;
+
+ /* Free all MCAM entries allocated */
+ rc = npc_mcam_free_all_entries(npc);
+
+ /* Free any MCAM counters and delete flow list */
+ for (idx = 0; idx < npc->flow_max_priority; idx++) {
+ while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
+ if (flow->ctr_id != NPC_COUNTER_NONE)
+ rc |= npc_mcam_free_counter(npc, flow->ctr_id);
+
+ TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
+ plt_free(flow);
+ bmap = npc->live_entries[flow->priority];
+ plt_bitmap_clear(bmap, flow->mcam_id);
+ }
+ info = &npc->flow_entry_info[idx];
+ info->free_ent = 0;
+ info->live_ent = 0;
+ }
+ return rc;
+}
diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h
index 185c60e..ef1e991 100644
--- a/drivers/common/cnxk/roc_npc_priv.h
+++ b/drivers/common/cnxk/roc_npc_priv.h
@@ -379,6 +379,22 @@ roc_npc_to_npc_priv(struct roc_npc *npc)
return (struct npc *)npc->reserved;
}
+int npc_mcam_free_counter(struct npc *npc, uint16_t ctr_id);
+int npc_mcam_read_counter(struct npc *npc, uint32_t ctr_id, uint64_t *count);
+int npc_mcam_clear_counter(struct npc *npc, uint32_t ctr_id);
+int npc_mcam_free_entry(struct npc *npc, uint32_t entry);
+int npc_mcam_free_all_entries(struct npc *npc);
+int npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
+ struct npc_parse_state *pst);
+int npc_mcam_alloc_entry(struct npc *npc, struct roc_npc_flow *mcam,
+ struct roc_npc_flow *ref_mcam, int prio,
+ int *resp_count);
+int npc_mcam_alloc_entries(struct npc *npc, int ref_mcam, int *alloc_entry,
+ int req_count, int prio, int *resp_count);
+
+int npc_mcam_ena_dis_entry(struct npc *npc, struct roc_npc_flow *mcam,
+ bool enable);
+int npc_mcam_write_entry(struct npc *npc, struct roc_npc_flow *mcam);
int npc_update_parse_state(struct npc_parse_state *pst,
struct npc_parse_item_info *info, int lid, int lt,
uint8_t flags);
@@ -386,7 +402,12 @@ void npc_get_hw_supp_mask(struct npc_parse_state *pst,
struct npc_parse_item_info *info, int lid, int lt);
int npc_parse_item_basic(const struct roc_npc_item_info *item,
struct npc_parse_item_info *info);
+int npc_mcam_fetch_kex_cfg(struct npc *npc);
int npc_check_preallocated_entry_cache(struct mbox *mbox,
struct roc_npc_flow *flow,
struct npc *npc);
+int npc_flow_free_all_resources(struct npc *npc);
+int npc_program_mcam(struct npc *npc, struct npc_parse_state *pst,
+ bool mcam_alloc);
+uint64_t npc_get_kex_capability(struct npc *npc);
#endif /* _ROC_NPC_PRIV_H_ */
--
2.8.4
^ permalink raw reply [flat|nested] 275+ messages in thread
* [dpdk-dev] [PATCH 43/52] common/cnxk: add npc parsing API
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (41 preceding siblings ...)
2021-03-05 13:39 ` [dpdk-dev] [PATCH 42/52] common/cnxk: add mcam utility API Nithin Dabilpuram
@ 2021-03-05 13:39 ` Nithin Dabilpuram
2021-03-05 13:39 ` [dpdk-dev] [PATCH 44/52] common/cnxk: add npc init and fini support Nithin Dabilpuram
` (12 subsequent siblings)
55 siblings, 0 replies; 275+ messages in thread
From: Nithin Dabilpuram @ 2021-03-05 13:39 UTC (permalink / raw)
To: dev
Cc: jerinj, skori, skoteshwar, pbhagavatula, kirankumark, psatheesh, asekhar
From: Kiran Kumar K <kirankumark@marvell.com>
Add npc parsing API support to parse different patterns and actions.
Based on the patterns and actions, ltype values are chosen and the
mcam data is configured at the appropriate offsets.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_npc_parse.c | 703 ++++++++++++++++++++++++++++++++++++
drivers/common/cnxk/roc_npc_priv.h | 13 +
3 files changed, 717 insertions(+)
create mode 100644 drivers/common/cnxk/roc_npc_parse.c
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 35dd3b9..f464a6d 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -36,6 +36,7 @@ sources = files('roc_dev.c',
'roc_npa_debug.c',
'roc_npa_irq.c',
'roc_npc_mcam.c',
+ 'roc_npc_parse.c',
'roc_npc_utils.c',
'roc_platform.c',
'roc_utils.c')
diff --git a/drivers/common/cnxk/roc_npc_parse.c b/drivers/common/cnxk/roc_npc_parse.c
new file mode 100644
index 0000000..483d21e
--- /dev/null
+++ b/drivers/common/cnxk/roc_npc_parse.c
@@ -0,0 +1,703 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Marvell.
+ */
+#include "roc_api.h"
+#include "roc_priv.h"
+
+const struct roc_npc_item_info *
+npc_parse_skip_void_and_any_items(const struct roc_npc_item_info *pattern)
+{
+ while ((pattern->type == ROC_NPC_ITEM_TYPE_VOID) ||
+ (pattern->type == ROC_NPC_ITEM_TYPE_ANY))
+ pattern++;
+
+ return pattern;
+}
+
+int
+npc_parse_meta_items(struct npc_parse_state *pst)
+{
+ PLT_SET_USED(pst);
+ return 0;
+}
+
+int
+npc_parse_cpt_hdr(struct npc_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt;
+ int rc;
+
+ /* Identify the pattern type into lid, lt */
+ if (pst->pattern->type != ROC_NPC_ITEM_TYPE_CPT_HDR)
+ return 0;
+
+ lid = NPC_LID_LA;
+ lt = NPC_LT_LA_CPT_HDR;
+ info.hw_hdr_len = 0;
+
+ /* Prepare for parsing the item */
+ info.hw_mask = &hw_mask;
+ info.len = pst->pattern->size;
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Basic validation of item parameters */
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc)
+ return rc;
+
+ /* Update pst if not validate only? clash check? */
+ return npc_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+int
+npc_parse_higig2_hdr(struct npc_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt;
+ int rc;
+
+ /* Identify the pattern type into lid, lt */
+ if (pst->pattern->type != ROC_NPC_ITEM_TYPE_HIGIG2)
+ return 0;
+
+ lid = NPC_LID_LA;
+ lt = NPC_LT_LA_HIGIG2_ETHER;
+ info.hw_hdr_len = 0;
+
+ if (pst->flow->nix_intf == NIX_INTF_TX) {
+ lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
+ info.hw_hdr_len = NPC_IH_LENGTH;
+ }
+
+ /* Prepare for parsing the item */
+ info.hw_mask = &hw_mask;
+ info.len = pst->pattern->size;
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Basic validation of item parameters */
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc)
+ return rc;
+
+ /* Update pst if not validate only? clash check? */
+ return npc_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+int
+npc_parse_la(struct npc_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt;
+ int rc;
+
+ /* Identify the pattern type into lid, lt */
+ if (pst->pattern->type != ROC_NPC_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LA;
+ lt = NPC_LT_LA_ETHER;
+ info.hw_hdr_len = 0;
+
+ if (pst->flow->nix_intf == NIX_INTF_TX) {
+ lt = NPC_LT_LA_IH_NIX_ETHER;
+ info.hw_hdr_len = NPC_IH_LENGTH;
+ if (pst->npc->switch_header_type == ROC_PRIV_FLAGS_HIGIG) {
+ lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
+ info.hw_hdr_len += NPC_HIGIG2_LENGTH;
+ }
+ } else {
+ if (pst->npc->switch_header_type == ROC_PRIV_FLAGS_HIGIG) {
+ lt = NPC_LT_LA_HIGIG2_ETHER;
+ info.hw_hdr_len = NPC_HIGIG2_LENGTH;
+ }
+ }
+
+ /* Prepare for parsing the item */
+ info.hw_mask = &hw_mask;
+ info.len = pst->pattern->size;
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ /* Basic validation of item parameters */
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc)
+ return rc;
+
+ /* Update pst if not validate only? clash check? */
+ return npc_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+int
+npc_parse_lb(struct npc_parse_state *pst)
+{
+ const struct roc_npc_item_info *pattern = pst->pattern;
+ const struct roc_npc_item_info *last_pattern;
+ char hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.def_mask = NULL;
+ info.hw_hdr_len = NPC_TPID_LENGTH;
+
+ lid = NPC_LID_LB;
+ lflags = 0;
+ last_pattern = pattern;
+
+ if (pst->pattern->type == ROC_NPC_ITEM_TYPE_VLAN) {
+ /* An RTE VLAN is either 802.1q or 802.1ad; this maps
+ * to either CTAG or STAG. We decide based on the
+ * number of VLANs present. Matching is supported on
+ * the first tag only.
+ */
+ info.hw_mask = NULL;
+ info.len = pst->pattern->size;
+
+ pattern = pst->pattern;
+ while (pattern->type == ROC_NPC_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+
+ /* Basic validation of Second/Third vlan item */
+ if (nr_vlans > 1) {
+ rc = npc_parse_item_basic(pattern, &info);
+ if (rc != 0)
+ return rc;
+ }
+ last_pattern = pattern;
+ pattern++;
+ pattern = npc_parse_skip_void_and_any_items(pattern);
+ }
+
+ switch (nr_vlans) {
+ case 1:
+ lt = NPC_LT_LB_CTAG;
+ break;
+ case 2:
+ lt = NPC_LT_LB_STAG_QINQ;
+ lflags = NPC_F_STAG_CTAG;
+ break;
+ case 3:
+ lt = NPC_LT_LB_STAG_QINQ;
+ lflags = NPC_F_STAG_STAG_CTAG;
+ break;
+ default:
+ return NPC_ERR_PATTERN_NOTSUP;
+ }
+ } else if (pst->pattern->type == ROC_NPC_ITEM_TYPE_E_TAG) {
+ /* We can support ETAG and accept a subsequent CTAG,
+ * but without any matching support for it.
+ */
+ lt = NPC_LT_LB_ETAG;
+ lflags = 0;
+
+ last_pattern = pst->pattern;
+ pattern = npc_parse_skip_void_and_any_items(pst->pattern + 1);
+ if (pattern->type == ROC_NPC_ITEM_TYPE_VLAN) {
+ /* set supported mask to NULL for vlan tag */
+ info.hw_mask = NULL;
+ info.len = pattern->size;
+ rc = npc_parse_item_basic(pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ lflags = NPC_F_ETAG_CTAG;
+ last_pattern = pattern;
+ }
+ info.len = pattern->size;
+ } else if (pst->pattern->type == ROC_NPC_ITEM_TYPE_QINQ) {
+ info.hw_mask = NULL;
+ info.len = pst->pattern->size;
+ lt = NPC_LT_LB_STAG_QINQ;
+ lflags = NPC_F_STAG_CTAG;
+ } else {
+ return 0;
+ }
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ /* Point pattern to last item consumed */
+ pst->pattern = last_pattern;
+ return npc_update_parse_state(pst, &info, lid, lt, lflags);
+}
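The VLAN-stack classification above reduces to a small mapping from tag count to layer type and flags. The sketch below restates that switch as a standalone helper; the enum values and the `vlan_stack_classify` name are illustrative stand-ins, not the real NPC hardware encodings.

```c
#include <assert.h>

/* Stand-in values for the NPC layer types and flags; illustrative only. */
enum { LT_LB_CTAG = 1, LT_LB_STAG_QINQ = 2 };
enum { F_NONE = 0, F_STAG_CTAG = 1, F_STAG_STAG_CTAG = 2 };
#define ERR_PATTERN_NOTSUP (-1)

/* Map the number of stacked VLAN tags (counted by the parser loop)
 * to a layer type and layer flags, as in the switch statement above. */
static int
vlan_stack_classify(int nr_vlans, int *lt, int *lflags)
{
	*lflags = F_NONE;
	switch (nr_vlans) {
	case 1:
		*lt = LT_LB_CTAG;	/* single 802.1Q tag */
		return 0;
	case 2:
		*lt = LT_LB_STAG_QINQ;	/* S-tag + C-tag */
		*lflags = F_STAG_CTAG;
		return 0;
	case 3:
		*lt = LT_LB_STAG_QINQ;	/* S-tag + S-tag + C-tag */
		*lflags = F_STAG_STAG_CTAG;
		return 0;
	default:
		return ERR_PATTERN_NOTSUP;	/* 0 or >3 tags unsupported */
	}
}
```

Note that two and three tags share the same layer type (QinQ) and are distinguished only by the flag, which is exactly why the parser tracks `lflags` separately from `lt`.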
+
+static int
+npc_parse_mpls_label_stack(struct npc_parse_state *pst, int *flag)
+{
+ uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS, NPC_F_MPLS_3_LABELS,
+ NPC_F_MPLS_4_LABELS};
+ const struct roc_npc_item_info *pattern = pst->pattern;
+ struct npc_parse_item_info info;
+ int nr_labels = 0;
+ int rc;
+
+ /*
+ * pst->pattern points to first MPLS label. We only check
+ * that subsequent labels do not have anything to match.
+ */
+ info.hw_mask = NULL;
+ info.len = pattern->size;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ info.def_mask = NULL;
+
+ while (pattern->type == ROC_NPC_ITEM_TYPE_MPLS) {
+ nr_labels++;
+
+ /* Basic validation of the second/third/fourth MPLS item */
+ if (nr_labels > 1) {
+ rc = npc_parse_item_basic(pattern, &info);
+ if (rc != 0)
+ return rc;
+ }
+ pst->last_pattern = pattern;
+ pattern++;
+ pattern = npc_parse_skip_void_and_any_items(pattern);
+ }
+
+ if (nr_labels < 1 || nr_labels > 4)
+ return NPC_ERR_PATTERN_NOTSUP;
+
+ *flag = flag_list[nr_labels - 1];
+ return 0;
+}
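The label-count-to-flag selection in `npc_parse_mpls_label_stack()` is a bounds-checked table lookup. A minimal restatement, with placeholder flag values standing in for the `NPC_F_MPLS_*` constants:

```c
#include <assert.h>

/* Placeholder flag values; the real NPC_F_MPLS_* encodings differ. */
enum { F_MPLS_1 = 0, F_MPLS_2 = 1, F_MPLS_3 = 2, F_MPLS_4 = 3 };
#define ERR_PATTERN_NOTSUP (-1)

/* Translate a stacked-label count into the corresponding flag,
 * mirroring the flag_list[] indexing in the parser above. */
static int
mpls_labels_to_flag(int nr_labels, int *flag)
{
	static const int flag_list[] = { F_MPLS_1, F_MPLS_2, F_MPLS_3,
					 F_MPLS_4 };

	/* The parser supports between one and four stacked labels. */
	if (nr_labels < 1 || nr_labels > 4)
		return ERR_PATTERN_NOTSUP;

	*flag = flag_list[nr_labels - 1];
	return 0;
}
```

The bounds check must precede the array index, as in the driver, so an unsupported label count can never read past `flag_list[]`.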
+
+static int
+npc_parse_mpls(struct npc_parse_state *pst, int lid)
+{
+ /* Find number of MPLS labels */
+ uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lt, lflags;
+ int rc;
+
+ lflags = 0;
+
+ if (lid == NPC_LID_LC)
+ lt = NPC_LT_LC_MPLS;
+ else if (lid == NPC_LID_LD)
+ lt = NPC_LT_LD_TU_MPLS_IN_IP;
+ else
+ lt = NPC_LT_LE_TU_MPLS_IN_UDP;
+
+ /* Prepare for parsing the first item */
+ info.hw_mask = &hw_mask;
+ info.len = pst->pattern->size;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ /*
+ * Parse for more labels.
+ * This sets lflags and pst->last_pattern correctly.
+ */
+ rc = npc_parse_mpls_label_stack(pst, &lflags);
+ if (rc != 0)
+ return rc;
+
+ pst->tunnel = 1;
+ pst->pattern = pst->last_pattern;
+
+ return npc_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+static inline void
+npc_check_lc_ip_tunnel(struct npc_parse_state *pst)
+{
+ const struct roc_npc_item_info *pattern = pst->pattern + 1;
+
+ pattern = npc_parse_skip_void_and_any_items(pattern);
+ if (pattern->type == ROC_NPC_ITEM_TYPE_MPLS ||
+ pattern->type == ROC_NPC_ITEM_TYPE_IPV4 ||
+ pattern->type == ROC_NPC_ITEM_TYPE_IPV6)
+ pst->tunnel = 1;
+}
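The look-ahead in `npc_check_lc_ip_tunnel()` skips VOID/ANY pattern items and then tests whether the next real item starts a tunnel. The sketch below models that logic over a simplified item-type array; the enum names are stand-ins for the `ROC_NPC_ITEM_TYPE_*` values.

```c
#include <assert.h>

/* Simplified stand-ins for the ROC_NPC_ITEM_TYPE_* item types. */
enum item_type { IT_END, IT_VOID, IT_ANY, IT_MPLS, IT_IPV4, IT_IPV6, IT_UDP };

/* Advance past VOID and ANY items, as
 * npc_parse_skip_void_and_any_items() does. */
static const enum item_type *
skip_void_and_any(const enum item_type *p)
{
	while (*p == IT_VOID || *p == IT_ANY)
		p++;
	return p;
}

/* An outer IP header is a tunnel if the next non-void item is MPLS
 * or another IP header (IP-in-IP). */
static int
is_ip_tunnel(const enum item_type *pattern)
{
	const enum item_type *p = skip_void_and_any(pattern + 1);

	return *p == IT_MPLS || *p == IT_IPV4 || *p == IT_IPV6;
}
```

Setting `pst->tunnel` here is what later steers `npc_parse_lf()`/`lg()`/`lh()` to match the inner headers instead of returning early.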
+
+int
+npc_parse_lc(struct npc_parse_state *pst)
+{
+ uint8_t hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt;
+ int rc;
+
+ if (pst->pattern->type == ROC_NPC_ITEM_TYPE_MPLS)
+ return npc_parse_mpls(pst, NPC_LID_LC);
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LC;
+
+ switch (pst->pattern->type) {
+ case ROC_NPC_ITEM_TYPE_IPV4:
+ lt = NPC_LT_LC_IP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_IPV6:
+ lid = NPC_LID_LC;
+ lt = NPC_LT_LC_IP6;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_ARP_ETH_IPV4:
+ lt = NPC_LT_LC_ARP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_IPV6_EXT:
+ lid = NPC_LID_LC;
+ lt = NPC_LT_LC_IP6_EXT;
+ info.len = pst->pattern->size;
+ info.hw_hdr_len = 40;
+ break;
+ case ROC_NPC_ITEM_TYPE_L3_CUSTOM:
+ lt = NPC_LT_LC_CUSTOM0;
+ info.len = pst->pattern->size;
+ break;
+ default:
+ /* No match at this layer */
+ return 0;
+ }
+
+ /* Identify whether this IP header tunnels MPLS or IPv4/v6 */
+ npc_check_lc_ip_tunnel(pst);
+
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ return npc_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+int
+npc_parse_ld(struct npc_parse_state *pst)
+{
+ char hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt, lflags;
+ int rc;
+
+ if (pst->tunnel) {
+ /* A tunnel (MPLS or IP-in-IP) has already been
+ * parsed, so subsequent TCP/UDP etc. are matched as
+ * their tunneled versions in later layers. Skip this
+ * layer, except for tunneled MPLS; if LC itself is
+ * MPLS, all stacked MPLS labels were already skipped.
+ */
+ if (pst->pattern->type == ROC_NPC_ITEM_TYPE_MPLS)
+ return npc_parse_mpls(pst, NPC_LID_LD);
+ return 0;
+ }
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ info.hw_hdr_len = 0;
+
+ lid = NPC_LID_LD;
+ lflags = 0;
+
+ switch (pst->pattern->type) {
+ case ROC_NPC_ITEM_TYPE_ICMP:
+ if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
+ lt = NPC_LT_LD_ICMP6;
+ else
+ lt = NPC_LT_LD_ICMP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_UDP:
+ lt = NPC_LT_LD_UDP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_IGMP:
+ lt = NPC_LT_LD_IGMP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_TCP:
+ lt = NPC_LT_LD_TCP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LD_SCTP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_GRE:
+ lt = NPC_LT_LD_GRE;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_GRE_KEY:
+ lt = NPC_LT_LD_GRE;
+ info.len = pst->pattern->size;
+ info.hw_hdr_len = 4;
+ break;
+ case ROC_NPC_ITEM_TYPE_NVGRE:
+ lt = NPC_LT_LD_NVGRE;
+ lflags = NPC_F_GRE_NVGRE;
+ info.len = pst->pattern->size;
+ /* Further IP/Ethernet are parsed as tunneled */
+ pst->tunnel = 1;
+ break;
+ default:
+ return 0;
+ }
+
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ return npc_update_parse_state(pst, &info, lid, lt, lflags);
+}
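One detail worth calling out in `npc_parse_ld()` is that the ICMP layer type depends on what layer C matched: an IPv6 outer header selects ICMPv6. A minimal restatement of that decision, with illustrative constant values in place of the real NPC encodings:

```c
#include <assert.h>

/* Illustrative stand-ins for the NPC layer-type encodings. */
enum { LT_LC_IP = 1, LT_LC_IP6 = 2 };
enum { LT_LD_ICMP = 10, LT_LD_ICMP6 = 11 };

/* Choose the ICMP variant for layer D from the layer-C result,
 * as the ICMP case in npc_parse_ld() does. */
static int
icmp_layer_type(int lc_lt)
{
	/* ICMPv6 rides on an IPv6 outer header; anything else is
	 * treated as ICMPv4. */
	return (lc_lt == LT_LC_IP6) ? LT_LD_ICMP6 : LT_LD_ICMP;
}
```

This is why `pst->lt[]` records the layer type per layer ID: later layers consult earlier results rather than re-inspecting the pattern.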
+
+int
+npc_parse_le(struct npc_parse_state *pst)
+{
+ const struct roc_npc_item_info *pattern = pst->pattern;
+ char hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt, lflags;
+ int rc;
+
+ if (pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type == ROC_NPC_ITEM_TYPE_MPLS)
+ return npc_parse_mpls(pst, NPC_LID_LE);
+
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_mask = NULL;
+ info.def_mask = NULL;
+ info.len = 0;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LE;
+ lflags = 0;
+
+ /* Ensure we are not matching anything in UDP */
+ rc = npc_parse_item_basic(pattern, &info);
+ if (rc)
+ return rc;
+
+ info.hw_mask = &hw_mask;
+ pattern = npc_parse_skip_void_and_any_items(pattern);
+ switch (pattern->type) {
+ case ROC_NPC_ITEM_TYPE_VXLAN:
+ lflags = NPC_F_UDP_VXLAN;
+ info.len = pattern->size;
+ lt = NPC_LT_LE_VXLAN;
+ break;
+ case ROC_NPC_ITEM_TYPE_GTPC:
+ lflags = NPC_F_UDP_GTP_GTPC;
+ info.len = pattern->size;
+ lt = NPC_LT_LE_GTPC;
+ break;
+ case ROC_NPC_ITEM_TYPE_GTPU:
+ lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
+ info.len = pattern->size;
+ lt = NPC_LT_LE_GTPU;
+ break;
+ case ROC_NPC_ITEM_TYPE_GENEVE:
+ lflags = NPC_F_UDP_GENEVE;
+ info.len = pattern->size;
+ lt = NPC_LT_LE_GENEVE;
+ break;
+ case ROC_NPC_ITEM_TYPE_VXLAN_GPE:
+ lflags = NPC_F_UDP_VXLANGPE;
+ info.len = pattern->size;
+ lt = NPC_LT_LE_VXLANGPE;
+ break;
+ case ROC_NPC_ITEM_TYPE_ESP:
+ lt = NPC_LT_LE_ESP;
+ info.len = pattern->size;
+ break;
+ default:
+ return 0;
+ }
+
+ pst->tunnel = 1;
+
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = npc_parse_item_basic(pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ return npc_update_parse_state(pst, &info, lid, lt, lflags);
+}
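The switch in `npc_parse_le()` is effectively a lookup from UDP-encapsulated tunnel item to a (layer type, flag) pair. The table-driven sketch below restates it; the enum names, `le_tunnel_lookup` helper, and the numeric values are all illustrative, not the driver's encodings.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the ROC_NPC_ITEM_TYPE_* tunnel items. */
enum item_type { IT_VXLAN, IT_GTPC, IT_GTPU, IT_GENEVE, IT_VXLAN_GPE,
		 IT_ESP, IT_OTHER };

struct tunnel_map {
	enum item_type type;
	int lt;		/* layer type for LE (placeholder values) */
	int lflags;	/* flag selecting the tunnel variant */
};

static const struct tunnel_map le_tunnels[] = {
	{ IT_VXLAN,     1, 0x01 }, { IT_GTPC,   2, 0x02 },
	{ IT_GTPU,      3, 0x04 }, { IT_GENEVE, 4, 0x08 },
	{ IT_VXLAN_GPE, 5, 0x10 }, { IT_ESP,    6, 0x00 },
};

/* Return the matching entry, or NULL when the item is not a known
 * UDP-encapsulated tunnel (the parser then matches nothing at LE). */
static const struct tunnel_map *
le_tunnel_lookup(enum item_type type)
{
	size_t i;

	for (i = 0; i < sizeof(le_tunnels) / sizeof(le_tunnels[0]); i++)
		if (le_tunnels[i].type == type)
			return &le_tunnels[i];
	return NULL;
}
```

Whichever form is used, every recognized entry also sets `pst->tunnel = 1`, so the inner-header layers LF..LH become eligible to match.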
+
+int
+npc_parse_lf(struct npc_parse_state *pst)
+{
+ const struct roc_npc_item_info *pattern, *last_pattern;
+ char hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt, lflags;
+ int nr_vlans = 0;
+ int rc;
+
+ /* This layer is matched only when a tunneling protocol is present */
+ if (!pst->tunnel)
+ return 0;
+
+ if (pst->pattern->type != ROC_NPC_ITEM_TYPE_ETH)
+ return 0;
+
+ lid = NPC_LID_LF;
+ lt = NPC_LT_LF_TU_ETHER;
+ lflags = 0;
+
+ /* No match support for vlan tags */
+ info.hw_mask = NULL;
+ info.len = pst->pattern->size;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+
+ /* Look ahead for VLAN tags. These can be detected,
+ * but no data matching is available on them.
+ */
+ last_pattern = pst->pattern;
+ pattern = pst->pattern + 1;
+ pattern = npc_parse_skip_void_and_any_items(pattern);
+ while (pattern->type == ROC_NPC_ITEM_TYPE_VLAN) {
+ nr_vlans++;
+ last_pattern = pattern;
+ pattern++;
+ pattern = npc_parse_skip_void_and_any_items(pattern);
+ }
+ switch (nr_vlans) {
+ case 0:
+ break;
+ case 1:
+ lflags = NPC_F_TU_ETHER_CTAG;
+ break;
+ case 2:
+ lflags = NPC_F_TU_ETHER_STAG_CTAG;
+ break;
+ default:
+ return NPC_ERR_PATTERN_NOTSUP;
+ }
+
+ info.hw_mask = &hw_mask;
+ info.len = pst->pattern->size;
+ info.hw_hdr_len = 0;
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ info.spec = NULL;
+ info.mask = NULL;
+
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ pst->pattern = last_pattern;
+
+ return npc_update_parse_state(pst, &info, lid, lt, lflags);
+}
+
+int
+npc_parse_lg(struct npc_parse_state *pst)
+{
+ char hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LG;
+
+ if (pst->pattern->type == ROC_NPC_ITEM_TYPE_IPV4) {
+ lt = NPC_LT_LG_TU_IP;
+ info.len = pst->pattern->size;
+ } else if (pst->pattern->type == ROC_NPC_ITEM_TYPE_IPV6) {
+ lt = NPC_LT_LG_TU_IP6;
+ info.len = pst->pattern->size;
+ } else {
+ /* There is no tunneled IP header */
+ return 0;
+ }
+
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ return npc_update_parse_state(pst, &info, lid, lt, 0);
+}
+
+int
+npc_parse_lh(struct npc_parse_state *pst)
+{
+ char hw_mask[NPC_MAX_EXTRACT_HW_LEN];
+ struct npc_parse_item_info info;
+ int lid, lt;
+ int rc;
+
+ if (!pst->tunnel)
+ return 0;
+
+ info.hw_mask = &hw_mask;
+ info.spec = NULL;
+ info.mask = NULL;
+ info.hw_hdr_len = 0;
+ lid = NPC_LID_LH;
+
+ switch (pst->pattern->type) {
+ case ROC_NPC_ITEM_TYPE_UDP:
+ lt = NPC_LT_LH_TU_UDP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_TCP:
+ lt = NPC_LT_LH_TU_TCP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_SCTP:
+ lt = NPC_LT_LH_TU_SCTP;
+ info.len = pst->pattern->size;
+ break;
+ case ROC_NPC_ITEM_TYPE_ESP:
+ lt = NPC_LT_LH_TU_ESP;
+ info.len = pst->pattern->size;
+ break;
+ default:
+ return 0;
+ }
+
+ npc_get_hw_supp_mask(pst, &info, lid, lt);
+ rc = npc_parse_item_basic(pst->pattern, &info);
+ if (rc != 0)
+ return rc;
+
+ return npc_update_parse_state(pst, &info, lid, lt, 0);
+}
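Taken together, `npc_parse_lc()` through `npc_parse_lh()` form a pipeline: each stage consumes pattern items in order, the first non-zero return aborts, and the `tunnel` flag set by an outer stage gates the inner stages. The sketch below models that composition with a reduced three-stage pipeline; the state struct and stage bodies are simplified stand-ins, not the driver's actual handlers.

```c
#include <assert.h>

/* Reduced model of struct npc_parse_state. */
struct parse_state {
	int tunnel;	/* set by an outer layer when a tunnel is seen */
	int layers;	/* bitmap of layers that matched, for inspection */
};

static int parse_lc(struct parse_state *pst) { pst->layers |= 0x1; return 0; }

static int parse_ld(struct parse_state *pst)
{
	pst->layers |= 0x2;
	pst->tunnel = 1;	/* e.g. NVGRE marks inner layers tunneled */
	return 0;
}

static int parse_lf(struct parse_state *pst)
{
	if (!pst->tunnel)	/* inner Ethernet exists only in a tunnel */
		return 0;
	pst->layers |= 0x4;
	return 0;
}

/* Run the stages in layer order; the first error aborts parsing,
 * as each rc check in the driver does. */
static int
run_pipeline(struct parse_state *pst)
{
	int (*stages[])(struct parse_state *) = { parse_lc, parse_ld,
						  parse_lf };
	unsigned int i;
	int rc;

	for (i = 0; i < sizeof(stages) / sizeof(stages[0]); i++) {
		rc = stages[i](pst);
		if (rc != 0)
			return rc;
	}
	return 0;
}
```

Because every stage returns 0 both on success and on "nothing for me here", a pattern that skips a layer simply falls through to the next stage without error.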
diff --git a/drivers/common/cnxk/roc_npc_priv.h b/drivers/common/cnxk/roc_npc_priv.h
index ef1e991..23250fa 100644
--- a/drivers/common/cnxk/roc_npc_priv.h
+++ b/drivers/common/cnxk/roc_npc_priv.h
@@ -402,11 +402,24 @@ void npc_get_hw_supp_mask(struct npc_parse_state *pst,
struct npc_parse_item_info *info, int lid, int lt);
int npc_parse_item_basic(const struct roc_npc_item_info *item,
struct npc_parse_item_info *info);
+int npc_parse_meta_items(struct npc_parse_state *pst);
+int npc_parse_higig2_hdr(struct npc_parse_state *pst);
+int npc_parse_cpt_hdr(struct npc_parse_state *pst);
+int npc_parse_la(struct npc_parse_state *pst);
+int npc_parse_lb(struct npc_parse_state *pst);
+int npc_parse_lc(struct npc_parse_state *pst);
+int npc_parse_ld(struct npc_parse_state *pst);
+int npc_parse_le(struct npc_parse_state *pst);
+int npc_parse_lf(struct npc_parse_state *pst);
+int npc_parse_lg(struct npc_parse_state *pst);
+int npc_parse_lh(struct npc_parse_state *pst);
int npc_mcam_fetch_kex_cfg(struct npc *npc);
int npc_check_preallocated_entry_cache(struct mbox *mbox,
struct roc_npc_flow *flow,
struct npc *npc);
int npc_flow_free_all_resources(struct npc *npc);
+const struct roc_npc_item_info *
+npc_parse_skip_void_and_any_items(const struct roc_npc_item_info *pattern);
int npc_program_mcam(struct npc *npc, struct npc_parse_state *pst,
bool mcam_alloc);
uint64_t npc_get_kex_capability(struct npc *npc);
--
2.8.4
* [dpdk-dev] [PATCH 44/52] common/cnxk: add npc init and fini support
2021-03-05 13:38 [dpdk-dev] [PATCH 00/52] Add Marvell CNXK common driver Nithin Dabilpuram
` (42 preceding siblings ...)
2021-03-05 13:39 `