* [dpdk-dev] [PATCH 00/38] net/sfc: support port representors
@ 2021-08-27 6:56 Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
` (38 more replies)
0 siblings, 39 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev
Support port representors on SN1000 SmartNICs, including:
- a new representor syntax with controller, PF and VF specification
- PF representors
- two controllers: host and embedded SoC
The patch series depends on [1] (including a build dependency), since
representor info is provided on the admin PF only.
[1] https://patches.dpdk.org/project/dpdk/list/?series=18373
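With the new syntax, representors are requested per controller/PF/VF via
device arguments along the lines of (illustrative example only; the exact
accepted forms are documented in the sfc_efx.rst update in this series):
  -a <PCI BDF>,representor=[c0pf0vf0,c0pf0vf1]
which would request representors for VF 0 and VF 1 of PF 0 on controller 0.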
Andrew Rybchenko (2):
common/sfc_efx/base: update MCDI headers
common/sfc_efx/base: update EF100 registers definitions
Igor Romanov (23):
net/sfc: add switch mode device argument
net/sfc: insert switchdev mode MAE rules
common/sfc_efx/base: add an API to get mport ID by selector
net/sfc: support EF100 Tx override prefix
net/sfc: add representors proxy infrastructure
net/sfc: reserve TxQ and RxQ for port representors
net/sfc: move adapter state enum to separate header
net/sfc: add port representors infrastructure
common/sfc_efx/base: add filter ingress mport matching field
common/sfc_efx/base: add API to get mport selector by ID
common/sfc_efx/base: add mport alias MCDI wrappers
net/sfc: add representor proxy port API
net/sfc: implement representor queue setup and release
net/sfc: implement representor RxQ start/stop
net/sfc: implement representor TxQ start/stop
net/sfc: implement port representor start and stop
net/sfc: implement port representor link update
net/sfc: support multiple device probe
net/sfc: implement representor Tx routine
net/sfc: use xword type for EF100 Rx prefix
net/sfc: handle ingress m-port in EF100 Rx prefix
net/sfc: implement representor Rx routine
net/sfc: add simple port representor statistics
Viacheslav Galaktionov (13):
common/sfc_efx/base: allow creating invalid mport selectors
net/sfc: free MAE lock once switch domain is assigned
common/sfc_efx/base: add multi-host function M-port selector
common/sfc_efx/base: retrieve function interfaces for VNICs
common/sfc_efx/base: add a means to read MAE mport journal
common/sfc_efx/base: allow getting VNIC MCDI client handles
net/sfc: maintain controller to EFX interface mapping
net/sfc: store PCI address for represented entities
net/sfc: include controller and port in representor name
net/sfc: support new representor parameter syntax
net/sfc: use switch port ID as representor ID
net/sfc: implement the representor info API
net/sfc: update comment about representor support
doc/guides/nics/sfc_efx.rst | 24 +
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/common/sfc_efx/base/ef10_filter.c | 11 +-
drivers/common/sfc_efx/base/ef10_impl.h | 3 +-
drivers/common/sfc_efx/base/ef10_nic.c | 4 +-
drivers/common/sfc_efx/base/efx.h | 155 ++
drivers/common/sfc_efx/base/efx_impl.h | 6 +
drivers/common/sfc_efx/base/efx_mae.c | 506 +++++-
drivers/common/sfc_efx/base/efx_mcdi.c | 128 +-
drivers/common/sfc_efx/base/efx_mcdi.h | 54 +
drivers/common/sfc_efx/base/efx_regs_ef100.h | 106 +-
drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++-
drivers/common/sfc_efx/base/rhead_rx.c | 2 +-
drivers/common/sfc_efx/version.map | 9 +
drivers/net/sfc/meson.build | 2 +
drivers/net/sfc/sfc.c | 151 +-
drivers/net/sfc/sfc.h | 77 +-
drivers/net/sfc/sfc_dp.c | 46 +
drivers/net/sfc/sfc_dp.h | 25 +
drivers/net/sfc/sfc_ef100_rx.c | 36 +-
drivers/net/sfc/sfc_ef100_tx.c | 25 +
drivers/net/sfc/sfc_ethdev.c | 802 ++++++++-
drivers/net/sfc/sfc_ethdev_state.h | 72 +
drivers/net/sfc/sfc_ev.h | 56 +-
drivers/net/sfc/sfc_flow.c | 10 +-
drivers/net/sfc/sfc_intr.c | 12 +-
drivers/net/sfc/sfc_kvargs.c | 2 +
drivers/net/sfc/sfc_kvargs.h | 10 +
drivers/net/sfc/sfc_mae.c | 218 ++-
drivers/net/sfc/sfc_mae.h | 56 +
drivers/net/sfc/sfc_port.c | 2 +-
drivers/net/sfc/sfc_repr.c | 1107 ++++++++++++
drivers/net/sfc/sfc_repr.h | 44 +
drivers/net/sfc/sfc_repr_proxy.c | 1661 ++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 147 ++
drivers/net/sfc/sfc_repr_proxy_api.h | 47 +
drivers/net/sfc/sfc_sriov.c | 9 +-
drivers/net/sfc/sfc_switch.c | 207 ++-
drivers/net/sfc/sfc_switch.h | 56 +
drivers/net/sfc/sfc_tx.c | 42 +-
drivers/net/sfc/sfc_tx.h | 1 +
41 files changed, 6914 insertions(+), 234 deletions(-)
create mode 100644 drivers/net/sfc/sfc_ethdev_state.h
create mode 100644 drivers/net/sfc/sfc_repr.c
create mode 100644 drivers/net/sfc/sfc_repr.h
create mode 100644 drivers/net/sfc/sfc_repr_proxy.c
create mode 100644 drivers/net/sfc/sfc_repr_proxy.h
create mode 100644 drivers/net/sfc/sfc_repr_proxy_api.h
--
2.30.2
* [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 02/38] common/sfc_efx/base: update EF100 registers definitions Andrew Rybchenko
` (37 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev
Pick up new FW interface definitions.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++++++++-
1 file changed, 1176 insertions(+), 35 deletions(-)
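Among the additions below is DSFP module addressing for
MC_CMD_GET_PHY_MEDIA_INFO, which packs a 16-bit bank and a 16-bit page into
the single PAGE dword. A minimal sketch of the packing, based only on the
DSFP_PAGE/DSFP_BANK LBN/WIDTH definitions added in this patch (the helper
name is illustrative):

  #include <stdint.h>

  /*
   * Pack a DSFP BANK:PAGE pair into the MC_CMD_GET_PHY_MEDIA_INFO PAGE dword.
   * DSFP_PAGE occupies bits 0..15 and DSFP_BANK bits 16..31; a BANK:PAGE of
   * 0xffff:0xffff retrieves the lower (unbanked) page.
   */
  static inline uint32_t
  dsfp_media_info_page(uint16_t bank, uint16_t page)
  {
          return ((uint32_t)bank << 16) | page;
  }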
diff --git a/drivers/common/sfc_efx/base/efx_regs_mcdi.h b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
index a3c9f076ec..2daf825a36 100644
--- a/drivers/common/sfc_efx/base/efx_regs_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
@@ -492,6 +492,24 @@
*/
#define MAE_FIELD_SUPPORTED_MATCH_MASK 0x5
+/* MAE_CT_VNI_MODE enum: Controls the layout of the VNI input to the conntrack
+ * lookup. (Values are not arbitrary - constrained by table access ABI.)
+ */
+/* enum: The VNI input to the conntrack lookup will be zero. */
+#define MAE_CT_VNI_MODE_ZERO 0x0
+/* enum: The VNI input to the conntrack lookup will be the VNI (VXLAN/Geneve)
+ * or VSID (NVGRE) field from the packet.
+ */
+#define MAE_CT_VNI_MODE_VNI 0x1
+/* enum: The VNI input to the conntrack lookup will be the VLAN ID from the
+ * outermost VLAN tag (in bottom 12 bits; top 12 bits zero).
+ */
+#define MAE_CT_VNI_MODE_1VLAN 0x2
+/* enum: The VNI input to the conntrack lookup will be the VLAN IDs from both
+ * VLAN tags (outermost in bottom 12 bits, innermost in top 12 bits).
+ */
+#define MAE_CT_VNI_MODE_2VLAN 0x3
+
/* MAE_FIELD enum: NB: this enum shares namespace with the support status enum.
*/
/* enum: Source mport upon entering the MAE. */
@@ -617,7 +635,8 @@
/* MAE_MCDI_ENCAP_TYPE enum: Encapsulation type. Defines how the payload will
* be parsed to an inner frame. Other values are reserved. Unknown values
- * should be treated same as NONE.
+ * should be treated same as NONE. (Values are not arbitrary - constrained by
+ * table access ABI.)
*/
#define MAE_MCDI_ENCAP_TYPE_NONE 0x0 /* enum */
/* enum: Don't assume enum aligns with support bitmask... */
@@ -634,6 +653,18 @@
/* enum: Selects the virtual NIC plugged into the MAE switch */
#define MAE_MPORT_END_VNIC 0x2
+/* MAE_COUNTER_TYPE enum: The datapath maintains several sets of counters, each
+ * being associated with a different table. Note that the same counter ID may
+ * be allocated by different counter blocks, so e.g. AR counter 42 is different
+ * from CT counter 42. Generation counts are also type-specific. This value is
+ * also present in the header of streaming counter packets, in the IDENTIFIER
+ * field (see packetiser packet format definitions).
+ */
+/* enum: Action Rule counters - can be referenced in AR response. */
+#define MAE_COUNTER_TYPE_AR 0x0
+/* enum: Conntrack counters - can be referenced in CT response. */
+#define MAE_COUNTER_TYPE_CT 0x1
+
/* MCDI_EVENT structuredef: The structure of an MCDI_EVENT on Siena/EF10/EF100
* platforms
*/
@@ -4547,6 +4578,8 @@
#define MC_CMD_MEDIA_BASE_T 0x6
/* enum: QSFP+. */
#define MC_CMD_MEDIA_QSFP_PLUS 0x7
+/* enum: DSFP. */
+#define MC_CMD_MEDIA_DSFP 0x8
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_OFST 48
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_LEN 4
/* enum: Native clause 22 */
@@ -7823,11 +7856,16 @@
/***********************************/
/* MC_CMD_GET_PHY_MEDIA_INFO
* Read media-specific data from PHY (e.g. SFP/SFP+ module ID information for
- * SFP+ PHYs). The 'media type' can be found via GET_PHY_CFG
- * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid 'page number' input values, and the
- * output data, are interpreted on a per-type basis. For SFP+: PAGE=0 or 1
+ * SFP+ PHYs). The "media type" can be found via GET_PHY_CFG
+ * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid "page number" input values, and the
+ * output data, are interpreted on a per-type basis. For SFP+, PAGE=0 or 1
* returns a 128-byte block read from module I2C address 0xA0 offset 0 or 0x80.
- * Anything else: currently undefined. Locks required: None. Return code: 0.
+ * For QSFP, PAGE=-1 is the lower (unbanked) page. PAGE=2 is the EEPROM and
+ * PAGE=3 is the module limits. For DSFP, module addressing requires a
+ * "BANK:PAGE". Not every bank has the same number of pages. See the Common
+ * Management Interface Specification (CMIS) for further details. A BANK:PAGE
+ * of "0xffff:0xffff" retrieves the lower (unbanked) page. Locks required -
+ * None. Return code - 0.
*/
#define MC_CMD_GET_PHY_MEDIA_INFO 0x4b
#define MC_CMD_GET_PHY_MEDIA_INFO_MSGSET 0x4b
@@ -7839,6 +7877,12 @@
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_LEN 4
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_OFST 0
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_LEN 4
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_LBN 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_WIDTH 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_WIDTH 16
/* MC_CMD_GET_PHY_MEDIA_INFO_OUT msgresponse */
#define MC_CMD_GET_PHY_MEDIA_INFO_OUT_LENMIN 5
@@ -9350,6 +9394,8 @@
#define NVRAM_PARTITION_TYPE_FPGA_JUMP 0xb08
/* enum: FPGA Validate XCLBIN */
#define NVRAM_PARTITION_TYPE_FPGA_XCLBIN_VALIDATE 0xb09
+/* enum: FPGA XOCL Configuration information */
+#define NVRAM_PARTITION_TYPE_FPGA_XOCL_CONFIG 0xb0a
/* enum: MUM firmware partition */
#define NVRAM_PARTITION_TYPE_MUM_FIRMWARE 0xc00
/* enum: SUC firmware partition (this is intentionally an alias of
@@ -9427,6 +9473,8 @@
#define NVRAM_PARTITION_TYPE_BUNDLE_LOG 0x1e02
/* enum: Partition for Solarflare gPXE bootrom installed via Bundle update. */
#define NVRAM_PARTITION_TYPE_EXPANSION_ROM_INTERNAL 0x1e03
+/* enum: Partition to store ASN.1 format Bundle Signature for checking. */
+#define NVRAM_PARTITION_TYPE_BUNDLE_SIGNATURE 0x1e04
/* enum: Test partition on SmartNIC system microcontroller (SUC) */
#define NVRAM_PARTITION_TYPE_SUC_TEST 0x1f00
/* enum: System microcontroller access to primary FPGA flash. */
@@ -10051,6 +10099,158 @@
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+/* MC_CMD_INIT_EVQ_V3_IN msgrequest: Extended request to specify per-queue
+ * event merge timeouts.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_LEN 556
+/* Size, in entries */
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_OFST 0
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_LEN 4
+/* Desired instance. Must be set to a specific instance, which is a function
+ * local queue index. The calling client must be the currently-assigned user of
+ * this VI (see MC_CMD_SET_VI_USER).
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_LEN 4
+/* The initial timer value. The load value is ignored if the timer mode is DIS.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_OFST 8
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_LEN 4
+/* The reload value is ignored in one-shot modes */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_OFST 12
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_LEN 4
+/* tbd */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_LBN 0
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_LBN 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_LBN 2
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_LBN 3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_LBN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_LBN 5
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_LBN 6
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LBN 7
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_WIDTH 4
+/* enum: All initialisation flags specified by host. */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_MANUAL 0x0
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the lowest latency achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LOW_LATENCY 0x1
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the best throughput achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_THROUGHPUT 0x2
+/* enum: MEDFORD only. Certain initialisation flags may be over-ridden by
+ * firmware based on licenses and firmware variant. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_AUTO 0x3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_LBN 11
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_OFST 20
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_DIS 0x0
+/* enum: Immediate */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_IMMED_START 0x1
+/* enum: Triggered */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_TRIG_START 0x2
+/* enum: Hold-off */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_INT_HLDOFF 0x3
+/* Target EVQ for wakeups if in wakeup mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_LEN 4
+/* Target interrupt if in interrupting mode (note union with target EVQ). Use
+ * MC_CMD_RESOURCE_INSTANCE_ANY unless a specific one required for test
+ * purposes.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_LEN 4
+/* Event Counter Mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_OFST 28
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_DIS 0x0
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RX 0x1
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_TX 0x2
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RXTX 0x3
+/* Event queue packet count threshold. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_OFST 32
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_LEN 4
+/* 64-bit address of 4k of 4k-aligned host memory buffer */
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LEN 8
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LBN 288
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_OFST 40
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LBN 320
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
+/* Receive event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_RX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST 548
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_LEN 4
+/* Transmit event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_TX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST 552
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_LEN 4
+
+/* MC_CMD_INIT_EVQ_V3_OUT msgresponse */
+#define MC_CMD_INIT_EVQ_V3_OUT_LEN 8
+/* Only valid if INTRFLAG was true */
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_OFST 0
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_LEN 4
+/* Actual configuration applied on the card */
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_LBN 0
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_LBN 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_LBN 2
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+
/* QUEUE_CRC_MODE structuredef */
#define QUEUE_CRC_MODE_LEN 1
#define QUEUE_CRC_MODE_MODE_LBN 0
@@ -10256,7 +10456,9 @@
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10360,7 +10562,9 @@
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10493,7 +10697,9 @@
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10639,7 +10845,9 @@
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10878,7 +11086,7 @@
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 0
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM 64
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Flags related to Qbb flow control mode. */
@@ -12228,6 +12436,8 @@
* rules inserted by MC_CMD_VNIC_ENCAP_RULE_ADD. (ef100 and later)
*/
#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_MATCHES 0x5
+/* enum: read the supported encapsulation types for the VNIC */
+#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_TYPES 0x6
/* MC_CMD_GET_PARSER_DISP_INFO_OUT msgresponse */
#define MC_CMD_GET_PARSER_DISP_INFO_OUT_LENMIN 8
@@ -12336,6 +12546,30 @@
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM 61
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM_MCDI2 253
+/* MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT msgresponse: Returns
+ * the supported encapsulation types for the VNIC
+ */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_LEN 8
+/* The op code OP_GET_SUPPORTED_VNIC_ENCAP_TYPES is returned */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_OFST 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_GET_PARSER_DISP_INFO_IN/OP */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+
/***********************************/
/* MC_CMD_PARSER_DISP_RW
@@ -16236,6 +16470,9 @@
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* MC_CMD_GET_CAPABILITIES_V8_OUT msgresponse */
#define MC_CMD_GET_CAPABILITIES_V8_OUT_LEN 160
@@ -16734,6 +16971,9 @@
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17246,6 +17486,9 @@
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17793,6 +18036,9 @@
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -19900,6 +20146,18 @@
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_OFST 4
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_LEN 4
+/* MC_CMD_GET_FUNCTION_INFO_OUT_V2 msgresponse */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN 12
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST 0
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_LEN 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_LEN 4
+/* Values from PCIE_INTERFACE enumeration. For NICs with a single interface, or
+ * in the case of a V1 response, this should be HOST_PRIMARY.
+ */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST 8
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_LEN 4
+
/***********************************/
/* MC_CMD_ENABLE_OFFLINE_BIST
@@ -25682,6 +25940,9 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_LBN 6
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_LBN 7
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_WIDTH 1
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_LBN 7
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_WIDTH 1
@@ -25691,6 +25952,12 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_LBN 9
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_LBN 10
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_LBN 11
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_WIDTH 1
/* MC_CMD_GET_RX_PREFIX_ID_OUT msgresponse */
#define MC_CMD_GET_RX_PREFIX_ID_OUT_LENMIN 8
@@ -25736,9 +26003,12 @@
#define RX_PREFIX_FIELD_INFO_PARTIAL_TSTAMP 0x4 /* enum */
#define RX_PREFIX_FIELD_INFO_RSS_HASH 0x5 /* enum */
#define RX_PREFIX_FIELD_INFO_USER_MARK 0x6 /* enum */
+#define RX_PREFIX_FIELD_INFO_INGRESS_MPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_INGRESS_VPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_CSUM_FRAME 0x8 /* enum */
#define RX_PREFIX_FIELD_INFO_VLAN_STRIP_TCI 0x9 /* enum */
+#define RX_PREFIX_FIELD_INFO_VLAN_STRIPPED 0xa /* enum */
+#define RX_PREFIX_FIELD_INFO_VSWITCH_STATUS 0xb /* enum */
#define RX_PREFIX_FIELD_INFO_TYPE_LBN 24
#define RX_PREFIX_FIELD_INFO_TYPE_WIDTH 8
@@ -26063,6 +26333,10 @@
#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_LINK 0x5
/* enum: Read internal link configuration. */
#define MC_CMD_FPGA_IN_OP_GET_INTERNAL_LINK 0x6
+/* enum: Get MAC statistics of FPGA external port. */
+#define MC_CMD_FPGA_IN_OP_GET_MAC_STATS 0x7
+/* enum: Set configuration on internal FPGA MAC. */
+#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC 0x8
/* MC_CMD_FPGA_OP_GET_VERSION_IN msgrequest: Get the FPGA version string. A
* free-format string is returned in response to this command. Any checks on
@@ -26206,6 +26480,87 @@
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_OFST 4
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_LEN 4
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_IN msgrequest: Get FPGA external port MAC
+ * statistics.
+ */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_LEN 4
+/* Sub-command code. Must be OP_GET_MAC_STATS. */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_LEN 4
+
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_OUT msgresponse */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX 252
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LEN(num) (4+8*(num))
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(len) (((len)-4)/8)
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LBN 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_OFST 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LBN 64
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MINNUM 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM 31
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM_MCDI2 127
+#define MC_CMD_FPGA_MAC_TX_TOTAL_PACKETS 0x0 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_BYTES 0x1 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_PACKETS 0x2 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_BYTES 0x3 /* enum */
+#define MC_CMD_FPGA_MAC_TX_BAD_FCS 0x4 /* enum */
+#define MC_CMD_FPGA_MAC_TX_PAUSE 0x5 /* enum */
+#define MC_CMD_FPGA_MAC_TX_USER_PAUSE 0x6 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_PACKETS 0x7 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_BYTES 0x8 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_PACKETS 0x9 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_BYTES 0xa /* enum */
+#define MC_CMD_FPGA_MAC_RX_BAD_FCS 0xb /* enum */
+#define MC_CMD_FPGA_MAC_RX_PAUSE 0xc /* enum */
+#define MC_CMD_FPGA_MAC_RX_USER_PAUSE 0xd /* enum */
+#define MC_CMD_FPGA_MAC_RX_UNDERSIZE 0xe /* enum */
+#define MC_CMD_FPGA_MAC_RX_OVERSIZE 0xf /* enum */
+#define MC_CMD_FPGA_MAC_RX_FRAMING_ERR 0x10 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_UNCORRECTED_ERRORS 0x11 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_CORRECTED_ERRORS 0x12 /* enum */
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN msgrequest: Configures the internal port
+ * MAC on the FPGA.
+ */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN 20
+/* Sub-command code. Must be OP_SET_INTERNAL_MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_LEN 4
+/* Select which parameters to configure. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_LEN 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_LBN 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_LBN 2
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_WIDTH 1
+/* The MTU to be programmed into the MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST 8
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_LEN 4
+/* Drain Tx FIFO */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_OFST 12
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_LEN 4
+/* flow control configuration. See MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_OFST 16
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_LEN 4
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT msgresponse */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT_LEN 0
+
/***********************************/
/* MC_CMD_EXTERNAL_MAE_GET_LINK_MODE
@@ -26483,6 +26838,12 @@
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_OFST 29
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_LBN 0
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_LBN 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_LBN 2
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_WIDTH 1
/* Only if MATCH_DST_PORT is set. Port number as bytes in network order. */
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_OFST 30
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_LEN 2
@@ -26544,6 +26905,257 @@
#define UUID_NODE_LBN 80
#define UUID_NODE_WIDTH 48
+
+/***********************************/
+/* MC_CMD_PLUGIN_ALLOC
+ * Create a handle to a datapath plugin's extension. This involves finding a
+ * currently-loaded plugin offering the given functionality (as identified by
+ * the UUID) and allocating a handle to track the usage of it. Plugin
+ * functionality is identified by 'extension' rather than any other identifier
+ * so that a single plugin bitfile may offer more than one piece of independent
+ * functionality. If two bitfiles are loaded which both offer the same
+ * extension, then the metadata is interrogated further to determine which is
+ * the newest and that is the one opened. See SF-123625-SW for architectural
+ * detail on datapath plugins.
+ */
+#define MC_CMD_PLUGIN_ALLOC 0x1ad
+#define MC_CMD_PLUGIN_ALLOC_MSGSET 0x1ad
+#undef MC_CMD_0x1ad_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ad_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_ALLOC_IN msgrequest */
+#define MC_CMD_PLUGIN_ALLOC_IN_LEN 24
+/* The functionality requested of the plugin, as a UUID structure */
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN 16
+/* Additional options for opening the handle */
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_LBN 0
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_WIDTH 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_WIDTH 1
+/* Load the extension only if it is in the specified administrative group.
+ * Specify ANY to load the extension wherever it is found (if there are
+ * multiple choices then the extension with the highest MINOR_VER/PATCH_VER
+ * will be loaded). See MC_CMD_PLUGIN_GET_META_GLOBAL for a description of
+ * administrative groups.
+ */
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST 20
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN 2
+/* enum: Load the extension from any ADMIN_GROUP. */
+#define MC_CMD_PLUGIN_ALLOC_IN_ANY 0xffff
+/* Reserved */
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_OFST 22
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_LEN 2
+
+/* MC_CMD_PLUGIN_ALLOC_OUT msgresponse */
+#define MC_CMD_PLUGIN_ALLOC_OUT_LEN 4
+/* Unique identifier of this usage */
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_LEN 4
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_FREE
+ * Delete a handle to a plugin's extension.
+ */
+#define MC_CMD_PLUGIN_FREE 0x1ae
+#define MC_CMD_PLUGIN_FREE_MSGSET 0x1ae
+#undef MC_CMD_0x1ae_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ae_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_FREE_IN msgrequest */
+#define MC_CMD_PLUGIN_FREE_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_FREE_OUT msgresponse */
+#define MC_CMD_PLUGIN_FREE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_GLOBAL
+ * Returns the global metadata applying to the whole plugin extension. See the
+ * other metadata calls for subtypes of data.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL 0x1af
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_MSGSET 0x1af
+#undef MC_CMD_0x1af_PRIVILEGE_CTG
+
+#define MC_CMD_0x1af_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_LEN 36
+/* Unique identifier of this plugin extension. This is identical to the value
+ * which was requested when the handle was allocated.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_LEN 16
+/* semver sub-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_OFST 16
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_LEN 2
+/* semver micro-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_OFST 18
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_LEN 2
+/* Number of different messages which can be sent to this extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_OFST 20
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_LEN 4
+/* Byte offset within the VI window of the plugin's mapped CSR window. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_OFST 24
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_LEN 2
+/* Number of bytes mapped through to the plugin's CSRs. 0 if that feature was
+ * not requested by the plugin (in which case MAPPED_CSR_OFFSET and
+ * MAPPED_CSR_FLAGS are ignored).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_OFST 26
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_LEN 2
+/* Flags indicating how to perform the CSR window mapping. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_LBN 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_WIDTH 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_LBN 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_WIDTH 1
+/* Identifier of the set of extensions which all change state together.
+ * Extensions having the same ADMIN_GROUP will always load and unload at the
+ * same time. ADMIN_GROUP values themselves are arbitrary (but they contain a
+ * generation number as an implementation detail to ensure that they're not
+ * reused rapidly).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_OFST 32
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_LEN 1
+/* Bitshift in MC_CMD_DEVEL_CLIENT_PRIVILEGE_MODIFY's MASK parameters
+ * corresponding to this extension, i.e. set the bit 1<<PRIVILEGE_BIT to permit
+ * access to this extension.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_OFST 33
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_LEN 1
+/* Reserved */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_OFST 34
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_LEN 2
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER
+ * Returns metadata supplied by the plugin author which describes this
+ * extension in a human-readable way. Contrast with
+ * MC_CMD_PLUGIN_GET_META_GLOBAL, which returns information needed for software
+ * to operate.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER 0x1b0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_MSGSET 0x1b0
+#undef MC_CMD_0x1b0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b0_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_LEN 12
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_LEN 4
+/* Category of data to return */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_LEN 4
+/* enum: Top-level information about the extension. The returned data is an
+ * array of key/value pairs using the keys in RFC5013 (Dublin Core) to describe
+ * the extension. The data is a back-to-back list of zero-terminated strings;
+ * the even-numbered fields (0,2,4,...) are keys and their following odd-
+ * numbered fields are the corresponding values. Both keys and values are
+ * nominally UTF-8. Per RFC5013, the same key may be repeated any number of
+ * times. Note that all information (including the key/value structure itself
+ * and the UTF-8 encoding) may have been provided by the plugin author, so
+ * callers must be cautious about parsing it. Callers should parse only the
+ * top-level structure to separate out the keys and values; the contents of the
+ * values is not expected to be machine-readable.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_EXTENSION_KVS 0x0
+/* Byte position of the data to be returned within the full data block of the
+ * given SUBTYPE.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_OFST 8
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMIN 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LEN(num) (4+1*(num))
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_NUM(len) (((len)-4)/1)
+/* Full length of the data block of the requested SUBTYPE, in bytes. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_LEN 4
+/* The information requested by SUBTYPE. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM 248
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM_MCDI2 1016
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_MSG
+ * Returns the simple metadata for a specific plugin request message. This
+ * supplies information necessary for the host to know how to build an
+ * MC_CMD_PLUGIN_REQ request.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG 0x1b1
+#define MC_CMD_PLUGIN_GET_META_MSG_MSGSET 0x1b1
+#undef MC_CMD_0x1b1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b1_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_MSG_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_LEN 8
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_LEN 4
+/* Unique message ID to obtain */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_MSG_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_LEN 44
+/* Unique message ID. This is the same value as the input parameter; it exists
+ * to allow future MCDI extensions which enumerate all messages.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_LEN 4
+/* Packed index number of this message, assigned by the MC to give each message
+ * a unique ID in an array to allow for more efficient storage/management.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_LEN 4
+/* Short human-readable codename for this message. This is conventionally
+ * formatted as a C identifier in the basic ASCII character set with any spare
+ * bytes at the end set to 0, however this convention is not enforced by the MC
+ * so consumers must check for all potential malformations before using it for
+ * a trusted purpose.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_OFST 8
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_LEN 32
+/* Number of bytes of data which must be passed from the host kernel to the MC
+ * for this message's payload, and which are passed back again in the response.
+ * The MC's plugin metadata loader will have validated that the number of bytes
+ * specified here will fit in to MC_CMD_PLUGIN_REQ_IN_DATA in a single MCDI
+ * message.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_OFST 40
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_LEN 4
+
/* PLUGIN_EXTENSION structuredef: Used within MC_CMD_PLUGIN_GET_ALL to describe
* an individual extension.
*/
@@ -26561,6 +27173,100 @@
#define PLUGIN_EXTENSION_RESERVED_LBN 137
#define PLUGIN_EXTENSION_RESERVED_WIDTH 23
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_ALL
+ * Returns a list of all plugin extensions currently loaded and available. The
+ * UUIDs returned can be passed to MC_CMD_PLUGIN_ALLOC in order to obtain more
+ * detailed metadata via the MC_CMD_PLUGIN_GET_META_* family of requests. The
+ * ADMIN_GROUP field collects how extensions are grouped in to units which are
+ * loaded/unloaded together; extensions with the same value are in the same
+ * group.
+ */
+#define MC_CMD_PLUGIN_GET_ALL 0x1b2
+#define MC_CMD_PLUGIN_GET_ALL_MSGSET 0x1b2
+#undef MC_CMD_0x1b2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b2_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_ALL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_ALL_IN_LEN 4
+/* Additional options for querying. Note that if neither FLAG_INCLUDE_ENABLED
+ * nor FLAG_INCLUDE_DISABLED are specified then the result set will be empty.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_LBN 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_WIDTH 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_WIDTH 1
+
+/* MC_CMD_PLUGIN_GET_ALL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX 240
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LEN(num) (0+20*(num))
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_NUM(len) (((len)-0)/20)
+/* The list of available plugin extensions, as an array of PLUGIN_EXTENSION
+ * structs.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_LEN 20
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MINNUM 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM 12
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM_MCDI2 51
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_REQ
+ * Send a command to a plugin. A plugin may define an arbitrary number of
+ * 'messages' which it allows applications on the host system to send, each
+ * identified by a 32-bit ID.
+ */
+#define MC_CMD_PLUGIN_REQ 0x1b3
+#define MC_CMD_PLUGIN_REQ_MSGSET 0x1b3
+#undef MC_CMD_0x1b3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_REQ_IN msgrequest */
+#define MC_CMD_PLUGIN_REQ_IN_LENMIN 8
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_IN_LEN(num) (8+1*(num))
+#define MC_CMD_PLUGIN_REQ_IN_DATA_NUM(len) (((len)-8)/1)
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_LEN 4
+/* Message ID defined by the plugin author */
+#define MC_CMD_PLUGIN_REQ_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_REQ_IN_ID_LEN 4
+/* Data blob being the parameter to the message. This must be of the length
+ * specified by MC_CMD_PLUGIN_GET_META_MSG_IN_MCDI_PARAM_SIZE.
+ */
+#define MC_CMD_PLUGIN_REQ_IN_DATA_OFST 8
+#define MC_CMD_PLUGIN_REQ_IN_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM 244
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM_MCDI2 1012
+
+/* MC_CMD_PLUGIN_REQ_OUT msgresponse */
+#define MC_CMD_PLUGIN_REQ_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_OUT_LEN(num) (0+1*(num))
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_NUM(len) (((len)-0)/1)
+/* The input data, as transformed and/or updated by the plugin's eBPF. Will be
+ * the same size as the input DATA parameter.
+ */
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_OFST 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM 252
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM_MCDI2 1020
+
/* DESC_ADDR_REGION structuredef: Describes a contiguous region of DESC_ADDR
* space that maps to a contiguous region of TRGT_ADDR space. Addresses
* DESC_ADDR in the range [DESC_ADDR_BASE:DESC_ADDR_BASE + 1 <<
@@ -27219,6 +27925,38 @@
#define MC_CMD_VIRTIO_TEST_FEATURES_OUT_LEN 0
+/***********************************/
+/* MC_CMD_VIRTIO_GET_CAPABILITIES
+ * Get virtio capabilities supported by the device. Returns general virtio
+ * capabilities and limitations of the hardware / firmware implementation
+ * (hardware device as a whole), rather than that of individual configured
+ * virtio devices. At present, only the absolute maximum number of queues
+ * allowed on multi-queue devices is returned. Response is expected to be
+ * extended as necessary in the future.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES 0x1d3
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_MSGSET 0x1d3
+#undef MC_CMD_0x1d3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_IN msgrequest */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN 4
+/* Type of device to get capabilities for. Matches the device id as defined by
+ * the virtio spec.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_VIRTIO_GET_FEATURES/MC_CMD_VIRTIO_GET_FEATURES_IN/DEVICE_ID */
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_OUT msgresponse */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN 4
+/* Maximum number of queues supported for a single device instance */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_LEN 4
+
+
/***********************************/
/* MC_CMD_VIRTIO_INIT_QUEUE
* Create a virtio virtqueue. Fails with EALREADY if the queue already exists.
@@ -27490,6 +28228,24 @@
#define PCIE_FUNCTION_INTF_LBN 32
#define PCIE_FUNCTION_INTF_WIDTH 32
+/* QUEUE_ID structuredef: Structure representing an absolute queue identifier
+ * (absolute VI number + VI relative queue number). On Keystone, a VI can
+ * contain multiple queues (at present, up to 2), each with separate controls
+ * for direction. This structure is required to uniquely identify the absolute
+ * source queue for descriptor proxy functions.
+ */
+#define QUEUE_ID_LEN 4
+/* Absolute VI number */
+#define QUEUE_ID_ABS_VI_OFST 0
+#define QUEUE_ID_ABS_VI_LEN 2
+#define QUEUE_ID_ABS_VI_LBN 0
+#define QUEUE_ID_ABS_VI_WIDTH 16
+/* Relative queue number within the VI */
+#define QUEUE_ID_REL_QUEUE_LBN 16
+#define QUEUE_ID_REL_QUEUE_WIDTH 1
+#define QUEUE_ID_RESERVED_LBN 17
+#define QUEUE_ID_RESERVED_WIDTH 15
+
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_CREATE
@@ -28088,7 +28844,11 @@
* Enable descriptor proxying for function into target event queue. Returns VI
* allocation info for the proxy source function, so that the caller can map
* absolute VI IDs from descriptor proxy events back to the originating
- * function.
+ * function. This is a legacy function that only supports single queue proxy
+ * devices. It is also limited in that it can only be called after host driver
+ * attach (once VI allocation is known) and will return MC_CMD_ERR_ENOTCONN
+ * otherwise. For new code, see MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE which
+ * supports multi-queue devices and has no dependency on host driver attach.
*/
#define MC_CMD_DESC_PROXY_FUNC_ENABLE 0x178
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_MSGSET 0x178
@@ -28119,9 +28879,46 @@
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_OUT_VI_BASE_LEN 4
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE
+ * Enable descriptor proxying for a source queue on a host function into target
+ * event queue. Source queue number is a relative virtqueue number on the
+ * source function (0 to max_virtqueues-1). For a multi-queue device, the
+ * caller must enable all source queues individually. To retrieve absolute VI
+ * information for the source function (so that VI IDs from descriptor proxy
+ * events can be mapped back to source function / queue) see
+ * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE 0x1d0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_MSGSET 0x1d0
+#undef MC_CMD_0x1d0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d0_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN 12
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to enable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+/* Descriptor proxy sink queue (caller function relative). Must be extended
+ * width event queue
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST 8
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT_LEN 0
+
+
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_DISABLE
- * Disable descriptor proxying for function
+ * Disable descriptor proxying for function. For multi-queue functions,
+ * disables all queues.
*/
#define MC_CMD_DESC_PROXY_FUNC_DISABLE 0x179
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_MSGSET 0x179
@@ -28141,6 +28938,77 @@
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_OUT_LEN 0
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE
+ * Disable descriptor proxying for a specific source queue on a function.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE 0x1d1
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_MSGSET 0x1d1
+#undef MC_CMD_0x1d1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d1_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_LEN 8
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to disable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_DESC_PROXY_GET_VI_INFO
+ * Returns absolute VI allocation information for the descriptor proxy source
+ * function referenced by HANDLE, so that the caller can map absolute VI IDs
+ * from descriptor proxy events back to the originating function and queue. The
+ * call is only valid after the host driver for the source function has
+ * attached (after receiving a driver attach event for the descriptor proxy
+ * function) and will fail with ENOTCONN otherwise.
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO 0x1d2
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_MSGSET 0x1d2
+#undef MC_CMD_0x1d2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d2_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_GET_VI_INFO_IN msgrequest */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_LEN 4
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMIN 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX 252
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(len) (((len)-0)/4)
+/* VI information (VI ID + VI relative queue number) for each of the source
+ * queues (in order from 0 to max_virtqueues-1), as array of QUEUE_ID
+ * structures.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN 4
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MINNUM 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM 63
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM_MCDI2 255
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_LEN 2
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN 16
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_WIDTH 1
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_LBN 17
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_WIDTH 15
+
+
/***********************************/
/* MC_CMD_GET_ADDR_SPC_ID
* Get Address space identifier for use in mem2mem descriptors for a given
@@ -29384,9 +30252,12 @@
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_OFST 4
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_LBN 3
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
-/* The total number of counters available to allocate. */
+/* Deprecated alias for AR_COUNTERS. */
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_OFST 8
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_LEN 4
/* The total number of counters lists available to allocate. A value of zero
* indicates that counter lists are not supported by the NIC. (But single
* counters may still be.)
@@ -29429,6 +30300,87 @@
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_OFST 48
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_LEN 4
+/* MC_CMD_MAE_GET_CAPS_V2_OUT msgresponse */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_LEN 60
+/* The number of field IDs that the NIC supports. Any field with a ID greater
+ * than or equal to the value returned in this field must be treated as having
+ * a support level of MAE_FIELD_UNSUPPORTED in all requests.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_OFST 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+/* Deprecated alias for AR_COUNTERS. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_LEN 4
+/* The total number of counters lists available to allocate. A value of zero
+ * indicates that counter lists are not supported by the NIC. (But single
+ * counters may still be.)
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_OFST 12
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_LEN 4
+/* The total number of encap header structures available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_OFST 16
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_LEN 4
+/* Reserved. Should be zero. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_OFST 20
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_LEN 4
+/* The total number of action sets available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_OFST 24
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_LEN 4
+/* The total number of action set lists available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_OFST 28
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_LEN 4
+/* The total number of outer rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_OFST 32
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_LEN 4
+/* The total number of action rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_OFST 36
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_LEN 4
+/* The number of priorities available for ACTION_RULE filters. It is invalid to
+ * install a MATCH_ACTION filter with a priority number >= ACTION_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_OFST 40
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_LEN 4
+/* The number of priorities available for OUTER_RULE filters. It is invalid to
+ * install an OUTER_RULE filter with a priority number >= OUTER_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_OFST 44
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_LEN 4
+/* MAE API major version. Currently 1. If this field is not present in the
+ * response (i.e. response shorter than 384 bits), then its value is zero. If
+ * the value does not match the client's expectations, the client should raise
+ * a fatal error.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_OFST 48
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_LEN 4
+/* Mask of supported counter types. Each bit position corresponds to a value of
+ * the MAE_COUNTER_TYPE enum. If this field is missing (i.e. V1 response),
+ * clients must assume that only AR counters are supported (i.e.
+ * COUNTER_TYPES_SUPPORTED==0x1). See also
+ * MC_CMD_MAE_COUNTERS_STREAM_START/COUNTER_TYPES_MASK.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST 52
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN 4
+/* The total number of conntrack counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_OFST 56
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_LEN 4
+
/***********************************/
/* MC_CMD_MAE_GET_AR_CAPS
@@ -29495,8 +30447,8 @@
/***********************************/
/* MC_CMD_MAE_COUNTER_ALLOC
- * Allocate match-action-engine counters, which can be referenced in Action
- * Rules.
+ * Allocate match-action-engine counters, which can be referenced in various
+ * tables.
*/
#define MC_CMD_MAE_COUNTER_ALLOC 0x143
#define MC_CMD_MAE_COUNTER_ALLOC_MSGSET 0x143
@@ -29504,12 +30456,25 @@
#define MC_CMD_0x143_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_LEN 4
/* The number of counters that the driver would like allocated */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTER_ALLOC_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_LEN 8
+/* The number of counters that the driver would like allocated */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_LEN 4
+/* Which type of counter to allocate. */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_OFST 4
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_ALLOC_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX 252
@@ -29518,7 +30483,8 @@
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. Packets with generation count >= GENERATION_COUNT will
* contain valid counter values for counter IDs allocated in this call, unless
- * the counter values are zero and zero squash is enabled.
+ * the counter values are zero and zero squash is enabled. Note that there is
+ * an independent GENERATION_COUNT object per counter type.
*/
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_LEN 4
@@ -29548,7 +30514,9 @@
#define MC_CMD_0x144_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMIN 8
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX 132
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2 132
@@ -29564,6 +30532,23 @@
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM 32
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* MC_CMD_MAE_COUNTER_FREE_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_LEN 136
+/* The number of counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_LEN 4
+/* An array containing the counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_OFST 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_LEN 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MINNUM 1
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM 32
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* Which type of counter to free. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_OFST 132
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_FREE_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX 136
@@ -29572,11 +30557,13 @@
#define MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. A packet with generation count == GENERATION_COUNT will
* contain the final values for these counter IDs, unless the counter values
- * are zero and zero squash is enabled. Receiving a packet with generation
- * count > GENERATION_COUNT guarantees that no more values will be written for
- * these counters. If values for these counter IDs are present, the counter ID
- * has been reallocated. A counter ID will not be reallocated within a single
- * read cycle as this would merge increments from the 'old' and 'new' counters.
+ * are zero and zero squash is enabled. Note that the GENERATION_COUNT value is
+ * specific to the COUNTER_TYPE (IDENTIFIER field in packet header). Receiving
+ * a packet with generation count > GENERATION_COUNT guarantees that no more
+ * values will be written for these counters. If values for these counter IDs
+ * are present, the counter ID has been reallocated. A counter ID will not be
+ * reallocated within a single read cycle as this would merge increments from
+ * the 'old' and 'new' counters.
*/
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_LEN 4
@@ -29616,7 +30603,9 @@
#define MC_CMD_0x151_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest */
+/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest: Using V1 is equivalent to V2
+ * with COUNTER_TYPES_MASK=0x1 (i.e. AR counters only).
+ */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN 8
/* The RxQ to write packets to. */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_QID_OFST 0
@@ -29634,6 +30623,35 @@
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_LBN 1
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_WIDTH 1
+/* MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN 12
+/* The RxQ to write packets to. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_LEN 2
+/* Maximum size in bytes of packets that may be written to the RxQ. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST 2
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_LEN 2
+/* Optional flags. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_LBN 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_WIDTH 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_LBN 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_WIDTH 1
+/* Mask of which counter types should be reported. Each bit position
+ * corresponds to a value of the MAE_COUNTER_TYPE enum. For example a value of
+ * 0x3 requests both AR and CT counters. A value of zero is invalid. Counter
+ * types not selected by the mask value won't be included in the stream. If a
+ * client wishes to change which counter types are reported, it must first call
+ * MAE_COUNTERS_STREAM_STOP, then restart it with the new mask value.
+ * Requesting a counter type which isn't supported by firmware (reported in
+ * MC_CMD_MAE_GET_CAPS/COUNTER_TYPES_SUPPORTED) will result in ENOTSUP.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST 8
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_LEN 4
+
/* MC_CMD_MAE_COUNTERS_STREAM_START_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN 4
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_FLAGS_OFST 0
@@ -29661,14 +30679,32 @@
/* MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN 4
-/* Generation count. The final set of counter values will be written out in
- * packets with count == GENERATION_COUNT. An empty packet with count >
- * GENERATION_COUNT indicates that no more counter values will be written to
- * this stream.
+/* Generation count for AR counters. The final set of AR counter values will be
+ * written out in packets with count == GENERATION_COUNT. An empty packet with
+ * count > GENERATION_COUNT indicates that no more counter values of this type
+ * will be written to this stream.
*/
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT msgresponse */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMIN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX_MCDI2 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_NUM(len) (((len)-0)/4)
+/* Array of generation counts, indexed by MAE_COUNTER_TYPE. Note that since
+ * MAE_COUNTER_TYPE_AR==0, this response is backwards-compatible with V1. The
+ * final set of counter values will be written out in packets with count ==
+ * GENERATION_COUNT. An empty packet with count > GENERATION_COUNT indicates
+ * that no more counter values of this type will be written to this stream.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MINNUM 1
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM 8
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM_MCDI2 8
+
/***********************************/
/* MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS
@@ -29941,9 +30977,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_LEN 4
@@ -30021,9 +31058,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_LEN 4
@@ -30352,7 +31390,8 @@
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_LBN 64
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_WIDTH 32
/* Counter ID to increment if DO_CT or DO_RECIRC is set. Must be set to
- * COUNTER_ID_NULL otherwise.
+ * COUNTER_ID_NULL otherwise. Counter ID must have been allocated with
+ * COUNTER_TYPE=AR.
*/
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_OFST 12
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_LEN 4
@@ -30710,6 +31749,108 @@
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_LBN 352
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_WIDTH 32
+/* MAE_MPORT_DESC_V2 structuredef */
+#define MAE_MPORT_DESC_V2_LEN 56
+#define MAE_MPORT_DESC_V2_MPORT_ID_OFST 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_MPORT_ID_LBN 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_WIDTH 32
+/* Reserved for future purposes, contains information independent of caller */
+#define MAE_MPORT_DESC_V2_FLAGS_OFST 4
+#define MAE_MPORT_DESC_V2_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_FLAGS_LBN 32
+#define MAE_MPORT_DESC_V2_FLAGS_WIDTH 32
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_OFST 8
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_LBN 0
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_LBN 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELETE_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELETE_LBN 2
+#define MAE_MPORT_DESC_V2_CAN_DELETE_WIDTH 1
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_OFST 8
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_LBN 3
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_WIDTH 1
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LBN 64
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_WIDTH 32
+/* Not the ideal name; it's really the type of thing connected to the m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_OFST 12
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LEN 4
+/* enum: Connected to a MAC... */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT 0x0
+/* enum: Adds metadata and delivers to another m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS 0x1
+/* enum: Connected to a VNIC. */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC 0x2
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LBN 96
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_WIDTH 32
+/* 128-bit value available to drivers for m-port identification. */
+#define MAE_MPORT_DESC_V2_UUID_OFST 16
+#define MAE_MPORT_DESC_V2_UUID_LEN 16
+#define MAE_MPORT_DESC_V2_UUID_LBN 128
+#define MAE_MPORT_DESC_V2_UUID_WIDTH 128
+/* Big wadge of space reserved for other common properties */
+#define MAE_MPORT_DESC_V2_RESERVED_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LEN 8
+#define MAE_MPORT_DESC_V2_RESERVED_LO_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_LO_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_HI_OFST 36
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LBN 288
+#define MAE_MPORT_DESC_V2_RESERVED_HI_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_WIDTH 64
+/* Logical port index. Only valid when type NET Port. */
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_OFST 40
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LEN 4
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LBN 320
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_WIDTH 32
+/* The m-port delivered to */
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_OFST 40
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LBN 320
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_WIDTH 32
+/* The type of thing that owns the VNIC */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST 40
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION 0x1 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN 0x2 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LBN 320
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_WIDTH 32
+/* The PCIe interface on which the function lives. CJK: We need an enumeration
+ * of interfaces that we extend as new interface (types) appear. This belongs
+ * elsewhere and should be referenced from here
+ */
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_WIDTH 32
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST 48
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LEN 2
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LBN 384
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_WIDTH 16
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST 50
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LEN 2
+/* enum: Indicates that the function is a PF */
+#define MAE_MPORT_DESC_V2_VF_IDX_NULL 0xffff
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LBN 400
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_WIDTH 16
+/* Reserved. Should be ignored for now. */
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_WIDTH 32
+/* A client handle for the VNIC's owner. Only valid for type VNIC. */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST 52
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LBN 416
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_WIDTH 32
+
/***********************************/
/* MC_CMD_MAE_MPORT_ENUMERATE
--
2.30.2
* [dpdk-dev] [PATCH 02/38] common/sfc_efx/base: update EF100 registers definitions
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 03/38] net/sfc: add switch mode device argument Andrew Rybchenko
` (36 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev
Pick up all changes and extra definitions.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx_regs_ef100.h | 106 +++++++++++++++----
drivers/common/sfc_efx/base/rhead_rx.c | 2 +-
2 files changed, 85 insertions(+), 23 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx_regs_ef100.h b/drivers/common/sfc_efx/base/efx_regs_ef100.h
index 2b766aabdd..0446377f64 100644
--- a/drivers/common/sfc_efx/base/efx_regs_ef100.h
+++ b/drivers/common/sfc_efx/base/efx_regs_ef100.h
@@ -323,12 +323,6 @@ extern "C" {
/* ES_RHEAD_BASE_EVENT */
#define ESF_GZ_E_TYPE_LBN 60
#define ESF_GZ_E_TYPE_WIDTH 4
-#define ESE_GZ_EF100_EV_DRIVER 5
-#define ESE_GZ_EF100_EV_MCDI 4
-#define ESE_GZ_EF100_EV_CONTROL 3
-#define ESE_GZ_EF100_EV_TX_TIMESTAMP 2
-#define ESE_GZ_EF100_EV_TX_COMPLETION 1
-#define ESE_GZ_EF100_EV_RX_PKTS 0
#define ESF_GZ_EV_EVQ_PHASE_LBN 59
#define ESF_GZ_EV_EVQ_PHASE_WIDTH 1
#define ESE_GZ_RHEAD_BASE_EVENT_STRUCT_SIZE 64
@@ -467,6 +461,23 @@ extern "C" {
#define ESE_GZ_XIL_CFGBAR_VSEC_STRUCT_SIZE 96
+/* ES_addr_spc */
+#define ESF_GZ_ADDR_SPC_FORMAT_1_FUNCTION_LBN 28
+#define ESF_GZ_ADDR_SPC_FORMAT_1_FUNCTION_WIDTH 8
+#define ESF_GZ_ADDR_SPC_FORMAT_2_FUNCTION_LBN 24
+#define ESF_GZ_ADDR_SPC_FORMAT_2_FUNCTION_WIDTH 12
+#define ESF_GZ_ADDR_SPC_FORMAT_1_PROFILE_ID_LBN 24
+#define ESF_GZ_ADDR_SPC_FORMAT_1_PROFILE_ID_WIDTH 4
+#define ESF_GZ_ADDR_SPC_PASID_LBN 2
+#define ESF_GZ_ADDR_SPC_PASID_WIDTH 22
+#define ESF_GZ_ADDR_SPC_FORMAT_LBN 0
+#define ESF_GZ_ADDR_SPC_FORMAT_WIDTH 2
+#define ESE_GZ_ADDR_SPC_FORMAT_1 3
+#define ESF_GZ_ADDR_SPC_FORMAT_2_PROFILE_ID_IDX_LBN 0
+#define ESF_GZ_ADDR_SPC_FORMAT_2_PROFILE_ID_IDX_WIDTH 2
+#define ESE_GZ_ADDR_SPC_STRUCT_SIZE 36
+
+
/* ES_rh_egres_hclass */
#define ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM_LBN 15
#define ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM_WIDTH 1
@@ -560,14 +571,18 @@ extern "C" {
#define ESF_GZ_RX_PREFIX_VLAN_STRIP_TCI_WIDTH 16
#define ESF_GZ_RX_PREFIX_CSUM_FRAME_LBN 144
#define ESF_GZ_RX_PREFIX_CSUM_FRAME_WIDTH 16
-#define ESF_GZ_RX_PREFIX_INGRESS_VPORT_LBN 128
-#define ESF_GZ_RX_PREFIX_INGRESS_VPORT_WIDTH 16
+#define ESF_GZ_RX_PREFIX_INGRESS_MPORT_LBN 128
+#define ESF_GZ_RX_PREFIX_INGRESS_MPORT_WIDTH 16
#define ESF_GZ_RX_PREFIX_USER_MARK_LBN 96
#define ESF_GZ_RX_PREFIX_USER_MARK_WIDTH 32
#define ESF_GZ_RX_PREFIX_RSS_HASH_LBN 64
#define ESF_GZ_RX_PREFIX_RSS_HASH_WIDTH 32
-#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN 32
-#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_WIDTH 32
+#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN 34
+#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_WIDTH 30
+#define ESF_GZ_RX_PREFIX_VSWITCH_STATUS_LBN 33
+#define ESF_GZ_RX_PREFIX_VSWITCH_STATUS_WIDTH 1
+#define ESF_GZ_RX_PREFIX_VLAN_STRIPPED_LBN 32
+#define ESF_GZ_RX_PREFIX_VLAN_STRIPPED_WIDTH 1
#define ESF_GZ_RX_PREFIX_CLASS_LBN 16
#define ESF_GZ_RX_PREFIX_CLASS_WIDTH 16
#define ESF_GZ_RX_PREFIX_USER_FLAG_LBN 15
@@ -674,12 +689,12 @@ extern "C" {
#define ESF_GZ_M2M_TRANSLATE_ADDR_WIDTH 1
#define ESF_GZ_M2M_RSVD_LBN 120
#define ESF_GZ_M2M_RSVD_WIDTH 2
-#define ESF_GZ_M2M_ADDR_SPC_LBN 108
-#define ESF_GZ_M2M_ADDR_SPC_WIDTH 12
-#define ESF_GZ_M2M_ADDR_SPC_PASID_LBN 86
-#define ESF_GZ_M2M_ADDR_SPC_PASID_WIDTH 22
-#define ESF_GZ_M2M_ADDR_SPC_MODE_LBN 84
-#define ESF_GZ_M2M_ADDR_SPC_MODE_WIDTH 2
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW0_LBN 84
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW0_WIDTH 32
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW1_LBN 116
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW1_WIDTH 4
+#define ESF_GZ_M2M_ADDR_SPC_ID_LBN 84
+#define ESF_GZ_M2M_ADDR_SPC_ID_WIDTH 36
#define ESF_GZ_M2M_LEN_MINUS_1_LBN 64
#define ESF_GZ_M2M_LEN_MINUS_1_WIDTH 20
#define ESF_GZ_M2M_ADDR_DW0_LBN 0
@@ -722,12 +737,12 @@ extern "C" {
#define ESF_GZ_TX_SEG_TRANSLATE_ADDR_WIDTH 1
#define ESF_GZ_TX_SEG_RSVD2_LBN 120
#define ESF_GZ_TX_SEG_RSVD2_WIDTH 2
-#define ESF_GZ_TX_SEG_ADDR_SPC_LBN 108
-#define ESF_GZ_TX_SEG_ADDR_SPC_WIDTH 12
-#define ESF_GZ_TX_SEG_ADDR_SPC_PASID_LBN 86
-#define ESF_GZ_TX_SEG_ADDR_SPC_PASID_WIDTH 22
-#define ESF_GZ_TX_SEG_ADDR_SPC_MODE_LBN 84
-#define ESF_GZ_TX_SEG_ADDR_SPC_MODE_WIDTH 2
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW0_LBN 84
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW0_WIDTH 32
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW1_LBN 116
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW1_WIDTH 4
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_LBN 84
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_WIDTH 36
#define ESF_GZ_TX_SEG_RSVD_LBN 80
#define ESF_GZ_TX_SEG_RSVD_WIDTH 4
#define ESF_GZ_TX_SEG_LEN_LBN 64
@@ -824,6 +839,12 @@ extern "C" {
+/* Enum D2VIO_MSG_OP */
+#define ESE_GZ_QUE_JBDNE 3
+#define ESE_GZ_QUE_EVICT 2
+#define ESE_GZ_QUE_EMPTY 1
+#define ESE_GZ_NOP 0
+
/* Enum DESIGN_PARAMS */
#define ESE_EF100_DP_GZ_RX_MAX_RUNT 17
#define ESE_EF100_DP_GZ_VI_STRIDES 16
@@ -871,6 +892,19 @@ extern "C" {
#define ESE_GZ_PCI_BASE_CONFIG_SPACE_SIZE 256
#define ESE_GZ_PCI_EXPRESS_XCAP_HDR_SIZE 4
+/* Enum RH_DSC_TYPE */
+#define ESE_GZ_TX_TOMB 0xF
+#define ESE_GZ_TX_VIO 0xE
+#define ESE_GZ_TX_TSO_OVRRD 0x8
+#define ESE_GZ_TX_D2CMP 0x7
+#define ESE_GZ_TX_DATA 0x6
+#define ESE_GZ_TX_D2M 0x5
+#define ESE_GZ_TX_M2M 0x4
+#define ESE_GZ_TX_SEG 0x3
+#define ESE_GZ_TX_TSO 0x2
+#define ESE_GZ_TX_OVRRD 0x1
+#define ESE_GZ_TX_SEND 0x0
+
/* Enum RH_HCLASS_L2_CLASS */
#define ESE_GZ_RH_HCLASS_L2_CLASS_E2_0123VLAN 1
#define ESE_GZ_RH_HCLASS_L2_CLASS_OTHER 0
@@ -907,6 +941,25 @@ extern "C" {
#define ESE_GZ_RH_HCLASS_TUNNEL_CLASS_VXLAN 1
#define ESE_GZ_RH_HCLASS_TUNNEL_CLASS_NONE 0
+/* Enum SF_CTL_EVENT_SUBTYPE */
+#define ESE_GZ_EF100_CTL_EV_EVQ_TIMEOUT 0x3
+#define ESE_GZ_EF100_CTL_EV_FLUSH 0x2
+#define ESE_GZ_EF100_CTL_EV_TIME_SYNC 0x1
+#define ESE_GZ_EF100_CTL_EV_UNSOL_OVERFLOW 0x0
+
+/* Enum SF_EVENT_TYPE */
+#define ESE_GZ_EF100_EV_DRIVER 0x5
+#define ESE_GZ_EF100_EV_MCDI 0x4
+#define ESE_GZ_EF100_EV_CONTROL 0x3
+#define ESE_GZ_EF100_EV_TX_TIMESTAMP 0x2
+#define ESE_GZ_EF100_EV_TX_COMPLETION 0x1
+#define ESE_GZ_EF100_EV_RX_PKTS 0x0
+
+/* Enum SF_EW_EVENT_TYPE */
+#define ESE_GZ_EF100_EWEV_VIRTQ_DESC 0x2
+#define ESE_GZ_EF100_EWEV_TXQ_DESC 0x1
+#define ESE_GZ_EF100_EWEV_64BIT 0x0
+
/* Enum TX_DESC_CSO_PARTIAL_EN */
#define ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP 2
#define ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP 1
@@ -922,6 +975,15 @@ extern "C" {
#define ESE_GZ_TX_DESC_IP4_ID_INC_MOD16 2
#define ESE_GZ_TX_DESC_IP4_ID_INC_MOD15 1
#define ESE_GZ_TX_DESC_IP4_ID_NO_OP 0
+
+/* Enum VIRTIO_NET_HDR_F */
+#define ESE_GZ_NEEDS_CSUM 0x1
+
+/* Enum VIRTIO_NET_HDR_GSO */
+#define ESE_GZ_TCPV6 0x4
+#define ESE_GZ_UDP 0x3
+#define ESE_GZ_TCPV4 0x1
+#define ESE_GZ_NONE 0x0
/*************************************************************************
* NOTE: the comment line above marks the end of the autogenerated section
*/
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index 76b8ce302a..692c3e1d49 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -37,7 +37,7 @@ static const efx_rx_prefix_layout_t rhead_default_rx_prefix_layout = {
RHEAD_RX_PREFIX_FIELD(PARTIAL_TSTAMP, B_FALSE),
RHEAD_RX_PREFIX_FIELD(RSS_HASH, B_FALSE),
RHEAD_RX_PREFIX_FIELD(USER_MARK, B_FALSE),
- RHEAD_RX_PREFIX_FIELD(INGRESS_VPORT, B_FALSE),
+ RHEAD_RX_PREFIX_FIELD(INGRESS_MPORT, B_FALSE),
RHEAD_RX_PREFIX_FIELD(CSUM_FRAME, B_TRUE),
RHEAD_RX_PREFIX_FIELD(VLAN_STRIP_TCI, B_TRUE),
--
2.30.2
* [dpdk-dev] [PATCH 03/38] net/sfc: add switch mode device argument
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 02/38] common/sfc_efx/base: update EF100 registers definitions Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 04/38] net/sfc: insert switchdev mode MAE rules Andrew Rybchenko
` (35 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Add the argument that allows the user to choose either switchdev or legacy
mode. Legacy mode enables switching by using the Ethernet virtual bridging
(EVB) API. In switchdev mode, VF traffic goes via a port representor
(if any) on the PF, and a software virtual switch (for example, Open vSwitch)
steers the traffic.
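As an illustration (not part of the patch), the mode is selected through the
usual device argument mechanism. A minimal sketch, assuming a placeholder PCI
address and bare-minimum error handling:
#include <rte_eal.h>
#include <rte_dev.h>

int
main(int argc, char **argv)
{
	int ret;

	ret = rte_eal_init(argc, argv);
	if (ret < 0)
		return -1;

	/*
	 * Hot-plug the sfc adapter in switchdev mode. The PCI address is a
	 * placeholder; "switch_mode=legacy" (or omitting the argument) keeps
	 * the EVB-based behaviour.
	 */
	return rte_dev_probe("0000:01:00.0,switch_mode=switchdev");
}
The same string may equally be passed on the EAL command line, e.g.
-a 0000:01:00.0,switch_mode=switchdev.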
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
doc/guides/nics/sfc_efx.rst | 13 +++++++++
drivers/net/sfc/sfc.h | 2 ++
drivers/net/sfc/sfc_ethdev.c | 54 ++++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_kvargs.c | 1 +
drivers/net/sfc/sfc_kvargs.h | 8 ++++++
drivers/net/sfc/sfc_sriov.c | 9 ++++--
6 files changed, 85 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 163bc2533f..d66cb76dab 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -371,6 +371,19 @@ boolean parameters value.
If this parameter is not specified then ef100 device will operate as
network device.
+- ``switch_mode`` [legacy|switchdev] (see below for default)
+
+ In legacy mode, NIC firmware provides Ethernet virtual bridging (EVB) API
+ to configure switching inside NIC to deliver traffic to physical (PF) and
+ virtual (VF) PCI functions. PF driver is responsible to build the
+ infrastructure for VFs, and traffic goes to/from VF by default in accordance
+ with MAC address assigned, permissions and filters installed by VF drivers.
+ In switchdev mode VF traffic goes via port representor (if any) on PF, and
+ software virtual switch (for example, Open vSwitch) makes the decision.
+ Software virtual switch may install MAE rules to pass established traffic
+ flows via hardware and offload software datapath as the result.
+ Default is legacy.
+
- ``rx_datapath`` [auto|efx|ef10|ef10_essb] (default **auto**)
Choose receive datapath implementation.
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 331e06bac6..b045baca9e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -313,6 +313,8 @@ struct sfc_adapter {
boolean_t tso_encap;
uint32_t rxd_wait_timeout_ns;
+
+ bool switchdev;
};
static inline struct sfc_adapter_shared *
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3..41add341a0 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2188,6 +2188,44 @@ sfc_register_dp(void)
}
}
+static int
+sfc_parse_switch_mode(struct sfc_adapter *sa)
+{
+ const char *switch_mode = NULL;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rc = sfc_kvargs_process(sa, SFC_KVARG_SWITCH_MODE,
+ sfc_kvarg_string_handler, &switch_mode);
+ if (rc != 0)
+ goto fail_kvargs;
+
+ /* Check representors when supported */
+ if (switch_mode == NULL ||
+ strcasecmp(switch_mode, SFC_KVARG_SWITCH_MODE_LEGACY) == 0) {
+ sa->switchdev = false;
+ } else if (strcasecmp(switch_mode,
+ SFC_KVARG_SWITCH_MODE_SWITCHDEV) == 0) {
+ sa->switchdev = true;
+ } else {
+ sfc_err(sa, "invalid switch mode device argument '%s'",
+ switch_mode);
+ rc = EINVAL;
+ goto fail_mode;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_mode:
+fail_kvargs:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+ return rc;
+}
+
static int
sfc_eth_dev_init(struct rte_eth_dev *dev)
{
@@ -2270,6 +2308,10 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
sfc_adapter_lock_init(sa);
sfc_adapter_lock(sa);
+ rc = sfc_parse_switch_mode(sa);
+ if (rc != 0)
+ goto fail_switch_mode;
+
sfc_log_init(sa, "probing");
rc = sfc_probe(sa);
if (rc != 0)
@@ -2285,6 +2327,13 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
if (rc != 0)
goto fail_attach;
+ if (sa->switchdev && sa->mae.status != SFC_MAE_STATUS_SUPPORTED) {
+ sfc_err(sa,
+ "failed to enable switchdev mode without MAE support");
+ rc = ENOTSUP;
+ goto fail_switchdev_no_mae;
+ }
+
encp = efx_nic_cfg_get(sa->nic);
/*
@@ -2299,6 +2348,9 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
sfc_log_init(sa, "done");
return 0;
+fail_switchdev_no_mae:
+ sfc_detach(sa);
+
fail_attach:
sfc_eth_dev_clear_ops(dev);
@@ -2306,6 +2358,7 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
sfc_unprobe(sa);
fail_probe:
+fail_switch_mode:
sfc_adapter_unlock(sa);
sfc_adapter_lock_fini(sa);
rte_free(dev->data->mac_addrs);
@@ -2370,6 +2423,7 @@ RTE_PMD_REGISTER_PCI(net_sfc_efx, sfc_efx_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_sfc_efx, pci_id_sfc_efx_map);
RTE_PMD_REGISTER_KMOD_DEP(net_sfc_efx, "* igb_uio | uio_pci_generic | vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_sfc_efx,
+ SFC_KVARG_SWITCH_MODE "=" SFC_KVARG_VALUES_SWITCH_MODE " "
SFC_KVARG_RX_DATAPATH "=" SFC_KVARG_VALUES_RX_DATAPATH " "
SFC_KVARG_TX_DATAPATH "=" SFC_KVARG_VALUES_TX_DATAPATH " "
SFC_KVARG_PERF_PROFILE "=" SFC_KVARG_VALUES_PERF_PROFILE " "
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index 974c05e68e..cd16213637 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -22,6 +22,7 @@ sfc_kvargs_parse(struct sfc_adapter *sa)
struct rte_eth_dev *eth_dev = (sa)->eth_dev;
struct rte_devargs *devargs = eth_dev->device->devargs;
const char **params = (const char *[]){
+ SFC_KVARG_SWITCH_MODE,
SFC_KVARG_STATS_UPDATE_PERIOD_MS,
SFC_KVARG_PERF_PROFILE,
SFC_KVARG_RX_DATAPATH,
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index ff76e7d9fc..8e34ec92a2 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -18,6 +18,14 @@ extern "C" {
#define SFC_KVARG_VALUES_BOOL "[1|y|yes|on|0|n|no|off]"
+#define SFC_KVARG_SWITCH_MODE_LEGACY "legacy"
+#define SFC_KVARG_SWITCH_MODE_SWITCHDEV "switchdev"
+
+#define SFC_KVARG_SWITCH_MODE "switch_mode"
+#define SFC_KVARG_VALUES_SWITCH_MODE \
+ "[" SFC_KVARG_SWITCH_MODE_LEGACY "|" \
+ SFC_KVARG_SWITCH_MODE_SWITCHDEV "]"
+
#define SFC_KVARG_PERF_PROFILE "perf_profile"
#define SFC_KVARG_PERF_PROFILE_AUTO "auto"
diff --git a/drivers/net/sfc/sfc_sriov.c b/drivers/net/sfc/sfc_sriov.c
index baa0242433..385b172e2e 100644
--- a/drivers/net/sfc/sfc_sriov.c
+++ b/drivers/net/sfc/sfc_sriov.c
@@ -53,7 +53,7 @@ sfc_sriov_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
sriov->num_vfs = pci_dev->max_vfs;
- if (sriov->num_vfs == 0)
+ if (sa->switchdev || sriov->num_vfs == 0)
goto done;
vport_config = calloc(sriov->num_vfs + 1, sizeof(*vport_config));
@@ -110,6 +110,11 @@ sfc_sriov_vswitch_create(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
+ if (sa->switchdev) {
+ sfc_log_init(sa, "don't create vswitch in switchdev mode");
+ goto done;
+ }
+
if (sriov->num_vfs == 0) {
sfc_log_init(sa, "no VFs enabled");
goto done;
@@ -152,7 +157,7 @@ sfc_sriov_vswitch_destroy(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- if (sriov->num_vfs == 0)
+ if (sa->switchdev || sriov->num_vfs == 0)
goto done;
rc = efx_evb_vswitch_destroy(sa->nic, sriov->vswitch);
--
2.30.2
* [dpdk-dev] [PATCH 04/38] net/sfc: insert switchdev mode MAE rules
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (2 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 03/38] net/sfc: add switch mode device argument Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 05/38] common/sfc_efx/base: add an API to get mport ID by selector Andrew Rybchenko
` (34 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
By default, the firmware is in EVB mode, but insertion of the first MAE
rule resets it to switchdev mode automatically and removes all automatic
MAE rules added by EVB support. On initialisation, insert MAE rules that
forward traffic between PHY and PF.
Add an API for creation and insertion of driver-internal MAE
rules (flows).
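The patch itself only wires PF <-> PHY (see sfc_mae_switchdev_init() below).
As a hedged illustration of how the same helper could also serve future
representor plumbing, a sketch with a purely hypothetical helper name and VF
index, assuming the driver-internal headers and omitting error unwinding:
/*
 * Sketch (not part of the patch): cross-connect a VF m-port with the
 * physical port using the driver-internal rule API added here.
 */
static int
sfc_mae_cross_connect_vf(struct sfc_adapter *sa, uint32_t vf_index,
			 struct sfc_mae_rule **vf_to_phy,
			 struct sfc_mae_rule **phy_to_vf)
{
	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
	efx_mport_sel_t vf;
	efx_mport_sel_t phy;
	int rc;

	/* Build m-port selectors for the VF and the physical port. */
	rc = efx_mae_mport_by_pcie_function(encp->enc_pf, vf_index, &vf);
	if (rc != 0)
		return rc;

	rc = efx_mae_mport_by_phy_port(encp->enc_assigned_port, &phy);
	if (rc != 0)
		return rc;

	/* Install a match/deliver rule in each direction. */
	rc = sfc_mae_rule_add_mport_match_deliver(sa, &vf, &phy,
						  SFC_MAE_RULE_PRIO_LOWEST,
						  vf_to_phy);
	if (rc != 0)
		return rc;

	return sfc_mae_rule_add_mport_match_deliver(sa, &phy, &vf,
						    SFC_MAE_RULE_PRIO_LOWEST,
						    phy_to_vf);
}
sfc_mae_switchdev_init() below applies exactly this pattern to the PF and PHY
m-ports.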
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 8 ++
drivers/net/sfc/sfc_mae.c | 211 ++++++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_mae.h | 49 +++++++++
3 files changed, 268 insertions(+)
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 274a98e228..cd2c97f3b2 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -895,6 +895,10 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_mae_attach;
+ rc = sfc_mae_switchdev_init(sa);
+ if (rc != 0)
+ goto fail_mae_switchdev_init;
+
sfc_log_init(sa, "fini nic");
efx_nic_fini(enp);
@@ -923,6 +927,9 @@ sfc_attach(struct sfc_adapter *sa)
fail_sw_xstats_init:
sfc_flow_fini(sa);
+ sfc_mae_switchdev_fini(sa);
+
+fail_mae_switchdev_init:
sfc_mae_detach(sa);
fail_mae_attach:
@@ -969,6 +976,7 @@ sfc_detach(struct sfc_adapter *sa)
sfc_flow_fini(sa);
+ sfc_mae_switchdev_fini(sa);
sfc_mae_detach(sa);
sfc_mae_counter_rxq_detach(sa);
sfc_filter_detach(sa);
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 4b520bc619..b3607a178b 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -44,6 +44,139 @@ sfc_mae_counter_registry_fini(struct sfc_mae_counter_registry *registry)
sfc_mae_counters_fini(&registry->counters);
}
+static int
+sfc_mae_internal_rule_find_empty_slot(struct sfc_adapter *sa,
+ struct sfc_mae_rule **rule)
+{
+ struct sfc_mae *mae = &sa->mae;
+ struct sfc_mae_internal_rules *internal_rules = &mae->internal_rules;
+ unsigned int entry;
+ int rc;
+
+ for (entry = 0; entry < SFC_MAE_NB_RULES_MAX; entry++) {
+ if (internal_rules->rules[entry].spec == NULL)
+ break;
+ }
+
+ if (entry == SFC_MAE_NB_RULES_MAX) {
+ rc = ENOSPC;
+ sfc_err(sa, "failed too many rules (%u rules used)", entry);
+ goto fail_too_many_rules;
+ }
+
+ *rule = &internal_rules->rules[entry];
+
+ return 0;
+
+fail_too_many_rules:
+ return rc;
+}
+
+int
+sfc_mae_rule_add_mport_match_deliver(struct sfc_adapter *sa,
+ const efx_mport_sel_t *mport_match,
+ const efx_mport_sel_t *mport_deliver,
+ int prio, struct sfc_mae_rule **rulep)
+{
+ struct sfc_mae *mae = &sa->mae;
+ struct sfc_mae_rule *rule;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (prio > 0 && (unsigned int)prio >= mae->nb_action_rule_prios_max) {
+ rc = EINVAL;
+ sfc_err(sa, "failed: invalid priority %d (max %u)", prio,
+ mae->nb_action_rule_prios_max);
+ goto fail_invalid_prio;
+ }
+ if (prio < 0)
+ prio = mae->nb_action_rule_prios_max - 1;
+
+ rc = sfc_mae_internal_rule_find_empty_slot(sa, &rule);
+ if (rc != 0)
+ goto fail_find_empty_slot;
+
+ sfc_log_init(sa, "init MAE match spec");
+ rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION,
+ (uint32_t)prio, &rule->spec);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init MAE match spec");
+ goto fail_match_init;
+ }
+
+ rc = efx_mae_match_spec_mport_set(rule->spec, mport_match, NULL);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get MAE match mport selector");
+ goto fail_mport_set;
+ }
+
+ rc = efx_mae_action_set_spec_init(sa->nic, &rule->actions);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init MAE action set");
+ goto fail_action_init;
+ }
+
+ rc = efx_mae_action_set_populate_deliver(rule->actions,
+ mport_deliver);
+ if (rc != 0) {
+ sfc_err(sa, "failed to populate deliver action");
+ goto fail_populate_deliver;
+ }
+
+ rc = efx_mae_action_set_alloc(sa->nic, rule->actions,
+ &rule->action_set);
+ if (rc != 0) {
+ sfc_err(sa, "failed to allocate action set");
+ goto fail_action_set_alloc;
+ }
+
+ rc = efx_mae_action_rule_insert(sa->nic, rule->spec, NULL,
+ &rule->action_set,
+ &rule->rule_id);
+ if (rc != 0) {
+ sfc_err(sa, "failed to insert action rule");
+ goto fail_rule_insert;
+ }
+
+ *rulep = rule;
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_rule_insert:
+ efx_mae_action_set_free(sa->nic, &rule->action_set);
+
+fail_action_set_alloc:
+fail_populate_deliver:
+ efx_mae_action_set_spec_fini(sa->nic, rule->actions);
+
+fail_action_init:
+fail_mport_set:
+ efx_mae_match_spec_fini(sa->nic, rule->spec);
+
+fail_match_init:
+fail_find_empty_slot:
+fail_invalid_prio:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_mae_rule_del(struct sfc_adapter *sa, struct sfc_mae_rule *rule)
+{
+ if (rule == NULL || rule->spec == NULL)
+ return;
+
+ efx_mae_action_rule_remove(sa->nic, &rule->rule_id);
+ efx_mae_action_set_free(sa->nic, &rule->action_set);
+ efx_mae_action_set_spec_fini(sa->nic, rule->actions);
+ efx_mae_match_spec_fini(sa->nic, rule->spec);
+
+ rule->spec = NULL;
+}
+
int
sfc_mae_attach(struct sfc_adapter *sa)
{
@@ -3443,3 +3576,81 @@ sfc_mae_flow_query(struct rte_eth_dev *dev,
"Query for action of this type is not supported");
}
}
+
+int
+sfc_mae_switchdev_init(struct sfc_adapter *sa)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ struct sfc_mae *mae = &sa->mae;
+ efx_mport_sel_t pf;
+ efx_mport_sel_t phy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sa->switchdev) {
+ sfc_log_init(sa, "switchdev is not enabled - skip");
+ return 0;
+ }
+
+ if (mae->status != SFC_MAE_STATUS_SUPPORTED) {
+ rc = ENOTSUP;
+ sfc_err(sa, "failed to init switchdev - no MAE support");
+ goto fail_no_mae;
+ }
+
+ rc = efx_mae_mport_by_pcie_function(encp->enc_pf, EFX_PCI_VF_INVALID,
+ &pf);
+ if (rc != 0) {
+ sfc_err(sa, "failed get PF mport");
+ goto fail_pf_get;
+ }
+
+ rc = efx_mae_mport_by_phy_port(encp->enc_assigned_port, &phy);
+ if (rc != 0) {
+ sfc_err(sa, "failed get PHY mport");
+ goto fail_phy_get;
+ }
+
+ rc = sfc_mae_rule_add_mport_match_deliver(sa, &pf, &phy,
+ SFC_MAE_RULE_PRIO_LOWEST,
+ &mae->switchdev_rule_pf_to_ext);
+ if (rc != 0) {
+ sfc_err(sa, "failed add MAE rule to forward from PF to PHY");
+ goto fail_pf_add;
+ }
+
+ rc = sfc_mae_rule_add_mport_match_deliver(sa, &phy, &pf,
+ SFC_MAE_RULE_PRIO_LOWEST,
+ &mae->switchdev_rule_ext_to_pf);
+ if (rc != 0) {
+ sfc_err(sa, "failed add MAE rule to forward from PHY to PF");
+ goto fail_phy_add;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_phy_add:
+ sfc_mae_rule_del(sa, mae->switchdev_rule_pf_to_ext);
+
+fail_pf_add:
+fail_phy_get:
+fail_pf_get:
+fail_no_mae:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_mae_switchdev_fini(struct sfc_adapter *sa)
+{
+ struct sfc_mae *mae = &sa->mae;
+
+ if (!sa->switchdev)
+ return;
+
+ sfc_mae_rule_del(sa, mae->switchdev_rule_pf_to_ext);
+ sfc_mae_rule_del(sa, mae->switchdev_rule_ext_to_pf);
+}
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 7e3b6a7a97..684f0daf7a 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -139,6 +139,26 @@ struct sfc_mae_counter_registry {
uint32_t service_id;
};
+/** Rules to forward traffic from PHY port to PF and from PF to PHY port */
+#define SFC_MAE_NB_SWITCHDEV_RULES (2)
+/** Maximum required internal MAE rules */
+#define SFC_MAE_NB_RULES_MAX (SFC_MAE_NB_SWITCHDEV_RULES)
+
+struct sfc_mae_rule {
+ efx_mae_match_spec_t *spec;
+ efx_mae_actions_t *actions;
+ efx_mae_aset_id_t action_set;
+ efx_mae_rule_id_t rule_id;
+};
+
+struct sfc_mae_internal_rules {
+ /*
+ * Rules required to sustain switchdev mode or to provide
+ * port representor functionality.
+ */
+ struct sfc_mae_rule rules[SFC_MAE_NB_RULES_MAX];
+};
+
struct sfc_mae {
/** Assigned switch domain identifier */
uint16_t switch_domain_id;
@@ -164,6 +184,14 @@ struct sfc_mae {
bool counter_rxq_running;
/** Counter registry */
struct sfc_mae_counter_registry counter_registry;
+ /** Driver-internal flow rules */
+ struct sfc_mae_internal_rules internal_rules;
+ /**
+ * Switchdev default rules. They forward traffic from PHY port
+ * to PF and vice versa.
+ */
+ struct sfc_mae_rule *switchdev_rule_pf_to_ext;
+ struct sfc_mae_rule *switchdev_rule_ext_to_pf;
};
struct sfc_adapter;
@@ -306,6 +334,27 @@ sfc_flow_insert_cb_t sfc_mae_flow_insert;
sfc_flow_remove_cb_t sfc_mae_flow_remove;
sfc_flow_query_cb_t sfc_mae_flow_query;
+/**
+ * The value used to represent the lowest priority.
+ * Used in MAE rule API.
+ */
+#define SFC_MAE_RULE_PRIO_LOWEST (-1)
+
+/**
+ * Insert a driver-internal flow rule that matches traffic originating from
+ * some m-port selector and redirects it to another one
+ * (eg. PF --> PHY, PHY --> PF).
+ *
+ * If requested priority is negative, use the lowest priority.
+ */
+int sfc_mae_rule_add_mport_match_deliver(struct sfc_adapter *sa,
+ const efx_mport_sel_t *mport_match,
+ const efx_mport_sel_t *mport_deliver,
+ int prio, struct sfc_mae_rule **rulep);
+void sfc_mae_rule_del(struct sfc_adapter *sa, struct sfc_mae_rule *rule);
+int sfc_mae_switchdev_init(struct sfc_adapter *sa);
+void sfc_mae_switchdev_fini(struct sfc_adapter *sa);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH 05/38] common/sfc_efx/base: add an API to get mport ID by selector
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (3 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 04/38] net/sfc: insert switchdev mode MAE rules Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 06/38] net/sfc: support EF100 Tx override prefix Andrew Rybchenko
` (33 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The mport ID is required to set the appropriate egress mport ID in the
Tx prefix of a port representor TxQ.
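A short usage sketch (not part of the patch; the helper name is hypothetical
and the base driver headers are assumed): translate the PF's own PCIe function
into an m-port selector and then into the dynamic m-port ID that a later patch
writes into the EF100 Tx prefix:
static int
lookup_own_mport_id(efx_nic_t *enp, efx_mport_id_t *mport_idp)
{
	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
	efx_mport_sel_t selector;
	int rc;

	/* Build a selector for this PF (no VF). */
	rc = efx_mae_mport_by_pcie_function(encp->enc_pf, EFX_PCI_VF_INVALID,
					    &selector);
	if (rc != 0)
		return rc;

	/* Resolve the selector to the dynamic m-port ID. */
	return efx_mae_mport_id_by_selector(enp, &selector, mport_idp);
}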
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx.h | 21 +++++++++
drivers/common/sfc_efx/base/efx_mae.c | 64 +++++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 86 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 24e1314cc3..94803815ac 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4181,6 +4181,19 @@ typedef struct efx_mport_sel_s {
uint32_t sel;
} efx_mport_sel_t;
+/*
+ * MPORT ID. Used to refer dynamically to a specific MPORT.
+ * The difference between MPORT selector and MPORT ID is that
+ * selector can specify an exact MPORT ID or it can specify a
+ * pattern by which an exact MPORT ID can be selected. For example,
+ * static MPORT selector can specify MPORT of a current PF, which
+ * will be translated to the dynamic MPORT ID based on which PF is
+ * using that MPORT selector.
+ */
+typedef struct efx_mport_id_s {
+ uint32_t id;
+} efx_mport_id_t;
+
#define EFX_MPORT_NULL (0U)
/*
@@ -4210,6 +4223,14 @@ efx_mae_mport_by_pcie_function(
__in uint32_t vf,
__out efx_mport_sel_t *mportp);
+/* Get MPORT ID by an MPORT selector */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_id_by_selector(
+ __in efx_nic_t *enp,
+ __in const efx_mport_sel_t *mport_selectorp,
+ __out efx_mport_id_t *mport_idp);
+
/*
* Fields which have BE postfix in their named constants are expected
* to be passed by callers in big-endian byte order. They will appear
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index c22206e227..b38b1143d6 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -731,6 +731,70 @@ efx_mae_mport_by_pcie_function(
return (0);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+static __checkReturn efx_rc_t
+efx_mcdi_mae_mport_lookup(
+ __in efx_nic_t *enp,
+ __in const efx_mport_sel_t *mport_selectorp,
+ __out efx_mport_id_t *mport_idp)
+{
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_LOOKUP_IN_LEN,
+ MC_CMD_MAE_MPORT_LOOKUP_OUT_LEN);
+ efx_rc_t rc;
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_LOOKUP;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_LOOKUP_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_LOOKUP_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_LOOKUP_IN_MPORT_SELECTOR,
+ mport_selectorp->sel);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail1;
+ }
+
+ mport_idp->id = MCDI_OUT_DWORD(req, MAE_MPORT_LOOKUP_OUT_MPORT_ID);
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_mport_id_by_selector(
+ __in efx_nic_t *enp,
+ __in const efx_mport_sel_t *mport_selectorp,
+ __out efx_mport_id_t *mport_idp)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ rc = efx_mcdi_mae_mport_lookup(enp, mport_selectorp, mport_idp);
+ if (rc != 0)
+ goto fail2;
+
+ return (0);
+
fail2:
EFSYS_PROBE(fail2);
fail1:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 0c5bcdfa84..3dc21878c0 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -126,6 +126,7 @@ INTERNAL {
efx_mae_match_specs_equal;
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
+ efx_mae_mport_id_by_selector;
efx_mae_outer_rule_insert;
efx_mae_outer_rule_remove;
--
2.30.2
* [dpdk-dev] [PATCH 06/38] net/sfc: support EF100 Tx override prefix
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (4 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 05/38] common/sfc_efx/base: add an API to get mport ID by selector Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 07/38] net/sfc: add representors proxy infrastructure Andrew Rybchenko
` (32 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Add an internal mbuf dynamic flag and field to request the EF100 native
Tx datapath to use a Tx prefix descriptor that overrides the egress m-port.
Overriding the egress m-port is necessary on representor Tx burst
so that the packet reaches the corresponding VF.
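A sketch of the intended producer side (not part of this patch; representor
Tx arrives in a later patch and the function name here is hypothetical),
assuming sfc_dp.h, efx.h and rte_mbuf_dyn.h are in scope:
static void
sfc_repr_mbuf_set_egress_mport(struct rte_mbuf *m,
			       const efx_mport_id_t *mport_id)
{
	/* Store the target m-port ID in the dynamic field... */
	*RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset, efx_mport_id_t *) =
		*mport_id;
	/* ...and ask the EF100 Tx datapath to emit an override prefix. */
	m->ol_flags |= sfc_dp_mport_override;
}
sfc_ef100_tx_qdesc_prefix_create() below then turns this into a TX_PREFIX
descriptor with EGRESS_MPORT_EN set.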
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_dp.c | 46 ++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_dp.h | 25 ++++++++++++++++++
drivers/net/sfc/sfc_ef100_tx.c | 25 ++++++++++++++++++
drivers/net/sfc/sfc_ethdev.c | 4 +++
4 files changed, 100 insertions(+)
diff --git a/drivers/net/sfc/sfc_dp.c b/drivers/net/sfc/sfc_dp.c
index 24ed0898c8..66a84c99c8 100644
--- a/drivers/net/sfc/sfc_dp.c
+++ b/drivers/net/sfc/sfc_dp.c
@@ -12,6 +12,9 @@
#include <errno.h>
#include <rte_log.h>
+#include <rte_mbuf_dyn.h>
+
+#include "efx.h"
#include "sfc_dp.h"
#include "sfc_log.h"
@@ -77,3 +80,46 @@ sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry)
return 0;
}
+
+uint64_t sfc_dp_mport_override;
+int sfc_dp_mport_offset = -1;
+
+int
+sfc_dp_mport_register(void)
+{
+ static const struct rte_mbuf_dynfield mport = {
+ .name = "rte_net_sfc_dynfield_mport",
+ .size = sizeof(efx_mport_id_t),
+ .align = __alignof__(efx_mport_id_t),
+ };
+ static const struct rte_mbuf_dynflag mport_override = {
+ .name = "rte_net_sfc_dynflag_mport_override",
+ };
+
+ int field_offset;
+ int flag;
+
+ if (sfc_dp_mport_override != 0) {
+ SFC_GENERIC_LOG(INFO, "%s() already registered", __func__);
+ return 0;
+ }
+
+ field_offset = rte_mbuf_dynfield_register(&mport);
+ if (field_offset < 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to register mport dynfield",
+ __func__);
+ return -1;
+ }
+
+ flag = rte_mbuf_dynflag_register(&mport_override);
+ if (flag < 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to register mport dynflag",
+ __func__);
+ return -1;
+ }
+
+ sfc_dp_mport_offset = field_offset;
+ sfc_dp_mport_override = UINT64_C(1) << flag;
+
+ return 0;
+}
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 7fd8f34b0f..f3c6892426 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -126,6 +126,31 @@ struct sfc_dp *sfc_dp_find_by_caps(struct sfc_dp_list *head,
unsigned int avail_caps);
int sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry);
+/**
+ * Dynamically registered mbuf flag "mport_override" (as a bitmask).
+ *
+ * If this flag is set in an mbuf then the dynamically registered
+ * mbuf field "mport" holds a valid value. This is used to direct
+ * port representor transmit traffic to the correct target port.
+ */
+extern uint64_t sfc_dp_mport_override;
+
+/**
+ * Dynamically registered mbuf field "mport" (mbuf byte offset).
+ *
+ * If the dynamically registered "mport_override" flag is set in
+ * an mbuf then the mbuf "mport" field holds a valid value. This
+ * is used to direct port representor transmit traffic to the
+ * correct target port.
+ */
+extern int sfc_dp_mport_offset;
+
+/**
+ * Register dynamic mbuf flag and field which can be used to require Tx override
+ * prefix descriptor with egress mport set.
+ */
+int sfc_dp_mport_register(void);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d34..51eecbe832 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -10,6 +10,7 @@
#include <stdbool.h>
#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
#include <rte_io.h>
#include <rte_net.h>
@@ -309,6 +310,19 @@ sfc_ef100_tx_reap(struct sfc_ef100_txq *txq)
sfc_ef100_tx_reap_num_descs(txq, sfc_ef100_tx_process_events(txq));
}
+static void
+sfc_ef100_tx_qdesc_prefix_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
+{
+ efx_mport_id_t *mport_id =
+ RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset, efx_mport_id_t *);
+
+ EFX_POPULATE_OWORD_3(*tx_desc,
+ ESF_GZ_TX_PREFIX_EGRESS_MPORT,
+ mport_id->id,
+ ESF_GZ_TX_PREFIX_EGRESS_MPORT_EN, 1,
+ ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_PREFIX);
+}
+
static uint8_t
sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
{
@@ -525,6 +539,11 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
SFC_MBUF_SEG_LEN_MAX));
}
+ if (m->ol_flags & sfc_dp_mport_override) {
+ /* Tx override prefix descriptor will be used */
+ extra_descs++;
+ }
+
/*
* Any segment of scattered packet cannot be bigger than maximum
* segment length. Make sure that subsequent segments do not need
@@ -671,6 +690,12 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
break;
}
+ if (m_seg->ol_flags & sfc_dp_mport_override) {
+ id = added++ & txq->ptr_mask;
+ sfc_ef100_tx_qdesc_prefix_create(m_seg,
+ &txq->txq_hw_ring[id]);
+ }
+
if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
} else {
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 41add341a0..8e17189875 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2245,6 +2245,10 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
return 1;
}
+ rc = sfc_dp_mport_register();
+ if (rc != 0)
+ return rc;
+
sfc_register_dp();
logtype_main = sfc_register_logtype(&pci_dev->addr,
--
2.30.2
* [dpdk-dev] [PATCH 07/38] net/sfc: add representors proxy infrastructure
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (5 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 06/38] net/sfc: support EF100 Tx override prefix Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 08/38] net/sfc: reserve TxQ and RxQ for port representors Andrew Rybchenko
` (31 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The representor proxy is a mediator between virtual functions and port
representors. It forwards traffic between virtual functions and port
representors, (de-)multiplexing VF representor traffic over the base
PF ethdev. The implementation will be provided by later patches.
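Purely as an illustration of the intended (de-)multiplexing (the patch
below only adds a stub routine; the forward_rx()/forward_tx() helpers
are hypothetical):

static int32_t
sfc_repr_proxy_routine_sketch(void *arg)
{
	struct sfc_repr_proxy *rp = arg;

	/* Poll the PF RxQ reserved for representors and hand each packet
	 * to the representor matching its ingress m-port. */
	forward_rx(rp);

	/* Drain representor Tx rings and send via the reserved PF TxQ,
	 * requesting the egress m-port of the target VF. */
	forward_tx(rp);

	return 0;
}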
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/meson.build | 1 +
drivers/net/sfc/sfc.c | 35 ++++++
drivers/net/sfc/sfc.h | 5 +
drivers/net/sfc/sfc_repr_proxy.c | 210 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 34 +++++
5 files changed, 285 insertions(+)
create mode 100644 drivers/net/sfc/sfc_repr_proxy.c
create mode 100644 drivers/net/sfc/sfc_repr_proxy.h
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 948c65968a..4fc2063f7a 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -97,4 +97,5 @@ sources = files(
'sfc_ef100_rx.c',
'sfc_ef100_tx.c',
'sfc_service.c',
+ 'sfc_repr_proxy.c',
)
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index cd2c97f3b2..591b8971b3 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -27,6 +27,25 @@
#include "sfc_sw_stats.h"
+bool
+sfc_repr_supported(const struct sfc_adapter *sa)
+{
+ if (!sa->switchdev)
+ return false;
+
+ /*
+ * Representor proxy should use service lcore on PF's socket
+ * (sa->socket_id) to be efficient. But the proxy will fall back
+ * to any socket if it is not possible to get the service core
+ * on the same socket. Check that at least service core on any
+ * socket is available.
+ */
+ if (sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE)
+ return false;
+
+ return true;
+}
+
int
sfc_dma_alloc(const struct sfc_adapter *sa, const char *name, uint16_t id,
size_t len, int socket_id, efsys_mem_t *esmp)
@@ -434,9 +453,16 @@ sfc_try_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_flows_insert;
+ rc = sfc_repr_proxy_start(sa);
+ if (rc != 0)
+ goto fail_repr_proxy_start;
+
sfc_log_init(sa, "done");
return 0;
+fail_repr_proxy_start:
+ sfc_flow_stop(sa);
+
fail_flows_insert:
sfc_tx_stop(sa);
@@ -540,6 +566,7 @@ sfc_stop(struct sfc_adapter *sa)
sa->state = SFC_ADAPTER_STOPPING;
+ sfc_repr_proxy_stop(sa);
sfc_flow_stop(sa);
sfc_tx_stop(sa);
sfc_rx_stop(sa);
@@ -899,6 +926,10 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_mae_switchdev_init;
+ rc = sfc_repr_proxy_attach(sa);
+ if (rc != 0)
+ goto fail_repr_proxy_attach;
+
sfc_log_init(sa, "fini nic");
efx_nic_fini(enp);
@@ -927,6 +958,9 @@ sfc_attach(struct sfc_adapter *sa)
fail_sw_xstats_init:
sfc_flow_fini(sa);
+ sfc_repr_proxy_detach(sa);
+
+fail_repr_proxy_attach:
sfc_mae_switchdev_fini(sa);
fail_mae_switchdev_init:
@@ -976,6 +1010,7 @@ sfc_detach(struct sfc_adapter *sa)
sfc_flow_fini(sa);
+ sfc_repr_proxy_detach(sa);
sfc_mae_switchdev_fini(sa);
sfc_mae_detach(sa);
sfc_mae_counter_rxq_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index b045baca9e..8f65857f65 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -30,6 +30,8 @@
#include "sfc_sriov.h"
#include "sfc_mae.h"
#include "sfc_dp.h"
+#include "sfc_repr_proxy.h"
+#include "sfc_service.h"
#ifdef __cplusplus
extern "C" {
@@ -260,6 +262,7 @@ struct sfc_adapter {
struct sfc_sw_xstats sw_xstats;
struct sfc_filter filter;
struct sfc_mae mae;
+ struct sfc_repr_proxy repr_proxy;
struct sfc_flow_list flow_list;
@@ -388,6 +391,8 @@ sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
return sas->counters_rxq_allocated ? 1 : 0;
}
+bool sfc_repr_supported(const struct sfc_adapter *sa);
+
/** Get the number of milliseconds since boot from the default timer */
static inline uint64_t
sfc_get_system_msecs(void)
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
new file mode 100644
index 0000000000..eb29376988
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <rte_service.h>
+#include <rte_service_component.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_repr_proxy.h"
+#include "sfc.h"
+
+static int32_t
+sfc_repr_proxy_routine(void *arg)
+{
+ struct sfc_repr_proxy *rp = arg;
+
+ /* Representor proxy boilerplate will be here */
+ RTE_SET_USED(rp);
+
+ return 0;
+}
+
+int
+sfc_repr_proxy_attach(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct rte_service_spec service;
+ uint32_t cid;
+ uint32_t sid;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ cid = sfc_get_service_lcore(sa->socket_id);
+ if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
+ /* Warn and try to allocate on any NUMA node */
+ sfc_warn(sa,
+ "repr proxy: unable to get service lcore at socket %d",
+ sa->socket_id);
+
+ cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+ }
+ if (cid == RTE_MAX_LCORE) {
+ rc = ENOTSUP;
+ sfc_err(sa, "repr proxy: failed to get service lcore");
+ goto fail_get_service_lcore;
+ }
+
+ memset(&service, 0, sizeof(service));
+ snprintf(service.name, sizeof(service.name),
+ "net_sfc_%hu_repr_proxy", sfc_sa2shared(sa)->port_id);
+ service.socket_id = rte_lcore_to_socket_id(cid);
+ service.callback = sfc_repr_proxy_routine;
+ service.callback_userdata = rp;
+
+ rc = rte_service_component_register(&service, &sid);
+ if (rc != 0) {
+ rc = ENOEXEC;
+ sfc_err(sa, "repr proxy: failed to register service component");
+ goto fail_register;
+ }
+
+ rc = rte_service_map_lcore_set(sid, cid, 1);
+ if (rc != 0) {
+ rc = -rc;
+ sfc_err(sa, "repr proxy: failed to map lcore");
+ goto fail_map_lcore;
+ }
+
+ rp->service_core_id = cid;
+ rp->service_id = sid;
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_map_lcore:
+ rte_service_component_unregister(sid);
+
+fail_register:
+ /*
+ * No need to rollback service lcore get since
+ * it just makes socket_id based search and remembers it.
+ */
+
+fail_get_service_lcore:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_repr_proxy_detach(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ rte_service_map_lcore_set(rp->service_id, rp->service_core_id, 0);
+ rte_service_component_unregister(rp->service_id);
+
+ sfc_log_init(sa, "done");
+}
+
+int
+sfc_repr_proxy_start(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ /*
+ * The condition to start the proxy is insufficient. It will be
+ * complemented with representor port start/stop support.
+ */
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ /* Service core may be in "stopped" state, start it */
+ rc = rte_service_lcore_start(rp->service_core_id);
+ if (rc != 0 && rc != -EALREADY) {
+ rc = -rc;
+ sfc_err(sa, "failed to start service core for %s: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(rc));
+ goto fail_start_core;
+ }
+
+ /* Run the service */
+ rc = rte_service_component_runstate_set(rp->service_id, 1);
+ if (rc < 0) {
+ rc = -rc;
+ sfc_err(sa, "failed to run %s component: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(rc));
+ goto fail_component_runstate_set;
+ }
+ rc = rte_service_runstate_set(rp->service_id, 1);
+ if (rc < 0) {
+ rc = -rc;
+ sfc_err(sa, "failed to run %s: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(rc));
+ goto fail_runstate_set;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_runstate_set:
+ rte_service_component_runstate_set(rp->service_id, 0);
+
+fail_component_runstate_set:
+ /* Service lcore may be shared and we never stop it */
+
+fail_start_core:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_repr_proxy_stop(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ rc = rte_service_runstate_set(rp->service_id, 0);
+ if (rc < 0) {
+ sfc_err(sa, "failed to stop %s: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(-rc));
+ }
+
+ rc = rte_service_component_runstate_set(rp->service_id, 0);
+ if (rc < 0) {
+ sfc_err(sa, "failed to stop %s component: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(-rc));
+ }
+
+ /* Service lcore may be shared and we never stop it */
+
+ sfc_log_init(sa, "done");
+}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
new file mode 100644
index 0000000000..40ce352335
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_REPR_PROXY_H
+#define _SFC_REPR_PROXY_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct sfc_repr_proxy {
+ uint32_t service_core_id;
+ uint32_t service_id;
+};
+
+struct sfc_adapter;
+
+int sfc_repr_proxy_attach(struct sfc_adapter *sa);
+void sfc_repr_proxy_detach(struct sfc_adapter *sa);
+int sfc_repr_proxy_start(struct sfc_adapter *sa);
+void sfc_repr_proxy_stop(struct sfc_adapter *sa);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_REPR_PROXY_H */
--
2.30.2
* [dpdk-dev] [PATCH 08/38] net/sfc: reserve TxQ and RxQ for port representors
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (6 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 07/38] net/sfc: add representors proxy infrastructure Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 09/38] net/sfc: move adapter state enum to separate header Andrew Rybchenko
` (30 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
A Tx/Rx queue pair is required to forward traffic between
port representors and virtual functions.
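A sketch of the resulting queue numbering, assuming representors are
available so that one TxQ is reserved (see SFC_REPR_PROXY_NB_TXQ_MIN
below); the assertions are illustrative only:

#include <assert.h>

static void
sfc_example_txq_mapping(struct sfc_adapter_shared *sas)
{
	/* The reserved representor proxy TxQ occupies sw index 0 ... */
	assert(sfc_ethdev_tx_qid_by_txq_sw_index(sas, 0) ==
	       SFC_ETHDEV_QID_INVALID);
	/* ... so ethdev Tx queue 0 maps to internal sw index 1. */
	assert(sfc_txq_sw_index_by_ethdev_tx_qid(sas, 0) == 1);
}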
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 51 ++++++++++++++++++++++++++++++--
drivers/net/sfc/sfc.h | 15 ++++++++++
drivers/net/sfc/sfc_ev.h | 40 ++++++++++++++++++-------
drivers/net/sfc/sfc_repr_proxy.c | 12 +++++---
drivers/net/sfc/sfc_repr_proxy.h | 8 +++++
drivers/net/sfc/sfc_tx.c | 29 ++++++++++--------
6 files changed, 124 insertions(+), 31 deletions(-)
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 591b8971b3..9abd6d600b 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -46,6 +46,12 @@ sfc_repr_supported(const struct sfc_adapter *sa)
return true;
}
+bool
+sfc_repr_available(const struct sfc_adapter_shared *sas)
+{
+ return sas->nb_repr_rxq > 0 && sas->nb_repr_txq > 0;
+}
+
int
sfc_dma_alloc(const struct sfc_adapter *sa, const char *name, uint16_t id,
size_t len, int socket_id, efsys_mem_t *esmp)
@@ -296,6 +302,41 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
sas->counters_rxq_allocated = false;
}
+ if (sfc_repr_supported(sa) &&
+ evq_allocated >= SFC_REPR_PROXY_NB_RXQ_MIN +
+ SFC_REPR_PROXY_NB_TXQ_MIN &&
+ rxq_allocated >= SFC_REPR_PROXY_NB_RXQ_MIN &&
+ txq_allocated >= SFC_REPR_PROXY_NB_TXQ_MIN) {
+ unsigned int extra;
+
+ txq_allocated -= SFC_REPR_PROXY_NB_TXQ_MIN;
+ rxq_allocated -= SFC_REPR_PROXY_NB_RXQ_MIN;
+ evq_allocated -= SFC_REPR_PROXY_NB_RXQ_MIN +
+ SFC_REPR_PROXY_NB_TXQ_MIN;
+
+ sas->nb_repr_rxq = SFC_REPR_PROXY_NB_RXQ_MIN;
+ sas->nb_repr_txq = SFC_REPR_PROXY_NB_TXQ_MIN;
+
+ /* Allocate extra representor RxQs up to the maximum */
+ extra = MIN(evq_allocated, rxq_allocated);
+ extra = MIN(extra,
+ SFC_REPR_PROXY_NB_RXQ_MAX - sas->nb_repr_rxq);
+ evq_allocated -= extra;
+ rxq_allocated -= extra;
+ sas->nb_repr_rxq += extra;
+
+ /* Allocate extra representor TxQs up to the maximum */
+ extra = MIN(evq_allocated, txq_allocated);
+ extra = MIN(extra,
+ SFC_REPR_PROXY_NB_TXQ_MAX - sas->nb_repr_txq);
+ evq_allocated -= extra;
+ txq_allocated -= extra;
+ sas->nb_repr_txq += extra;
+ } else {
+ sas->nb_repr_rxq = 0;
+ sas->nb_repr_txq = 0;
+ }
+
/* Add remaining allocated queues */
sa->rxq_max += MIN(rxq_allocated, evq_allocated / 2);
sa->txq_max += MIN(txq_allocated, evq_allocated - sa->rxq_max);
@@ -313,8 +354,10 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
static int
sfc_set_drv_limits(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
const struct rte_eth_dev_data *data = sa->eth_dev->data;
- uint32_t rxq_reserved = sfc_nb_reserved_rxq(sfc_sa2shared(sa));
+ uint32_t rxq_reserved = sfc_nb_reserved_rxq(sas);
+ uint32_t txq_reserved = sfc_nb_txq_reserved(sas);
efx_drv_limits_t lim;
memset(&lim, 0, sizeof(lim));
@@ -325,10 +368,12 @@ sfc_set_drv_limits(struct sfc_adapter *sa)
* sfc_estimate_resource_limits().
*/
lim.edl_min_evq_count = lim.edl_max_evq_count =
- 1 + data->nb_rx_queues + data->nb_tx_queues + rxq_reserved;
+ 1 + data->nb_rx_queues + data->nb_tx_queues +
+ rxq_reserved + txq_reserved;
lim.edl_min_rxq_count = lim.edl_max_rxq_count =
data->nb_rx_queues + rxq_reserved;
- lim.edl_min_txq_count = lim.edl_max_txq_count = data->nb_tx_queues;
+ lim.edl_min_txq_count = lim.edl_max_txq_count =
+ data->nb_tx_queues + txq_reserved;
return efx_nic_set_drv_limits(sa->nic, &lim);
}
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 8f65857f65..79f9d7979e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -191,6 +191,8 @@ struct sfc_adapter_shared {
char *dp_tx_name;
bool counters_rxq_allocated;
+ unsigned int nb_repr_rxq;
+ unsigned int nb_repr_txq;
};
/* Adapter process private data */
@@ -392,6 +394,19 @@ sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
}
bool sfc_repr_supported(const struct sfc_adapter *sa);
+bool sfc_repr_available(const struct sfc_adapter_shared *sas);
+
+static inline unsigned int
+sfc_repr_nb_rxq(const struct sfc_adapter_shared *sas)
+{
+ return sas->nb_repr_rxq;
+}
+
+static inline unsigned int
+sfc_repr_nb_txq(const struct sfc_adapter_shared *sas)
+{
+ return sas->nb_repr_txq;
+}
/** Get the number of milliseconds since boot from the default timer */
static inline uint64_t
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index b2a0380205..590cfb1694 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -70,14 +70,21 @@ sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
static inline unsigned int
sfc_nb_reserved_rxq(const struct sfc_adapter_shared *sas)
{
- return sfc_nb_counter_rxq(sas);
+ return sfc_nb_counter_rxq(sas) + sfc_repr_nb_rxq(sas);
+}
+
+/* Return the number of Tx queues reserved for driver's internal use */
+static inline unsigned int
+sfc_nb_txq_reserved(const struct sfc_adapter_shared *sas)
+{
+ return sfc_repr_nb_txq(sas);
}
static inline unsigned int
sfc_nb_reserved_evq(const struct sfc_adapter_shared *sas)
{
- /* An EvQ is required for each reserved RxQ */
- return 1 + sfc_nb_reserved_rxq(sas);
+ /* An EvQ is required for each reserved Rx/Tx queue */
+ return 1 + sfc_nb_reserved_rxq(sas) + sfc_nb_txq_reserved(sas);
}
/*
@@ -112,6 +119,7 @@ sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
* Own event queue is allocated for management, each Rx and each Tx queue.
* Zero event queue is used for management events.
* When counters are supported, one Rx event queue is reserved.
+ * When representors are supported, Rx and Tx event queues are reserved.
* Rx event queues follow reserved event queues.
* Tx event queues follow Rx event queues.
*/
@@ -150,27 +158,37 @@ sfc_evq_sw_index_by_rxq_sw_index(struct sfc_adapter *sa,
}
static inline sfc_ethdev_qid_t
-sfc_ethdev_tx_qid_by_txq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+sfc_ethdev_tx_qid_by_txq_sw_index(struct sfc_adapter_shared *sas,
sfc_sw_index_t txq_sw_index)
{
- /* Only ethdev queues are present for now */
- return txq_sw_index;
+ if (txq_sw_index < sfc_nb_txq_reserved(sas))
+ return SFC_ETHDEV_QID_INVALID;
+
+ return txq_sw_index - sfc_nb_txq_reserved(sas);
}
static inline sfc_sw_index_t
-sfc_txq_sw_index_by_ethdev_tx_qid(__rte_unused struct sfc_adapter_shared *sas,
+sfc_txq_sw_index_by_ethdev_tx_qid(struct sfc_adapter_shared *sas,
sfc_ethdev_qid_t ethdev_qid)
{
- /* Only ethdev queues are present for now */
- return ethdev_qid;
+ return sfc_nb_txq_reserved(sas) + ethdev_qid;
}
static inline sfc_sw_index_t
sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
sfc_sw_index_t txq_sw_index)
{
- return sfc_nb_reserved_evq(sfc_sa2shared(sa)) +
- sa->eth_dev->data->nb_rx_queues + txq_sw_index;
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+ sfc_ethdev_qid_t ethdev_qid;
+
+ ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, txq_sw_index);
+ if (ethdev_qid == SFC_ETHDEV_QID_INVALID) {
+ return sfc_nb_reserved_evq(sas) - sfc_nb_txq_reserved(sas) +
+ txq_sw_index;
+ }
+
+ return sfc_nb_reserved_evq(sas) + sa->eth_dev->data->nb_rx_queues +
+ ethdev_qid;
}
int sfc_ev_attach(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index eb29376988..6d3962304f 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -29,6 +29,7 @@ sfc_repr_proxy_routine(void *arg)
int
sfc_repr_proxy_attach(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
struct rte_service_spec service;
uint32_t cid;
@@ -37,7 +38,7 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return 0;
}
@@ -102,11 +103,12 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
void
sfc_repr_proxy_detach(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
sfc_log_init(sa, "entry");
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return;
}
@@ -120,6 +122,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
int
sfc_repr_proxy_start(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
int rc;
@@ -129,7 +132,7 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
* The condition to start the proxy is insufficient. It will be
* complemented with representor port start/stop support.
*/
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return 0;
}
@@ -180,12 +183,13 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
void
sfc_repr_proxy_stop(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
int rc;
sfc_log_init(sa, "entry");
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return;
}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index 40ce352335..953b9922c8 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -16,6 +16,14 @@
extern "C" {
#endif
+/* Number of supported RxQs with different mbuf memory pools */
+#define SFC_REPR_PROXY_NB_RXQ_MIN (1)
+#define SFC_REPR_PROXY_NB_RXQ_MAX (1)
+
+/* One TxQ is required and sufficient for port representors support */
+#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
+#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+
struct sfc_repr_proxy {
uint32_t service_core_id;
uint32_t service_id;
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d2..c1b2e964f8 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -376,6 +376,8 @@ sfc_tx_configure(struct sfc_adapter *sa)
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
const struct rte_eth_conf *dev_conf = &sa->eth_dev->data->dev_conf;
const unsigned int nb_tx_queues = sa->eth_dev->data->nb_tx_queues;
+ const unsigned int nb_rsvd_tx_queues = sfc_nb_txq_reserved(sas);
+ const unsigned int nb_txq_total = nb_tx_queues + nb_rsvd_tx_queues;
int rc = 0;
sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
@@ -395,11 +397,11 @@ sfc_tx_configure(struct sfc_adapter *sa)
if (rc != 0)
goto fail_check_mode;
- if (nb_tx_queues == sas->txq_count)
+ if (nb_txq_total == sas->txq_count)
goto done;
if (sas->txq_info == NULL) {
- sas->txq_info = rte_calloc_socket("sfc-txqs", nb_tx_queues,
+ sas->txq_info = rte_calloc_socket("sfc-txqs", nb_txq_total,
sizeof(sas->txq_info[0]), 0,
sa->socket_id);
if (sas->txq_info == NULL)
@@ -410,7 +412,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
* since it should not be shared.
*/
rc = ENOMEM;
- sa->txq_ctrl = calloc(nb_tx_queues, sizeof(sa->txq_ctrl[0]));
+ sa->txq_ctrl = calloc(nb_txq_total, sizeof(sa->txq_ctrl[0]));
if (sa->txq_ctrl == NULL)
goto fail_txqs_ctrl_alloc;
} else {
@@ -422,23 +424,23 @@ sfc_tx_configure(struct sfc_adapter *sa)
new_txq_info =
rte_realloc(sas->txq_info,
- nb_tx_queues * sizeof(sas->txq_info[0]), 0);
- if (new_txq_info == NULL && nb_tx_queues > 0)
+ nb_txq_total * sizeof(sas->txq_info[0]), 0);
+ if (new_txq_info == NULL && nb_txq_total > 0)
goto fail_txqs_realloc;
new_txq_ctrl = realloc(sa->txq_ctrl,
- nb_tx_queues * sizeof(sa->txq_ctrl[0]));
- if (new_txq_ctrl == NULL && nb_tx_queues > 0)
+ nb_txq_total * sizeof(sa->txq_ctrl[0]));
+ if (new_txq_ctrl == NULL && nb_txq_total > 0)
goto fail_txqs_ctrl_realloc;
sas->txq_info = new_txq_info;
sa->txq_ctrl = new_txq_ctrl;
- if (nb_tx_queues > sas->ethdev_txq_count) {
- memset(&sas->txq_info[sas->ethdev_txq_count], 0,
- (nb_tx_queues - sas->ethdev_txq_count) *
+ if (nb_txq_total > sas->txq_count) {
+ memset(&sas->txq_info[sas->txq_count], 0,
+ (nb_txq_total - sas->txq_count) *
sizeof(sas->txq_info[0]));
- memset(&sa->txq_ctrl[sas->ethdev_txq_count], 0,
- (nb_tx_queues - sas->ethdev_txq_count) *
+ memset(&sa->txq_ctrl[sas->txq_count], 0,
+ (nb_txq_total - sas->txq_count) *
sizeof(sa->txq_ctrl[0]));
}
}
@@ -455,7 +457,8 @@ sfc_tx_configure(struct sfc_adapter *sa)
sas->ethdev_txq_count++;
}
- sas->txq_count = sas->ethdev_txq_count;
+ /* TODO: initialize reserved queues when supported. */
+ sas->txq_count = sas->ethdev_txq_count + nb_rsvd_tx_queues;
done:
return 0;
--
2.30.2
* [dpdk-dev] [PATCH 09/38] net/sfc: move adapter state enum to separate header
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (7 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 08/38] net/sfc: reserve TxQ and RxQ for port representors Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 10/38] common/sfc_efx/base: allow creating invalid mport selectors Andrew Rybchenko
` (29 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The adapter state will be reused by representors, which will have
a separate adapter. Rename the adapter state to ethdev state so that
its meaning is clearer.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 47 ++++++++++---------
drivers/net/sfc/sfc.h | 54 +---------------------
drivers/net/sfc/sfc_ethdev.c | 40 ++++++++---------
drivers/net/sfc/sfc_ethdev_state.h | 72 ++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_flow.c | 10 ++---
drivers/net/sfc/sfc_intr.c | 12 ++---
drivers/net/sfc/sfc_mae.c | 2 +-
drivers/net/sfc/sfc_port.c | 2 +-
8 files changed, 130 insertions(+), 109 deletions(-)
create mode 100644 drivers/net/sfc/sfc_ethdev_state.h
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 9abd6d600b..152234cb61 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -26,7 +26,6 @@
#include "sfc_tweak.h"
#include "sfc_sw_stats.h"
-
bool
sfc_repr_supported(const struct sfc_adapter *sa)
{
@@ -440,7 +439,7 @@ sfc_try_start(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
SFC_ASSERT(sfc_adapter_is_locked(sa));
- SFC_ASSERT(sa->state == SFC_ADAPTER_STARTING);
+ SFC_ASSERT(sa->state == SFC_ETHDEV_STARTING);
sfc_log_init(sa, "set FW subvariant");
rc = sfc_set_fw_subvariant(sa);
@@ -545,9 +544,9 @@ sfc_start(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
switch (sa->state) {
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
break;
- case SFC_ADAPTER_STARTED:
+ case SFC_ETHDEV_STARTED:
sfc_notice(sa, "already started");
return 0;
default:
@@ -555,7 +554,7 @@ sfc_start(struct sfc_adapter *sa)
goto fail_bad_state;
}
- sa->state = SFC_ADAPTER_STARTING;
+ sa->state = SFC_ETHDEV_STARTING;
rc = 0;
do {
@@ -578,13 +577,13 @@ sfc_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_try_start;
- sa->state = SFC_ADAPTER_STARTED;
+ sa->state = SFC_ETHDEV_STARTED;
sfc_log_init(sa, "done");
return 0;
fail_try_start:
fail_sriov_vswitch_create:
- sa->state = SFC_ADAPTER_CONFIGURED;
+ sa->state = SFC_ETHDEV_CONFIGURED;
fail_bad_state:
sfc_log_init(sa, "failed %d", rc);
return rc;
@@ -598,9 +597,9 @@ sfc_stop(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
switch (sa->state) {
- case SFC_ADAPTER_STARTED:
+ case SFC_ETHDEV_STARTED:
break;
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
sfc_notice(sa, "already stopped");
return;
default:
@@ -609,7 +608,7 @@ sfc_stop(struct sfc_adapter *sa)
return;
}
- sa->state = SFC_ADAPTER_STOPPING;
+ sa->state = SFC_ETHDEV_STOPPING;
sfc_repr_proxy_stop(sa);
sfc_flow_stop(sa);
@@ -620,7 +619,7 @@ sfc_stop(struct sfc_adapter *sa)
sfc_intr_stop(sa);
efx_nic_fini(sa->nic);
- sa->state = SFC_ADAPTER_CONFIGURED;
+ sa->state = SFC_ETHDEV_CONFIGURED;
sfc_log_init(sa, "done");
}
@@ -631,7 +630,7 @@ sfc_restart(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return EINVAL;
sfc_stop(sa);
@@ -652,7 +651,7 @@ sfc_restart_if_required(void *arg)
if (rte_atomic32_cmpset((volatile uint32_t *)&sa->restart_required,
1, 0)) {
sfc_adapter_lock(sa);
- if (sa->state == SFC_ADAPTER_STARTED)
+ if (sa->state == SFC_ETHDEV_STARTED)
(void)sfc_restart(sa);
sfc_adapter_unlock(sa);
}
@@ -685,9 +684,9 @@ sfc_configure(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- SFC_ASSERT(sa->state == SFC_ADAPTER_INITIALIZED ||
- sa->state == SFC_ADAPTER_CONFIGURED);
- sa->state = SFC_ADAPTER_CONFIGURING;
+ SFC_ASSERT(sa->state == SFC_ETHDEV_INITIALIZED ||
+ sa->state == SFC_ETHDEV_CONFIGURED);
+ sa->state = SFC_ETHDEV_CONFIGURING;
rc = sfc_check_conf(sa);
if (rc != 0)
@@ -713,7 +712,7 @@ sfc_configure(struct sfc_adapter *sa)
if (rc != 0)
goto fail_sw_xstats_configure;
- sa->state = SFC_ADAPTER_CONFIGURED;
+ sa->state = SFC_ETHDEV_CONFIGURED;
sfc_log_init(sa, "done");
return 0;
@@ -731,7 +730,7 @@ sfc_configure(struct sfc_adapter *sa)
fail_intr_configure:
fail_check_conf:
- sa->state = SFC_ADAPTER_INITIALIZED;
+ sa->state = SFC_ETHDEV_INITIALIZED;
sfc_log_init(sa, "failed %d", rc);
return rc;
}
@@ -743,8 +742,8 @@ sfc_close(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- SFC_ASSERT(sa->state == SFC_ADAPTER_CONFIGURED);
- sa->state = SFC_ADAPTER_CLOSING;
+ SFC_ASSERT(sa->state == SFC_ETHDEV_CONFIGURED);
+ sa->state = SFC_ETHDEV_CLOSING;
sfc_sw_xstats_close(sa);
sfc_tx_close(sa);
@@ -752,7 +751,7 @@ sfc_close(struct sfc_adapter *sa)
sfc_port_close(sa);
sfc_intr_close(sa);
- sa->state = SFC_ADAPTER_INITIALIZED;
+ sa->state = SFC_ETHDEV_INITIALIZED;
sfc_log_init(sa, "done");
}
@@ -993,7 +992,7 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_sriov_vswitch_create;
- sa->state = SFC_ADAPTER_INITIALIZED;
+ sa->state = SFC_ETHDEV_INITIALIZED;
sfc_log_init(sa, "done");
return 0;
@@ -1067,7 +1066,7 @@ sfc_detach(struct sfc_adapter *sa)
efx_tunnel_fini(sa->nic);
sfc_sriov_detach(sa);
- sa->state = SFC_ADAPTER_UNINITIALIZED;
+ sa->state = SFC_ETHDEV_UNINITIALIZED;
}
static int
@@ -1325,7 +1324,7 @@ sfc_unprobe(struct sfc_adapter *sa)
sfc_mem_bar_fini(sa);
sfc_flow_fini(sa);
- sa->state = SFC_ADAPTER_UNINITIALIZED;
+ sa->state = SFC_ETHDEV_UNINITIALIZED;
}
uint32_t
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 79f9d7979e..628f32c13f 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -32,62 +32,12 @@
#include "sfc_dp.h"
#include "sfc_repr_proxy.h"
#include "sfc_service.h"
+#include "sfc_ethdev_state.h"
#ifdef __cplusplus
extern "C" {
#endif
-/*
- * +---------------+
- * | UNINITIALIZED |<-----------+
- * +---------------+ |
- * |.eth_dev_init |.eth_dev_uninit
- * V |
- * +---------------+------------+
- * | INITIALIZED |
- * +---------------+<-----------<---------------+
- * |.dev_configure | |
- * V |failed |
- * +---------------+------------+ |
- * | CONFIGURING | |
- * +---------------+----+ |
- * |success | |
- * | | +---------------+
- * | | | CLOSING |
- * | | +---------------+
- * | | ^
- * V |.dev_configure |
- * +---------------+----+ |.dev_close
- * | CONFIGURED |----------------------------+
- * +---------------+<-----------+
- * |.dev_start |
- * V |
- * +---------------+ |
- * | STARTING |------------^
- * +---------------+ failed |
- * |success |
- * | +---------------+
- * | | STOPPING |
- * | +---------------+
- * | ^
- * V |.dev_stop
- * +---------------+------------+
- * | STARTED |
- * +---------------+
- */
-enum sfc_adapter_state {
- SFC_ADAPTER_UNINITIALIZED = 0,
- SFC_ADAPTER_INITIALIZED,
- SFC_ADAPTER_CONFIGURING,
- SFC_ADAPTER_CONFIGURED,
- SFC_ADAPTER_CLOSING,
- SFC_ADAPTER_STARTING,
- SFC_ADAPTER_STARTED,
- SFC_ADAPTER_STOPPING,
-
- SFC_ADAPTER_NSTATES
-};
-
enum sfc_dev_filter_mode {
SFC_DEV_FILTER_MODE_PROMISC = 0,
SFC_DEV_FILTER_MODE_ALLMULTI,
@@ -245,7 +195,7 @@ struct sfc_adapter {
* change its state should acquire the lock.
*/
rte_spinlock_t lock;
- enum sfc_adapter_state state;
+ enum sfc_ethdev_state state;
struct rte_eth_dev *eth_dev;
struct rte_kvargs *kvargs;
int socket_id;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8e17189875..ff762bb90b 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -213,9 +213,9 @@ sfc_dev_configure(struct rte_eth_dev *dev)
sfc_adapter_lock(sa);
switch (sa->state) {
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
/* FALLTHROUGH */
- case SFC_ADAPTER_INITIALIZED:
+ case SFC_ETHDEV_INITIALIZED:
rc = sfc_configure(sa);
break;
default:
@@ -257,7 +257,7 @@ sfc_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
sfc_log_init(sa, "entry");
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, ¤t_link);
} else if (wait_to_complete) {
efx_link_mode_t link_mode;
@@ -346,15 +346,15 @@ sfc_dev_close(struct rte_eth_dev *dev)
sfc_adapter_lock(sa);
switch (sa->state) {
- case SFC_ADAPTER_STARTED:
+ case SFC_ETHDEV_STARTED:
sfc_stop(sa);
- SFC_ASSERT(sa->state == SFC_ADAPTER_CONFIGURED);
+ SFC_ASSERT(sa->state == SFC_ETHDEV_CONFIGURED);
/* FALLTHROUGH */
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
sfc_close(sa);
- SFC_ASSERT(sa->state == SFC_ADAPTER_INITIALIZED);
+ SFC_ASSERT(sa->state == SFC_ETHDEV_INITIALIZED);
/* FALLTHROUGH */
- case SFC_ADAPTER_INITIALIZED:
+ case SFC_ETHDEV_INITIALIZED:
break;
default:
sfc_err(sa, "unexpected adapter state %u on close", sa->state);
@@ -410,7 +410,7 @@ sfc_dev_filter_set(struct rte_eth_dev *dev, enum sfc_dev_filter_mode mode,
sfc_warn(sa, "the change is to be applied on the next "
"start provided that isolated mode is "
"disabled prior the next start");
- } else if ((sa->state == SFC_ADAPTER_STARTED) &&
+ } else if ((sa->state == SFC_ETHDEV_STARTED) &&
((rc = sfc_set_rx_mode(sa)) != 0)) {
*toggle = !(enabled);
sfc_warn(sa, "Failed to %s %s mode, rc = %d",
@@ -704,7 +704,7 @@ sfc_stats_reset(struct rte_eth_dev *dev)
sfc_adapter_lock(sa);
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
/*
* The operation cannot be done if port is not started; it
* will be scheduled to be done during the next port start
@@ -905,7 +905,7 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
sfc_adapter_lock(sa);
- if (sa->state == SFC_ADAPTER_STARTED)
+ if (sa->state == SFC_ETHDEV_STARTED)
efx_mac_fcntl_get(sa->nic, &wanted_fc, &link_fc);
else
link_fc = sa->port.flow_ctrl;
@@ -971,7 +971,7 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
sfc_adapter_lock(sa);
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = efx_mac_fcntl_set(sa->nic, fcntl, fc_conf->autoneg);
if (rc != 0)
goto fail_mac_fcntl_set;
@@ -1051,7 +1051,7 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
goto fail_check_scatter;
if (pdu != sa->port.pdu) {
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
sfc_stop(sa);
old_pdu = sa->port.pdu;
@@ -1128,7 +1128,7 @@ sfc_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
goto unlock;
}
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
sfc_notice(sa, "the port is not started");
sfc_notice(sa, "the new MAC address will be set on port start");
@@ -1215,7 +1215,7 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
port->nb_mcast_addrs = nb_mc_addr;
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return 0;
rc = efx_mac_multicast_list_set(sa->nic, port->mcast_addrs,
@@ -1356,7 +1356,7 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
sfc_adapter_lock(sa);
rc = EINVAL;
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
goto fail_not_started;
rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
@@ -1420,7 +1420,7 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
sfc_adapter_lock(sa);
rc = EINVAL;
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
goto fail_not_started;
txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
@@ -1528,7 +1528,7 @@ sfc_dev_udp_tunnel_op(struct rte_eth_dev *dev,
if (rc != 0)
goto fail_op;
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = efx_tunnel_reconfigure(sa->nic);
if (rc == EAGAIN) {
/*
@@ -1664,7 +1664,7 @@ sfc_dev_rss_hash_update(struct rte_eth_dev *dev,
}
if (rss_conf->rss_key != NULL) {
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
for (key_i = 0; key_i < n_contexts; key_i++) {
rc = efx_rx_scale_key_set(sa->nic,
contexts[key_i],
@@ -1791,7 +1791,7 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
}
}
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = efx_rx_scale_tbl_set(sa->nic, EFX_RSS_CONTEXT_DEFAULT,
rss_tbl_new, EFX_RSS_TBL_SIZE);
if (rc != 0)
diff --git a/drivers/net/sfc/sfc_ethdev_state.h b/drivers/net/sfc/sfc_ethdev_state.h
new file mode 100644
index 0000000000..51fb51e20e
--- /dev/null
+++ b/drivers/net/sfc/sfc_ethdev_state.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_ETHDEV_STATE_H
+#define _SFC_ETHDEV_STATE_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * +---------------+
+ * | UNINITIALIZED |<-----------+
+ * +---------------+ |
+ * |.eth_dev_init |.eth_dev_uninit
+ * V |
+ * +---------------+------------+
+ * | INITIALIZED |
+ * +---------------+<-----------<---------------+
+ * |.dev_configure | |
+ * V |failed |
+ * +---------------+------------+ |
+ * | CONFIGURING | |
+ * +---------------+----+ |
+ * |success | |
+ * | | +---------------+
+ * | | | CLOSING |
+ * | | +---------------+
+ * | | ^
+ * V |.dev_configure |
+ * +---------------+----+ |.dev_close
+ * | CONFIGURED |----------------------------+
+ * +---------------+<-----------+
+ * |.dev_start |
+ * V |
+ * +---------------+ |
+ * | STARTING |------------^
+ * +---------------+ failed |
+ * |success |
+ * | +---------------+
+ * | | STOPPING |
+ * | +---------------+
+ * | ^
+ * V |.dev_stop
+ * +---------------+------------+
+ * | STARTED |
+ * +---------------+
+ */
+enum sfc_ethdev_state {
+ SFC_ETHDEV_UNINITIALIZED = 0,
+ SFC_ETHDEV_INITIALIZED,
+ SFC_ETHDEV_CONFIGURING,
+ SFC_ETHDEV_CONFIGURED,
+ SFC_ETHDEV_CLOSING,
+ SFC_ETHDEV_STARTING,
+ SFC_ETHDEV_STARTED,
+ SFC_ETHDEV_STOPPING,
+
+ SFC_ETHDEV_NSTATES
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _SFC_ETHDEV_STATE_H */
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 4f5993a68d..36ee79f331 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -2724,7 +2724,7 @@ sfc_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&sa->flow_list, flow, entries);
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = sfc_flow_insert(sa, flow, error);
if (rc != 0)
goto fail_flow_insert;
@@ -2767,7 +2767,7 @@ sfc_flow_destroy(struct rte_eth_dev *dev,
goto fail_bad_value;
}
- if (sa->state == SFC_ADAPTER_STARTED)
+ if (sa->state == SFC_ETHDEV_STARTED)
rc = sfc_flow_remove(sa, flow, error);
TAILQ_REMOVE(&sa->flow_list, flow, entries);
@@ -2790,7 +2790,7 @@ sfc_flow_flush(struct rte_eth_dev *dev,
sfc_adapter_lock(sa);
while ((flow = TAILQ_FIRST(&sa->flow_list)) != NULL) {
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
int rc;
rc = sfc_flow_remove(sa, flow, error);
@@ -2828,7 +2828,7 @@ sfc_flow_query(struct rte_eth_dev *dev,
goto fail_no_backend;
}
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
ret = rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Can't query the flow: the adapter is not started");
@@ -2858,7 +2858,7 @@ sfc_flow_isolate(struct rte_eth_dev *dev, int enable,
int ret = 0;
sfc_adapter_lock(sa);
- if (sa->state != SFC_ADAPTER_INITIALIZED) {
+ if (sa->state != SFC_ETHDEV_INITIALIZED) {
rte_flow_error_set(error, EBUSY,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "please close the port first");
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..69414fd839 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -60,9 +60,9 @@ sfc_intr_line_handler(void *cb_arg)
sfc_log_init(sa, "entry");
- if (sa->state != SFC_ADAPTER_STARTED &&
- sa->state != SFC_ADAPTER_STARTING &&
- sa->state != SFC_ADAPTER_STOPPING) {
+ if (sa->state != SFC_ETHDEV_STARTED &&
+ sa->state != SFC_ETHDEV_STARTING &&
+ sa->state != SFC_ETHDEV_STOPPING) {
sfc_log_init(sa,
"interrupt on stopped adapter, don't reenable");
goto exit;
@@ -106,9 +106,9 @@ sfc_intr_message_handler(void *cb_arg)
sfc_log_init(sa, "entry");
- if (sa->state != SFC_ADAPTER_STARTED &&
- sa->state != SFC_ADAPTER_STARTING &&
- sa->state != SFC_ADAPTER_STOPPING) {
+ if (sa->state != SFC_ETHDEV_STARTED &&
+ sa->state != SFC_ETHDEV_STARTING &&
+ sa->state != SFC_ETHDEV_STOPPING) {
sfc_log_init(sa, "adapter not-started, don't reenable");
goto exit;
}
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index b3607a178b..7be77054ab 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -3414,7 +3414,7 @@ sfc_mae_flow_verify(struct sfc_adapter *sa,
SFC_ASSERT(sfc_adapter_is_locked(sa));
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return EAGAIN;
if (outer_rule != NULL) {
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb81..7a3f59a112 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -48,7 +48,7 @@ sfc_port_update_mac_stats(struct sfc_adapter *sa, boolean_t force_upload)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return 0;
/*
--
2.30.2
* [dpdk-dev] [PATCH 10/38] common/sfc_efx/base: allow creating invalid mport selectors
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (8 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 09/38] net/sfc: move adapter state enum to separate header Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 11/38] net/sfc: add port representors infrastructure Andrew Rybchenko
` (28 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
There isn't always a valid mport that can be used. For these cases,
special invalid selectors can be generated. Requests that use such
selectors in any way will be rejected.
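A hypothetical caller might use the new API as follows (a sketch based
on the function added below, not part of the patch):

efx_mport_sel_t sel;
efx_rc_t rc;

rc = efx_mae_mport_invalid(&sel);
if (rc != 0) {
	/* Only EINVAL is possible here (NULL output pointer). */
}
/* Any request that passes "sel" to the MAE will now be rejected. */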
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 11 +++++++++++
drivers/common/sfc_efx/base/efx_mae.c | 25 +++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 37 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 94803815ac..c0d1535017 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4196,6 +4196,17 @@ typedef struct efx_mport_id_s {
#define EFX_MPORT_NULL (0U)
+/*
+ * Generate an invalid MPORT selector.
+ *
+ * The resulting MPORT selector is opaque to the caller. Requests
+ * that attempt to use it will be rejected.
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_invalid(
+ __out efx_mport_sel_t *mportp);
+
/*
* Get MPORT selector of a physical port.
*
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index b38b1143d6..b7afe8fdc8 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -660,6 +660,31 @@ static const efx_mae_mv_bit_desc_t __efx_mae_action_rule_mv_bit_desc_set[] = {
#undef EFX_MAE_MV_BIT_DESC
};
+ __checkReturn efx_rc_t
+efx_mae_mport_invalid(
+ __out efx_mport_sel_t *mportp)
+{
+ efx_dword_t dword;
+ efx_rc_t rc;
+
+ if (mportp == NULL) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ EFX_POPULATE_DWORD_1(dword,
+ MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_INVALID);
+
+ memset(mportp, 0, sizeof (*mportp));
+ mportp->sel = dword.ed_u32[0];
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
__checkReturn efx_rc_t
efx_mae_mport_by_phy_port(
__in uint32_t phy_port,
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 3dc21878c0..611757ccde 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -127,6 +127,7 @@ INTERNAL {
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
efx_mae_mport_id_by_selector;
+ efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
efx_mae_outer_rule_remove;
--
2.30.2
* [dpdk-dev] [PATCH 11/38] net/sfc: add port representors infrastructure
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (9 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 10/38] common/sfc_efx/base: allow creating invalid mport selectors Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 12/38] common/sfc_efx/base: add filter ingress mport matching field Andrew Rybchenko
` (27 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Provide a minimal implementation for port representors that can only
be configured and can provide device information.
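For example, assuming the standard ethdev representor devargs syntax
and a hypothetical PCI address, representors for VFs 0 and 1 could be
requested at probe time with devargs such as
0000:02:00.0,representor=[0,1].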
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
doc/guides/nics/sfc_efx.rst | 13 +-
drivers/net/sfc/meson.build | 1 +
drivers/net/sfc/sfc_ethdev.c | 156 +++++++++++-
drivers/net/sfc/sfc_kvargs.c | 1 +
drivers/net/sfc/sfc_kvargs.h | 2 +
drivers/net/sfc/sfc_repr.c | 458 +++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr.h | 36 +++
drivers/net/sfc/sfc_switch.h | 5 +
8 files changed, 663 insertions(+), 9 deletions(-)
create mode 100644 drivers/net/sfc/sfc_repr.c
create mode 100644 drivers/net/sfc/sfc_repr.h
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index d66cb76dab..4719031508 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -74,6 +74,8 @@ SFC EFX PMD has support for:
- SR-IOV PF
+- Port representors (see :ref: switch_representation)
+
Non-supported Features
----------------------
@@ -382,7 +384,16 @@ boolean parameters value.
software virtual switch (for example, Open vSwitch) makes the decision.
Software virtual switch may install MAE rules to pass established traffic
flows via hardware and offload software datapath as the result.
- Default is legacy.
+ Default is legacy, unless representors are specified, in which case switchdev
+ is chosen.
+
+- ``representor`` parameter [list]
+
+ Instantiate port representor Ethernet devices for specified Virtual
+ Functions list.
+
+ It is a standard parameter whose format is described in
+ :ref:`ethernet_device_standard_device_arguments`.
- ``rx_datapath`` [auto|efx|ef10|ef10_essb] (default **auto**)
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 4fc2063f7a..98365e9e73 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -98,4 +98,5 @@ sources = files(
'sfc_ef100_tx.c',
'sfc_service.c',
'sfc_repr_proxy.c',
+ 'sfc_repr.c',
)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index ff762bb90b..8308cbdfef 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -28,6 +28,7 @@
#include "sfc_flow.h"
#include "sfc_dp.h"
#include "sfc_dp_rx.h"
+#include "sfc_repr.h"
#include "sfc_sw_stats.h"
#define SFC_XSTAT_ID_INVALID_VAL UINT64_MAX
@@ -1908,6 +1909,10 @@ static const struct eth_dev_ops sfc_eth_dev_ops = {
.pool_ops_supported = sfc_pool_ops_supported,
};
+struct sfc_ethdev_init_data {
+ uint16_t nb_representors;
+};
+
/**
* Duplicate a string in potentially shared memory required for
* multi-process support.
@@ -2189,7 +2194,7 @@ sfc_register_dp(void)
}
static int
-sfc_parse_switch_mode(struct sfc_adapter *sa)
+sfc_parse_switch_mode(struct sfc_adapter *sa, bool has_representors)
{
const char *switch_mode = NULL;
int rc;
@@ -2201,9 +2206,9 @@ sfc_parse_switch_mode(struct sfc_adapter *sa)
if (rc != 0)
goto fail_kvargs;
- /* Check representors when supported */
- if (switch_mode == NULL ||
- strcasecmp(switch_mode, SFC_KVARG_SWITCH_MODE_LEGACY) == 0) {
+ if (switch_mode == NULL) {
+ sa->switchdev = has_representors;
+ } else if (strcasecmp(switch_mode, SFC_KVARG_SWITCH_MODE_LEGACY) == 0) {
sa->switchdev = false;
} else if (strcasecmp(switch_mode,
SFC_KVARG_SWITCH_MODE_SWITCHDEV) == 0) {
@@ -2227,10 +2232,11 @@ sfc_parse_switch_mode(struct sfc_adapter *sa)
}
static int
-sfc_eth_dev_init(struct rte_eth_dev *dev)
+sfc_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
{
struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct sfc_ethdev_init_data *init_data = init_params;
uint32_t logtype_main;
struct sfc_adapter *sa;
int rc;
@@ -2312,7 +2318,7 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
sfc_adapter_lock_init(sa);
sfc_adapter_lock(sa);
- rc = sfc_parse_switch_mode(sa);
+ rc = sfc_parse_switch_mode(sa, init_data->nb_representors > 0);
if (rc != 0)
goto fail_switch_mode;
@@ -2402,11 +2408,145 @@ static const struct rte_pci_id pci_id_sfc_efx_map[] = {
{ .vendor_id = 0 /* sentinel */ }
};
+static int
+sfc_parse_rte_devargs(const char *args, struct rte_eth_devargs *devargs)
+{
+ struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+ int rc;
+
+ if (args != NULL) {
+ rc = rte_eth_devargs_parse(args, ð_da);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR,
+ "Failed to parse generic devargs '%s'",
+ args);
+ return rc;
+ }
+ }
+
+ *devargs = eth_da;
+
+ return 0;
+}
+
+static int
+sfc_eth_dev_create(struct rte_pci_device *pci_dev,
+ struct sfc_ethdev_init_data *init_data,
+ struct rte_eth_dev **devp)
+{
+ struct rte_eth_dev *dev;
+ int rc;
+
+ rc = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+ sizeof(struct sfc_adapter_shared),
+ eth_dev_pci_specific_init, pci_dev,
+ sfc_eth_dev_init, init_data);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR, "Failed to create sfc ethdev '%s'",
+ pci_dev->device.name);
+ return rc;
+ }
+
+ dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (dev == NULL) {
+ SFC_GENERIC_LOG(ERR, "Failed to find allocated sfc ethdev '%s'",
+ pci_dev->device.name);
+ return -ENODEV;
+ }
+
+ *devp = dev;
+
+ return 0;
+}
+
+static int
+sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
+ const struct rte_eth_devargs *eth_da)
+{
+ struct sfc_adapter *sa;
+ unsigned int i;
+ int rc;
+
+ if (eth_da->nb_representor_ports == 0)
+ return 0;
+
+ sa = sfc_adapter_by_eth_dev(dev);
+
+ if (!sa->switchdev) {
+ sfc_err(sa, "cannot create representors in non-switchdev mode");
+ return -EINVAL;
+ }
+
+ if (!sfc_repr_available(sfc_sa2shared(sa))) {
+ sfc_err(sa, "cannot create representors: unsupported");
+
+ return -ENOTSUP;
+ }
+
+ for (i = 0; i < eth_da->nb_representor_ports; ++i) {
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ efx_mport_sel_t mport_sel;
+
+ rc = efx_mae_mport_by_pcie_function(encp->enc_pf,
+ eth_da->representor_ports[i], &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to get representor %u m-port: %s - ignore",
+ eth_da->representor_ports[i],
+ rte_strerror(-rc));
+ continue;
+ }
+
+ rc = sfc_repr_create(dev, eth_da->representor_ports[i],
+ sa->mae.switch_domain_id, &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa, "cannot create representor %u: %s - ignore",
+ eth_da->representor_ports[i],
+ rte_strerror(-rc));
+ }
+ }
+
+ return 0;
+}
+
static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
{
- return rte_eth_dev_pci_generic_probe(pci_dev,
- sizeof(struct sfc_adapter_shared), sfc_eth_dev_init);
+ struct sfc_ethdev_init_data init_data;
+ struct rte_eth_devargs eth_da;
+ struct rte_eth_dev *dev;
+ int rc;
+
+ if (pci_dev->device.devargs != NULL) {
+ rc = sfc_parse_rte_devargs(pci_dev->device.devargs->args,
+ &eth_da);
+ if (rc != 0)
+ return rc;
+ } else {
+ memset(&eth_da, 0, sizeof(eth_da));
+ }
+
+ init_data.nb_representors = eth_da.nb_representor_ports;
+
+ if (eth_da.nb_representor_ports > 0 &&
+ rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ SFC_GENERIC_LOG(ERR,
+ "Create representors from secondary process not supported, dev '%s'",
+ pci_dev->device.name);
+ return -ENOTSUP;
+ }
+
+ rc = sfc_eth_dev_create(pci_dev, &init_data, &dev);
+ if (rc != 0)
+ return rc;
+
+ rc = sfc_eth_dev_create_representors(dev, &eth_da);
+ if (rc != 0) {
+ (void)rte_eth_dev_destroy(dev, sfc_eth_dev_uninit);
+ return rc;
+ }
+
+ return 0;
}
static int sfc_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index cd16213637..783cb43ae6 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -23,6 +23,7 @@ sfc_kvargs_parse(struct sfc_adapter *sa)
struct rte_devargs *devargs = eth_dev->device->devargs;
const char **params = (const char *[]){
SFC_KVARG_SWITCH_MODE,
+ SFC_KVARG_REPRESENTOR,
SFC_KVARG_STATS_UPDATE_PERIOD_MS,
SFC_KVARG_PERF_PROFILE,
SFC_KVARG_RX_DATAPATH,
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 8e34ec92a2..2226f2b3d9 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -26,6 +26,8 @@ extern "C" {
"[" SFC_KVARG_SWITCH_MODE_LEGACY "|" \
SFC_KVARG_SWITCH_MODE_SWITCHDEV "]"
+#define SFC_KVARG_REPRESENTOR "representor"
+
#define SFC_KVARG_PERF_PROFILE "perf_profile"
#define SFC_KVARG_PERF_PROFILE_AUTO "auto"
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
new file mode 100644
index 0000000000..603a613ec6
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr.c
@@ -0,0 +1,458 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <ethdev_driver.h>
+
+#include "efx.h"
+
+#include "sfc_log.h"
+#include "sfc_debug.h"
+#include "sfc_repr.h"
+#include "sfc_ethdev_state.h"
+#include "sfc_switch.h"
+
+/** Multi-process shared representor private data */
+struct sfc_repr_shared {
+ uint16_t pf_port_id;
+ uint16_t repr_id;
+ uint16_t switch_domain_id;
+ uint16_t switch_port_id;
+};
+
+/** Primary process representor private data */
+struct sfc_repr {
+ /**
+ * PMD setup and configuration is not thread safe. Since it is not
+ * performance sensitive, it is better to guarantee thread-safety
+ * and add device level lock. Adapter control operations which
+ * change its state should acquire the lock.
+ */
+ rte_spinlock_t lock;
+ enum sfc_ethdev_state state;
+};
+
+#define sfcr_err(sr, ...) \
+ do { \
+ const struct sfc_repr *_sr = (sr); \
+ \
+ (void)_sr; \
+ SFC_GENERIC_LOG(ERR, __VA_ARGS__); \
+ } while (0)
+
+#define sfcr_info(sr, ...) \
+ do { \
+ const struct sfc_repr *_sr = (sr); \
+ \
+ (void)_sr; \
+ SFC_GENERIC_LOG(INFO, \
+ RTE_FMT("%s() " \
+ RTE_FMT_HEAD(__VA_ARGS__ ,), \
+ __func__, \
+ RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+ } while (0)
+
+static inline struct sfc_repr_shared *
+sfc_repr_shared_by_eth_dev(struct rte_eth_dev *eth_dev)
+{
+ struct sfc_repr_shared *srs = eth_dev->data->dev_private;
+
+ return srs;
+}
+
+static inline struct sfc_repr *
+sfc_repr_by_eth_dev(struct rte_eth_dev *eth_dev)
+{
+ struct sfc_repr *sr = eth_dev->process_private;
+
+ return sr;
+}
+
+/*
+ * Add wrapper functions to acquire/release lock to be able to remove or
+ * change the lock in one place.
+ */
+
+static inline void
+sfc_repr_lock_init(struct sfc_repr *sr)
+{
+ rte_spinlock_init(&sr->lock);
+}
+
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+
+static inline int
+sfc_repr_lock_is_locked(struct sfc_repr *sr)
+{
+ return rte_spinlock_is_locked(&sr->lock);
+}
+
+#endif
+
+static inline void
+sfc_repr_lock(struct sfc_repr *sr)
+{
+ rte_spinlock_lock(&sr->lock);
+}
+
+static inline void
+sfc_repr_unlock(struct sfc_repr *sr)
+{
+ rte_spinlock_unlock(&sr->lock);
+}
+
+static inline void
+sfc_repr_lock_fini(__rte_unused struct sfc_repr *sr)
+{
+ /* Just for symmetry of the API */
+}
+
+static int
+sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
+ const struct rte_eth_conf *conf)
+{
+ const struct rte_eth_rss_conf *rss_conf;
+ int ret = 0;
+
+ sfcr_info(sr, "entry");
+
+ if (conf->link_speeds != 0) {
+ sfcr_err(sr, "specific link speeds not supported");
+ ret = -EINVAL;
+ }
+
+ switch (conf->rxmode.mq_mode) {
+ case ETH_MQ_RX_RSS:
+ if (nb_rx_queues != 1) {
+ sfcr_err(sr, "Rx RSS is not supported with %u queues",
+ nb_rx_queues);
+ ret = -EINVAL;
+ break;
+ }
+
+ rss_conf = &conf->rx_adv_conf.rss_conf;
+ if (rss_conf->rss_key != NULL || rss_conf->rss_key_len != 0 ||
+ rss_conf->rss_hf != 0) {
+ sfcr_err(sr, "Rx RSS configuration is not supported");
+ ret = -EINVAL;
+ }
+ break;
+ case ETH_MQ_RX_NONE:
+ break;
+ default:
+ sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
+ ret = -EINVAL;
+ break;
+ }
+
+ if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+ sfcr_err(sr, "Tx mode MQ modes not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->lpbk_mode != 0) {
+ sfcr_err(sr, "loopback not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->dcb_capability_en != 0) {
+ sfcr_err(sr, "priority-based flow control not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+ sfcr_err(sr, "Flow Director not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->intr_conf.lsc != 0) {
+ sfcr_err(sr, "link status change interrupt not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->intr_conf.rxq != 0) {
+ sfcr_err(sr, "receive queue interrupt not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->intr_conf.rmv != 0) {
+ sfcr_err(sr, "remove interrupt not supported");
+ ret = -EINVAL;
+ }
+
+ sfcr_info(sr, "done %d", ret);
+
+ return ret;
+}
+
+
+static int
+sfc_repr_configure(struct sfc_repr *sr, uint16_t nb_rx_queues,
+ const struct rte_eth_conf *conf)
+{
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ ret = sfc_repr_check_conf(sr, nb_rx_queues, conf);
+ if (ret != 0)
+ goto fail_check_conf;
+
+ sr->state = SFC_ETHDEV_CONFIGURED;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_check_conf:
+ sfcr_info(sr, "failed %s", rte_strerror(-ret));
+ return ret;
+}
+
+static int
+sfc_repr_dev_configure(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct rte_eth_dev_data *dev_data = dev->data;
+ int ret;
+
+ sfcr_info(sr, "entry n_rxq=%u n_txq=%u",
+ dev_data->nb_rx_queues, dev_data->nb_tx_queues);
+
+ sfc_repr_lock(sr);
+ switch (sr->state) {
+ case SFC_ETHDEV_CONFIGURED:
+ /* FALLTHROUGH */
+ case SFC_ETHDEV_INITIALIZED:
+ ret = sfc_repr_configure(sr, dev_data->nb_rx_queues,
+ &dev_data->dev_conf);
+ break;
+ default:
+ sfcr_err(sr, "unexpected adapter state %u to configure",
+ sr->state);
+ ret = -EINVAL;
+ break;
+ }
+ sfc_repr_unlock(sr);
+
+ sfcr_info(sr, "done %s", rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+
+ dev_info->device = dev->device;
+
+ dev_info->max_rx_queues = SFC_REPR_RXQ_MAX;
+ dev_info->max_tx_queues = SFC_REPR_TXQ_MAX;
+ dev_info->default_rxconf.rx_drop_en = 1;
+ dev_info->switch_info.domain_id = srs->switch_domain_id;
+ dev_info->switch_info.port_id = srs->switch_port_id;
+
+ return 0;
+}
+
+static void
+sfc_repr_close(struct sfc_repr *sr)
+{
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ SFC_ASSERT(sr->state == SFC_ETHDEV_CONFIGURED);
+ sr->state = SFC_ETHDEV_CLOSING;
+
+ /* Put representor close actions here */
+
+ sr->state = SFC_ETHDEV_INITIALIZED;
+}
+
+static int
+sfc_repr_dev_close(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+
+ sfcr_info(sr, "entry");
+
+ sfc_repr_lock(sr);
+ switch (sr->state) {
+ case SFC_ETHDEV_CONFIGURED:
+ sfc_repr_close(sr);
+ SFC_ASSERT(sr->state == SFC_ETHDEV_INITIALIZED);
+ /* FALLTHROUGH */
+ case SFC_ETHDEV_INITIALIZED:
+ break;
+ default:
+ sfcr_err(sr, "unexpected adapter state %u on close", sr->state);
+ break;
+ }
+
+ /*
+ * Cleanup all resources.
+ * Rollback primary process sfc_repr_eth_dev_init() below.
+ */
+
+ dev->dev_ops = NULL;
+
+ sfc_repr_unlock(sr);
+ sfc_repr_lock_fini(sr);
+
+ sfcr_info(sr, "done");
+
+ free(sr);
+
+ return 0;
+}
+
+static const struct eth_dev_ops sfc_repr_dev_ops = {
+ .dev_configure = sfc_repr_dev_configure,
+ .dev_close = sfc_repr_dev_close,
+ .dev_infos_get = sfc_repr_dev_infos_get,
+};
+
+
+struct sfc_repr_init_data {
+ uint16_t pf_port_id;
+ uint16_t repr_id;
+ uint16_t switch_domain_id;
+ efx_mport_sel_t mport_sel;
+};
+
+static int
+sfc_repr_assign_mae_switch_port(uint16_t switch_domain_id,
+ const struct sfc_mae_switch_port_request *req,
+ uint16_t *switch_port_id)
+{
+ int rc;
+
+ rc = sfc_mae_assign_switch_port(switch_domain_id, req, switch_port_id);
+
+ SFC_ASSERT(rc >= 0);
+ return -rc;
+}
+
+static int
+sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
+{
+ const struct sfc_repr_init_data *repr_data = init_params;
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_mae_switch_port_request switch_port_request;
+ efx_mport_sel_t ethdev_mport_sel;
+ struct sfc_repr *sr;
+ int ret;
+
+ /*
+ * Currently there is no mport we can use for representor's
+ * ethdev. Use an invalid one for now. This way representors
+ * can be instantiated.
+ */
+ efx_mae_mport_invalid(&ethdev_mport_sel);
+
+ memset(&switch_port_request, 0, sizeof(switch_port_request));
+ switch_port_request.type = SFC_MAE_SWITCH_PORT_REPRESENTOR;
+ switch_port_request.ethdev_mportp = &ethdev_mport_sel;
+ switch_port_request.entity_mportp = &repr_data->mport_sel;
+ switch_port_request.ethdev_port_id = dev->data->port_id;
+
+ ret = sfc_repr_assign_mae_switch_port(repr_data->switch_domain_id,
+ &switch_port_request,
+ &srs->switch_port_id);
+ if (ret != 0) {
+ SFC_GENERIC_LOG(ERR,
+ "%s() failed to assign MAE switch port (domain id %u)",
+ __func__, repr_data->switch_domain_id);
+ goto fail_mae_assign_switch_port;
+ }
+
+ /*
+ * Allocate process private data from heap, since it should not
+ * be located in shared memory allocated using rte_malloc() API.
+ */
+ sr = calloc(1, sizeof(*sr));
+ if (sr == NULL) {
+ ret = -ENOMEM;
+ goto fail_alloc_sr;
+ }
+
+ sfc_repr_lock_init(sr);
+ sfc_repr_lock(sr);
+
+ dev->process_private = sr;
+
+ srs->pf_port_id = repr_data->pf_port_id;
+ srs->repr_id = repr_data->repr_id;
+ srs->switch_domain_id = repr_data->switch_domain_id;
+
+ dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+ dev->data->representor_id = srs->repr_id;
+ dev->data->parent_port_id = srs->pf_port_id;
+
+ dev->data->mac_addrs = rte_zmalloc("sfcr", RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ret = -ENOMEM;
+ goto fail_mac_addrs;
+ }
+
+ dev->dev_ops = &sfc_repr_dev_ops;
+
+ sr->state = SFC_ETHDEV_INITIALIZED;
+ sfc_repr_unlock(sr);
+
+ return 0;
+
+fail_mac_addrs:
+ sfc_repr_unlock(sr);
+ free(sr);
+
+fail_alloc_sr:
+fail_mae_assign_switch_port:
+ SFC_GENERIC_LOG(ERR, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+int
+sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
+ uint16_t switch_domain_id, const efx_mport_sel_t *mport_sel)
+{
+ struct sfc_repr_init_data repr_data;
+ char name[RTE_ETH_NAME_MAX_LEN];
+ int ret;
+
+ if (snprintf(name, sizeof(name), "net_%s_representor_%u",
+ parent->device->name, representor_id) >=
+ (int)sizeof(name)) {
+ SFC_GENERIC_LOG(ERR, "%s() failed name too long", __func__);
+ return -ENAMETOOLONG;
+ }
+
+ memset(&repr_data, 0, sizeof(repr_data));
+ repr_data.pf_port_id = parent->data->port_id;
+ repr_data.repr_id = representor_id;
+ repr_data.switch_domain_id = switch_domain_id;
+ repr_data.mport_sel = *mport_sel;
+
+ ret = rte_eth_dev_create(parent->device, name,
+ sizeof(struct sfc_repr_shared),
+ NULL, NULL,
+ sfc_repr_eth_dev_init, &repr_data);
+ if (ret != 0)
+ SFC_GENERIC_LOG(ERR, "%s() failed to create device", __func__);
+
+ SFC_GENERIC_LOG(INFO, "%s() done: %s", __func__, rte_strerror(-ret));
+
+ return ret;
+}
diff --git a/drivers/net/sfc/sfc_repr.h b/drivers/net/sfc/sfc_repr.h
new file mode 100644
index 0000000000..1347206006
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_REPR_H
+#define _SFC_REPR_H
+
+#include <stdint.h>
+
+#include <rte_ethdev.h>
+
+#include "efx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** Max count of the representor Rx queues */
+#define SFC_REPR_RXQ_MAX 1
+
+/** Max count of the representor Tx queues */
+#define SFC_REPR_TXQ_MAX 1
+
+int sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
+ uint16_t switch_domain_id,
+ const efx_mport_sel_t *mport_sel);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_REPR_H */
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index 84a02a61f8..a1a2ab9848 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -27,6 +27,11 @@ enum sfc_mae_switch_port_type {
* and thus refers to its underlying PCIe function
*/
SFC_MAE_SWITCH_PORT_INDEPENDENT = 0,
+ /**
+ * The switch port is operated by a representor RTE ethdev
+ * and thus refers to the represented PCIe function
+ */
+ SFC_MAE_SWITCH_PORT_REPRESENTOR,
};
struct sfc_mae_switch_port_request {
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH 12/38] common/sfc_efx/base: add filter ingress mport matching field
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (10 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 11/38] net/sfc: add port representors infrastructure Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 13/38] common/sfc_efx/base: add API to get mport selector by ID Andrew Rybchenko
` (26 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The field changes the mport for which the filter is created.
It is required to filter traffic from a VF on an alias mport.
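For illustration only (not part of the patch), a minimal sketch of how a caller
might request matching by ingress mport using the new flag and field; the helper
name is hypothetical and the filter spec is assumed to be initialised elsewhere:

/*
 * Hypothetical sketch: mark a filter spec as matching by ingress m-port
 * using the flag and field introduced by this patch.
 */
static void
example_filter_match_ingress_mport(efx_filter_spec_t *spec,
				   uint32_t ingress_mport)
{
	spec->efs_match_flags |= EFX_FILTER_MATCH_MPORT;
	spec->efs_ingress_mport = ingress_mport;
}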
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/ef10_filter.c | 11 +++++++++--
drivers/common/sfc_efx/base/efx.h | 3 +++
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_filter.c b/drivers/common/sfc_efx/base/ef10_filter.c
index ac6006c9b4..6d19797d16 100644
--- a/drivers/common/sfc_efx/base/ef10_filter.c
+++ b/drivers/common/sfc_efx/base/ef10_filter.c
@@ -171,6 +171,7 @@ efx_mcdi_filter_op_add(
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_FILTER_OP_V3_IN_LEN,
MC_CMD_FILTER_OP_EXT_OUT_LEN);
efx_filter_match_flags_t match_flags;
+ uint32_t port_id;
efx_rc_t rc;
req.emr_cmd = MC_CMD_FILTER_OP;
@@ -180,10 +181,11 @@ efx_mcdi_filter_op_add(
req.emr_out_length = MC_CMD_FILTER_OP_EXT_OUT_LEN;
/*
- * Remove match flag for encapsulated filters that does not correspond
+ * Remove EFX match flags that do not correspond
* to the MCDI match flags
*/
match_flags = spec->efs_match_flags & ~EFX_FILTER_MATCH_ENCAP_TYPE;
+ match_flags &= ~EFX_FILTER_MATCH_MPORT;
switch (filter_op) {
case MC_CMD_FILTER_OP_IN_OP_REPLACE:
@@ -202,7 +204,12 @@ efx_mcdi_filter_op_add(
goto fail1;
}
- MCDI_IN_SET_DWORD(req, FILTER_OP_EXT_IN_PORT_ID, enp->en_vport_id);
+ if (spec->efs_match_flags & EFX_FILTER_MATCH_MPORT)
+ port_id = spec->efs_ingress_mport;
+ else
+ port_id = enp->en_vport_id;
+
+ MCDI_IN_SET_DWORD(req, FILTER_OP_EXT_IN_PORT_ID, port_id);
MCDI_IN_SET_DWORD(req, FILTER_OP_EXT_IN_MATCH_FIELDS,
match_flags);
if (spec->efs_dmaq_id == EFX_FILTER_SPEC_RX_DMAQ_ID_DROP) {
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index c0d1535017..7f04b42bae 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -3389,6 +3389,8 @@ typedef uint8_t efx_filter_flags_t;
#define EFX_FILTER_MATCH_OUTER_VID 0x00000100
/* Match by IP transport protocol */
#define EFX_FILTER_MATCH_IP_PROTO 0x00000200
+/* Match by ingress MPORT */
+#define EFX_FILTER_MATCH_MPORT 0x00000400
/* Match by VNI or VSID */
#define EFX_FILTER_MATCH_VNI_OR_VSID 0x00000800
/* For encapsulated packets, match by inner frame local MAC address */
@@ -3451,6 +3453,7 @@ typedef struct efx_filter_spec_s {
efx_oword_t efs_loc_host;
uint8_t efs_vni_or_vsid[EFX_VNI_OR_VSID_LEN];
uint8_t efs_ifrm_loc_mac[EFX_MAC_ADDR_LEN];
+ uint32_t efs_ingress_mport;
} efx_filter_spec_t;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH 13/38] common/sfc_efx/base: add API to get mport selector by ID
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (11 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 12/38] common/sfc_efx/base: add filter ingress mport matching field Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 14/38] common/sfc_efx/base: add mport alias MCDI wrappers Andrew Rybchenko
` (25 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The conversion is required when an mport ID is received via
mport allocation and an mport selector is needed for filter
creation.
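As an illustration only (not part of the patch), a sketch of converting an
allocated m-port ID into an opaque selector; the wrapper name is hypothetical,
and per the header comment the resulting selector can then be passed to
efx_mae_match_spec_mport_set() or efx_mae_action_set_populate_deliver():

/*
 * Hypothetical sketch: convert an m-port ID (e.g. one returned by m-port
 * allocation) into the opaque selector form used by the filtering APIs.
 */
static int
example_mport_id_to_selector(const efx_mport_id_t *mport_id,
			     efx_mport_sel_t *mport_sel)
{
	efx_rc_t rc;

	rc = efx_mae_mport_by_id(mport_id, mport_sel);
	if (rc != 0)
		return (int)rc;

	return 0;
}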
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx.h | 13 +++++++++++++
drivers/common/sfc_efx/base/efx_mae.c | 17 +++++++++++++++++
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 31 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 7f04b42bae..a59c2e47ef 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4237,6 +4237,19 @@ efx_mae_mport_by_pcie_function(
__in uint32_t vf,
__out efx_mport_sel_t *mportp);
+/*
+ * Get MPORT selector by an MPORT ID
+ *
+ * The resulting MPORT selector is opaque to the caller and can be
+ * passed as an argument to efx_mae_match_spec_mport_set()
+ * and efx_mae_action_set_populate_deliver().
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_by_id(
+ __in const efx_mport_id_t *mport_idp,
+ __out efx_mport_sel_t *mportp);
+
/* Get MPORT ID by an MPORT selector */
LIBEFX_API
extern __checkReturn efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index b7afe8fdc8..f5d981f973 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -827,6 +827,23 @@ efx_mae_mport_id_by_selector(
return (rc);
}
+ __checkReturn efx_rc_t
+efx_mae_mport_by_id(
+ __in const efx_mport_id_t *mport_idp,
+ __out efx_mport_sel_t *mportp)
+{
+ efx_dword_t dword;
+
+ EFX_POPULATE_DWORD_2(dword,
+ MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_MPORT_ID,
+ MAE_MPORT_SELECTOR_MPORT_ID, mport_idp->id);
+
+ memset(mportp, 0, sizeof (*mportp));
+ mportp->sel = __LE_TO_CPU_32(dword.ed_u32[0]);
+
+ return (0);
+}
+
__checkReturn efx_rc_t
efx_mae_match_spec_field_set(
__in efx_mae_match_spec_t *spec,
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 611757ccde..8c5d813c19 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -126,6 +126,7 @@ INTERNAL {
efx_mae_match_specs_equal;
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
+ efx_mae_mport_by_id;
efx_mae_mport_id_by_selector;
efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH 14/38] common/sfc_efx/base: add mport alias MCDI wrappers
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (12 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 13/38] common/sfc_efx/base: add API to get mport selector by ID Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 15/38] net/sfc: add representor proxy port API Andrew Rybchenko
` (24 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The APIs allow creation of mports for port representor
traffic filtering.
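For illustration only (not part of the patch), a minimal sketch of the alias
m-port lifecycle using the new wrappers; the function name is hypothetical and
error handling is deliberately minimal:

/*
 * Hypothetical sketch: allocate an alias m-port, use its ID for representor
 * traffic filtering, and free it on teardown.
 */
static efx_rc_t
example_alias_mport_lifecycle(efx_nic_t *enp)
{
	efx_mport_id_t mport_id;
	uint32_t label;
	efx_rc_t rc;

	rc = efx_mcdi_mport_alloc_alias(enp, &mport_id, &label);
	if (rc != 0)
		return rc;

	/* ... deliver and filter representor traffic via the alias ... */

	return efx_mae_mport_free(enp, &mport_id);
}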
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx.h | 13 ++++
drivers/common/sfc_efx/base/efx_mae.c | 90 +++++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 2 +
3 files changed, 105 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index a59c2e47ef..0a178128ba 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4599,6 +4599,19 @@ efx_mae_action_rule_remove(
__in efx_nic_t *enp,
__in const efx_mae_rule_id_t *ar_idp);
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mcdi_mport_alloc_alias(
+ __in efx_nic_t *enp,
+ __out efx_mport_id_t *mportp,
+ __out_opt uint32_t *labelp);
+
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_free(
+ __in efx_nic_t *enp,
+ __in const efx_mport_id_t *mportp);
+
#endif /* EFSYS_OPT_MAE */
#if EFSYS_OPT_VIRTIO
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index f5d981f973..3f498fe189 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -3142,4 +3142,94 @@ efx_mae_action_rule_remove(
return (rc);
}
+ __checkReturn efx_rc_t
+efx_mcdi_mport_alloc_alias(
+ __in efx_nic_t *enp,
+ __out efx_mport_id_t *mportp,
+ __out_opt uint32_t *labelp)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_ALLOC_ALIAS_IN_LEN,
+ MC_CMD_MAE_MPORT_ALLOC_ALIAS_OUT_LEN);
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_ALLOC;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_ALLOC_ALIAS_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_ALLOC_ALIAS_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_ALLOC_IN_TYPE,
+ MC_CMD_MAE_MPORT_ALLOC_IN_MPORT_TYPE_ALIAS);
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_ALLOC_ALIAS_IN_DELIVER_MPORT,
+ MAE_MPORT_SELECTOR_ASSIGNED);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ mportp->id = MCDI_OUT_DWORD(req, MAE_MPORT_ALLOC_OUT_MPORT_ID);
+ if (labelp != NULL)
+ *labelp = MCDI_OUT_DWORD(req, MAE_MPORT_ALLOC_ALIAS_OUT_LABEL);
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_mport_free(
+ __in efx_nic_t *enp,
+ __in const efx_mport_id_t *mportp)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_FREE_IN_LEN,
+ MC_CMD_MAE_MPORT_FREE_OUT_LEN);
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_FREE;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_FREE_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_FREE_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_FREE_IN_MPORT_ID, mportp->id);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
#endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 8c5d813c19..3488367f68 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -127,6 +127,7 @@ INTERNAL {
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
efx_mae_mport_by_id;
+ efx_mae_mport_free;
efx_mae_mport_id_by_selector;
efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
@@ -136,6 +137,7 @@ INTERNAL {
efx_mcdi_get_proxy_handle;
efx_mcdi_get_timeout;
efx_mcdi_init;
+ efx_mcdi_mport_alloc_alias;
efx_mcdi_new_epoch;
efx_mcdi_reboot;
efx_mcdi_request_abort;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH 15/38] net/sfc: add representor proxy port API
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (13 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 14/38] common/sfc_efx/base: add mport alias MCDI wrappers Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 16/38] net/sfc: implement representor queue setup and release Andrew Rybchenko
` (23 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The API is required to create and destroy the representor proxy
port assigned to a representor.
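For illustration only (not part of the patch), a minimal sketch of how a
representor registers with and unregisters from the proxy; the wrapper name is
hypothetical, and the identifiers are assumed to be known to the caller:

/*
 * Hypothetical sketch: add a proxy port on representor init and delete it
 * on close. The proxy API returns positive errno values, as in the patch.
 */
static int
example_repr_proxy_port_lifecycle(uint16_t pf_port_id, uint16_t repr_id,
				  uint16_t rte_port_id,
				  const efx_mport_sel_t *mport_sel)
{
	int rc;

	rc = sfc_repr_proxy_add_port(pf_port_id, repr_id, rte_port_id,
				     mport_sel);
	if (rc != 0)
		return rc;

	/* ... representor queues are added and the port is used ... */

	return sfc_repr_proxy_del_port(pf_port_id, repr_id);
}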
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 12 +
drivers/net/sfc/sfc.h | 1 +
drivers/net/sfc/sfc_ethdev.c | 2 +
drivers/net/sfc/sfc_repr.c | 20 ++
drivers/net/sfc/sfc_repr_proxy.c | 320 ++++++++++++++++++++++++++-
drivers/net/sfc/sfc_repr_proxy.h | 30 +++
drivers/net/sfc/sfc_repr_proxy_api.h | 29 +++
7 files changed, 412 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/sfc/sfc_repr_proxy_api.h
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 152234cb61..f79f4d5ffc 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -1043,6 +1043,18 @@ sfc_attach(struct sfc_adapter *sa)
return rc;
}
+void
+sfc_pre_detach(struct sfc_adapter *sa)
+{
+ sfc_log_init(sa, "entry");
+
+ SFC_ASSERT(!sfc_adapter_is_locked(sa));
+
+ sfc_repr_proxy_pre_detach(sa);
+
+ sfc_log_init(sa, "done");
+}
+
void
sfc_detach(struct sfc_adapter *sa)
{
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 628f32c13f..c3e92f3ab6 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -376,6 +376,7 @@ uint32_t sfc_register_logtype(const struct rte_pci_addr *pci_addr,
int sfc_probe(struct sfc_adapter *sa);
void sfc_unprobe(struct sfc_adapter *sa);
int sfc_attach(struct sfc_adapter *sa);
+void sfc_pre_detach(struct sfc_adapter *sa);
void sfc_detach(struct sfc_adapter *sa);
int sfc_start(struct sfc_adapter *sa);
void sfc_stop(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8308cbdfef..8578ba0765 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -345,6 +345,8 @@ sfc_dev_close(struct rte_eth_dev *dev)
return 0;
}
+ sfc_pre_detach(sa);
+
sfc_adapter_lock(sa);
switch (sa->state) {
case SFC_ETHDEV_STARTED:
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 603a613ec6..f684b1d7ef 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -19,6 +19,7 @@
#include "sfc_debug.h"
#include "sfc_repr.h"
#include "sfc_ethdev_state.h"
+#include "sfc_repr_proxy_api.h"
#include "sfc_switch.h"
/** Multi-process shared representor private data */
@@ -285,6 +286,7 @@ static int
sfc_repr_dev_close(struct rte_eth_dev *dev)
{
struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
sfcr_info(sr, "entry");
@@ -306,6 +308,8 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
* Rollback primary process sfc_repr_eth_dev_init() below.
*/
+ (void)sfc_repr_proxy_del_port(srs->pf_port_id, srs->repr_id);
+
dev->dev_ops = NULL;
sfc_repr_unlock(sr);
@@ -378,6 +382,18 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
goto fail_mae_assign_switch_port;
}
+ ret = sfc_repr_proxy_add_port(repr_data->pf_port_id,
+ repr_data->repr_id,
+ dev->data->port_id,
+ &repr_data->mport_sel);
+ if (ret != 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to add repr proxy port",
+ __func__);
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ goto fail_create_port;
+ }
+
/*
* Allocate process private data from heap, since it should not
* be located in shared memory allocated using rte_malloc() API.
@@ -419,6 +435,10 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
free(sr);
fail_alloc_sr:
+ (void)sfc_repr_proxy_del_port(repr_data->pf_port_id,
+ repr_data->repr_id);
+
+fail_create_port:
fail_mae_assign_switch_port:
SFC_GENERIC_LOG(ERR, "%s() failed: %s", __func__, rte_strerror(-ret));
return ret;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index 6d3962304f..f64fa2efc7 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -13,17 +13,191 @@
#include "sfc_log.h"
#include "sfc_service.h"
#include "sfc_repr_proxy.h"
+#include "sfc_repr_proxy_api.h"
#include "sfc.h"
+/**
+ * Amount of time to wait for the representor proxy routine (which is
+ * running on a service core) to handle a request sent via mbox.
+ */
+#define SFC_REPR_PROXY_MBOX_POLL_TIMEOUT_MS 1000
+
+static struct sfc_repr_proxy *
+sfc_repr_proxy_by_adapter(struct sfc_adapter *sa)
+{
+ return &sa->repr_proxy;
+}
+
+static struct sfc_adapter *
+sfc_get_adapter_by_pf_port_id(uint16_t pf_port_id)
+{
+ struct rte_eth_dev *dev;
+ struct sfc_adapter *sa;
+
+ SFC_ASSERT(pf_port_id < RTE_MAX_ETHPORTS);
+
+ dev = &rte_eth_devices[pf_port_id];
+ sa = sfc_adapter_by_eth_dev(dev);
+
+ sfc_adapter_lock(sa);
+
+ return sa;
+}
+
+static void
+sfc_put_adapter(struct sfc_adapter *sa)
+{
+ sfc_adapter_unlock(sa);
+}
+
+static int
+sfc_repr_proxy_mbox_send(struct sfc_repr_proxy_mbox *mbox,
+ struct sfc_repr_proxy_port *port,
+ enum sfc_repr_proxy_mbox_op op)
+{
+ const unsigned int wait_ms = SFC_REPR_PROXY_MBOX_POLL_TIMEOUT_MS;
+ unsigned int i;
+
+ mbox->op = op;
+ mbox->port = port;
+ mbox->ack = false;
+
+ /*
+ * Release ordering enforces marker set after data is populated.
+ * Paired with acquire ordering in sfc_repr_proxy_mbox_handle().
+ */
+ __atomic_store_n(&mbox->write_marker, true, __ATOMIC_RELEASE);
+
+ /*
+ * Wait for the representor routine to process the request.
+ * Give up on timeout.
+ */
+ for (i = 0; i < wait_ms; i++) {
+ /*
+ * Paired with release ordering in sfc_repr_proxy_mbox_handle()
+ * on acknowledge write.
+ */
+ if (__atomic_load_n(&mbox->ack, __ATOMIC_ACQUIRE))
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ if (i == wait_ms) {
+ SFC_GENERIC_LOG(ERR,
+ "%s() failed to wait for representor proxy routine ack",
+ __func__);
+ return ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_mbox_handle(struct sfc_repr_proxy *rp)
+{
+ struct sfc_repr_proxy_mbox *mbox = &rp->mbox;
+
+ /*
+ * Paired with release ordering in sfc_repr_proxy_mbox_send()
+ * on marker set.
+ */
+ if (!__atomic_load_n(&mbox->write_marker, __ATOMIC_ACQUIRE))
+ return;
+
+ mbox->write_marker = false;
+
+ switch (mbox->op) {
+ case SFC_REPR_PROXY_MBOX_ADD_PORT:
+ TAILQ_INSERT_TAIL(&rp->ports, mbox->port, entries);
+ break;
+ case SFC_REPR_PROXY_MBOX_DEL_PORT:
+ TAILQ_REMOVE(&rp->ports, mbox->port, entries);
+ break;
+ default:
+ SFC_ASSERT(0);
+ return;
+ }
+
+ /*
+ * Paired with acquire ordering in sfc_repr_proxy_mbox_send()
+ * on acknowledge read.
+ */
+ __atomic_store_n(&mbox->ack, true, __ATOMIC_RELEASE);
+}
+
static int32_t
sfc_repr_proxy_routine(void *arg)
{
struct sfc_repr_proxy *rp = arg;
- /* Representor proxy boilerplate will be here */
- RTE_SET_USED(rp);
+ sfc_repr_proxy_mbox_handle(rp);
+
+ return 0;
+}
+
+static int
+sfc_repr_proxy_ports_init(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rc = efx_mcdi_mport_alloc_alias(sa->nic, &rp->mport_alias, NULL);
+ if (rc != 0) {
+ sfc_err(sa, "failed to alloc mport alias: %s",
+ rte_strerror(rc));
+ goto fail_alloc_mport_alias;
+ }
+
+ TAILQ_INIT(&rp->ports);
+
+ sfc_log_init(sa, "done");
return 0;
+
+fail_alloc_mport_alias:
+
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_repr_proxy_pre_detach(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ bool close_ports[RTE_MAX_ETHPORTS] = {0};
+ struct sfc_repr_proxy_port *port;
+ unsigned int i;
+
+ SFC_ASSERT(!sfc_adapter_is_locked(sa));
+
+ sfc_adapter_lock(sa);
+
+ if (sfc_repr_available(sfc_sa2shared(sa))) {
+ TAILQ_FOREACH(port, &rp->ports, entries)
+ close_ports[port->rte_port_id] = true;
+ } else {
+ sfc_log_init(sa, "representors not supported - skip");
+ }
+
+ sfc_adapter_unlock(sa);
+
+ for (i = 0; i < RTE_DIM(close_ports); i++) {
+ if (close_ports[i]) {
+ rte_eth_dev_stop(i);
+ rte_eth_dev_close(i);
+ }
+ }
+}
+
+static void
+sfc_repr_proxy_ports_fini(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+
+ efx_mae_mport_free(sa->nic, &rp->mport_alias);
}
int
@@ -43,6 +217,10 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
return 0;
}
+ rc = sfc_repr_proxy_ports_init(sa);
+ if (rc != 0)
+ goto fail_ports_init;
+
cid = sfc_get_service_lcore(sa->socket_id);
if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
/* Warn and try to allocate on any NUMA node */
@@ -96,6 +274,9 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
*/
fail_get_service_lcore:
+ sfc_repr_proxy_ports_fini(sa);
+
+fail_ports_init:
sfc_log_init(sa, "failed: %s", rte_strerror(rc));
return rc;
}
@@ -115,6 +296,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
rte_service_map_lcore_set(rp->service_id, rp->service_core_id, 0);
rte_service_component_unregister(rp->service_id);
+ sfc_repr_proxy_ports_fini(sa);
sfc_log_init(sa, "done");
}
@@ -165,6 +347,8 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
goto fail_runstate_set;
}
+ rp->started = true;
+
sfc_log_init(sa, "done");
return 0;
@@ -210,5 +394,137 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
+ rp->started = false;
+
+ sfc_log_init(sa, "done");
+}
+
+static struct sfc_repr_proxy_port *
+sfc_repr_proxy_find_port(struct sfc_repr_proxy *rp, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->repr_id == repr_id)
+ return port;
+ }
+
+ return NULL;
+}
+
+int
+sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t rte_port_id, const efx_mport_sel_t *mport_sel)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->rte_port_id == rte_port_id) {
+ rc = EEXIST;
+ sfc_err(sa, "%s() failed: port exists", __func__);
+ goto fail_port_exists;
+ }
+ }
+
+ port = rte_zmalloc("sfc-repr-proxy-port", sizeof(*port),
+ sa->socket_id);
+ if (port == NULL) {
+ rc = ENOMEM;
+ sfc_err(sa, "failed to alloc memory for proxy port");
+ goto fail_alloc_port;
+ }
+
+ rc = efx_mae_mport_id_by_selector(sa->nic, mport_sel,
+ &port->egress_mport);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed get MAE mport id by selector (repr_id %u): %s",
+ repr_id, rte_strerror(rc));
+ goto fail_mport_id;
+ }
+
+ port->rte_port_id = rte_port_id;
+ port->repr_id = repr_id;
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_ADD_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to add proxy port %u",
+ port->repr_id);
+ goto fail_port_add;
+ }
+ } else {
+ TAILQ_INSERT_TAIL(&rp->ports, port, entries);
+ }
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+
+fail_port_add:
+fail_mport_id:
+ rte_free(port);
+fail_alloc_port:
+fail_port_exists:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ sfc_put_adapter(sa);
+
+ return rc;
+}
+
+int
+sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "failed: no such port");
+ rc = ENOENT;
+ goto fail_no_port;
+ }
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_DEL_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to remove proxy port %u",
+ port->repr_id);
+ goto fail_port_remove;
+ }
+ } else {
+ TAILQ_REMOVE(&rp->ports, port, entries);
+ }
+
+ rte_free(port);
+
sfc_log_init(sa, "done");
+
+ sfc_put_adapter(sa);
+
+ return 0;
+
+fail_port_remove:
+fail_no_port:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ sfc_put_adapter(sa);
+
+ return rc;
}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index 953b9922c8..e4a6213c10 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -12,6 +12,8 @@
#include <stdint.h>
+#include "efx.h"
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -24,14 +26,42 @@ extern "C" {
#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+struct sfc_repr_proxy_port {
+ TAILQ_ENTRY(sfc_repr_proxy_port) entries;
+ uint16_t repr_id;
+ uint16_t rte_port_id;
+ efx_mport_id_t egress_mport;
+};
+
+enum sfc_repr_proxy_mbox_op {
+ SFC_REPR_PROXY_MBOX_ADD_PORT,
+ SFC_REPR_PROXY_MBOX_DEL_PORT,
+};
+
+struct sfc_repr_proxy_mbox {
+ struct sfc_repr_proxy_port *port;
+ enum sfc_repr_proxy_mbox_op op;
+
+ bool write_marker;
+ bool ack;
+};
+
+TAILQ_HEAD(sfc_repr_proxy_ports, sfc_repr_proxy_port);
+
struct sfc_repr_proxy {
uint32_t service_core_id;
uint32_t service_id;
+ efx_mport_id_t mport_alias;
+ struct sfc_repr_proxy_ports ports;
+ bool started;
+
+ struct sfc_repr_proxy_mbox mbox;
};
struct sfc_adapter;
int sfc_repr_proxy_attach(struct sfc_adapter *sa);
+void sfc_repr_proxy_pre_detach(struct sfc_adapter *sa);
void sfc_repr_proxy_detach(struct sfc_adapter *sa);
int sfc_repr_proxy_start(struct sfc_adapter *sa);
void sfc_repr_proxy_stop(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_repr_proxy_api.h b/drivers/net/sfc/sfc_repr_proxy_api.h
new file mode 100644
index 0000000000..af9009ca3c
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr_proxy_api.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_REPR_PROXY_API_H
+#define _SFC_REPR_PROXY_API_H
+
+#include <stdint.h>
+
+#include "efx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t rte_port_id,
+ const efx_mport_sel_t *mport_set);
+int sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_REPR_PROXY_API_H */
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH 16/38] net/sfc: implement representor queue setup and release
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (14 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 15/38] net/sfc: add representor proxy port API Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 17/38] net/sfc: implement representor RxQ start/stop Andrew Rybchenko
` (22 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement queue creation and destruction in both port representors
and the representor proxy.
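For illustration only (not part of the patch), a minimal sketch of the queue
backing store used here: each representor queue is a single-producer/
single-consumer rte_ring shared with the proxy, as in sfc_repr_ring_create();
the ring name and size below are arbitrary:

/*
 * Hypothetical sketch: create an SP/SC ring for one representor queue.
 * SP/SC is sufficient because the representor burst API is called from a
 * single thread and the proxy end is single-threaded as well.
 */
#include <rte_ring.h>

static struct rte_ring *
example_repr_queue_ring(unsigned int socket_id)
{
	return rte_ring_create("sfc_repr_example_q", 256, socket_id,
			       RING_F_SP_ENQ | RING_F_SC_DEQ);
}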
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 279 +++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.c | 132 +++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 22 +++
drivers/net/sfc/sfc_repr_proxy_api.h | 15 ++
4 files changed, 448 insertions(+)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index f684b1d7ef..b3876586cc 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -30,6 +30,25 @@ struct sfc_repr_shared {
uint16_t switch_port_id;
};
+struct sfc_repr_rxq {
+ /* Datapath members */
+ struct rte_ring *ring;
+
+ /* Non-datapath members */
+ struct sfc_repr_shared *shared;
+ uint16_t queue_id;
+};
+
+struct sfc_repr_txq {
+ /* Datapath members */
+ struct rte_ring *ring;
+ efx_mport_id_t egress_mport;
+
+ /* Non-datapath members */
+ struct sfc_repr_shared *shared;
+ uint16_t queue_id;
+};
+
/** Primary process representor private data */
struct sfc_repr {
/**
@@ -50,6 +69,14 @@ struct sfc_repr {
SFC_GENERIC_LOG(ERR, __VA_ARGS__); \
} while (0)
+#define sfcr_warn(sr, ...) \
+ do { \
+ const struct sfc_repr *_sr = (sr); \
+ \
+ (void)_sr; \
+ SFC_GENERIC_LOG(WARNING, __VA_ARGS__); \
+ } while (0)
+
#define sfcr_info(sr, ...) \
do { \
const struct sfc_repr *_sr = (sr); \
@@ -269,6 +296,243 @@ sfc_repr_dev_infos_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+sfc_repr_ring_create(uint16_t pf_port_id, uint16_t repr_id,
+ const char *type_name, uint16_t qid, uint16_t nb_desc,
+ unsigned int socket_id, struct rte_ring **ring)
+{
+ char ring_name[RTE_RING_NAMESIZE];
+ int ret;
+
+ ret = snprintf(ring_name, sizeof(ring_name), "sfc_%u_repr_%u_%sq%u",
+ pf_port_id, repr_id, type_name, qid);
+ if (ret >= (int)sizeof(ring_name))
+ return -ENAMETOOLONG;
+
+ /*
+ * Single producer/consumer rings are used since the API for Tx/Rx
+ * packet burst for representors are guaranteed to be called from
+ * a single thread, and the user of the other end (representor proxy)
+ * is also single-threaded.
+ */
+ *ring = rte_ring_create(ring_name, nb_desc, socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (*ring == NULL)
+ return -rte_errno;
+
+ return 0;
+}
+
+static int
+sfc_repr_rx_qcheck_conf(struct sfc_repr *sr,
+ const struct rte_eth_rxconf *rx_conf)
+{
+ int ret = 0;
+
+ sfcr_info(sr, "entry");
+
+ if (rx_conf->rx_thresh.pthresh != 0 ||
+ rx_conf->rx_thresh.hthresh != 0 ||
+ rx_conf->rx_thresh.wthresh != 0) {
+ sfcr_warn(sr,
+ "RxQ prefetch/host/writeback thresholds are not supported");
+ }
+
+ if (rx_conf->rx_free_thresh != 0)
+ sfcr_warn(sr, "RxQ free threshold is not supported");
+
+ if (rx_conf->rx_drop_en == 0)
+ sfcr_warn(sr, "RxQ drop disable is not supported");
+
+ if (rx_conf->rx_deferred_start) {
+ sfcr_err(sr, "Deferred start is not supported");
+ ret = -EINVAL;
+ }
+
+ sfcr_info(sr, "done: %s", rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+ uint16_t nb_rx_desc, unsigned int socket_id,
+ __rte_unused const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_rxq *rxq;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ ret = sfc_repr_rx_qcheck_conf(sr, rx_conf);
+ if (ret != 0)
+ goto fail_check_conf;
+
+ ret = -ENOMEM;
+ rxq = rte_zmalloc_socket("sfc-repr-rxq", sizeof(*rxq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq == NULL) {
+ sfcr_err(sr, "%s() failed to alloc RxQ", __func__);
+ goto fail_rxq_alloc;
+ }
+
+ rxq->shared = srs;
+ rxq->queue_id = rx_queue_id;
+
+ ret = sfc_repr_ring_create(srs->pf_port_id, srs->repr_id,
+ "rx", rxq->queue_id, nb_rx_desc,
+ socket_id, &rxq->ring);
+ if (ret != 0) {
+ sfcr_err(sr, "%s() failed to create ring", __func__);
+ goto fail_ring_create;
+ }
+
+ ret = sfc_repr_proxy_add_rxq(srs->pf_port_id, srs->repr_id,
+ rxq->queue_id, rxq->ring, mb_pool);
+ if (ret != 0) {
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ sfcr_err(sr, "%s() failed to add proxy RxQ", __func__);
+ goto fail_proxy_add_rxq;
+ }
+
+ dev->data->rx_queues[rx_queue_id] = rxq;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_proxy_add_rxq:
+ rte_ring_free(rxq->ring);
+
+fail_ring_create:
+ rte_free(rxq);
+
+fail_rxq_alloc:
+fail_check_conf:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static void
+sfc_repr_rx_queue_release(void *queue)
+{
+ struct sfc_repr_rxq *rxq = queue;
+ struct sfc_repr_shared *srs;
+
+ if (rxq == NULL)
+ return;
+
+ srs = rxq->shared;
+ sfc_repr_proxy_del_rxq(srs->pf_port_id, srs->repr_id, rxq->queue_id);
+ rte_ring_free(rxq->ring);
+ rte_free(rxq);
+}
+
+static int
+sfc_repr_tx_qcheck_conf(struct sfc_repr *sr,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int ret = 0;
+
+ sfcr_info(sr, "entry");
+
+ if (tx_conf->tx_rs_thresh != 0)
+ sfcr_warn(sr, "RS bit in transmit descriptor is not supported");
+
+ if (tx_conf->tx_free_thresh != 0)
+ sfcr_warn(sr, "TxQ free threshold is not supported");
+
+ if (tx_conf->tx_thresh.pthresh != 0 ||
+ tx_conf->tx_thresh.hthresh != 0 ||
+ tx_conf->tx_thresh.wthresh != 0) {
+ sfcr_warn(sr,
+ "prefetch/host/writeback thresholds are not supported");
+ }
+
+ if (tx_conf->tx_deferred_start) {
+ sfcr_err(sr, "Deferred start is not supported");
+ ret = -EINVAL;
+ }
+
+ sfcr_info(sr, "done: %s", rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+ uint16_t nb_tx_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_txq *txq;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ ret = sfc_repr_tx_qcheck_conf(sr, tx_conf);
+ if (ret != 0)
+ goto fail_check_conf;
+
+ ret = -ENOMEM;
+ txq = rte_zmalloc_socket("sfc-repr-txq", sizeof(*txq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL)
+ goto fail_txq_alloc;
+
+ txq->shared = srs;
+ txq->queue_id = tx_queue_id;
+
+ ret = sfc_repr_ring_create(srs->pf_port_id, srs->repr_id,
+ "tx", txq->queue_id, nb_tx_desc,
+ socket_id, &txq->ring);
+ if (ret != 0)
+ goto fail_ring_create;
+
+ ret = sfc_repr_proxy_add_txq(srs->pf_port_id, srs->repr_id,
+ txq->queue_id, txq->ring,
+ &txq->egress_mport);
+ if (ret != 0)
+ goto fail_proxy_add_txq;
+
+ dev->data->tx_queues[tx_queue_id] = txq;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_proxy_add_txq:
+ rte_ring_free(txq->ring);
+
+fail_ring_create:
+ rte_free(txq);
+
+fail_txq_alloc:
+fail_check_conf:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static void
+sfc_repr_tx_queue_release(void *queue)
+{
+ struct sfc_repr_txq *txq = queue;
+ struct sfc_repr_shared *srs;
+
+ if (txq == NULL)
+ return;
+
+ srs = txq->shared;
+ sfc_repr_proxy_del_txq(srs->pf_port_id, srs->repr_id, txq->queue_id);
+ rte_ring_free(txq->ring);
+ rte_free(txq);
+}
+
static void
sfc_repr_close(struct sfc_repr *sr)
{
@@ -287,6 +551,7 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
{
struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ unsigned int i;
sfcr_info(sr, "entry");
@@ -303,6 +568,16 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
break;
}
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ sfc_repr_rx_queue_release(dev->data->rx_queues[i]);
+ dev->data->rx_queues[i] = NULL;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ sfc_repr_tx_queue_release(dev->data->tx_queues[i]);
+ dev->data->tx_queues[i] = NULL;
+ }
+
/*
* Cleanup all resources.
* Rollback primary process sfc_repr_eth_dev_init() below.
@@ -326,6 +601,10 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_configure = sfc_repr_dev_configure,
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
+ .rx_queue_setup = sfc_repr_rx_queue_setup,
+ .rx_queue_release = sfc_repr_rx_queue_release,
+ .tx_queue_setup = sfc_repr_tx_queue_setup,
+ .tx_queue_release = sfc_repr_tx_queue_release,
};
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index f64fa2efc7..6a89cca40a 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -528,3 +528,135 @@ sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id)
return rc;
}
+
+int
+sfc_repr_proxy_add_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *rx_ring,
+ struct rte_mempool *mp)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_rxq *rxq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return ENOENT;
+ }
+
+ rxq = &port->rxq[queue_id];
+ if (rp->dp_rxq[queue_id].mp != NULL && rp->dp_rxq[queue_id].mp != mp) {
+ sfc_err(sa, "multiple mempools per queue are not supported");
+ sfc_put_adapter(sa);
+ return ENOTSUP;
+ }
+
+ rxq->ring = rx_ring;
+ rxq->mb_pool = mp;
+ rp->dp_rxq[queue_id].mp = mp;
+ rp->dp_rxq[queue_id].ref_count++;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+}
+
+void
+sfc_repr_proxy_del_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_rxq *rxq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return;
+ }
+
+ rxq = &port->rxq[queue_id];
+
+ rxq->ring = NULL;
+ rxq->mb_pool = NULL;
+ rp->dp_rxq[queue_id].ref_count--;
+ if (rp->dp_rxq[queue_id].ref_count == 0)
+ rp->dp_rxq[queue_id].mp = NULL;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+}
+
+int
+sfc_repr_proxy_add_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *tx_ring,
+ efx_mport_id_t *egress_mport)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_txq *txq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return ENOENT;
+ }
+
+ txq = &port->txq[queue_id];
+
+ txq->ring = tx_ring;
+
+ *egress_mport = port->egress_mport;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+}
+
+void
+sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_txq *txq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return;
+ }
+
+ txq = &port->txq[queue_id];
+
+ txq->ring = NULL;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index e4a6213c10..bd7ad7148a 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -12,8 +12,13 @@
#include <stdint.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
#include "efx.h"
+#include "sfc_repr.h"
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -26,11 +31,27 @@ extern "C" {
#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+struct sfc_repr_proxy_rxq {
+ struct rte_ring *ring;
+ struct rte_mempool *mb_pool;
+};
+
+struct sfc_repr_proxy_txq {
+ struct rte_ring *ring;
+};
+
struct sfc_repr_proxy_port {
TAILQ_ENTRY(sfc_repr_proxy_port) entries;
uint16_t repr_id;
uint16_t rte_port_id;
efx_mport_id_t egress_mport;
+ struct sfc_repr_proxy_rxq rxq[SFC_REPR_RXQ_MAX];
+ struct sfc_repr_proxy_txq txq[SFC_REPR_TXQ_MAX];
+};
+
+struct sfc_repr_proxy_dp_rxq {
+ struct rte_mempool *mp;
+ unsigned int ref_count;
};
enum sfc_repr_proxy_mbox_op {
@@ -54,6 +75,7 @@ struct sfc_repr_proxy {
efx_mport_id_t mport_alias;
struct sfc_repr_proxy_ports ports;
bool started;
+ struct sfc_repr_proxy_dp_rxq dp_rxq[SFC_REPR_PROXY_NB_RXQ_MAX];
struct sfc_repr_proxy_mbox mbox;
};
diff --git a/drivers/net/sfc/sfc_repr_proxy_api.h b/drivers/net/sfc/sfc_repr_proxy_api.h
index af9009ca3c..d1c0760efa 100644
--- a/drivers/net/sfc/sfc_repr_proxy_api.h
+++ b/drivers/net/sfc/sfc_repr_proxy_api.h
@@ -12,6 +12,9 @@
#include <stdint.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
#include "efx.h"
#ifdef __cplusplus
@@ -23,6 +26,18 @@ int sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
const efx_mport_sel_t *mport_set);
int sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id);
+int sfc_repr_proxy_add_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *rx_ring,
+ struct rte_mempool *mp);
+void sfc_repr_proxy_del_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id);
+
+int sfc_repr_proxy_add_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *tx_ring,
+ efx_mport_id_t *egress_mport);
+void sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
* [dpdk-dev] [PATCH 17/38] net/sfc: implement representor RxQ start/stop
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (15 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 16/38] net/sfc: implement representor queue setup and release Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 18/38] net/sfc: implement representor TxQ start/stop Andrew Rybchenko
` (21 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Add extra libefx flags to the Rx queue information initialization
function interface so that the ingress m-port flag can be specified
for a representor RxQ. The Rx prefix of packets on that queue will
then contain the ingress m-port field required for packet forwarding
in the representor proxy.
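A minimal sketch mirroring the patch body (the wrapper name is hypothetical):
the representor proxy RxQ passes the new libefx flag when initialising its
queue info so that packets received on it carry the ingress m-port in the
Rx prefix:

/*
 * Hypothetical sketch: initialise the proxy RxQ info with the ingress
 * m-port flag added by this patch.
 */
static int
example_proxy_rxq_info_init(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
{
	return sfc_rx_qinit_info(sa, sw_index, EFX_RXQ_FLAG_INGRESS_MPORT);
}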
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ev.h | 8 ++
drivers/net/sfc/sfc_repr_proxy.c | 194 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 7 ++
3 files changed, 209 insertions(+)
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 590cfb1694..bcb7fbe466 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -110,6 +110,14 @@ sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
return sas->counters_rxq_allocated ? 0 : SFC_SW_INDEX_INVALID;
}
+static inline sfc_sw_index_t
+sfc_repr_rxq_sw_index(const struct sfc_adapter_shared *sas,
+ unsigned int repr_queue_id)
+{
+ return sfc_counters_rxq_sw_index(sas) + sfc_repr_nb_rxq(sas) +
+ repr_queue_id;
+}
+
/*
* Functions below define event queue to transmit/receive queue and vice
* versa mapping.
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index 6a89cca40a..03b6421b04 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -15,6 +15,8 @@
#include "sfc_repr_proxy.h"
#include "sfc_repr_proxy_api.h"
#include "sfc.h"
+#include "sfc_ev.h"
+#include "sfc_rx.h"
/**
* Amount of time to wait for the representor proxy routine (which is
@@ -136,6 +138,181 @@ sfc_repr_proxy_routine(void *arg)
return 0;
}
+static int
+sfc_repr_proxy_rxq_attach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++) {
+ sfc_sw_index_t sw_index = sfc_repr_rxq_sw_index(sas, i);
+
+ rp->dp_rxq[i].sw_index = sw_index;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_rxq_detach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++)
+ rp->dp_rxq[i].sw_index = 0;
+
+ sfc_log_init(sa, "done");
+}
+
+static int
+sfc_repr_proxy_rxq_init(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_dp_rxq *rxq)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ uint16_t nb_rx_desc = SFC_REPR_PROXY_RX_DESC_COUNT;
+ struct sfc_rxq_info *rxq_info;
+ struct rte_eth_rxconf rxconf = {
+ .rx_free_thresh = SFC_REPR_PROXY_RXQ_REFILL_LEVEL,
+ .rx_drop_en = 1,
+ };
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rxq_info = &sas->rxq_info[rxq->sw_index];
+ if (rxq_info->state & SFC_RXQ_INITIALIZED) {
+ sfc_log_init(sa, "RxQ is already initialized - skip");
+ return 0;
+ }
+
+ nb_rx_desc = RTE_MIN(nb_rx_desc, sa->rxq_max_entries);
+ nb_rx_desc = RTE_MAX(nb_rx_desc, sa->rxq_min_entries);
+
+ rc = sfc_rx_qinit_info(sa, rxq->sw_index, EFX_RXQ_FLAG_INGRESS_MPORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy RxQ info");
+ goto fail_repr_rxq_init_info;
+ }
+
+ rc = sfc_rx_qinit(sa, rxq->sw_index, nb_rx_desc, sa->socket_id, &rxconf,
+ rxq->mp);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy RxQ");
+ goto fail_repr_rxq_init;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_repr_rxq_init:
+fail_repr_rxq_init_info:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+ return rc;
+}
+
+static void
+sfc_repr_proxy_rxq_fini(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_rxq_info *rxq_info;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++) {
+ struct sfc_repr_proxy_dp_rxq *rxq = &rp->dp_rxq[i];
+
+ rxq_info = &sas->rxq_info[rxq->sw_index];
+ if (rxq_info->state != SFC_RXQ_INITIALIZED) {
+ sfc_log_init(sa,
+ "representor RxQ %u is already finalized - skip",
+ i);
+ continue;
+ }
+
+ sfc_rx_qfini(sa, rxq->sw_index);
+ }
+
+ sfc_log_init(sa, "done");
+}
+
+static void
+sfc_repr_proxy_rxq_stop(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++)
+ sfc_rx_qstop(sa, sa->repr_proxy.dp_rxq[i].sw_index);
+
+ sfc_repr_proxy_rxq_fini(sa);
+
+ sfc_log_init(sa, "done");
+}
+
+static int
+sfc_repr_proxy_rxq_start(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++) {
+ struct sfc_repr_proxy_dp_rxq *rxq = &rp->dp_rxq[i];
+
+ rc = sfc_repr_proxy_rxq_init(sa, rxq);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy RxQ %u",
+ i);
+ goto fail_init;
+ }
+
+ rc = sfc_rx_qstart(sa, rxq->sw_index);
+ if (rc != 0) {
+ sfc_err(sa, "failed to start representor proxy RxQ %u",
+ i);
+ goto fail_start;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_start:
+fail_init:
+ sfc_repr_proxy_rxq_stop(sa);
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
static int
sfc_repr_proxy_ports_init(struct sfc_adapter *sa)
{
@@ -217,6 +394,10 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
return 0;
}
+ rc = sfc_repr_proxy_rxq_attach(sa);
+ if (rc != 0)
+ goto fail_rxq_attach;
+
rc = sfc_repr_proxy_ports_init(sa);
if (rc != 0)
goto fail_ports_init;
@@ -277,6 +458,9 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
sfc_repr_proxy_ports_fini(sa);
fail_ports_init:
+ sfc_repr_proxy_rxq_detach(sa);
+
+fail_rxq_attach:
sfc_log_init(sa, "failed: %s", rte_strerror(rc));
return rc;
}
@@ -297,6 +481,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
rte_service_map_lcore_set(rp->service_id, rp->service_core_id, 0);
rte_service_component_unregister(rp->service_id);
sfc_repr_proxy_ports_fini(sa);
+ sfc_repr_proxy_rxq_detach(sa);
sfc_log_init(sa, "done");
}
@@ -319,6 +504,10 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
return 0;
}
+ rc = sfc_repr_proxy_rxq_start(sa);
+ if (rc != 0)
+ goto fail_rxq_start;
+
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
if (rc != 0 && rc != -EALREADY) {
@@ -360,6 +549,9 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
fail_start_core:
+ sfc_repr_proxy_rxq_stop(sa);
+
+fail_rxq_start:
sfc_log_init(sa, "failed: %s", rte_strerror(rc));
return rc;
}
@@ -394,6 +586,8 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
+ sfc_repr_proxy_rxq_stop(sa);
+
rp->started = false;
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index bd7ad7148a..dca3fca2b9 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -18,6 +18,7 @@
#include "efx.h"
#include "sfc_repr.h"
+#include "sfc_dp.h"
#ifdef __cplusplus
extern "C" {
@@ -31,6 +32,10 @@ extern "C" {
#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+#define SFC_REPR_PROXY_RX_DESC_COUNT 256
+#define SFC_REPR_PROXY_RXQ_REFILL_LEVEL (SFC_REPR_PROXY_RX_DESC_COUNT / 4)
+#define SFC_REPR_PROXY_RX_BURST 32
+
struct sfc_repr_proxy_rxq {
struct rte_ring *ring;
struct rte_mempool *mb_pool;
@@ -52,6 +57,8 @@ struct sfc_repr_proxy_port {
struct sfc_repr_proxy_dp_rxq {
struct rte_mempool *mp;
unsigned int ref_count;
+
+ sfc_sw_index_t sw_index;
};
enum sfc_repr_proxy_mbox_op {
--
2.30.2
* [dpdk-dev] [PATCH 18/38] net/sfc: implement representor TxQ start/stop
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (16 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 17/38] net/sfc: implement representor RxQ start/stop Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 19/38] net/sfc: implement port representor start and stop Andrew Rybchenko
` (20 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement Tx queue start and stop in the port representor proxy.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ev.h | 8 ++
drivers/net/sfc/sfc_repr_proxy.c | 166 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 11 ++
drivers/net/sfc/sfc_tx.c | 15 ++-
drivers/net/sfc/sfc_tx.h | 1 +
5 files changed, 199 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index bcb7fbe466..a4ababc2bc 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -118,6 +118,14 @@ sfc_repr_rxq_sw_index(const struct sfc_adapter_shared *sas,
repr_queue_id;
}
+static inline sfc_sw_index_t
+sfc_repr_txq_sw_index(const struct sfc_adapter_shared *sas,
+ unsigned int repr_queue_id)
+{
+ /* Reserved TxQ for representors is the first reserved TxQ */
+ return sfc_repr_available(sas) ? repr_queue_id : SFC_SW_INDEX_INVALID;
+}
+
/*
* Functions below define event queue to transmit/receive queue and vice
* versa mapping.
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index 03b6421b04..a5be8fa270 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -17,6 +17,7 @@
#include "sfc.h"
#include "sfc_ev.h"
#include "sfc_rx.h"
+#include "sfc_tx.h"
/**
* Amount of time to wait for the representor proxy routine (which is
@@ -138,6 +139,155 @@ sfc_repr_proxy_routine(void *arg)
return 0;
}
+static int
+sfc_repr_proxy_txq_attach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++) {
+ sfc_sw_index_t sw_index = sfc_repr_txq_sw_index(sas, i);
+
+ rp->dp_txq[i].sw_index = sw_index;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_txq_detach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++)
+ rp->dp_txq[i].sw_index = 0;
+
+ sfc_log_init(sa, "done");
+}
+
+int
+sfc_repr_proxy_txq_init(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ const struct rte_eth_txconf tx_conf = {
+ .tx_free_thresh = SFC_REPR_PROXY_TXQ_FREE_THRESH,
+ };
+ struct sfc_txq_info *txq_info;
+ unsigned int init_i;
+ unsigned int i;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ for (init_i = 0; init_i < sfc_repr_nb_txq(sas); init_i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[init_i];
+
+ txq_info = &sfc_sa2shared(sa)->txq_info[txq->sw_index];
+ if (txq_info->state == SFC_TXQ_INITIALIZED) {
+ sfc_log_init(sa,
+ "representor proxy TxQ %u is already initialized - skip",
+ init_i);
+ continue;
+ }
+
+ sfc_tx_qinit_info(sa, txq->sw_index);
+
+ rc = sfc_tx_qinit(sa, txq->sw_index,
+ SFC_REPR_PROXY_TX_DESC_COUNT, sa->socket_id,
+ &tx_conf);
+
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy TxQ %u",
+ init_i);
+ goto fail_init;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_init:
+ for (i = 0; i < init_i; i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[i];
+
+ txq_info = &sfc_sa2shared(sa)->txq_info[txq->sw_index];
+ if (txq_info->state == SFC_TXQ_INITIALIZED)
+ sfc_tx_qfini(sa, txq->sw_index);
+ }
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+ return rc;
+}
+
+void
+sfc_repr_proxy_txq_fini(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_txq_info *txq_info;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[i];
+
+ txq_info = &sfc_sa2shared(sa)->txq_info[txq->sw_index];
+ if (txq_info->state != SFC_TXQ_INITIALIZED) {
+ sfc_log_init(sa,
+ "representor proxy TxQ %u is already finalized - skip",
+ i);
+ continue;
+ }
+
+ sfc_tx_qfini(sa, txq->sw_index);
+ }
+
+ sfc_log_init(sa, "done");
+}
+
+static int
+sfc_repr_proxy_txq_start(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+
+ sfc_log_init(sa, "entry");
+
+ RTE_SET_USED(rp);
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_txq_stop(struct sfc_adapter *sa)
+{
+ sfc_log_init(sa, "entry");
+ sfc_log_init(sa, "done");
+}
+
static int
sfc_repr_proxy_rxq_attach(struct sfc_adapter *sa)
{
@@ -398,6 +548,10 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_rxq_attach;
+ rc = sfc_repr_proxy_txq_attach(sa);
+ if (rc != 0)
+ goto fail_txq_attach;
+
rc = sfc_repr_proxy_ports_init(sa);
if (rc != 0)
goto fail_ports_init;
@@ -458,6 +612,9 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
sfc_repr_proxy_ports_fini(sa);
fail_ports_init:
+ sfc_repr_proxy_txq_detach(sa);
+
+fail_txq_attach:
sfc_repr_proxy_rxq_detach(sa);
fail_rxq_attach:
@@ -482,6 +639,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
rte_service_component_unregister(rp->service_id);
sfc_repr_proxy_ports_fini(sa);
sfc_repr_proxy_rxq_detach(sa);
+ sfc_repr_proxy_txq_detach(sa);
sfc_log_init(sa, "done");
}
@@ -508,6 +666,10 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_rxq_start;
+ rc = sfc_repr_proxy_txq_start(sa);
+ if (rc != 0)
+ goto fail_txq_start;
+
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
if (rc != 0 && rc != -EALREADY) {
@@ -549,6 +711,9 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
fail_start_core:
+ sfc_repr_proxy_txq_stop(sa);
+
+fail_txq_start:
sfc_repr_proxy_rxq_stop(sa);
fail_rxq_start:
@@ -587,6 +752,7 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
sfc_repr_proxy_rxq_stop(sa);
+ sfc_repr_proxy_txq_stop(sa);
rp->started = false;
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index dca3fca2b9..1fe7ff3695 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -36,6 +36,10 @@ extern "C" {
#define SFC_REPR_PROXY_RXQ_REFILL_LEVEL (SFC_REPR_PROXY_RX_DESC_COUNT / 4)
#define SFC_REPR_PROXY_RX_BURST 32
+#define SFC_REPR_PROXY_TX_DESC_COUNT 256
+#define SFC_REPR_PROXY_TXQ_FREE_THRESH (SFC_REPR_PROXY_TX_DESC_COUNT / 4)
+#define SFC_REPR_PROXY_TX_BURST 32
+
struct sfc_repr_proxy_rxq {
struct rte_ring *ring;
struct rte_mempool *mb_pool;
@@ -61,6 +65,10 @@ struct sfc_repr_proxy_dp_rxq {
sfc_sw_index_t sw_index;
};
+struct sfc_repr_proxy_dp_txq {
+ sfc_sw_index_t sw_index;
+};
+
enum sfc_repr_proxy_mbox_op {
SFC_REPR_PROXY_MBOX_ADD_PORT,
SFC_REPR_PROXY_MBOX_DEL_PORT,
@@ -83,6 +91,7 @@ struct sfc_repr_proxy {
struct sfc_repr_proxy_ports ports;
bool started;
struct sfc_repr_proxy_dp_rxq dp_rxq[SFC_REPR_PROXY_NB_RXQ_MAX];
+ struct sfc_repr_proxy_dp_txq dp_txq[SFC_REPR_PROXY_NB_TXQ_MAX];
struct sfc_repr_proxy_mbox mbox;
};
@@ -92,6 +101,8 @@ struct sfc_adapter;
int sfc_repr_proxy_attach(struct sfc_adapter *sa);
void sfc_repr_proxy_pre_detach(struct sfc_adapter *sa);
void sfc_repr_proxy_detach(struct sfc_adapter *sa);
+int sfc_repr_proxy_txq_init(struct sfc_adapter *sa);
+void sfc_repr_proxy_txq_fini(struct sfc_adapter *sa);
int sfc_repr_proxy_start(struct sfc_adapter *sa);
void sfc_repr_proxy_stop(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index c1b2e964f8..13392cdd5a 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -290,7 +290,7 @@ sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
txq->evq = NULL;
}
-static int
+int
sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
@@ -378,6 +378,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
const unsigned int nb_tx_queues = sa->eth_dev->data->nb_tx_queues;
const unsigned int nb_rsvd_tx_queues = sfc_nb_txq_reserved(sas);
const unsigned int nb_txq_total = nb_tx_queues + nb_rsvd_tx_queues;
+ bool reconfigure;
int rc = 0;
sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
@@ -401,6 +402,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
goto done;
if (sas->txq_info == NULL) {
+ reconfigure = false;
sas->txq_info = rte_calloc_socket("sfc-txqs", nb_txq_total,
sizeof(sas->txq_info[0]), 0,
sa->socket_id);
@@ -419,6 +421,8 @@ sfc_tx_configure(struct sfc_adapter *sa)
struct sfc_txq_info *new_txq_info;
struct sfc_txq *new_txq_ctrl;
+ reconfigure = true;
+
if (nb_tx_queues < sas->ethdev_txq_count)
sfc_tx_fini_queues(sa, nb_tx_queues);
@@ -457,12 +461,18 @@ sfc_tx_configure(struct sfc_adapter *sa)
sas->ethdev_txq_count++;
}
- /* TODO: initialize reserved queues when supported. */
sas->txq_count = sas->ethdev_txq_count + nb_rsvd_tx_queues;
+ if (!reconfigure) {
+ rc = sfc_repr_proxy_txq_init(sa);
+ if (rc != 0)
+ goto fail_repr_proxy_txq_init;
+ }
+
done:
return 0;
+fail_repr_proxy_txq_init:
fail_tx_qinit_info:
fail_txqs_ctrl_realloc:
fail_txqs_realloc:
@@ -480,6 +490,7 @@ void
sfc_tx_close(struct sfc_adapter *sa)
{
sfc_tx_fini_queues(sa, 0);
+ sfc_repr_proxy_txq_fini(sa);
free(sa->txq_ctrl);
sa->txq_ctrl = NULL;
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index f1700b13ca..1a33199fdc 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -108,6 +108,7 @@ struct sfc_txq_info *sfc_txq_info_by_dp_txq(const struct sfc_dp_txq *dp_txq);
int sfc_tx_configure(struct sfc_adapter *sa);
void sfc_tx_close(struct sfc_adapter *sa);
+int sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
int sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
--
2.30.2
* [dpdk-dev] [PATCH 19/38] net/sfc: implement port representor start and stop
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (17 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 18/38] net/sfc: implement representor TxQ start/stop Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 20/38] net/sfc: implement port representor link update Andrew Rybchenko
` (19 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement queue start and stop operations both in port
representors and in the representor proxy.
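A condensed sketch of the proxy-side start decision implemented below
(error unwinding of the MAE rule on mailbox failure is omitted; see
sfc_repr_proxy_do_start_port() in the diff for the complete logic):
static int
do_start_port_sketch(struct sfc_adapter *sa, struct sfc_repr_proxy_port *port)
{
	struct sfc_repr_proxy *rp = &sa->repr_proxy;
	int rc;

	/* Steer the VF's traffic to the proxy m-port with an MAE rule */
	rc = sfc_repr_proxy_port_rule_insert(sa, port);
	if (rc != 0)
		return rc;

	if (rp->started) {
		/*
		 * The proxy service is running: pass the request through
		 * the mailbox so the service loop applies it without races.
		 */
		rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
					      SFC_REPR_PROXY_MBOX_START_PORT);
	} else {
		/* The service is not running yet: flip the flag directly */
		port->started = true;
		rc = 0;
	}

	return rc;
}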
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_mae.h | 9 +-
drivers/net/sfc/sfc_repr.c | 181 +++++++++++
drivers/net/sfc/sfc_repr_proxy.c | 453 ++++++++++++++++++++++++++-
drivers/net/sfc/sfc_repr_proxy.h | 16 +
drivers/net/sfc/sfc_repr_proxy_api.h | 3 +
5 files changed, 644 insertions(+), 18 deletions(-)
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 684f0daf7a..d835056aef 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -139,10 +139,17 @@ struct sfc_mae_counter_registry {
uint32_t service_id;
};
+/**
+ * MAE rules used to capture traffic generated by VFs and direct it to
+ * representors (one for each VF).
+ */
+#define SFC_MAE_NB_REPR_RULES_MAX (64)
+
/** Rules to forward traffic from PHY port to PF and from PF to PHY port */
#define SFC_MAE_NB_SWITCHDEV_RULES (2)
/** Maximum required internal MAE rules */
-#define SFC_MAE_NB_RULES_MAX (SFC_MAE_NB_SWITCHDEV_RULES)
+#define SFC_MAE_NB_RULES_MAX (SFC_MAE_NB_SWITCHDEV_RULES + \
+ SFC_MAE_NB_REPR_RULES_MAX)
struct sfc_mae_rule {
efx_mae_match_spec_t *spec;
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index b3876586cc..bfe6dd4c9b 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -9,6 +9,7 @@
#include <stdint.h>
+#include <rte_mbuf.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <ethdev_driver.h>
@@ -21,6 +22,7 @@
#include "sfc_ethdev_state.h"
#include "sfc_repr_proxy_api.h"
#include "sfc_switch.h"
+#include "sfc_dp_tx.h"
/** Multi-process shared representor private data */
struct sfc_repr_shared {
@@ -144,6 +146,179 @@ sfc_repr_lock_fini(__rte_unused struct sfc_repr *sr)
/* Just for symmetry of the API */
}
+static void
+sfc_repr_rx_queue_stop(void *queue)
+{
+ struct sfc_repr_rxq *rxq = queue;
+
+ if (rxq == NULL)
+ return;
+
+ rte_ring_reset(rxq->ring);
+}
+
+static void
+sfc_repr_tx_queue_stop(void *queue)
+{
+ struct sfc_repr_txq *txq = queue;
+
+ if (txq == NULL)
+ return;
+
+ rte_ring_reset(txq->ring);
+}
+
+static int
+sfc_repr_start(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_shared *srs;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ switch (sr->state) {
+ case SFC_ETHDEV_CONFIGURED:
+ break;
+ case SFC_ETHDEV_STARTED:
+ sfcr_info(sr, "already started");
+ return 0;
+ default:
+ ret = -EINVAL;
+ goto fail_bad_state;
+ }
+
+ sr->state = SFC_ETHDEV_STARTING;
+
+ srs = sfc_repr_shared_by_eth_dev(dev);
+ ret = sfc_repr_proxy_start_repr(srs->pf_port_id, srs->repr_id);
+ if (ret != 0) {
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ goto fail_start;
+ }
+
+ sr->state = SFC_ETHDEV_STARTED;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_start:
+ sr->state = SFC_ETHDEV_CONFIGURED;
+
+fail_bad_state:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static int
+sfc_repr_dev_start(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ sfc_repr_lock(sr);
+ ret = sfc_repr_start(dev);
+ sfc_repr_unlock(sr);
+
+ if (ret != 0)
+ goto fail_start;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_start:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static int
+sfc_repr_stop(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_shared *srs;
+ unsigned int i;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ switch (sr->state) {
+ case SFC_ETHDEV_STARTED:
+ break;
+ case SFC_ETHDEV_CONFIGURED:
+ sfcr_info(sr, "already stopped");
+ return 0;
+ default:
+ sfcr_err(sr, "stop in unexpected state %u", sr->state);
+ SFC_ASSERT(B_FALSE);
+ ret = -EINVAL;
+ goto fail_bad_state;
+ }
+
+ srs = sfc_repr_shared_by_eth_dev(dev);
+ ret = sfc_repr_proxy_stop_repr(srs->pf_port_id, srs->repr_id);
+ if (ret != 0) {
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ goto fail_stop;
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ sfc_repr_rx_queue_stop(dev->data->rx_queues[i]);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ sfc_repr_tx_queue_stop(dev->data->tx_queues[i]);
+
+ sr->state = SFC_ETHDEV_CONFIGURED;
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_bad_state:
+fail_stop:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_dev_stop(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ sfc_repr_lock(sr);
+
+ ret = sfc_repr_stop(dev);
+ if (ret != 0) {
+ sfcr_err(sr, "%s() failed to stop representor", __func__);
+ goto fail_stop;
+ }
+
+ sfc_repr_unlock(sr);
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_stop:
+ sfc_repr_unlock(sr);
+
+ sfcr_err(sr, "%s() failed %s", __func__, rte_strerror(-ret));
+
+ return ret;
+}
+
static int
sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
const struct rte_eth_conf *conf)
@@ -557,6 +732,10 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
sfc_repr_lock(sr);
switch (sr->state) {
+ case SFC_ETHDEV_STARTED:
+ sfc_repr_stop(dev);
+ SFC_ASSERT(sr->state == SFC_ETHDEV_CONFIGURED);
+ /* FALLTHROUGH */
case SFC_ETHDEV_CONFIGURED:
sfc_repr_close(sr);
SFC_ASSERT(sr->state == SFC_ETHDEV_INITIALIZED);
@@ -599,6 +778,8 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_configure = sfc_repr_dev_configure,
+ .dev_start = sfc_repr_dev_start,
+ .dev_stop = sfc_repr_dev_stop,
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
.rx_queue_setup = sfc_repr_rx_queue_setup,
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index a5be8fa270..ea03d5afdd 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -53,6 +53,19 @@ sfc_put_adapter(struct sfc_adapter *sa)
sfc_adapter_unlock(sa);
}
+static struct sfc_repr_proxy_port *
+sfc_repr_proxy_find_port(struct sfc_repr_proxy *rp, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->repr_id == repr_id)
+ return port;
+ }
+
+ return NULL;
+}
+
static int
sfc_repr_proxy_mbox_send(struct sfc_repr_proxy_mbox *mbox,
struct sfc_repr_proxy_port *port,
@@ -117,6 +130,12 @@ sfc_repr_proxy_mbox_handle(struct sfc_repr_proxy *rp)
case SFC_REPR_PROXY_MBOX_DEL_PORT:
TAILQ_REMOVE(&rp->ports, mbox->port, entries);
break;
+ case SFC_REPR_PROXY_MBOX_START_PORT:
+ mbox->port->started = true;
+ break;
+ case SFC_REPR_PROXY_MBOX_STOP_PORT:
+ mbox->port->started = false;
+ break;
default:
SFC_ASSERT(0);
return;
@@ -463,6 +482,158 @@ sfc_repr_proxy_rxq_start(struct sfc_adapter *sa)
return rc;
}
+static int
+sfc_repr_proxy_mae_rule_insert(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ efx_mport_sel_t mport_alias_selector;
+ efx_mport_sel_t mport_vf_selector;
+ struct sfc_mae_rule *mae_rule;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rc = efx_mae_mport_by_id(&port->egress_mport,
+ &mport_vf_selector);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get VF mport for repr %u",
+ port->repr_id);
+ goto fail_get_vf;
+ }
+
+ rc = efx_mae_mport_by_id(&rp->mport_alias, &mport_alias_selector);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get mport selector for repr %u",
+ port->repr_id);
+ goto fail_get_alias;
+ }
+
+ rc = sfc_mae_rule_add_mport_match_deliver(sa, &mport_vf_selector,
+ &mport_alias_selector, -1,
+ &mae_rule);
+ if (rc != 0) {
+ sfc_err(sa, "failed to insert MAE rule for repr %u",
+ port->repr_id);
+ goto fail_rule_add;
+ }
+
+ port->mae_rule = mae_rule;
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_rule_add:
+fail_get_alias:
+fail_get_vf:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+static void
+sfc_repr_proxy_mae_rule_remove(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ struct sfc_mae_rule *mae_rule = port->mae_rule;
+
+ sfc_mae_rule_del(sa, mae_rule);
+}
+
+static int
+sfc_repr_proxy_mport_filter_insert(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_rxq *rxq_ctrl;
+ struct sfc_repr_proxy_filter *filter = &rp->mport_filter;
+ efx_mport_sel_t mport_alias_selector;
+ static const efx_filter_match_flags_t flags[RTE_DIM(filter->specs)] = {
+ EFX_FILTER_MATCH_UNKNOWN_UCAST_DST,
+ EFX_FILTER_MATCH_UNKNOWN_MCAST_DST };
+ unsigned int i;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (sfc_repr_nb_rxq(sas) == 1) {
+ rxq_ctrl = &sa->rxq_ctrl[rp->dp_rxq[0].sw_index];
+ } else {
+ sfc_err(sa, "multiple representor proxy RxQs not supported");
+ rc = ENOTSUP;
+ goto fail_multiple_queues;
+ }
+
+ rc = efx_mae_mport_by_id(&rp->mport_alias, &mport_alias_selector);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get repr proxy mport by ID");
+ goto fail_get_selector;
+ }
+
+ memset(filter->specs, 0, sizeof(filter->specs));
+ for (i = 0; i < RTE_DIM(filter->specs); i++) {
+ filter->specs[i].efs_priority = EFX_FILTER_PRI_MANUAL;
+ filter->specs[i].efs_flags = EFX_FILTER_FLAG_RX;
+ filter->specs[i].efs_dmaq_id = rxq_ctrl->hw_index;
+ filter->specs[i].efs_match_flags = flags[i] |
+ EFX_FILTER_MATCH_MPORT;
+ filter->specs[i].efs_ingress_mport = mport_alias_selector.sel;
+
+ rc = efx_filter_insert(sa->nic, &filter->specs[i]);
+ if (rc != 0) {
+ sfc_err(sa, "failed to insert repr proxy filter");
+ goto fail_insert;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_insert:
+ while (i-- > 0)
+ efx_filter_remove(sa->nic, &filter->specs[i]);
+
+fail_get_selector:
+fail_multiple_queues:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+static void
+sfc_repr_proxy_mport_filter_remove(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_repr_proxy_filter *filter = &rp->mport_filter;
+ unsigned int i;
+
+ for (i = 0; i < RTE_DIM(filter->specs); i++)
+ efx_filter_remove(sa->nic, &filter->specs[i]);
+}
+
+static int
+sfc_repr_proxy_port_rule_insert(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ int rc;
+
+ rc = sfc_repr_proxy_mae_rule_insert(sa, port);
+ if (rc != 0)
+ goto fail_mae_rule_insert;
+
+ return 0;
+
+fail_mae_rule_insert:
+ return rc;
+}
+
+static void
+sfc_repr_proxy_port_rule_remove(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ sfc_repr_proxy_mae_rule_remove(sa, port);
+}
+
static int
sfc_repr_proxy_ports_init(struct sfc_adapter *sa)
{
@@ -644,24 +815,105 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
sfc_log_init(sa, "done");
}
+static int
+sfc_repr_proxy_do_start_port(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ rc = sfc_repr_proxy_port_rule_insert(sa, port);
+ if (rc != 0)
+ goto fail_filter_insert;
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_START_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to start proxy port %u",
+ port->repr_id);
+ goto fail_port_start;
+ }
+ } else {
+ port->started = true;
+ }
+
+ return 0;
+
+fail_port_start:
+ sfc_repr_proxy_port_rule_remove(sa, port);
+fail_filter_insert:
+ sfc_err(sa, "%s() failed %s", __func__, rte_strerror(rc));
+
+ return rc;
+}
+
+static int
+sfc_repr_proxy_do_stop_port(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_STOP_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to stop proxy port %u: %s",
+ port->repr_id, rte_strerror(rc));
+ return rc;
+ }
+ } else {
+ port->started = false;
+ }
+
+ sfc_repr_proxy_port_rule_remove(sa, port);
+
+ return 0;
+}
+
+static bool
+sfc_repr_proxy_port_enabled(struct sfc_repr_proxy_port *port)
+{
+ return port->rte_port_id != RTE_MAX_ETHPORTS && port->enabled;
+}
+
+static bool
+sfc_repr_proxy_ports_disabled(struct sfc_repr_proxy *rp)
+{
+ struct sfc_repr_proxy_port *port;
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port))
+ return false;
+ }
+
+ return true;
+}
+
int
sfc_repr_proxy_start(struct sfc_adapter *sa)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_repr_proxy_port *last_port = NULL;
+ struct sfc_repr_proxy_port *port;
int rc;
sfc_log_init(sa, "entry");
- /*
- * The condition to start the proxy is insufficient. It will be
- * complemented with representor port start/stop support.
- */
+ /* Representor proxy is not started when no representors are started */
if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return 0;
}
+ if (sfc_repr_proxy_ports_disabled(rp)) {
+ sfc_log_init(sa, "no started representor ports - skip");
+ return 0;
+ }
+
rc = sfc_repr_proxy_rxq_start(sa);
if (rc != 0)
goto fail_rxq_start;
@@ -698,12 +950,40 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
goto fail_runstate_set;
}
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port)) {
+ rc = sfc_repr_proxy_do_start_port(sa, port);
+ if (rc != 0)
+ goto fail_start_id;
+
+ last_port = port;
+ }
+ }
+
+ rc = sfc_repr_proxy_mport_filter_insert(sa);
+ if (rc != 0)
+ goto fail_mport_filter_insert;
+
rp->started = true;
sfc_log_init(sa, "done");
return 0;
+fail_mport_filter_insert:
+fail_start_id:
+ if (last_port != NULL) {
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port)) {
+ (void)sfc_repr_proxy_do_stop_port(sa, port);
+ if (port == last_port)
+ break;
+ }
+ }
+ }
+
+ rte_service_runstate_set(rp->service_id, 0);
+
fail_runstate_set:
rte_service_component_runstate_set(rp->service_id, 0);
@@ -726,6 +1006,7 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_repr_proxy_port *port;
int rc;
sfc_log_init(sa, "entry");
@@ -735,6 +1016,24 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
return;
}
+ if (sfc_repr_proxy_ports_disabled(rp)) {
+ sfc_log_init(sa, "no started representor ports - skip");
+ return;
+ }
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port)) {
+ rc = sfc_repr_proxy_do_stop_port(sa, port);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to stop representor proxy port %u: %s",
+ port->repr_id, rte_strerror(rc));
+ }
+ }
+ }
+
+ sfc_repr_proxy_mport_filter_remove(sa);
+
rc = rte_service_runstate_set(rp->service_id, 0);
if (rc < 0) {
sfc_err(sa, "failed to stop %s: %s",
@@ -759,19 +1058,6 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
sfc_log_init(sa, "done");
}
-static struct sfc_repr_proxy_port *
-sfc_repr_proxy_find_port(struct sfc_repr_proxy *rp, uint16_t repr_id)
-{
- struct sfc_repr_proxy_port *port;
-
- TAILQ_FOREACH(port, &rp->ports, entries) {
- if (port->repr_id == repr_id)
- return port;
- }
-
- return NULL;
-}
-
int
sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
uint16_t rte_port_id, const efx_mport_sel_t *mport_sel)
@@ -1020,3 +1306,136 @@ sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
sfc_log_init(sa, "done");
sfc_put_adapter(sa);
}
+
+int
+sfc_repr_proxy_start_repr(uint16_t pf_port_id, uint16_t repr_id)
+{
+ bool proxy_start_required = false;
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ rc = ENOENT;
+ goto fail_not_found;
+ }
+
+ if (port->enabled) {
+ rc = EALREADY;
+ sfc_err(sa, "failed: repr %u proxy port already started",
+ repr_id);
+ goto fail_already_started;
+ }
+
+ if (sa->state == SFC_ETHDEV_STARTED) {
+ if (sfc_repr_proxy_ports_disabled(rp)) {
+ proxy_start_required = true;
+ } else {
+ rc = sfc_repr_proxy_do_start_port(sa, port);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to start repr %u proxy port",
+ repr_id);
+ goto fail_start_id;
+ }
+ }
+ }
+
+ port->enabled = true;
+
+ if (proxy_start_required) {
+ rc = sfc_repr_proxy_start(sa);
+ if (rc != 0) {
+ sfc_err(sa, "failed to start proxy");
+ goto fail_proxy_start;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+
+fail_proxy_start:
+ port->enabled = false;
+
+fail_start_id:
+fail_already_started:
+fail_not_found:
+ sfc_err(sa, "failed to start repr %u proxy port: %s", repr_id,
+ rte_strerror(rc));
+ sfc_put_adapter(sa);
+
+ return rc;
+}
+
+int
+sfc_repr_proxy_stop_repr(uint16_t pf_port_id, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_port *p;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return ENOENT;
+ }
+
+ if (!port->enabled) {
+ sfc_log_init(sa, "repr %u proxy port is not started - skip",
+ repr_id);
+ sfc_put_adapter(sa);
+ return 0;
+ }
+
+ if (sa->state == SFC_ETHDEV_STARTED) {
+ bool last_enabled = true;
+
+ TAILQ_FOREACH(p, &rp->ports, entries) {
+ if (p == port)
+ continue;
+
+ if (sfc_repr_proxy_port_enabled(p)) {
+ last_enabled = false;
+ break;
+ }
+ }
+
+ rc = 0;
+ if (last_enabled)
+ sfc_repr_proxy_stop(sa);
+ else
+ rc = sfc_repr_proxy_do_stop_port(sa, port);
+
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to stop representor proxy TxQ %u: %s",
+ repr_id, rte_strerror(rc));
+ sfc_put_adapter(sa);
+ return rc;
+ }
+ }
+
+ port->enabled = false;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index 1fe7ff3695..c350713a55 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -19,6 +19,8 @@
#include "sfc_repr.h"
#include "sfc_dp.h"
+#include "sfc_flow.h"
+#include "sfc_mae.h"
#ifdef __cplusplus
extern "C" {
@@ -49,6 +51,14 @@ struct sfc_repr_proxy_txq {
struct rte_ring *ring;
};
+struct sfc_repr_proxy_filter {
+ /*
+ * 2 filters are required to match all incoming traffic, unknown
+ * unicast and unknown multicast.
+ */
+ efx_filter_spec_t specs[2];
+};
+
struct sfc_repr_proxy_port {
TAILQ_ENTRY(sfc_repr_proxy_port) entries;
uint16_t repr_id;
@@ -56,6 +66,9 @@ struct sfc_repr_proxy_port {
efx_mport_id_t egress_mport;
struct sfc_repr_proxy_rxq rxq[SFC_REPR_RXQ_MAX];
struct sfc_repr_proxy_txq txq[SFC_REPR_TXQ_MAX];
+ struct sfc_mae_rule *mae_rule;
+ bool enabled;
+ bool started;
};
struct sfc_repr_proxy_dp_rxq {
@@ -72,6 +85,8 @@ struct sfc_repr_proxy_dp_txq {
enum sfc_repr_proxy_mbox_op {
SFC_REPR_PROXY_MBOX_ADD_PORT,
SFC_REPR_PROXY_MBOX_DEL_PORT,
+ SFC_REPR_PROXY_MBOX_START_PORT,
+ SFC_REPR_PROXY_MBOX_STOP_PORT,
};
struct sfc_repr_proxy_mbox {
@@ -92,6 +107,7 @@ struct sfc_repr_proxy {
bool started;
struct sfc_repr_proxy_dp_rxq dp_rxq[SFC_REPR_PROXY_NB_RXQ_MAX];
struct sfc_repr_proxy_dp_txq dp_txq[SFC_REPR_PROXY_NB_TXQ_MAX];
+ struct sfc_repr_proxy_filter mport_filter;
struct sfc_repr_proxy_mbox mbox;
};
diff --git a/drivers/net/sfc/sfc_repr_proxy_api.h b/drivers/net/sfc/sfc_repr_proxy_api.h
index d1c0760efa..95b065801d 100644
--- a/drivers/net/sfc/sfc_repr_proxy_api.h
+++ b/drivers/net/sfc/sfc_repr_proxy_api.h
@@ -38,6 +38,9 @@ int sfc_repr_proxy_add_txq(uint16_t pf_port_id, uint16_t repr_id,
void sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
uint16_t queue_id);
+int sfc_repr_proxy_start_repr(uint16_t pf_port_id, uint16_t repr_id);
+int sfc_repr_proxy_stop_repr(uint16_t pf_port_id, uint16_t repr_id);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH 20/38] net/sfc: implement port representor link update
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (18 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 19/38] net/sfc: implement port representor start and stop Andrew Rybchenko
@ 2021-08-27 6:56 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 21/38] net/sfc: support multiple device probe Andrew Rybchenko
` (18 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:56 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement the callback by reporting link down if the representor
is not started and link up with an undefined link speed otherwise.
The link speed is left undefined since representors can pass traffic
to each other even when the PF link is down.
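From an application's point of view the result looks roughly as follows
(a sketch only; repr_port_id is a hypothetical representor ethdev port id
and <rte_ethdev.h> is assumed to be included):
static void
repr_link_check_sketch(uint16_t repr_port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(repr_port_id, &link) != 0)
		return;

	/*
	 * While the representor is started the expectation is
	 * link.link_status == ETH_LINK_UP and
	 * link.link_speed == ETH_SPEED_NUM_UNKNOWN,
	 * regardless of the PF PHY link state.
	 */
}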
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index bfe6dd4c9b..207e7c77a0 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -471,6 +471,24 @@ sfc_repr_dev_infos_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+sfc_repr_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct rte_eth_link link;
+
+ if (sr->state != SFC_ETHDEV_STARTED) {
+ sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
+ } else {
+ memset(&link, 0, sizeof(link));
+ link.link_status = ETH_LINK_UP;
+ link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ }
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
static int
sfc_repr_ring_create(uint16_t pf_port_id, uint16_t repr_id,
const char *type_name, uint16_t qid, uint16_t nb_desc,
@@ -782,6 +800,7 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_stop = sfc_repr_dev_stop,
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
+ .link_update = sfc_repr_dev_link_update,
.rx_queue_setup = sfc_repr_rx_queue_setup,
.rx_queue_release = sfc_repr_rx_queue_release,
.tx_queue_setup = sfc_repr_tx_queue_setup,
--
2.30.2
* [dpdk-dev] [PATCH 21/38] net/sfc: support multiple device probe
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (19 preceding siblings ...)
2021-08-27 6:56 ` [dpdk-dev] [PATCH 20/38] net/sfc: implement port representor link update Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 22/38] net/sfc: implement representor Tx routine Andrew Rybchenko
` (17 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Support probing the device multiple times so that additional port
representors can be created with the EAL hotplug API. To hotplug a
representor, the PF must be hotplugged again with a different
representor device argument.
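For example, extra representors could be requested at run time roughly
like this (a sketch only; the PCI address and the representor list are
placeholders, and <rte_dev.h> provides the declaration):
static int
hotplug_more_representors_sketch(void)
{
	/* Re-probe an already probed PF with an extended representor list */
	return rte_eal_hotplug_add("pci", "0000:01:00.0", "representor=[0-3]");
}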
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ethdev.c | 55 ++++++++++++++++++++++++------------
drivers/net/sfc/sfc_repr.c | 35 +++++++++++++----------
2 files changed, 57 insertions(+), 33 deletions(-)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8578ba0765..8f9afb2c67 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2432,31 +2432,40 @@ sfc_parse_rte_devargs(const char *args, struct rte_eth_devargs *devargs)
}
static int
-sfc_eth_dev_create(struct rte_pci_device *pci_dev,
- struct sfc_ethdev_init_data *init_data,
- struct rte_eth_dev **devp)
+sfc_eth_dev_find_or_create(struct rte_pci_device *pci_dev,
+ struct sfc_ethdev_init_data *init_data,
+ struct rte_eth_dev **devp,
+ bool *dev_created)
{
struct rte_eth_dev *dev;
+ bool created = false;
int rc;
- rc = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
- sizeof(struct sfc_adapter_shared),
- eth_dev_pci_specific_init, pci_dev,
- sfc_eth_dev_init, init_data);
- if (rc != 0) {
- SFC_GENERIC_LOG(ERR, "Failed to create sfc ethdev '%s'",
- pci_dev->device.name);
- return rc;
- }
-
dev = rte_eth_dev_allocated(pci_dev->device.name);
if (dev == NULL) {
- SFC_GENERIC_LOG(ERR, "Failed to find allocated sfc ethdev '%s'",
+ rc = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+ sizeof(struct sfc_adapter_shared),
+ eth_dev_pci_specific_init, pci_dev,
+ sfc_eth_dev_init, init_data);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR, "Failed to create sfc ethdev '%s'",
+ pci_dev->device.name);
+ return rc;
+ }
+
+ created = true;
+
+ dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (dev == NULL) {
+ SFC_GENERIC_LOG(ERR,
+ "Failed to find allocated sfc ethdev '%s'",
pci_dev->device.name);
- return -ENODEV;
+ return -ENODEV;
+ }
}
*devp = dev;
+ *dev_created = created;
return 0;
}
@@ -2517,6 +2526,7 @@ static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct sfc_ethdev_init_data init_data;
struct rte_eth_devargs eth_da;
struct rte_eth_dev *dev;
+ bool dev_created;
int rc;
if (pci_dev->device.devargs != NULL) {
@@ -2538,13 +2548,21 @@ static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
return -ENOTSUP;
}
- rc = sfc_eth_dev_create(pci_dev, &init_data, &dev);
+ /*
+ * Driver supports RTE_PCI_DRV_PROBE_AGAIN. Hence create device only
+ * if it does not already exist. Re-probing an existing device is
+ * expected to allow additional representors to be configured.
+ */
+ rc = sfc_eth_dev_find_or_create(pci_dev, &init_data, &dev,
+ &dev_created);
if (rc != 0)
return rc;
rc = sfc_eth_dev_create_representors(dev, &eth_da);
if (rc != 0) {
- (void)rte_eth_dev_destroy(dev, sfc_eth_dev_uninit);
+ if (dev_created)
+ (void)rte_eth_dev_destroy(dev, sfc_eth_dev_uninit);
+
return rc;
}
@@ -2560,7 +2578,8 @@ static struct rte_pci_driver sfc_efx_pmd = {
.id_table = pci_id_sfc_efx_map,
.drv_flags =
RTE_PCI_DRV_INTR_LSC |
- RTE_PCI_DRV_NEED_MAPPING,
+ RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
.probe = sfc_eth_dev_pci_probe,
.remove = sfc_eth_dev_pci_remove,
};
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 207e7c77a0..7a34a0a904 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -930,6 +930,7 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
struct sfc_repr_init_data repr_data;
char name[RTE_ETH_NAME_MAX_LEN];
int ret;
+ struct rte_eth_dev *dev;
if (snprintf(name, sizeof(name), "net_%s_representor_%u",
parent->device->name, representor_id) >=
@@ -938,20 +939,24 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
return -ENAMETOOLONG;
}
- memset(&repr_data, 0, sizeof(repr_data));
- repr_data.pf_port_id = parent->data->port_id;
- repr_data.repr_id = representor_id;
- repr_data.switch_domain_id = switch_domain_id;
- repr_data.mport_sel = *mport_sel;
-
- ret = rte_eth_dev_create(parent->device, name,
- sizeof(struct sfc_repr_shared),
- NULL, NULL,
- sfc_repr_eth_dev_init, &repr_data);
- if (ret != 0)
- SFC_GENERIC_LOG(ERR, "%s() failed to create device", __func__);
-
- SFC_GENERIC_LOG(INFO, "%s() done: %s", __func__, rte_strerror(-ret));
+ dev = rte_eth_dev_allocated(name);
+ if (dev == NULL) {
+ memset(&repr_data, 0, sizeof(repr_data));
+ repr_data.pf_port_id = parent->data->port_id;
+ repr_data.repr_id = representor_id;
+ repr_data.switch_domain_id = switch_domain_id;
+ repr_data.mport_sel = *mport_sel;
+
+ ret = rte_eth_dev_create(parent->device, name,
+ sizeof(struct sfc_repr_shared),
+ NULL, NULL,
+ sfc_repr_eth_dev_init, &repr_data);
+ if (ret != 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to create device",
+ __func__);
+ return ret;
+ }
+ }
- return ret;
+ return 0;
}
--
2.30.2
* [dpdk-dev] [PATCH 22/38] net/sfc: implement representor Tx routine
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (20 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 21/38] net/sfc: support multiple device probe Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 23/38] net/sfc: use xword type for EF100 Rx prefix Andrew Rybchenko
` (16 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Forward traffic transmitted from a port representor to the
corresponding virtual function via the TxQ dedicated to representors.
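The per-packet handoff added here boils down to the following (a condensed
sketch of sfc_repr_tx_burst() below; the real routine also clears the
override flag again for mbufs that could not be enqueued):
static uint16_t
repr_tx_one_sketch(struct sfc_repr_txq *txq, struct rte_mbuf *m)
{
	/*
	 * Mark the packet with the VF's egress m-port (dynamic mbuf field
	 * plus override flag) so the proxy's TxQ overrides the destination
	 * m-port, then hand the mbuf over through the ring owned by this
	 * representor TxQ.
	 */
	m->ol_flags |= sfc_dp_mport_override;
	*RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset, efx_mport_id_t *) =
		txq->egress_mport;

	return rte_ring_sp_enqueue_burst(txq->ring, (void **)&m, 1, NULL);
}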
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 45 ++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.c | 88 +++++++++++++++++++++++++++++++-
drivers/net/sfc/sfc_repr_proxy.h | 8 +++
3 files changed, 140 insertions(+), 1 deletion(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 7a34a0a904..e7386fb480 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -168,6 +168,49 @@ sfc_repr_tx_queue_stop(void *queue)
rte_ring_reset(txq->ring);
}
+static uint16_t
+sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sfc_repr_txq *txq = tx_queue;
+ unsigned int n_tx;
+ void **objs;
+ uint16_t i;
+
+ /*
+ * mbuf is likely cache-hot. Set flag and egress m-port here instead of
+ * doing that in representors proxy. Also, it should help to avoid
+ * cache bounce. Moreover, potentially, it allows to use one
+ * multi-producer single-consumer ring for all representors.
+ *
+ * The only potential problem is doing so many times if enqueue
+ * fails and sender retries.
+ */
+ for (i = 0; i < nb_pkts; ++i) {
+ struct rte_mbuf *m = tx_pkts[i];
+
+ m->ol_flags |= sfc_dp_mport_override;
+ *RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset,
+ efx_mport_id_t *) = txq->egress_mport;
+ }
+
+ objs = (void *)&tx_pkts[0];
+ n_tx = rte_ring_sp_enqueue_burst(txq->ring, objs, nb_pkts, NULL);
+
+ /*
+ * Remove m-port override flag from packets that were not enqueued
+ * Setting the flag only for enqueued packets after the burst is
+ * not possible since the ownership of enqueued packets is
+ * transferred to representor proxy.
+ */
+ for (i = n_tx; i < nb_pkts; ++i) {
+ struct rte_mbuf *m = tx_pkts[i];
+
+ m->ol_flags &= ~sfc_dp_mport_override;
+ }
+
+ return n_tx;
+}
+
static int
sfc_repr_start(struct rte_eth_dev *dev)
{
@@ -782,6 +825,7 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
(void)sfc_repr_proxy_del_port(srs->pf_port_id, srs->repr_id);
+ dev->tx_pkt_burst = NULL;
dev->dev_ops = NULL;
sfc_repr_unlock(sr);
@@ -902,6 +946,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
goto fail_mac_addrs;
}
+ dev->tx_pkt_burst = sfc_repr_tx_burst;
dev->dev_ops = &sfc_repr_dev_ops;
sr->state = SFC_ETHDEV_INITIALIZED;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index ea03d5afdd..d8934bab65 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -25,6 +25,12 @@
*/
#define SFC_REPR_PROXY_MBOX_POLL_TIMEOUT_MS 1000
+/**
+ * Amount of time to wait for the representor proxy routine (which is
+ * running on a service core) to terminate after service core is stopped.
+ */
+#define SFC_REPR_PROXY_ROUTINE_TERMINATE_TIMEOUT_MS 10000
+
static struct sfc_repr_proxy *
sfc_repr_proxy_by_adapter(struct sfc_adapter *sa)
{
@@ -148,16 +154,71 @@ sfc_repr_proxy_mbox_handle(struct sfc_repr_proxy *rp)
__atomic_store_n(&mbox->ack, true, __ATOMIC_RELEASE);
}
+static void
+sfc_repr_proxy_handle_tx(struct sfc_repr_proxy_dp_txq *rp_txq,
+ struct sfc_repr_proxy_txq *repr_txq)
+{
+ /*
+ * With multiple representor proxy queues configured it is
+ * possible that not all of the corresponding representor
+ * queues were created. Skip the queues that do not exist.
+ */
+ if (repr_txq->ring == NULL)
+ return;
+
+ if (rp_txq->available < RTE_DIM(rp_txq->tx_pkts)) {
+ rp_txq->available +=
+ rte_ring_sc_dequeue_burst(repr_txq->ring,
+ (void **)(&rp_txq->tx_pkts[rp_txq->available]),
+ RTE_DIM(rp_txq->tx_pkts) - rp_txq->available,
+ NULL);
+
+ if (rp_txq->available == rp_txq->transmitted)
+ return;
+ }
+
+ rp_txq->transmitted += rp_txq->pkt_burst(rp_txq->dp,
+ &rp_txq->tx_pkts[rp_txq->transmitted],
+ rp_txq->available - rp_txq->transmitted);
+
+ if (rp_txq->available == rp_txq->transmitted) {
+ rp_txq->available = 0;
+ rp_txq->transmitted = 0;
+ }
+}
+
static int32_t
sfc_repr_proxy_routine(void *arg)
{
+ struct sfc_repr_proxy_port *port;
struct sfc_repr_proxy *rp = arg;
+ unsigned int i;
sfc_repr_proxy_mbox_handle(rp);
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (!port->started)
+ continue;
+
+ for (i = 0; i < rp->nb_txq; i++)
+ sfc_repr_proxy_handle_tx(&rp->dp_txq[i], &port->txq[i]);
+ }
+
return 0;
}
+static struct sfc_txq_info *
+sfc_repr_proxy_txq_info_get(struct sfc_adapter *sa, unsigned int repr_queue_id)
+{
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy_dp_txq *dp_txq;
+
+ SFC_ASSERT(repr_queue_id < sfc_repr_nb_txq(sas));
+ dp_txq = &sa->repr_proxy.dp_txq[repr_queue_id];
+
+ return &sas->txq_info[dp_txq->sw_index];
+}
+
static int
sfc_repr_proxy_txq_attach(struct sfc_adapter *sa)
{
@@ -289,11 +350,20 @@ sfc_repr_proxy_txq_fini(struct sfc_adapter *sa)
static int
sfc_repr_proxy_txq_start(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
sfc_log_init(sa, "entry");
- RTE_SET_USED(rp);
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[i];
+
+ txq->dp = sfc_repr_proxy_txq_info_get(sa, i)->dp;
+ txq->pkt_burst = sa->eth_dev->tx_pkt_burst;
+ txq->available = 0;
+ txq->transmitted = 0;
+ }
sfc_log_init(sa, "done");
@@ -922,6 +992,8 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_txq_start;
+ rp->nb_txq = sfc_repr_nb_txq(sas);
+
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
if (rc != 0 && rc != -EALREADY) {
@@ -1007,6 +1079,9 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
struct sfc_repr_proxy_port *port;
+ const unsigned int wait_ms_total =
+ SFC_REPR_PROXY_ROUTINE_TERMINATE_TIMEOUT_MS;
+ unsigned int i;
int rc;
sfc_log_init(sa, "entry");
@@ -1050,6 +1125,17 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
+ /*
+ * Wait for the representor proxy routine to finish the last iteration.
+ * Give up on timeout.
+ */
+ for (i = 0; i < wait_ms_total; i++) {
+ if (rte_service_may_be_active(rp->service_id) == 0)
+ break;
+
+ rte_delay_ms(1);
+ }
+
sfc_repr_proxy_rxq_stop(sa);
sfc_repr_proxy_txq_stop(sa);
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index c350713a55..d47e0a431a 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -79,6 +79,13 @@ struct sfc_repr_proxy_dp_rxq {
};
struct sfc_repr_proxy_dp_txq {
+ eth_tx_burst_t pkt_burst;
+ struct sfc_dp_txq *dp;
+
+ unsigned int available;
+ unsigned int transmitted;
+ struct rte_mbuf *tx_pkts[SFC_REPR_PROXY_TX_BURST];
+
sfc_sw_index_t sw_index;
};
@@ -110,6 +117,7 @@ struct sfc_repr_proxy {
struct sfc_repr_proxy_filter mport_filter;
struct sfc_repr_proxy_mbox mbox;
+ unsigned int nb_txq;
};
struct sfc_adapter;
--
2.30.2
* [dpdk-dev] [PATCH 23/38] net/sfc: use xword type for EF100 Rx prefix
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (21 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 22/38] net/sfc: implement representor Tx routine Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 24/38] net/sfc: handle ingress m-port in " Andrew Rybchenko
` (15 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The layout of the EF100 Rx prefix is defined in terms of a 32-byte
value type (xword). Replace oword with xword to avoid truncating
prefix fields.
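The size relationship behind the change, expressed as build-time checks
(a sketch only; the 16-byte width of efx_oword_t is an assumption based
on libefx conventions):
static void __rte_unused
ef100_rx_prefix_width_sketch(void)
{
	/* The EF100 Rx prefix is 32 bytes and fits exactly one xword */
	RTE_BUILD_BUG_ON(sizeof(efx_xword_t) != 32);
	/*
	 * An oword is assumed to be 16 bytes, so oword field accessors
	 * applied to rx_prefix[0] cannot reach the upper half of the prefix.
	 */
	RTE_BUILD_BUG_ON(sizeof(efx_oword_t) != 16);
}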
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ef100_rx.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d4cb96881c..15fce55361 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -379,7 +379,7 @@ static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
static bool
sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
- const efx_oword_t *rx_prefix,
+ const efx_xword_t *rx_prefix,
struct rte_mbuf *m)
{
const efx_word_t *class;
@@ -399,19 +399,19 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
m->packet_type = sfc_ef100_rx_class_decode(*class, &ol_flags);
if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
- EFX_TEST_OWORD_BIT(rx_prefix[0],
+ EFX_TEST_XWORD_BIT(rx_prefix[0],
ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
ol_flags |= PKT_RX_RSS_HASH;
- /* EFX_OWORD_FIELD converts little-endian to CPU */
- m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
+ /* EFX_XWORD_FIELD converts little-endian to CPU */
+ m->hash.rss = EFX_XWORD_FIELD(rx_prefix[0],
ESF_GZ_RX_PREFIX_RSS_HASH);
}
if (rxq->flags & SFC_EF100_RXQ_USER_MARK) {
uint32_t user_mark;
- /* EFX_OWORD_FIELD converts little-endian to CPU */
- user_mark = EFX_OWORD_FIELD(rx_prefix[0],
+ /* EFX_XWORD_FIELD converts little-endian to CPU */
+ user_mark = EFX_XWORD_FIELD(rx_prefix[0],
ESF_GZ_RX_PREFIX_USER_MARK);
if (user_mark != SFC_EF100_USER_MARK_INVALID) {
ol_flags |= PKT_RX_FDIR_ID;
@@ -480,7 +480,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
while (rxq->ready_pkts > 0 && rx_pkts != rx_pkts_end) {
struct rte_mbuf *pkt;
struct rte_mbuf *lastseg;
- const efx_oword_t *rx_prefix;
+ const efx_xword_t *rx_prefix;
uint16_t pkt_len;
uint16_t seg_len;
bool deliver;
@@ -495,9 +495,9 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
pkt->rearm_data[0] = rxq->rearm_data;
/* data_off already moved past Rx prefix */
- rx_prefix = (const efx_oword_t *)sfc_ef100_rx_pkt_prefix(pkt);
+ rx_prefix = (const efx_xword_t *)sfc_ef100_rx_pkt_prefix(pkt);
- pkt_len = EFX_OWORD_FIELD(rx_prefix[0],
+ pkt_len = EFX_XWORD_FIELD(rx_prefix[0],
ESF_GZ_RX_PREFIX_LENGTH);
SFC_ASSERT(pkt_len > 0);
rte_pktmbuf_pkt_len(pkt) = pkt_len;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
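For illustration only (not libefx code): a standalone C sketch of why a
prefix field that lives above bit 127 of the 32-byte Rx prefix cannot be
read through a 16-byte view. The field position is hypothetical and a
little-endian host is assumed.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical prefix field at bits 160..191, i.e. above bit 127. */
#define FIELD_LBN   160
#define FIELD_WIDTH 32

static uint32_t
get_field(const uint64_t *words, unsigned int lbn, unsigned int width)
{
        return (uint32_t)((words[lbn / 64] >> (lbn % 64)) &
                          ((1ULL << width) - 1));
}

int
main(void)
{
        uint8_t prefix[32] = {0};       /* 32-byte (xword-sized) Rx prefix */
        uint64_t view256[4];            /* full view: all fields reachable */
        uint64_t view128[2];            /* truncated view: bits 128+ lost */
        uint32_t value = 0xcafef00d;

        memcpy(&prefix[FIELD_LBN / 8], &value, sizeof(value));

        memcpy(view256, prefix, sizeof(view256));
        printf("256-bit view reads %#x\n",
               get_field(view256, FIELD_LBN, FIELD_WIDTH));

        memcpy(view128, prefix, sizeof(view128));
        /* The field simply does not exist within the first 128 bits. */
        printf("128-bit view holds only %zu bytes of the prefix\n",
               sizeof(view128));
        return 0;
}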
* [dpdk-dev] [PATCH 24/38] net/sfc: handle ingress m-port in EF100 Rx prefix
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (22 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 23/38] net/sfc: use xword type for EF100 Rx prefix Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 25/38] net/sfc: implement representor Rx routine Andrew Rybchenko
` (14 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Set the ingress m-port dynamic field in the mbuf on the EF100 Rx datapath.
For a given PF, Rx queues of representor devices are served by the
single Rx queue operated by the PF representor proxy facility. This
field is the means to demultiplex traffic hitting that queue.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ef100_rx.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 15fce55361..bbf3bf4dc0 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -62,6 +62,7 @@ struct sfc_ef100_rxq {
#define SFC_EF100_RXQ_RSS_HASH 0x10
#define SFC_EF100_RXQ_USER_MARK 0x20
#define SFC_EF100_RXQ_FLAG_INTR_EN 0x40
+#define SFC_EF100_RXQ_INGRESS_MPORT 0x80
unsigned int ptr_mask;
unsigned int evq_phase_bit_shift;
unsigned int ready_pkts;
@@ -370,6 +371,8 @@ static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
SFC_EF100_RX_PREFIX_FIELD(LENGTH, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(RSS_HASH_VALID, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(CLASS, B_FALSE),
+ EFX_RX_PREFIX_FIELD(INGRESS_MPORT,
+ ESF_GZ_RX_PREFIX_INGRESS_MPORT, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(RSS_HASH, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(USER_MARK, B_FALSE),
@@ -419,6 +422,15 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
}
}
+ if (rxq->flags & SFC_EF100_RXQ_INGRESS_MPORT) {
+ ol_flags |= sfc_dp_mport_override;
+ *RTE_MBUF_DYNFIELD(m,
+ sfc_dp_mport_offset,
+ typeof(&((efx_mport_id_t *)0)->id)) =
+ EFX_XWORD_FIELD(rx_prefix[0],
+ ESF_GZ_RX_PREFIX_INGRESS_MPORT);
+ }
+
m->ol_flags = ol_flags;
return true;
}
@@ -806,6 +818,12 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
else
rxq->flags &= ~SFC_EF100_RXQ_USER_MARK;
+ if ((unsup_rx_prefix_fields &
+ (1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT)) == 0)
+ rxq->flags |= SFC_EF100_RXQ_INGRESS_MPORT;
+ else
+ rxq->flags &= ~SFC_EF100_RXQ_INGRESS_MPORT;
+
rxq->prefix_size = pinfo->erpl_length;
rxq->rearm_data = sfc_ef100_mk_mbuf_rearm_data(rxq->dp.dpq.port_id,
rxq->prefix_size);
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
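The dynamic mbuf field used above is generic DPDK infrastructure; the
driver registers its own field and flag elsewhere in the series. A
hedged sketch of how such a field and its validity flag are typically
registered and set (all names here are made up for illustration):

#include <rte_bitops.h>
#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int example_mport_offset = -1;
static uint64_t example_mport_flag;

/* Register a per-mbuf 32-bit field plus a flag that marks it valid. */
static int
example_mport_dynfield_init(void)
{
        static const struct rte_mbuf_dynfield field_desc = {
                .name = "example_dynfield_mport",   /* hypothetical name */
                .size = sizeof(uint32_t),
                .align = __alignof__(uint32_t),
        };
        static const struct rte_mbuf_dynflag flag_desc = {
                .name = "example_dynflag_mport",    /* hypothetical name */
        };
        int offset;
        int bitnum;

        offset = rte_mbuf_dynfield_register(&field_desc);
        if (offset < 0)
                return -rte_errno;

        bitnum = rte_mbuf_dynflag_register(&flag_desc);
        if (bitnum < 0)
                return -rte_errno;

        example_mport_offset = offset;
        example_mport_flag = RTE_BIT64(bitnum);
        return 0;
}

/* Rx path: stash the m-port parsed from the Rx prefix into the mbuf. */
static inline void
example_mport_set(struct rte_mbuf *m, uint32_t mport)
{
        *RTE_MBUF_DYNFIELD(m, example_mport_offset, uint32_t *) = mport;
        m->ol_flags |= example_mport_flag;
}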
* [dpdk-dev] [PATCH 25/38] net/sfc: implement representor Rx routine
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (23 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 24/38] net/sfc: handle ingress m-port in " Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 26/38] net/sfc: add simple port representor statistics Andrew Rybchenko
` (13 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement traffic forwarding from virtual functions to representor
Rx queues in the representor and the representor proxy.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 12 +++
drivers/net/sfc/sfc_repr_proxy.c | 134 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 11 +++
3 files changed, 157 insertions(+)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index e7386fb480..a436b7e5e1 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -168,6 +168,16 @@ sfc_repr_tx_queue_stop(void *queue)
rte_ring_reset(txq->ring);
}
+static uint16_t
+sfc_repr_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct sfc_repr_rxq *rxq = rx_queue;
+ void **objs = (void *)&rx_pkts[0];
+
+ /* mbufs port is already filled correctly by representors proxy */
+ return rte_ring_sc_dequeue_burst(rxq->ring, objs, nb_pkts, NULL);
+}
+
static uint16_t
sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
@@ -825,6 +835,7 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
(void)sfc_repr_proxy_del_port(srs->pf_port_id, srs->repr_id);
+ dev->rx_pkt_burst = NULL;
dev->tx_pkt_burst = NULL;
dev->dev_ops = NULL;
@@ -946,6 +957,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
goto fail_mac_addrs;
}
+ dev->rx_pkt_burst = sfc_repr_rx_burst;
dev->tx_pkt_burst = sfc_repr_tx_burst;
dev->dev_ops = &sfc_repr_dev_ops;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index d8934bab65..535b07ea52 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -18,6 +18,7 @@
#include "sfc_ev.h"
#include "sfc_rx.h"
#include "sfc_tx.h"
+#include "sfc_dp_rx.h"
/**
* Amount of time to wait for the representor proxy routine (which is
@@ -31,6 +32,8 @@
*/
#define SFC_REPR_PROXY_ROUTINE_TERMINATE_TIMEOUT_MS 10000
+#define SFC_REPR_INVALID_ROUTE_PORT_ID (UINT16_MAX)
+
static struct sfc_repr_proxy *
sfc_repr_proxy_by_adapter(struct sfc_adapter *sa)
{
@@ -187,6 +190,113 @@ sfc_repr_proxy_handle_tx(struct sfc_repr_proxy_dp_txq *rp_txq,
}
}
+static struct sfc_repr_proxy_port *
+sfc_repr_proxy_rx_route_mbuf(struct sfc_repr_proxy *rp, struct rte_mbuf *m)
+{
+ struct sfc_repr_proxy_port *port;
+ efx_mport_id_t mport_id;
+
+ mport_id.id = *RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset,
+ typeof(&((efx_mport_id_t *)0)->id));
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->egress_mport.id == mport_id.id) {
+ m->port = port->rte_port_id;
+ m->ol_flags &= ~sfc_dp_mport_override;
+ return port;
+ }
+ }
+
+ return NULL;
+}
+
+/*
+ * Returns true if a packet is encountered which should be forwarded to a
+ * port which is different from the one that is currently routed.
+ */
+static bool
+sfc_repr_proxy_rx_route(struct sfc_repr_proxy *rp,
+ struct sfc_repr_proxy_dp_rxq *rp_rxq)
+{
+ unsigned int i;
+
+ for (i = rp_rxq->routed;
+ i < rp_rxq->available && !rp_rxq->stop_route;
+ i++, rp_rxq->routed++) {
+ struct sfc_repr_proxy_port *port;
+ struct rte_mbuf *m = rp_rxq->pkts[i];
+
+ port = sfc_repr_proxy_rx_route_mbuf(rp, m);
+ /* Cannot find destination representor */
+ if (port == NULL) {
+ /* Effectively drop the packet */
+ rp_rxq->forwarded++;
+ continue;
+ }
+
+ /* Currently routed packets are mapped to a different port */
+ if (port->repr_id != rp_rxq->route_port_id &&
+ rp_rxq->route_port_id != SFC_REPR_INVALID_ROUTE_PORT_ID)
+ return true;
+
+ rp_rxq->route_port_id = port->repr_id;
+ }
+
+ return false;
+}
+
+static void
+sfc_repr_proxy_rx_forward(struct sfc_repr_proxy *rp,
+ struct sfc_repr_proxy_dp_rxq *rp_rxq)
+{
+ struct sfc_repr_proxy_port *port;
+
+ if (rp_rxq->route_port_id != SFC_REPR_INVALID_ROUTE_PORT_ID) {
+ port = sfc_repr_proxy_find_port(rp, rp_rxq->route_port_id);
+
+ if (port != NULL && port->started) {
+ rp_rxq->forwarded +=
+ rte_ring_sp_enqueue_burst(port->rxq[0].ring,
+ (void **)(&rp_rxq->pkts[rp_rxq->forwarded]),
+ rp_rxq->routed - rp_rxq->forwarded, NULL);
+ } else {
+ /* Drop all routed packets if the port is not started */
+ rp_rxq->forwarded = rp_rxq->routed;
+ }
+ }
+
+ if (rp_rxq->forwarded == rp_rxq->routed) {
+ rp_rxq->route_port_id = SFC_REPR_INVALID_ROUTE_PORT_ID;
+ rp_rxq->stop_route = false;
+ } else {
+ /* Stall packet routing if not all packets were forwarded */
+ rp_rxq->stop_route = true;
+ }
+
+ if (rp_rxq->available == rp_rxq->forwarded)
+ rp_rxq->available = rp_rxq->forwarded = rp_rxq->routed = 0;
+}
+
+static void
+sfc_repr_proxy_handle_rx(struct sfc_repr_proxy *rp,
+ struct sfc_repr_proxy_dp_rxq *rp_rxq)
+{
+ bool route_again;
+
+ if (rp_rxq->available < RTE_DIM(rp_rxq->pkts)) {
+ rp_rxq->available += rp_rxq->pkt_burst(rp_rxq->dp,
+ &rp_rxq->pkts[rp_rxq->available],
+ RTE_DIM(rp_rxq->pkts) - rp_rxq->available);
+ if (rp_rxq->available == rp_rxq->forwarded)
+ return;
+ }
+
+ do {
+ route_again = sfc_repr_proxy_rx_route(rp, rp_rxq);
+ sfc_repr_proxy_rx_forward(rp, rp_rxq);
+ } while (route_again && !rp_rxq->stop_route);
+}
+
static int32_t
sfc_repr_proxy_routine(void *arg)
{
@@ -204,6 +314,9 @@ sfc_repr_proxy_routine(void *arg)
sfc_repr_proxy_handle_tx(&rp->dp_txq[i], &port->txq[i]);
}
+ for (i = 0; i < rp->nb_rxq; i++)
+ sfc_repr_proxy_handle_rx(rp, &rp->dp_rxq[i]);
+
return 0;
}
@@ -412,6 +525,18 @@ sfc_repr_proxy_rxq_detach(struct sfc_adapter *sa)
sfc_log_init(sa, "done");
}
+static struct sfc_rxq_info *
+sfc_repr_proxy_rxq_info_get(struct sfc_adapter *sa, unsigned int repr_queue_id)
+{
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy_dp_rxq *dp_rxq;
+
+ SFC_ASSERT(repr_queue_id < sfc_repr_nb_rxq(sas));
+ dp_rxq = &sa->repr_proxy.dp_rxq[repr_queue_id];
+
+ return &sas->rxq_info[dp_rxq->sw_index];
+}
+
static int
sfc_repr_proxy_rxq_init(struct sfc_adapter *sa,
struct sfc_repr_proxy_dp_rxq *rxq)
@@ -539,6 +664,14 @@ sfc_repr_proxy_rxq_start(struct sfc_adapter *sa)
i);
goto fail_start;
}
+
+ rxq->dp = sfc_repr_proxy_rxq_info_get(sa, i)->dp;
+ rxq->pkt_burst = sa->eth_dev->rx_pkt_burst;
+ rxq->available = 0;
+ rxq->routed = 0;
+ rxq->forwarded = 0;
+ rxq->stop_route = false;
+ rxq->route_port_id = SFC_REPR_INVALID_ROUTE_PORT_ID;
}
sfc_log_init(sa, "done");
@@ -993,6 +1126,7 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
goto fail_txq_start;
rp->nb_txq = sfc_repr_nb_txq(sas);
+ rp->nb_rxq = sfc_repr_nb_rxq(sas);
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index d47e0a431a..b49b1a2a96 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -75,6 +75,16 @@ struct sfc_repr_proxy_dp_rxq {
struct rte_mempool *mp;
unsigned int ref_count;
+ eth_rx_burst_t pkt_burst;
+ struct sfc_dp_rxq *dp;
+
+ uint16_t route_port_id;
+ bool stop_route;
+ unsigned int available;
+ unsigned int forwarded;
+ unsigned int routed;
+ struct rte_mbuf *pkts[SFC_REPR_PROXY_TX_BURST];
+
sfc_sw_index_t sw_index;
};
@@ -118,6 +128,7 @@ struct sfc_repr_proxy {
struct sfc_repr_proxy_mbox mbox;
unsigned int nb_txq;
+ unsigned int nb_rxq;
};
struct sfc_adapter;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
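A simplified sketch of the demultiplexing idea behind the routine above:
take a burst from the shared proxy Rx queue, group consecutive packets
that resolve to the same representor, and enqueue each run into that
representor's ring in one call. Unlike the driver, which keeps
unforwarded packets and retries, this sketch just drops what it cannot
place; the data structures and names are illustrative.

#include <rte_mbuf.h>
#include <rte_ring.h>

struct example_demux_port {
        uint32_t mport;         /* ingress m-port identifying the port */
        struct rte_ring *ring;  /* ring towards the representor's RxQ */
};

static void
example_proxy_demux(struct rte_mbuf **pkts, uint16_t nb_pkts,
                    struct example_demux_port *ports, unsigned int nb_ports,
                    uint32_t (*pkt_mport)(const struct rte_mbuf *))
{
        uint16_t done = 0;

        while (done < nb_pkts) {
                uint32_t mport = pkt_mport(pkts[done]);
                struct rte_ring *dst = NULL;
                uint16_t run = 1;
                uint16_t enq = 0;
                unsigned int i;

                /* Extend the run while the destination stays the same. */
                while (done + run < nb_pkts &&
                       pkt_mport(pkts[done + run]) == mport)
                        run++;

                for (i = 0; i < nb_ports; i++) {
                        if (ports[i].mport == mport) {
                                dst = ports[i].ring;
                                break;
                        }
                }

                if (dst != NULL)
                        enq = rte_ring_sp_enqueue_burst(dst,
                                        (void **)&pkts[done], run, NULL);

                /* The sketch drops the remainder; the driver stalls instead. */
                for (i = enq; i < run; i++)
                        rte_pktmbuf_free(pkts[done + i]);

                done += run;
        }
}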
* [dpdk-dev] [PATCH 26/38] net/sfc: add simple port representor statistics
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (24 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 25/38] net/sfc: implement representor Rx routine Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 27/38] net/sfc: free MAE lock once switch domain is assigned Andrew Rybchenko
` (12 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Gather statistics of enqueued and dequeued packets in the Rx and Tx
burst callbacks and report them in the stats_get callback.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 60 ++++++++++++++++++++++++++++++++++++--
1 file changed, 58 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index a436b7e5e1..4fd81c3f6b 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -32,9 +32,14 @@ struct sfc_repr_shared {
uint16_t switch_port_id;
};
+struct sfc_repr_queue_stats {
+ union sfc_pkts_bytes packets_bytes;
+};
+
struct sfc_repr_rxq {
/* Datapath members */
struct rte_ring *ring;
+ struct sfc_repr_queue_stats stats;
/* Non-datapath members */
struct sfc_repr_shared *shared;
@@ -45,6 +50,7 @@ struct sfc_repr_txq {
/* Datapath members */
struct rte_ring *ring;
efx_mport_id_t egress_mport;
+ struct sfc_repr_queue_stats stats;
/* Non-datapath members */
struct sfc_repr_shared *shared;
@@ -173,15 +179,30 @@ sfc_repr_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
struct sfc_repr_rxq *rxq = rx_queue;
void **objs = (void *)&rx_pkts[0];
+ unsigned int n_rx;
/* mbufs port is already filled correctly by representors proxy */
- return rte_ring_sc_dequeue_burst(rxq->ring, objs, nb_pkts, NULL);
+ n_rx = rte_ring_sc_dequeue_burst(rxq->ring, objs, nb_pkts, NULL);
+
+ if (n_rx > 0) {
+ unsigned int n_bytes = 0;
+ unsigned int i = 0;
+
+ do {
+ n_bytes += rx_pkts[i]->pkt_len;
+ } while (++i < n_rx);
+
+ sfc_pkts_bytes_add(&rxq->stats.packets_bytes, n_rx, n_bytes);
+ }
+
+ return n_rx;
}
static uint16_t
sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct sfc_repr_txq *txq = tx_queue;
+ unsigned int n_bytes = 0;
unsigned int n_tx;
void **objs;
uint16_t i;
@@ -201,6 +222,7 @@ sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m->ol_flags |= sfc_dp_mport_override;
*RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset,
efx_mport_id_t *) = txq->egress_mport;
+ n_bytes += tx_pkts[i]->pkt_len;
}
objs = (void *)&tx_pkts[0];
@@ -210,14 +232,18 @@ sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* Remove m-port override flag from packets that were not enqueued
* Setting the flag only for enqueued packets after the burst is
* not possible since the ownership of enqueued packets is
- * transferred to representor proxy.
+ * transferred to representor proxy. The same logic applies to
+ * counting the enqueued packets' bytes.
*/
for (i = n_tx; i < nb_pkts; ++i) {
struct rte_mbuf *m = tx_pkts[i];
m->ol_flags &= ~sfc_dp_mport_override;
+ n_bytes -= m->pkt_len;
}
+ sfc_pkts_bytes_add(&txq->stats.packets_bytes, n_tx, n_bytes);
+
return n_tx;
}
@@ -849,6 +875,35 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
return 0;
}
+static int
+sfc_repr_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+ union sfc_pkts_bytes queue_stats;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ struct sfc_repr_rxq *rxq = dev->data->rx_queues[i];
+
+ sfc_pkts_bytes_get(&rxq->stats.packets_bytes,
+ &queue_stats);
+
+ stats->ipackets += queue_stats.pkts;
+ stats->ibytes += queue_stats.bytes;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ struct sfc_repr_txq *txq = dev->data->tx_queues[i];
+
+ sfc_pkts_bytes_get(&txq->stats.packets_bytes,
+ &queue_stats);
+
+ stats->opackets += queue_stats.pkts;
+ stats->obytes += queue_stats.bytes;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_configure = sfc_repr_dev_configure,
.dev_start = sfc_repr_dev_start,
@@ -856,6 +911,7 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
.link_update = sfc_repr_dev_link_update,
+ .stats_get = sfc_repr_stats_get,
.rx_queue_setup = sfc_repr_rx_queue_setup,
.rx_queue_release = sfc_repr_rx_queue_release,
.tx_queue_setup = sfc_repr_tx_queue_setup,
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
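A reduced sketch of the counting pattern above. The driver's
sfc_pkts_bytes helpers additionally allow a consistent read of the pair
of counters from the stats_get context; this sketch ignores that and
assumes a single writer per queue on a 64-bit platform.

#include <stdint.h>
#include <rte_mbuf.h>

struct example_queue_stats {
        uint64_t pkts;
        uint64_t bytes;
};

/* Called from the burst callback for the packets actually transferred. */
static inline void
example_stats_add(struct example_queue_stats *s,
                  struct rte_mbuf **pkts, uint16_t n)
{
        uint64_t bytes = 0;
        uint16_t i;

        for (i = 0; i < n; i++)
                bytes += pkts[i]->pkt_len;

        s->pkts += n;
        s->bytes += bytes;
}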
* [dpdk-dev] [PATCH 27/38] net/sfc: free MAE lock once switch domain is assigned
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (25 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 26/38] net/sfc: add simple port representor statistics Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 28/38] common/sfc_efx/base: add multi-host function M-port selector Andrew Rybchenko
` (11 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, stable, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
If the hardware switch ID initialization function fails for some reason,
the MAE lock is left held after the function returns. This patch fixes that.
Fixes: 1e7fbdf0ba19 ("net/sfc: support concept of switch domains/ports")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_switch.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index c37cdf4a61..80c884a599 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -214,9 +214,9 @@ sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
fail_mem_alloc:
sfc_hw_switch_id_fini(sa, hw_switch_id);
- rte_spinlock_unlock(&sfc_mae_switch.lock);
fail_hw_switch_id_init:
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
return rc;
}
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
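A minimal sketch of the unwind ordering that the one-line move above
restores: the unlock must sit at or below the earliest error label that
can be reached while the lock is held, so every error path releases it
exactly once. All names and stub steps below are hypothetical.

#include <rte_spinlock.h>

static rte_spinlock_t example_lock = RTE_SPINLOCK_INITIALIZER;

/* Stubs standing in for the real initialization steps. */
static int example_step_a(void) { return -1; }  /* may fail */
static int example_step_b(void) { return 0; }
static void example_undo_a(void) { }

static int
example_assign(void)
{
        int rc;

        rte_spinlock_lock(&example_lock);

        rc = example_step_a();
        if (rc != 0)
                goto fail_step_a;

        rc = example_step_b();
        if (rc != 0)
                goto fail_step_b;

        rte_spinlock_unlock(&example_lock);
        return 0;

fail_step_b:
        example_undo_a();
fail_step_a:
        rte_spinlock_unlock(&example_lock);     /* both error paths unlock */
        return rc;
}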
* [dpdk-dev] [PATCH 28/38] common/sfc_efx/base: add multi-host function M-port selector
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (26 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 27/38] net/sfc: free MAE lock once switch domain is assigned Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs Andrew Rybchenko
` (10 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Provide a helper function to compose a multi-host aware PCIe
function M-port selector.
The firmware uses different sets of values to represent a PCIe
interface in m-port selectors and elsewhere. To avoid having users
perform the conversion themselves, it is now done automatically when
a selector is constructed.
In addition, a type has been added to libefx to enumerate the possible
PCIe interfaces and abstract the different representations away from users.
This allows matching traffic coming from an arbitrary PCIe endpoint
of the NIC and redirecting traffic to it.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 22 +++++++
drivers/common/sfc_efx/base/efx_mae.c | 86 +++++++++++++++++++++++----
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 96 insertions(+), 13 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 0a178128ba..159e7957a3 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -82,6 +82,13 @@ efx_family(
#if EFSYS_OPT_PCI
+/* PCIe interface numbers for multi-host configurations. */
+typedef enum efx_pcie_interface_e {
+ EFX_PCIE_INTERFACE_CALLER = 1000,
+ EFX_PCIE_INTERFACE_HOST_PRIMARY,
+ EFX_PCIE_INTERFACE_NIC_EMBEDDED,
+} efx_pcie_interface_t;
+
typedef struct efx_pci_ops_s {
/*
* Function for reading PCIe configuration space.
@@ -4237,6 +4244,21 @@ efx_mae_mport_by_pcie_function(
__in uint32_t vf,
__out efx_mport_sel_t *mportp);
+/*
+ * Get MPORT selector of a multi-host PCIe function.
+ *
+ * The resulting MPORT selector is opaque to the caller and can be
+ * passed as an argument to efx_mae_match_spec_mport_set()
+ * and efx_mae_action_set_populate_deliver().
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_by_pcie_mh_function(
+ __in efx_pcie_interface_t intf,
+ __in uint32_t pf,
+ __in uint32_t vf,
+ __out efx_mport_sel_t *mportp);
+
/*
* Get MPORT selector by an MPORT ID
*
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 3f498fe189..37cc48eafc 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -727,35 +727,95 @@ efx_mae_mport_by_pcie_function(
efx_dword_t dword;
efx_rc_t rc;
+ rc = efx_mae_mport_by_pcie_mh_function(EFX_PCIE_INTERFACE_CALLER,
+ pf, vf, mportp);
+ if (rc != 0)
+ goto fail1;
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+static __checkReturn efx_rc_t
+efx_mae_intf_to_selector(
+ __in efx_pcie_interface_t intf,
+ __out uint32_t *selector_intfp)
+{
+ efx_rc_t rc;
+
+ switch (intf) {
+ case EFX_PCIE_INTERFACE_HOST_PRIMARY:
+ EFX_STATIC_ASSERT(MAE_MPORT_SELECTOR_HOST_PRIMARY <=
+ EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_INTF_ID));
+ *selector_intfp = MAE_MPORT_SELECTOR_HOST_PRIMARY;
+ break;
+ case EFX_PCIE_INTERFACE_NIC_EMBEDDED:
+ EFX_STATIC_ASSERT(MAE_MPORT_SELECTOR_NIC_EMBEDDED <=
+ EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_INTF_ID));
+ *selector_intfp = MAE_MPORT_SELECTOR_NIC_EMBEDDED;
+ break;
+ case EFX_PCIE_INTERFACE_CALLER:
+ EFX_STATIC_ASSERT(MAE_MPORT_SELECTOR_CALLER_INTF <=
+ EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_INTF_ID));
+ *selector_intfp = MAE_MPORT_SELECTOR_CALLER_INTF;
+ break;
+ default:
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_mport_by_pcie_mh_function(
+ __in efx_pcie_interface_t intf,
+ __in uint32_t pf,
+ __in uint32_t vf,
+ __out efx_mport_sel_t *mportp)
+{
+ uint32_t selector_intf;
+ efx_dword_t dword;
+ efx_rc_t rc;
+
EFX_STATIC_ASSERT(EFX_PCI_VF_INVALID ==
MAE_MPORT_SELECTOR_FUNC_VF_ID_NULL);
- if (pf > EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_PF_ID)) {
- rc = EINVAL;
+ rc = efx_mae_intf_to_selector(intf, &selector_intf);
+ if (rc != 0)
goto fail1;
+
+ if (pf > EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_MH_PF_ID)) {
+ rc = EINVAL;
+ goto fail2;
}
if (vf > EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_VF_ID)) {
rc = EINVAL;
- goto fail2;
+ goto fail3;
}
- EFX_POPULATE_DWORD_3(dword,
- MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_FUNC,
- MAE_MPORT_SELECTOR_FUNC_PF_ID, pf,
+
+ EFX_POPULATE_DWORD_4(dword,
+ MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_MH_FUNC,
+ MAE_MPORT_SELECTOR_FUNC_INTF_ID, selector_intf,
+ MAE_MPORT_SELECTOR_FUNC_MH_PF_ID, pf,
MAE_MPORT_SELECTOR_FUNC_VF_ID, vf);
memset(mportp, 0, sizeof (*mportp));
- /*
- * The constructed DWORD is little-endian,
- * but the resulting value is meant to be
- * passed to MCDIs, where it will undergo
- * host-order to little endian conversion.
- */
- mportp->sel = EFX_DWORD_FIELD(dword, EFX_DWORD_0);
+ mportp->sel = dword.ed_u32[0];
return (0);
+fail3:
+ EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 3488367f68..225909892b 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -125,6 +125,7 @@ INTERNAL {
efx_mae_match_specs_class_cmp;
efx_mae_match_specs_equal;
efx_mae_mport_by_pcie_function;
+ efx_mae_mport_by_pcie_mh_function;
efx_mae_mport_by_phy_port;
efx_mae_mport_by_id;
efx_mae_mport_free;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
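A usage sketch of the new helper: compose an m-port selector for a VF
behind the embedded SoC and use it as a match criterion. It assumes the
caller has initialised the match spec elsewhere (for example via
efx_mae_match_spec_init()); here a NULL mask is used to request a full
match on the m-port.

#include "efx.h"

static efx_rc_t
example_match_embedded_vf(efx_mae_match_spec_t *spec,
                          uint32_t pf, uint32_t vf)
{
        efx_mport_sel_t mport_sel;
        efx_rc_t rc;

        rc = efx_mae_mport_by_pcie_mh_function(EFX_PCIE_INTERFACE_NIC_EMBEDDED,
                                               pf, vf, &mport_sel);
        if (rc != 0)
                return (rc);

        /* NULL mask requests a full (exact) match on the m-port. */
        return (efx_mae_match_spec_mport_set(spec, &mport_sel, NULL));
}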
* [dpdk-dev] [PATCH 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (27 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 28/38] common/sfc_efx/base: add multi-host function M-port selector Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 30/38] common/sfc_efx/base: add a means to read MAE mport journal Andrew Rybchenko
` (9 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
This information is required to fully identify the function.
Add it to the NIC configuration structure for easy access.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_impl.h | 3 +-
drivers/common/sfc_efx/base/ef10_nic.c | 4 +-
drivers/common/sfc_efx/base/efx.h | 1 +
drivers/common/sfc_efx/base/efx_impl.h | 6 +++
drivers/common/sfc_efx/base/efx_mcdi.c | 55 +++++++++++++++++++++++--
5 files changed, 64 insertions(+), 5 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 7c8d51b7a5..d48f238479 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -1372,7 +1372,8 @@ extern __checkReturn efx_rc_t
efx_mcdi_get_function_info(
__in efx_nic_t *enp,
__out uint32_t *pfp,
- __out_opt uint32_t *vfp);
+ __out_opt uint32_t *vfp,
+ __out_opt efx_pcie_interface_t *intfp);
LIBEFX_INTERNAL
extern __checkReturn efx_rc_t
diff --git a/drivers/common/sfc_efx/base/ef10_nic.c b/drivers/common/sfc_efx/base/ef10_nic.c
index eda0ad3068..3cd9ff89d0 100644
--- a/drivers/common/sfc_efx/base/ef10_nic.c
+++ b/drivers/common/sfc_efx/base/ef10_nic.c
@@ -1847,6 +1847,7 @@ efx_mcdi_nic_board_cfg(
efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
ef10_link_state_t els;
efx_port_t *epp = &(enp->en_port);
+ efx_pcie_interface_t intf;
uint32_t board_type = 0;
uint32_t base, nvec;
uint32_t port;
@@ -1875,11 +1876,12 @@ efx_mcdi_nic_board_cfg(
* - PCIe PF: pf = PF number, vf = 0xffff.
* - PCIe VF: pf = parent PF, vf = VF number.
*/
- if ((rc = efx_mcdi_get_function_info(enp, &pf, &vf)) != 0)
+ if ((rc = efx_mcdi_get_function_info(enp, &pf, &vf, &intf)) != 0)
goto fail3;
encp->enc_pf = pf;
encp->enc_vf = vf;
+ encp->enc_intf = intf;
if ((rc = ef10_mcdi_get_pf_count(enp, &encp->enc_hw_pf_count)) != 0)
goto fail4;
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 159e7957a3..996126217e 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1511,6 +1511,7 @@ typedef struct efx_nic_cfg_s {
uint32_t enc_bist_mask;
#endif /* EFSYS_OPT_BIST */
#if EFSYS_OPT_RIVERHEAD || EFX_OPTS_EF10()
+ efx_pcie_interface_t enc_intf;
uint32_t enc_pf;
uint32_t enc_vf;
uint32_t enc_privilege_mask;
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 992edbabe3..e0efbb8cdd 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1529,6 +1529,12 @@ efx_mcdi_get_workarounds(
#if EFSYS_OPT_RIVERHEAD || EFX_OPTS_EF10()
+LIBEFX_INTERNAL
+extern __checkReturn efx_rc_t
+efx_mcdi_intf_from_pcie(
+ __in uint32_t pcie_intf,
+ __out efx_pcie_interface_t *efx_intf);
+
LIBEFX_INTERNAL
extern __checkReturn efx_rc_t
efx_mcdi_init_evq(
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index b68fc0503d..69bf7ce70f 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -2130,6 +2130,36 @@ efx_mcdi_mac_stats_periodic(
#if EFSYS_OPT_RIVERHEAD || EFX_OPTS_EF10()
+ __checkReturn efx_rc_t
+efx_mcdi_intf_from_pcie(
+ __in uint32_t pcie_intf,
+ __out efx_pcie_interface_t *efx_intf)
+{
+ efx_rc_t rc;
+
+ switch (pcie_intf) {
+ case PCIE_INTERFACE_CALLER:
+ *efx_intf = EFX_PCIE_INTERFACE_CALLER;
+ break;
+ case PCIE_INTERFACE_HOST_PRIMARY:
+ *efx_intf = EFX_PCIE_INTERFACE_HOST_PRIMARY;
+ break;
+ case PCIE_INTERFACE_NIC_EMBEDDED:
+ *efx_intf = EFX_PCIE_INTERFACE_NIC_EMBEDDED;
+ break;
+ default:
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+ return (rc);
+}
+
/*
* This function returns the pf and vf number of a function. If it is a pf the
* vf number is 0xffff. The vf number is the index of the vf on that
@@ -2140,18 +2170,21 @@ efx_mcdi_mac_stats_periodic(
efx_mcdi_get_function_info(
__in efx_nic_t *enp,
__out uint32_t *pfp,
- __out_opt uint32_t *vfp)
+ __out_opt uint32_t *vfp,
+ __out_opt efx_pcie_interface_t *intfp)
{
+ efx_pcie_interface_t intf;
efx_mcdi_req_t req;
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_GET_FUNCTION_INFO_IN_LEN,
- MC_CMD_GET_FUNCTION_INFO_OUT_LEN);
+ MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN);
+ uint32_t pcie_intf;
efx_rc_t rc;
req.emr_cmd = MC_CMD_GET_FUNCTION_INFO;
req.emr_in_buf = payload;
req.emr_in_length = MC_CMD_GET_FUNCTION_INFO_IN_LEN;
req.emr_out_buf = payload;
- req.emr_out_length = MC_CMD_GET_FUNCTION_INFO_OUT_LEN;
+ req.emr_out_length = MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN;
efx_mcdi_execute(enp, &req);
@@ -2169,8 +2202,24 @@ efx_mcdi_get_function_info(
if (vfp != NULL)
*vfp = MCDI_OUT_DWORD(req, GET_FUNCTION_INFO_OUT_VF);
+ if (req.emr_out_length < MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN) {
+ intf = EFX_PCIE_INTERFACE_HOST_PRIMARY;
+ } else {
+ pcie_intf = MCDI_OUT_DWORD(req,
+ GET_FUNCTION_INFO_OUT_V2_INTF);
+
+ rc = efx_mcdi_intf_from_pcie(pcie_intf, &intf);
+ if (rc != 0)
+ goto fail3;
+ }
+
+ if (intfp != NULL)
+ *intfp = intf;
+
return (0);
+fail3:
+ EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
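The interesting part of the change above is the length-based fallback:
if the firmware returns the shorter pre-V2 response, the interface is
assumed to be the primary host. A generic sketch of that pattern, with
the response layout and lengths invented purely for illustration:

#include <stdint.h>
#include <string.h>

#define EXAMPLE_OUT_V1_LEN 8    /* pf + vf */
#define EXAMPLE_OUT_V2_LEN 12   /* pf + vf + interface */

struct example_func_info {
        uint32_t pf;
        uint32_t vf;
        uint32_t intf;
};

static void
example_parse_func_info(const uint8_t *resp, size_t resp_len,
                        struct example_func_info *info)
{
        memcpy(&info->pf, &resp[0], sizeof(info->pf));
        memcpy(&info->vf, &resp[4], sizeof(info->vf));

        if (resp_len < EXAMPLE_OUT_V2_LEN) {
                /* Old firmware: no interface field, assume primary host. */
                info->intf = 0;
        } else {
                memcpy(&info->intf, &resp[8], sizeof(info->intf));
        }
}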
* [dpdk-dev] [PATCH 30/38] common/sfc_efx/base: add a means to read MAE mport journal
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (28 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles Andrew Rybchenko
` (8 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
This is required to provide the driver with the current state of mports.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 56 +++++++
drivers/common/sfc_efx/base/efx_mae.c | 224 +++++++++++++++++++++++++
drivers/common/sfc_efx/base/efx_mcdi.h | 54 ++++++
drivers/common/sfc_efx/version.map | 1 +
4 files changed, 335 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 996126217e..e77b297950 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4205,6 +4205,42 @@ typedef struct efx_mport_id_s {
uint32_t id;
} efx_mport_id_t;
+typedef enum efx_mport_type_e {
+ EFX_MPORT_TYPE_NET_PORT = 0,
+ EFX_MPORT_TYPE_ALIAS,
+ EFX_MPORT_TYPE_VNIC,
+} efx_mport_type_t;
+
+typedef enum efx_mport_vnic_client_type_e {
+ EFX_MPORT_VNIC_CLIENT_FUNCTION = 1,
+ EFX_MPORT_VNIC_CLIENT_PLUGIN,
+} efx_mport_vnic_client_type_t;
+
+typedef struct efx_mport_desc_s {
+ efx_mport_id_t emd_id;
+ boolean_t emd_can_receive_on;
+ boolean_t emd_can_deliver_to;
+ boolean_t emd_can_delete;
+ boolean_t emd_zombie;
+ efx_mport_type_t emd_type;
+ union {
+ struct {
+ uint32_t ep_index;
+ } emd_net_port;
+ struct {
+ efx_mport_id_t ea_target_mport_id;
+ } emd_alias;
+ struct {
+ efx_mport_vnic_client_type_t ev_client_type;
+ efx_pcie_interface_t ev_intf;
+ uint16_t ev_pf;
+ uint16_t ev_vf;
+ /* MCDI client handle for this VNIC. */
+ uint32_t ev_handle;
+ } emd_vnic;
+ };
+} efx_mport_desc_t;
+
#define EFX_MPORT_NULL (0U)
/*
@@ -4635,6 +4671,26 @@ efx_mae_mport_free(
__in efx_nic_t *enp,
__in const efx_mport_id_t *mportp);
+typedef __checkReturn efx_rc_t
+(efx_mae_read_mport_journal_cb)(
+ __in void *cb_datap,
+ __in efx_mport_desc_t *mportp,
+ __in size_t mport_len);
+
+/*
+ * Read mport descriptions from the MAE journal (which describes added and
+ * removed mports) and pass them to a user-supplied callback. The user gets
+ * only one chance to process the data it's given. Once the callback function
+ * finishes, that particular mport description will be gone.
+ * The journal will be fully repopulated on PCI reset (efx_nic_reset function).
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_read_mport_journal(
+ __in efx_nic_t *enp,
+ __in efx_mae_read_mport_journal_cb *cbp,
+ __in void *cb_datap);
+
#endif /* EFSYS_OPT_MAE */
#if EFSYS_OPT_VIRTIO
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 37cc48eafc..110addd92d 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -3292,4 +3292,228 @@ efx_mae_mport_free(
return (rc);
}
+static __checkReturn efx_rc_t
+efx_mae_read_mport_journal_single(
+ __in uint8_t *entry_buf,
+ __out efx_mport_desc_t *desc)
+{
+ uint32_t pcie_intf;
+ efx_rc_t rc;
+
+ memset(desc, 0, sizeof (*desc));
+
+ desc->emd_id.id = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_MPORT_ID);
+
+ desc->emd_can_receive_on = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_CAN_RECEIVE_ON);
+
+ desc->emd_can_deliver_to = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_CAN_DELIVER_TO);
+
+ desc->emd_can_delete = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_CAN_DELETE);
+
+ desc->emd_zombie = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_IS_ZOMBIE);
+
+ desc->emd_type = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_MPORT_TYPE);
+
+ /*
+ * We can't check everything here. If some additional checks are
+ * required, they should be performed by the callback function.
+ */
+ switch (desc->emd_type) {
+ case EFX_MPORT_TYPE_NET_PORT:
+ desc->emd_net_port.ep_index =
+ MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_NET_PORT_IDX);
+ break;
+ case EFX_MPORT_TYPE_ALIAS:
+ desc->emd_alias.ea_target_mport_id.id =
+ MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID);
+ break;
+ case EFX_MPORT_TYPE_VNIC:
+ desc->emd_vnic.ev_client_type =
+ MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE);
+ if (desc->emd_vnic.ev_client_type !=
+ EFX_MPORT_VNIC_CLIENT_FUNCTION)
+ break;
+
+ pcie_intf = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE);
+ rc = efx_mcdi_intf_from_pcie(pcie_intf,
+ &desc->emd_vnic.ev_intf);
+ if (rc != 0)
+ goto fail1;
+
+ desc->emd_vnic.ev_pf = MCDI_STRUCT_WORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX);
+ desc->emd_vnic.ev_vf = MCDI_STRUCT_WORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX);
+ desc->emd_vnic.ev_handle = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE);
+ break;
+ default:
+ rc = EINVAL;
+ goto fail2;
+ }
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+static __checkReturn efx_rc_t
+efx_mae_read_mport_journal_batch(
+ __in efx_nic_t *enp,
+ __in efx_mae_read_mport_journal_cb *cbp,
+ __in void *cb_datap,
+ __out uint32_t *morep)
+{
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_READ_JOURNAL_IN_LEN,
+ MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMAX_MCDI2);
+ uint32_t n_entries;
+ uint32_t entry_sz;
+ uint8_t *entry_buf;
+ unsigned int i;
+ efx_rc_t rc;
+
+ EFX_STATIC_ASSERT(EFX_MPORT_TYPE_NET_PORT ==
+ MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT);
+ EFX_STATIC_ASSERT(EFX_MPORT_TYPE_ALIAS ==
+ MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS);
+ EFX_STATIC_ASSERT(EFX_MPORT_TYPE_VNIC ==
+ MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC);
+
+ EFX_STATIC_ASSERT(EFX_MPORT_VNIC_CLIENT_FUNCTION ==
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION);
+ EFX_STATIC_ASSERT(EFX_MPORT_VNIC_CLIENT_PLUGIN ==
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN);
+
+ if (cbp == NULL) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_READ_JOURNAL;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_READ_JOURNAL_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMAX_MCDI2;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_READ_JOURNAL_IN_FLAGS, 0);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ if (req.emr_out_length_used <
+ MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMIN) {
+ rc = EMSGSIZE;
+ goto fail3;
+ }
+
+ if (morep != NULL) {
+ *morep = MCDI_OUT_DWORD_FIELD(req,
+ MAE_MPORT_READ_JOURNAL_OUT_FLAGS,
+ MAE_MPORT_READ_JOURNAL_OUT_MORE);
+ }
+ n_entries = MCDI_OUT_DWORD(req,
+ MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_COUNT);
+ entry_sz = MCDI_OUT_DWORD(req,
+ MAE_MPORT_READ_JOURNAL_OUT_SIZEOF_MPORT_DESC);
+ entry_buf = MCDI_OUT2(req, uint8_t,
+ MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_DATA);
+
+ if (entry_sz < MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST +
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN) {
+ rc = EINVAL;
+ goto fail4;
+ }
+ if (n_entries * entry_sz / entry_sz != n_entries) {
+ rc = EINVAL;
+ goto fail5;
+ }
+ if (req.emr_out_length_used !=
+ MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMIN + n_entries * entry_sz) {
+ rc = EINVAL;
+ goto fail6;
+ }
+
+ for (i = 0; i < n_entries; i++) {
+ efx_mport_desc_t desc;
+
+ rc = efx_mae_read_mport_journal_single(entry_buf, &desc);
+ if (rc != 0)
+ continue;
+
+ (*cbp)(cb_datap, &desc, sizeof (desc));
+ entry_buf += entry_sz;
+ }
+
+ return (0);
+
+fail6:
+ EFSYS_PROBE(fail6);
+fail5:
+ EFSYS_PROBE(fail5);
+fail4:
+ EFSYS_PROBE(fail4);
+fail3:
+ EFSYS_PROBE(fail3);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_read_mport_journal(
+ __in efx_nic_t *enp,
+ __in efx_mae_read_mport_journal_cb *cbp,
+ __in void *cb_datap)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ uint32_t more = 0;
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ do {
+ rc = efx_mae_read_mport_journal_batch(enp, cbp, cb_datap,
+ &more);
+ if (rc != 0)
+ goto fail2;
+ } while (more != 0);
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
#endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index 90b70de97b..96f237b1b0 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -462,6 +462,60 @@ efx_mcdi_phy_module_get_info(
EFX_DWORD_FIELD(*(MCDI_OUT2(_emr, efx_dword_t, _ofst) + \
(_idx)), _field)
+#define MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, _type, _arr_ofst, _idx, \
+ _member_ofst) \
+ ((_type *)(MCDI_OUT2(_emr, uint8_t, _arr_ofst) + \
+ _idx * MC_CMD_ ## _arr_ofst ## _LEN + \
+ _member_ofst ## _OFST))
+
+#define MCDI_OUT_INDEXED_MEMBER_DWORD(_emr, _arr_ofst, _idx, \
+ _member_ofst) \
+ EFX_DWORD_FIELD( \
+ *(MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, efx_dword_t, \
+ _arr_ofst, _idx, \
+ _member_ofst)), \
+ EFX_DWORD_0)
+
+#define MCDI_OUT_INDEXED_MEMBER_QWORD(_emr, _arr_ofst, _idx, \
+ _member_ofst) \
+ ((uint64_t)EFX_QWORD_FIELD( \
+ *(MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, efx_qword_t, \
+ _arr_ofst, _idx, \
+ _member_ofst)), \
+ EFX_DWORD_0) | \
+ (uint64_t)EFX_QWORD_FIELD( \
+ *(MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, efx_qword_t, \
+ _arr_ofst, _idx, \
+ _member_ofst)), \
+ EFX_DWORD_1) << 32)
+
+#define MCDI_STRUCT_MEMBER(_buf, _type, _ofst) \
+ ((_type *)((char *)_buf + _ofst ## _OFST)) \
+
+#define MCDI_STRUCT_BYTE(_buf, _ofst) \
+ EFX_BYTE_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_byte_t, _ofst), \
+ EFX_BYTE_0)
+
+#define MCDI_STRUCT_BYTE_FIELD(_buf, _ofst, _field) \
+ EFX_BYTE_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_byte_t, _ofst), \
+ _field)
+
+#define MCDI_STRUCT_WORD(_buf, _ofst) \
+ EFX_WORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_word_t, _ofst), \
+ EFX_WORD_0)
+
+#define MCDI_STRUCT_WORD_FIELD(_buf, _ofst, _field) \
+ EFX_WORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_word_t, _ofst), \
+ _field)
+
+#define MCDI_STRUCT_DWORD(_buf, _ofst) \
+ EFX_DWORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_dword_t, _ofst), \
+ EFX_DWORD_0)
+
+#define MCDI_STRUCT_DWORD_FIELD(_buf, _ofst, _field) \
+ EFX_DWORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_dword_t, _ofst), \
+ _field)
+
#define MCDI_EV_FIELD(_eqp, _field) \
EFX_QWORD_FIELD(*_eqp, MCDI_EVENT_ ## _field)
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 225909892b..10216bb37d 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -133,6 +133,7 @@ INTERNAL {
efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
efx_mae_outer_rule_remove;
+ efx_mae_read_mport_journal;
efx_mcdi_fini;
efx_mcdi_get_proxy_handle;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
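A usage sketch of the journal read API added above: the callback sees
each descriptor exactly once, so anything of interest must be copied out
during the call. Here only active VNIC m-ports are counted; the function
names are illustrative.

#include <errno.h>

#include "efx.h"

static efx_rc_t
example_count_vnics_cb(void *cb_datap, efx_mport_desc_t *mportp,
                       size_t mport_len)
{
        unsigned int *countp = cb_datap;

        if (mport_len != sizeof (*mportp))
                return (EINVAL);

        if (mportp->emd_type == EFX_MPORT_TYPE_VNIC &&
            mportp->emd_zombie == B_FALSE)
                (*countp)++;

        return (0);
}

static efx_rc_t
example_count_vnics(efx_nic_t *enp, unsigned int *countp)
{
        *countp = 0;
        return (efx_mae_read_mport_journal(enp, example_count_vnics_cb,
                                           countp));
}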
* [dpdk-dev] [PATCH 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (29 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 30/38] common/sfc_efx/base: add a means to read MAE mport journal Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 32/38] net/sfc: maintain controller to EFX interface mapping Andrew Rybchenko
` (7 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Equality checks between VNICs should be done by comparing their client
handles. This means that clients should be able to retrieve client handles
for arbitrary functions and themselves.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 15 ++++++
drivers/common/sfc_efx/base/efx_mcdi.c | 73 ++++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 2 +
3 files changed, 90 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index e77b297950..b61984a8e3 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -391,6 +391,21 @@ extern __checkReturn boolean_t
efx_mcdi_request_abort(
__in efx_nic_t *enp);
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mcdi_get_client_handle(
+ __in efx_nic_t *enp,
+ __in efx_pcie_interface_t intf,
+ __in uint16_t pf,
+ __in uint16_t vf,
+ __out uint32_t *handle);
+
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mcdi_get_own_client_handle(
+ __in efx_nic_t *enp,
+ __out uint32_t *handle);
+
LIBEFX_API
extern void
efx_mcdi_fini(
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index 69bf7ce70f..cdf7181e0d 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -647,6 +647,79 @@ efx_mcdi_request_abort(
return (aborted);
}
+ __checkReturn efx_rc_t
+efx_mcdi_get_client_handle(
+ __in efx_nic_t *enp,
+ __in efx_pcie_interface_t intf,
+ __in uint16_t pf,
+ __in uint16_t vf,
+ __out uint32_t *handle)
+{
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_GET_CLIENT_HANDLE_IN_LEN,
+ MC_CMD_GET_CLIENT_HANDLE_OUT_LEN);
+ efx_rc_t rc;
+
+ if (handle == NULL) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_GET_CLIENT_HANDLE;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_GET_CLIENT_HANDLE_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_GET_CLIENT_HANDLE_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, GET_CLIENT_HANDLE_IN_TYPE,
+ MC_CMD_GET_CLIENT_HANDLE_IN_TYPE_FUNC);
+ MCDI_IN_SET_WORD(req, GET_CLIENT_HANDLE_IN_FUNC_PF, pf);
+ MCDI_IN_SET_WORD(req, GET_CLIENT_HANDLE_IN_FUNC_VF, vf);
+ MCDI_IN_SET_DWORD(req, GET_CLIENT_HANDLE_IN_FUNC_INTF, intf);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ if (req.emr_out_length_used < MC_CMD_GET_CLIENT_HANDLE_OUT_LEN) {
+ rc = EMSGSIZE;
+ goto fail3;
+ }
+
+ *handle = MCDI_OUT_DWORD(req, GET_CLIENT_HANDLE_OUT_HANDLE);
+
+ return 0;
+fail3:
+ EFSYS_PROBE(fail3);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mcdi_get_own_client_handle(
+ __in efx_nic_t *enp,
+ __out uint32_t *handle)
+{
+ efx_rc_t rc;
+
+ rc = efx_mcdi_get_client_handle(enp, PCIE_INTERFACE_CALLER,
+ PCIE_FUNCTION_PF_NULL, PCIE_FUNCTION_VF_NULL, handle);
+ if (rc != 0)
+ goto fail1;
+
+ return (0);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
void
efx_mcdi_get_timeout(
__in efx_nic_t *enp,
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 10216bb37d..346deb4b12 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -136,6 +136,8 @@ INTERNAL {
efx_mae_read_mport_journal;
efx_mcdi_fini;
+ efx_mcdi_get_client_handle;
+ efx_mcdi_get_own_client_handle;
efx_mcdi_get_proxy_handle;
efx_mcdi_get_timeout;
efx_mcdi_init;
--
2.30.2
^ permalink raw reply [flat|nested] 79+ messages in thread
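A usage sketch of the two new calls: compare some function's MCDI client
handle against the caller's own handle, which is the equality check the
commit message above refers to. The wrapper name is illustrative.

#include "efx.h"

static efx_rc_t
example_is_same_function(efx_nic_t *enp, efx_pcie_interface_t intf,
                         uint16_t pf, uint16_t vf, boolean_t *samep)
{
        uint32_t own_handle;
        uint32_t handle;
        efx_rc_t rc;

        rc = efx_mcdi_get_own_client_handle(enp, &own_handle);
        if (rc != 0)
                return (rc);

        rc = efx_mcdi_get_client_handle(enp, intf, pf, vf, &handle);
        if (rc != 0)
                return (rc);

        *samep = (handle == own_handle) ? B_TRUE : B_FALSE;
        return (0);
}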
* [dpdk-dev] [PATCH 32/38] net/sfc: maintain controller to EFX interface mapping
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (30 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 33/38] net/sfc: store PCI address for represented entities Andrew Rybchenko
` (6 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Newer hardware may have arbitrarily complex controller configurations,
so the mapping is made dynamic: it is a dynamically allocated array
indexed by controller number, with each element holding an EFX
interface number. Since the number of controllers is expected to be
small, this approach should not hurt performance.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_ethdev.c | 184 +++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_switch.c | 57 +++++++++++
drivers/net/sfc/sfc_switch.h | 8 ++
3 files changed, 249 insertions(+)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8f9afb2c67..8536a2b111 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -30,6 +30,7 @@
#include "sfc_dp_rx.h"
#include "sfc_repr.h"
#include "sfc_sw_stats.h"
+#include "sfc_switch.h"
#define SFC_XSTAT_ID_INVALID_VAL UINT64_MAX
#define SFC_XSTAT_ID_INVALID_NAME '\0'
@@ -1862,6 +1863,177 @@ sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
return sap->dp_rx->intr_disable(rxq_info->dp);
}
+struct sfc_mport_journal_ctx {
+ struct sfc_adapter *sa;
+ uint16_t switch_domain_id;
+ uint32_t mcdi_handle;
+ bool controllers_assigned;
+ efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+};
+
+static int
+sfc_journal_ctx_add_controller(struct sfc_mport_journal_ctx *ctx,
+ efx_pcie_interface_t intf)
+{
+ efx_pcie_interface_t *new_controllers;
+ size_t i, target;
+ size_t new_size;
+
+ if (ctx->controllers == NULL) {
+ ctx->controllers = rte_malloc("sfc_controller_mapping",
+ sizeof(ctx->controllers[0]), 0);
+ if (ctx->controllers == NULL)
+ return ENOMEM;
+
+ ctx->controllers[0] = intf;
+ ctx->nb_controllers = 1;
+
+ return 0;
+ }
+
+ for (i = 0; i < ctx->nb_controllers; i++) {
+ if (ctx->controllers[i] == intf)
+ return 0;
+ if (ctx->controllers[i] > intf)
+ break;
+ }
+ target = i;
+
+ ctx->nb_controllers += 1;
+ new_size = ctx->nb_controllers * sizeof(ctx->controllers[0]);
+
+ new_controllers = rte_realloc(ctx->controllers, new_size, 0);
+ if (new_controllers == NULL) {
+ rte_free(ctx->controllers);
+ return ENOMEM;
+ }
+ ctx->controllers = new_controllers;
+
+ for (i = target + 1; i < ctx->nb_controllers; i++)
+ ctx->controllers[i] = ctx->controllers[i - 1];
+
+ ctx->controllers[target] = intf;
+
+ return 0;
+}
+
+static efx_rc_t
+sfc_process_mport_journal_entry(struct sfc_mport_journal_ctx *ctx,
+ efx_mport_desc_t *mport)
+{
+ efx_mport_sel_t ethdev_mport;
+ int rc;
+
+ sfc_dbg(ctx->sa,
+ "processing mport id %u (controller %u pf %u vf %u)",
+ mport->emd_id.id, mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf, mport->emd_vnic.ev_vf);
+ efx_mae_mport_invalid(&ethdev_mport);
+
+ if (!ctx->controllers_assigned) {
+ rc = sfc_journal_ctx_add_controller(ctx,
+ mport->emd_vnic.ev_intf);
+ if (rc != 0)
+ return rc;
+ }
+
+ return 0;
+}
+
+static efx_rc_t
+sfc_process_mport_journal_cb(void *data, efx_mport_desc_t *mport,
+ size_t mport_len)
+{
+ struct sfc_mport_journal_ctx *ctx = data;
+
+ if (ctx == NULL || ctx->sa == NULL) {
+ sfc_err(ctx->sa, "received NULL context or SFC adapter");
+ return EINVAL;
+ }
+
+ if (mport_len != sizeof(*mport)) {
+ sfc_err(ctx->sa, "actual and expected mport buffer sizes differ");
+ return EINVAL;
+ }
+
+ SFC_ASSERT(sfc_adapter_is_locked(ctx->sa));
+
+ /*
+ * If a zombie flag is set, it means the mport has been marked for
+ * deletion and cannot be used for any new operations. The mport will
+ * be destroyed completely once all references to it are released.
+ */
+ if (mport->emd_zombie) {
+ sfc_dbg(ctx->sa, "mport is a zombie, skipping");
+ return 0;
+ }
+ if (mport->emd_type != EFX_MPORT_TYPE_VNIC) {
+ sfc_dbg(ctx->sa, "mport is not a VNIC, skipping");
+ return 0;
+ }
+ if (mport->emd_vnic.ev_client_type != EFX_MPORT_VNIC_CLIENT_FUNCTION) {
+ sfc_dbg(ctx->sa, "mport is not a function, skipping");
+ return 0;
+ }
+ if (mport->emd_vnic.ev_handle == ctx->mcdi_handle) {
+ sfc_dbg(ctx->sa, "mport is this driver instance, skipping");
+ return 0;
+ }
+
+ return sfc_process_mport_journal_entry(ctx, mport);
+}
+
+static int
+sfc_process_mport_journal(struct sfc_adapter *sa)
+{
+ struct sfc_mport_journal_ctx ctx;
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ efx_rc_t efx_rc;
+ int rc;
+
+ memset(&ctx, 0, sizeof(ctx));
+ ctx.sa = sa;
+ ctx.switch_domain_id = sa->mae.switch_domain_id;
+
+ efx_rc = efx_mcdi_get_own_client_handle(sa->nic, &ctx.mcdi_handle);
+ if (efx_rc != 0) {
+ sfc_err(sa, "failed to get own MCDI handle");
+ SFC_ASSERT(efx_rc > 0);
+ return efx_rc;
+ }
+
+ rc = sfc_mae_switch_domain_controllers(ctx.switch_domain_id,
+ &controllers, &nb_controllers);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get controller mapping");
+ return rc;
+ }
+
+ ctx.controllers_assigned = controllers != NULL;
+ ctx.controllers = NULL;
+ ctx.nb_controllers = 0;
+
+ efx_rc = efx_mae_read_mport_journal(sa->nic,
+ sfc_process_mport_journal_cb, &ctx);
+ if (efx_rc != 0) {
+ sfc_err(sa, "failed to process MAE mport journal");
+ SFC_ASSERT(efx_rc > 0);
+ return efx_rc;
+ }
+
+ if (controllers == NULL) {
+ rc = sfc_mae_switch_domain_map_controllers(ctx.switch_domain_id,
+ ctx.controllers,
+ ctx.nb_controllers);
+ if (rc != 0)
+ return rc;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops sfc_eth_dev_ops = {
.dev_configure = sfc_dev_configure,
.dev_start = sfc_dev_start,
@@ -2494,6 +2666,18 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
return -ENOTSUP;
}
+ /*
+ * This is needed to construct the DPDK controller -> EFX interface
+ * mapping.
+ */
+ sfc_adapter_lock(sa);
+ rc = sfc_process_mport_journal(sa);
+ sfc_adapter_unlock(sa);
+ if (rc != 0) {
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
for (i = 0; i < eth_da->nb_representor_ports; ++i) {
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
efx_mport_sel_t mport_sel;
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 80c884a599..f72f6648b8 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -87,6 +87,10 @@ struct sfc_mae_switch_domain {
struct sfc_mae_switch_ports ports;
/** RTE switch domain ID allocated for a group of devices */
uint16_t id;
+ /** DPDK controller -> EFX interface mapping */
+ efx_pcie_interface_t *controllers;
+ /** Number of DPDK controllers and EFX interfaces */
+ size_t nb_controllers;
};
TAILQ_HEAD(sfc_mae_switch_domains, sfc_mae_switch_domain);
@@ -220,6 +224,59 @@ sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
return rc;
}
+int
+sfc_mae_switch_domain_controllers(uint16_t switch_domain_id,
+ const efx_pcie_interface_t **controllers,
+ size_t *nb_controllers)
+{
+ struct sfc_mae_switch_domain *domain;
+
+ if (controllers == NULL || nb_controllers == NULL)
+ return EINVAL;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ *controllers = domain->controllers;
+ *nb_controllers = domain->nb_controllers;
+
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return 0;
+}
+
+int
+sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
+ efx_pcie_interface_t *controllers,
+ size_t nb_controllers)
+{
+ struct sfc_mae_switch_domain *domain;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ /* Controller mapping may be set only once */
+ if (domain->controllers != NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ domain->controllers = controllers;
+ domain->nb_controllers = nb_controllers;
+
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return 0;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_port *
sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index a1a2ab9848..1eee5fc0b6 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -44,6 +44,14 @@ struct sfc_mae_switch_port_request {
int sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
uint16_t *switch_domain_id);
+int sfc_mae_switch_domain_controllers(uint16_t switch_domain_id,
+ const efx_pcie_interface_t **controllers,
+ size_t *nb_controllers);
+
+int sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
+ efx_pcie_interface_t *controllers,
+ size_t nb_controllers);
+
int sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
uint16_t *switch_port_id);
--
2.30.2
* [dpdk-dev] [PATCH 33/38] net/sfc: store PCI address for represented entities
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (31 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 32/38] net/sfc: maintain controller to EFX interface mapping Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 34/38] net/sfc: include controller and port in representor name Andrew Rybchenko
` (5 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
This information will be useful when the representor info API is implemented.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_ethdev.c | 11 +++++++++--
drivers/net/sfc/sfc_repr.c | 20 +++++++++++++++-----
drivers/net/sfc/sfc_repr.h | 10 +++++++++-
drivers/net/sfc/sfc_switch.c | 14 ++++++++++++++
drivers/net/sfc/sfc_switch.h | 11 +++++++++++
5 files changed, 58 insertions(+), 8 deletions(-)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 8536a2b111..49ba820501 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2680,6 +2680,7 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
for (i = 0; i < eth_da->nb_representor_ports; ++i) {
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ struct sfc_repr_entity_info entity;
efx_mport_sel_t mport_sel;
rc = efx_mae_mport_by_pcie_function(encp->enc_pf,
@@ -2692,8 +2693,14 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
continue;
}
- rc = sfc_repr_create(dev, eth_da->representor_ports[i],
- sa->mae.switch_domain_id, &mport_sel);
+ memset(&entity, 0, sizeof(entity));
+ entity.type = eth_da->type;
+ entity.intf = encp->enc_intf;
+ entity.pf = encp->enc_pf;
+ entity.vf = eth_da->representor_ports[i];
+
+ rc = sfc_repr_create(dev, &entity, sa->mae.switch_domain_id,
+ &mport_sel);
if (rc != 0) {
sfc_err(sa, "cannot create representor %u: %s - ignore",
eth_da->representor_ports[i],
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 4fd81c3f6b..a42e70c92c 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -924,6 +924,9 @@ struct sfc_repr_init_data {
uint16_t repr_id;
uint16_t switch_domain_id;
efx_mport_sel_t mport_sel;
+ efx_pcie_interface_t intf;
+ uint16_t pf;
+ uint16_t vf;
};
static int
@@ -961,6 +964,9 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
switch_port_request.ethdev_mportp = &ethdev_mport_sel;
switch_port_request.entity_mportp = &repr_data->mport_sel;
switch_port_request.ethdev_port_id = dev->data->port_id;
+ switch_port_request.port_data.repr.intf = repr_data->intf;
+ switch_port_request.port_data.repr.pf = repr_data->pf;
+ switch_port_request.port_data.repr.vf = repr_data->vf;
ret = sfc_repr_assign_mae_switch_port(repr_data->switch_domain_id,
&switch_port_request,
@@ -1037,8 +1043,10 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
}
int
-sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
- uint16_t switch_domain_id, const efx_mport_sel_t *mport_sel)
+sfc_repr_create(struct rte_eth_dev *parent,
+ struct sfc_repr_entity_info *entity,
+ uint16_t switch_domain_id,
+ const efx_mport_sel_t *mport_sel)
{
struct sfc_repr_init_data repr_data;
char name[RTE_ETH_NAME_MAX_LEN];
@@ -1046,8 +1054,7 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
struct rte_eth_dev *dev;
if (snprintf(name, sizeof(name), "net_%s_representor_%u",
- parent->device->name, representor_id) >=
- (int)sizeof(name)) {
+ parent->device->name, entity->vf) >= (int)sizeof(name)) {
SFC_GENERIC_LOG(ERR, "%s() failed name too long", __func__);
return -ENAMETOOLONG;
}
@@ -1056,9 +1063,12 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
if (dev == NULL) {
memset(&repr_data, 0, sizeof(repr_data));
repr_data.pf_port_id = parent->data->port_id;
- repr_data.repr_id = representor_id;
+ repr_data.repr_id = entity->vf;
repr_data.switch_domain_id = switch_domain_id;
repr_data.mport_sel = *mport_sel;
+ repr_data.intf = entity->intf;
+ repr_data.pf = entity->pf;
+ repr_data.vf = entity->vf;
ret = rte_eth_dev_create(parent->device, name,
sizeof(struct sfc_repr_shared),
diff --git a/drivers/net/sfc/sfc_repr.h b/drivers/net/sfc/sfc_repr.h
index 1347206006..2093973761 100644
--- a/drivers/net/sfc/sfc_repr.h
+++ b/drivers/net/sfc/sfc_repr.h
@@ -26,7 +26,15 @@ extern "C" {
/** Max count of the representor Tx queues */
#define SFC_REPR_TXQ_MAX 1
-int sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
+struct sfc_repr_entity_info {
+ enum rte_eth_representor_type type;
+ efx_pcie_interface_t intf;
+ uint16_t pf;
+ uint16_t vf;
+};
+
+int sfc_repr_create(struct rte_eth_dev *parent,
+ struct sfc_repr_entity_info *entity,
uint16_t switch_domain_id,
const efx_mport_sel_t *mport_sel);
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index f72f6648b8..7a0b332f33 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -63,6 +63,8 @@ struct sfc_mae_switch_port {
enum sfc_mae_switch_port_type type;
/** RTE switch port ID */
uint16_t id;
+
+ union sfc_mae_switch_port_data data;
};
TAILQ_HEAD(sfc_mae_switch_ports, sfc_mae_switch_port);
@@ -335,6 +337,18 @@ sfc_mae_assign_switch_port(uint16_t switch_domain_id,
port->ethdev_mport = *req->ethdev_mportp;
port->ethdev_port_id = req->ethdev_port_id;
+ switch (req->type) {
+ case SFC_MAE_SWITCH_PORT_INDEPENDENT:
+ /* No data */
+ break;
+ case SFC_MAE_SWITCH_PORT_REPRESENTOR:
+ memcpy(&port->data.repr, &req->port_data,
+ sizeof(port->data.repr));
+ break;
+ default:
+ SFC_ASSERT(B_FALSE);
+ }
+
*switch_port_id = port->id;
rte_spinlock_unlock(&sfc_mae_switch.lock);
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index 1eee5fc0b6..a072507375 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -34,11 +34,22 @@ enum sfc_mae_switch_port_type {
SFC_MAE_SWITCH_PORT_REPRESENTOR,
};
+struct sfc_mae_switch_port_repr_data {
+ efx_pcie_interface_t intf;
+ uint16_t pf;
+ uint16_t vf;
+};
+
+union sfc_mae_switch_port_data {
+ struct sfc_mae_switch_port_repr_data repr;
+};
+
struct sfc_mae_switch_port_request {
enum sfc_mae_switch_port_type type;
const efx_mport_sel_t *entity_mportp;
const efx_mport_sel_t *ethdev_mportp;
uint16_t ethdev_port_id;
+ union sfc_mae_switch_port_data port_data;
};
int sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
--
2.30.2
* [dpdk-dev] [PATCH 34/38] net/sfc: include controller and port in representor name
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (32 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 33/38] net/sfc: store PCI address for represented entities Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 35/38] net/sfc: support new representor parameter syntax Andrew Rybchenko
` (4 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Make representor names unique in multi-host configurations by including the
controller and PF (and VF, where applicable) in the name, e.g.
net_<parent>_representor_c0pf0vf1 instead of net_<parent>_representor_1.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_repr.c | 28 ++++++++++++++++++++++++++--
drivers/net/sfc/sfc_switch.c | 28 ++++++++++++++++++++++++++++
drivers/net/sfc/sfc_switch.h | 4 ++++
3 files changed, 58 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index a42e70c92c..d50efe6562 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -1050,11 +1050,35 @@ sfc_repr_create(struct rte_eth_dev *parent,
{
struct sfc_repr_init_data repr_data;
char name[RTE_ETH_NAME_MAX_LEN];
+ int controller;
int ret;
+ int rc;
struct rte_eth_dev *dev;
- if (snprintf(name, sizeof(name), "net_%s_representor_%u",
- parent->device->name, entity->vf) >= (int)sizeof(name)) {
+ controller = -1;
+ rc = sfc_mae_switch_domain_get_controller(switch_domain_id,
+ entity->intf, &controller);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to get DPDK controller for %d",
+ __func__, entity->intf);
+ return -rc;
+ }
+
+ switch (entity->type) {
+ case RTE_ETH_REPRESENTOR_VF:
+ ret = snprintf(name, sizeof(name), "net_%s_representor_c%upf%uvf%u",
+ parent->device->name, controller, entity->pf,
+ entity->vf);
+ break;
+ case RTE_ETH_REPRESENTOR_PF:
+ ret = snprintf(name, sizeof(name), "net_%s_representor_c%upf%u",
+ parent->device->name, controller, entity->pf);
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ if (ret >= (int)sizeof(name)) {
SFC_GENERIC_LOG(ERR, "%s() failed name too long", __func__);
return -ENAMETOOLONG;
}
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 7a0b332f33..225d07fa15 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -279,6 +279,34 @@ sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
return 0;
}
+int
+sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
+ efx_pcie_interface_t intf,
+ int *controller)
+{
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ size_t i;
+ int rc;
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
+ &nb_controllers);
+ if (rc != 0)
+ return rc;
+
+ if (controllers == NULL)
+ return ENOENT;
+
+ for (i = 0; i < nb_controllers; i++) {
+ if (controllers[i] == intf) {
+ *controller = i;
+ return 0;
+ }
+ }
+
+ return ENOENT;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_port *
sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index a072507375..294baae9a2 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -63,6 +63,10 @@ int sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
efx_pcie_interface_t *controllers,
size_t nb_controllers);
+int sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
+ efx_pcie_interface_t intf,
+ int *controller);
+
int sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
uint16_t *switch_port_id);
--
2.30.2
* [dpdk-dev] [PATCH 35/38] net/sfc: support new representor parameter syntax
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (33 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 34/38] net/sfc: include controller and port in representor name Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 36/38] net/sfc: use switch port ID as representor ID Andrew Rybchenko
` (3 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Allow the user to specify representor entities using the structured devargs
parameter values, which can name the controller, PF and VF of each
represented entity.
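For illustration only, the structured forms below follow the generic ethdev
representor devargs syntax assumed here; the exact strings accepted by sfc are
described in the sfc_efx.rst update elsewhere in this series, and the PCI
address is made up:

    -a 0000:c6:00.0,representor=vf[0-2]      VF representors on the local controller/PF
    -a 0000:c6:00.0,representor=pf1vf0       VF 0 of PF 1
    -a 0000:c6:00.0,representor=c0pf[0-1]    PF representors on controller 0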
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_ethdev.c | 181 ++++++++++++++++++++++++++++-------
drivers/net/sfc/sfc_switch.c | 24 +++++
drivers/net/sfc/sfc_switch.h | 4 +
3 files changed, 176 insertions(+), 33 deletions(-)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 49ba820501..29c8d220a2 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2642,18 +2642,143 @@ sfc_eth_dev_find_or_create(struct rte_pci_device *pci_dev,
return 0;
}
+static int
+sfc_eth_dev_create_repr(struct sfc_adapter *sa,
+ efx_pcie_interface_t controller,
+ uint16_t port,
+ uint16_t repr_port,
+ enum rte_eth_representor_type type)
+{
+ struct sfc_repr_entity_info entity;
+ efx_mport_sel_t mport_sel;
+ int rc;
+
+ switch (type) {
+ case RTE_ETH_REPRESENTOR_NONE:
+ return 0;
+ case RTE_ETH_REPRESENTOR_VF:
+ case RTE_ETH_REPRESENTOR_PF:
+ break;
+ case RTE_ETH_REPRESENTOR_SF:
+ sfc_err(sa, "SF representors are not supported");
+ return ENOTSUP;
+ default:
+ sfc_err(sa, "unknown representor type: %d", type);
+ return ENOTSUP;
+ }
+
+ rc = efx_mae_mport_by_pcie_mh_function(controller,
+ port,
+ repr_port,
+ &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to get m-port selector for controller %u port %u repr_port %u: %s",
+ controller, port, repr_port, rte_strerror(-rc));
+ return rc;
+ }
+
+ memset(&entity, 0, sizeof(entity));
+ entity.type = type;
+ entity.intf = controller;
+ entity.pf = port;
+ entity.vf = repr_port;
+
+ rc = sfc_repr_create(sa->eth_dev, &entity, sa->mae.switch_domain_id,
+ &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to create representor for controller %u port %u repr_port %u: %s",
+ controller, port, repr_port, rte_strerror(-rc));
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+sfc_eth_dev_create_repr_port(struct sfc_adapter *sa,
+ const struct rte_eth_devargs *eth_da,
+ efx_pcie_interface_t controller,
+ uint16_t port)
+{
+ int first_error = 0;
+ uint16_t i;
+ int rc;
+
+ if (eth_da->type == RTE_ETH_REPRESENTOR_PF) {
+ return sfc_eth_dev_create_repr(sa, controller, port,
+ EFX_PCI_VF_INVALID,
+ eth_da->type);
+ }
+
+ for (i = 0; i < eth_da->nb_representor_ports; i++) {
+ rc = sfc_eth_dev_create_repr(sa, controller, port,
+ eth_da->representor_ports[i],
+ eth_da->type);
+ if (rc != 0 && first_error == 0)
+ first_error = rc;
+ }
+
+ return first_error;
+}
+
+static int
+sfc_eth_dev_create_repr_controller(struct sfc_adapter *sa,
+ const struct rte_eth_devargs *eth_da,
+ efx_pcie_interface_t controller)
+{
+ const efx_nic_cfg_t *encp;
+ int first_error = 0;
+ uint16_t default_port;
+ uint16_t i;
+ int rc;
+
+ if (eth_da->nb_ports == 0) {
+ encp = efx_nic_cfg_get(sa->nic);
+ default_port = encp->enc_intf == controller ? encp->enc_pf : 0;
+ return sfc_eth_dev_create_repr_port(sa, eth_da, controller,
+ default_port);
+ }
+
+ for (i = 0; i < eth_da->nb_ports; i++) {
+ rc = sfc_eth_dev_create_repr_port(sa, eth_da, controller,
+ eth_da->ports[i]);
+ if (rc != 0 && first_error == 0)
+ first_error = rc;
+ }
+
+ return first_error;
+}
+
static int
sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
const struct rte_eth_devargs *eth_da)
{
+ efx_pcie_interface_t intf;
+ const efx_nic_cfg_t *encp;
struct sfc_adapter *sa;
- unsigned int i;
+ uint16_t switch_domain_id;
+ uint16_t i;
int rc;
- if (eth_da->nb_representor_ports == 0)
- return 0;
-
sa = sfc_adapter_by_eth_dev(dev);
+ switch_domain_id = sa->mae.switch_domain_id;
+
+ switch (eth_da->type) {
+ case RTE_ETH_REPRESENTOR_NONE:
+ return 0;
+ case RTE_ETH_REPRESENTOR_PF:
+ case RTE_ETH_REPRESENTOR_VF:
+ break;
+ case RTE_ETH_REPRESENTOR_SF:
+ sfc_err(sa, "SF representors are not supported");
+ return -ENOTSUP;
+ default:
+ sfc_err(sa, "unknown representor type: %d",
+ eth_da->type);
+ return -ENOTSUP;
+ }
if (!sa->switchdev) {
sfc_err(sa, "cannot create representors in non-switchdev mode");
@@ -2678,34 +2803,20 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
return -rc;
}
- for (i = 0; i < eth_da->nb_representor_ports; ++i) {
- const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
- struct sfc_repr_entity_info entity;
- efx_mport_sel_t mport_sel;
-
- rc = efx_mae_mport_by_pcie_function(encp->enc_pf,
- eth_da->representor_ports[i], &mport_sel);
- if (rc != 0) {
- sfc_err(sa,
- "failed to get representor %u m-port: %s - ignore",
- eth_da->representor_ports[i],
- rte_strerror(-rc));
- continue;
- }
-
- memset(&entity, 0, sizeof(entity));
- entity.type = eth_da->type;
- entity.intf = encp->enc_intf;
- entity.pf = encp->enc_pf;
- entity.vf = eth_da->representor_ports[i];
-
- rc = sfc_repr_create(dev, &entity, sa->mae.switch_domain_id,
- &mport_sel);
- if (rc != 0) {
- sfc_err(sa, "cannot create representor %u: %s - ignore",
- eth_da->representor_ports[i],
- rte_strerror(-rc));
+ if (eth_da->nb_mh_controllers > 0) {
+ for (i = 0; i < eth_da->nb_mh_controllers; i++) {
+ rc = sfc_mae_switch_domain_get_intf(switch_domain_id,
+ eth_da->mh_controllers[i],
+ &intf);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get representor");
+ continue;
+ }
+ sfc_eth_dev_create_repr_controller(sa, eth_da, intf);
}
+ } else {
+ encp = efx_nic_cfg_get(sa->nic);
+ sfc_eth_dev_create_repr_controller(sa, eth_da, encp->enc_intf);
}
return 0;
@@ -2729,9 +2840,13 @@ static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
memset(ð_da, 0, sizeof(eth_da));
}
- init_data.nb_representors = eth_da.nb_representor_ports;
+ /* If no VF representors specified, check for PF ones */
+ if (eth_da.nb_representor_ports > 0)
+ init_data.nb_representors = eth_da.nb_representor_ports;
+ else
+ init_data.nb_representors = eth_da.nb_ports;
- if (eth_da.nb_representor_ports > 0 &&
+ if (init_data.nb_representors > 0 &&
rte_eal_process_type() != RTE_PROC_PRIMARY) {
SFC_GENERIC_LOG(ERR,
"Create representors from secondary process not supported, dev '%s'",
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 225d07fa15..5cd9b46d26 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -307,6 +307,30 @@ sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
return ENOENT;
}
+int sfc_mae_switch_domain_get_intf(uint16_t switch_domain_id,
+ int controller,
+ efx_pcie_interface_t *intf)
+{
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ int rc;
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
+ &nb_controllers);
+ if (rc != 0)
+ return rc;
+
+ if (controllers == NULL)
+ return ENOENT;
+
+ if ((size_t)controller > nb_controllers)
+ return EINVAL;
+
+ *intf = controllers[controller];
+
+ return 0;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_port *
sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index 294baae9a2..d187c6dbbb 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -67,6 +67,10 @@ int sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
efx_pcie_interface_t intf,
int *controller);
+int sfc_mae_switch_domain_get_intf(uint16_t switch_domain_id,
+ int controller,
+ efx_pcie_interface_t *intf);
+
int sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
uint16_t *switch_port_id);
--
2.30.2
* [dpdk-dev] [PATCH 36/38] net/sfc: use switch port ID as representor ID
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (34 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 35/38] net/sfc: support new representor parameter syntax Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 37/38] net/sfc: implement the representor info API Andrew Rybchenko
` (2 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Representor IDs must be unique for each representor. VF numbers, which are
currently used as IDs, are not unique because the same VF number may repeat
across different PCI controllers and PFs. Switch port IDs, on the other hand,
are unique, so they are a better fit for this role.
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_repr.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index d50efe6562..4cbfdbcb66 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -921,7 +921,6 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
struct sfc_repr_init_data {
uint16_t pf_port_id;
- uint16_t repr_id;
uint16_t switch_domain_id;
efx_mport_sel_t mport_sel;
efx_pcie_interface_t intf;
@@ -979,7 +978,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
}
ret = sfc_repr_proxy_add_port(repr_data->pf_port_id,
- repr_data->repr_id,
+ srs->switch_port_id,
dev->data->port_id,
&repr_data->mport_sel);
if (ret != 0) {
@@ -1006,7 +1005,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
dev->process_private = sr;
srs->pf_port_id = repr_data->pf_port_id;
- srs->repr_id = repr_data->repr_id;
+ srs->repr_id = srs->switch_port_id;
srs->switch_domain_id = repr_data->switch_domain_id;
dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
@@ -1034,7 +1033,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
fail_alloc_sr:
(void)sfc_repr_proxy_del_port(repr_data->pf_port_id,
- repr_data->repr_id);
+ srs->switch_port_id);
fail_create_port:
fail_mae_assign_switch_port:
@@ -1087,7 +1086,6 @@ sfc_repr_create(struct rte_eth_dev *parent,
if (dev == NULL) {
memset(&repr_data, 0, sizeof(repr_data));
repr_data.pf_port_id = parent->data->port_id;
- repr_data.repr_id = entity->vf;
repr_data.switch_domain_id = switch_domain_id;
repr_data.mport_sel = *mport_sel;
repr_data.intf = entity->intf;
--
2.30.2
* [dpdk-dev] [PATCH 37/38] net/sfc: implement the representor info API
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (35 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 36/38] net/sfc: use switch port ID as representor ID Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 38/38] net/sfc: update comment about representor support Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Let the driver provide the user with information about available
representors by implementing the representor_info_get operation.
Because representor IDs carry no inherent structure, every reported ID range
describes exactly one representor.
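As an illustration only, a minimal sketch of how an application might consume
this through the generic rte_eth_representor_info_get() call (assuming the
21.11 ethdev representor info definitions; the helper name and error handling
are hypothetical and abbreviated, not part of this patch):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_ethdev.h>

/* Illustrative sketch: print every representor range of a backing port. */
static int
dump_representor_ranges(uint16_t port_id)
{
	struct rte_eth_representor_info *info;
	int nb_ranges;
	uint32_t i;
	int ret;

	/* A NULL info pointer just reports how many ranges are available. */
	nb_ranges = rte_eth_representor_info_get(port_id, NULL);
	if (nb_ranges <= 0)
		return nb_ranges;

	info = calloc(1, sizeof(*info) + nb_ranges * sizeof(info->ranges[0]));
	if (info == NULL)
		return -ENOMEM;

	info->nb_ranges_alloc = nb_ranges;

	ret = rte_eth_representor_info_get(port_id, info);
	if (ret < 0) {
		free(info);
		return ret;
	}

	for (i = 0; i < info->nb_ranges; i++) {
		const struct rte_eth_representor_range *range =
			&info->ranges[i];

		/* With this driver each range covers a single representor. */
		printf("%s: controller %d, pf %d, IDs [%u-%u]\n",
		       range->name, range->controller, range->pf,
		       range->id_base, range->id_end);
	}

	free(info);
	return 0;
}

Since id_base and id_end are both set to the switch port ID below, each
reported range maps to a single representor ID.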
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/net/sfc/sfc_ethdev.c | 229 +++++++++++++++++++++++++
drivers/net/sfc/sfc_switch.c | 104 +++++++++--
drivers/net/sfc/sfc_switch.h | 24 +++
4 files changed, 352 insertions(+), 11 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..911e500ce5 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -55,6 +55,12 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Updated Solarflare network PMD.**
+
+ Updated the Solarflare ``sfc_efx`` driver with changes including:
+
+ * Added port representors support on SN1000 SmartNICs
+
Removed Items
-------------
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 29c8d220a2..62b81ed61a 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1922,7 +1922,11 @@ static efx_rc_t
sfc_process_mport_journal_entry(struct sfc_mport_journal_ctx *ctx,
efx_mport_desc_t *mport)
{
+ struct sfc_mae_switch_port_request req;
+ efx_mport_sel_t entity_selector;
efx_mport_sel_t ethdev_mport;
+ uint16_t switch_port_id;
+ efx_rc_t efx_rc;
int rc;
sfc_dbg(ctx->sa,
@@ -1938,6 +1942,63 @@ sfc_process_mport_journal_entry(struct sfc_mport_journal_ctx *ctx,
return rc;
}
+ /* Build Mport selector */
+ efx_rc = efx_mae_mport_by_pcie_mh_function(mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf,
+ &entity_selector);
+ if (efx_rc != 0) {
+ sfc_err(ctx->sa, "failed to build entity mport selector for c%upf%uvf%u",
+ mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf);
+ return efx_rc;
+ }
+
+ rc = sfc_mae_switch_port_id_by_entity(ctx->switch_domain_id,
+ &entity_selector,
+ SFC_MAE_SWITCH_PORT_REPRESENTOR,
+ &switch_port_id);
+ switch (rc) {
+ case 0:
+ /* Already registered */
+ break;
+ case ENOENT:
+ /*
+ * No representor has been created for this entity.
+ * Create a dummy switch registry entry with an invalid ethdev
+ * mport selector. When a corresponding representor is created,
+ * this entry will be updated.
+ */
+ req.type = SFC_MAE_SWITCH_PORT_REPRESENTOR;
+ req.entity_mportp = &entity_selector;
+ req.ethdev_mportp = &ethdev_mport;
+ req.ethdev_port_id = RTE_MAX_ETHPORTS;
+ req.port_data.repr.intf = mport->emd_vnic.ev_intf;
+ req.port_data.repr.pf = mport->emd_vnic.ev_pf;
+ req.port_data.repr.vf = mport->emd_vnic.ev_vf;
+
+ rc = sfc_mae_assign_switch_port(ctx->switch_domain_id,
+ &req, &switch_port_id);
+ if (rc != 0) {
+ sfc_err(ctx->sa,
+ "failed to assign MAE switch port for c%upf%uvf%u: %s",
+ mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf,
+ rte_strerror(rc));
+ return rc;
+ }
+ break;
+ default:
+ sfc_err(ctx->sa, "failed to find MAE switch port for c%upf%uvf%u: %s",
+ mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf,
+ rte_strerror(rc));
+ return rc;
+ }
+
return 0;
}
@@ -2034,6 +2095,173 @@ sfc_process_mport_journal(struct sfc_adapter *sa)
return 0;
}
+static void
+sfc_count_representors_cb(enum sfc_mae_switch_port_type type,
+ const efx_mport_sel_t *ethdev_mportp __rte_unused,
+ uint16_t ethdev_port_id __rte_unused,
+ const efx_mport_sel_t *entity_mportp __rte_unused,
+ uint16_t switch_port_id __rte_unused,
+ union sfc_mae_switch_port_data *port_datap
+ __rte_unused,
+ void *user_datap)
+{
+ int *counter = user_datap;
+
+ SFC_ASSERT(counter != NULL);
+
+ if (type == SFC_MAE_SWITCH_PORT_REPRESENTOR)
+ (*counter)++;
+}
+
+struct sfc_get_representors_ctx {
+ struct rte_eth_representor_info *info;
+ struct sfc_adapter *sa;
+ uint16_t switch_domain_id;
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+};
+
+static void
+sfc_get_representors_cb(enum sfc_mae_switch_port_type type,
+ const efx_mport_sel_t *ethdev_mportp __rte_unused,
+ uint16_t ethdev_port_id __rte_unused,
+ const efx_mport_sel_t *entity_mportp __rte_unused,
+ uint16_t switch_port_id,
+ union sfc_mae_switch_port_data *port_datap,
+ void *user_datap)
+{
+ struct sfc_get_representors_ctx *ctx = user_datap;
+ struct rte_eth_representor_range *range;
+ int ret;
+ int rc;
+
+ SFC_ASSERT(ctx != NULL);
+ SFC_ASSERT(ctx->info != NULL);
+ SFC_ASSERT(ctx->sa != NULL);
+
+ if (type != SFC_MAE_SWITCH_PORT_REPRESENTOR) {
+ sfc_dbg(ctx->sa, "not a representor, skipping");
+ return;
+ }
+ if (ctx->info->nb_ranges >= ctx->info->nb_ranges_alloc) {
+ sfc_dbg(ctx->sa, "info structure is full already");
+ return;
+ }
+
+ range = &ctx->info->ranges[ctx->info->nb_ranges];
+ rc = sfc_mae_switch_controller_from_mapping(ctx->controllers,
+ ctx->nb_controllers,
+ port_datap->repr.intf,
+ &range->controller);
+ if (rc != 0) {
+ sfc_err(ctx->sa, "invalid representor controller: %d",
+ port_datap->repr.intf);
+ range->controller = -1;
+ }
+ range->pf = port_datap->repr.pf;
+ range->id_base = switch_port_id;
+ range->id_end = switch_port_id;
+
+ if (port_datap->repr.vf != EFX_PCI_VF_INVALID) {
+ range->type = RTE_ETH_REPRESENTOR_VF;
+ range->vf = port_datap->repr.vf;
+ ret = snprintf(range->name, RTE_DEV_NAME_MAX_LEN,
+ "c%dpf%dvf%d", range->controller, range->pf,
+ range->vf);
+ } else {
+ range->type = RTE_ETH_REPRESENTOR_PF;
+ ret = snprintf(range->name, RTE_DEV_NAME_MAX_LEN,
+ "c%dpf%d", range->controller, range->pf);
+ }
+ if (ret >= RTE_DEV_NAME_MAX_LEN) {
+ sfc_err(ctx->sa, "representor name has been truncated: %s",
+ range->name);
+ }
+
+ ctx->info->nb_ranges++;
+}
+
+static int
+sfc_representor_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_representor_info *info)
+{
+ struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+ struct sfc_get_representors_ctx get_repr_ctx;
+ const efx_nic_cfg_t *nic_cfg;
+ uint16_t switch_domain_id;
+ uint32_t nb_repr;
+ int controller;
+ int rc;
+
+ sfc_adapter_lock(sa);
+
+ if (sa->mae.status != SFC_MAE_STATUS_SUPPORTED) {
+ sfc_adapter_unlock(sa);
+ return -ENOTSUP;
+ }
+
+ rc = sfc_process_mport_journal(sa);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ switch_domain_id = sa->mae.switch_domain_id;
+
+ nb_repr = 0;
+ rc = sfc_mae_switch_ports_iterate(switch_domain_id,
+ sfc_count_representors_cb,
+ &nb_repr);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ if (info == NULL) {
+ sfc_adapter_unlock(sa);
+ return nb_repr;
+ }
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id,
+ &get_repr_ctx.controllers,
+ &get_repr_ctx.nb_controllers);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ nic_cfg = efx_nic_cfg_get(sa->nic);
+
+ rc = sfc_mae_switch_domain_get_controller(switch_domain_id,
+ nic_cfg->enc_intf,
+ &controller);
+ if (rc != 0) {
+ sfc_err(sa, "invalid controller: %d", nic_cfg->enc_intf);
+ controller = -1;
+ }
+
+ info->controller = controller;
+ info->pf = nic_cfg->enc_pf;
+
+ get_repr_ctx.info = info;
+ get_repr_ctx.sa = sa;
+ get_repr_ctx.switch_domain_id = switch_domain_id;
+ rc = sfc_mae_switch_ports_iterate(switch_domain_id,
+ sfc_get_representors_cb,
+ &get_repr_ctx);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ sfc_adapter_unlock(sa);
+ return nb_repr;
+}
+
static const struct eth_dev_ops sfc_eth_dev_ops = {
.dev_configure = sfc_dev_configure,
.dev_start = sfc_dev_start,
@@ -2081,6 +2309,7 @@ static const struct eth_dev_ops sfc_eth_dev_ops = {
.xstats_get_by_id = sfc_xstats_get_by_id,
.xstats_get_names_by_id = sfc_xstats_get_names_by_id,
.pool_ops_supported = sfc_pool_ops_supported,
+ .representor_info_get = sfc_representor_info_get,
};
struct sfc_ethdev_init_data {
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 5cd9b46d26..dc5b9a676c 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -151,6 +151,34 @@ sfc_mae_find_switch_domain_by_id(uint16_t switch_domain_id)
return NULL;
}
+int
+sfc_mae_switch_ports_iterate(uint16_t switch_domain_id,
+ sfc_mae_switch_port_iterator_cb *cb,
+ void *data)
+{
+ struct sfc_mae_switch_domain *domain;
+ struct sfc_mae_switch_port *port;
+
+ if (cb == NULL)
+ return EINVAL;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ TAILQ_FOREACH(port, &domain->ports, switch_domain_ports) {
+ cb(port->type, &port->ethdev_mport, port->ethdev_port_id,
+ &port->entity_mport, port->id, &port->data, data);
+ }
+
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return 0;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_domain *
sfc_mae_find_switch_domain_by_hw_switch_id(const struct sfc_hw_switch_id *id)
@@ -280,19 +308,12 @@ sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
}
int
-sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
- efx_pcie_interface_t intf,
- int *controller)
+sfc_mae_switch_controller_from_mapping(const efx_pcie_interface_t *controllers,
+ size_t nb_controllers,
+ efx_pcie_interface_t intf,
+ int *controller)
{
- const efx_pcie_interface_t *controllers;
- size_t nb_controllers;
size_t i;
- int rc;
-
- rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
- &nb_controllers);
- if (rc != 0)
- return rc;
if (controllers == NULL)
return ENOENT;
@@ -307,6 +328,26 @@ sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
return ENOENT;
}
+int
+sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
+ efx_pcie_interface_t intf,
+ int *controller)
+{
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ int rc;
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
+ &nb_controllers);
+ if (rc != 0)
+ return rc;
+
+ return sfc_mae_switch_controller_from_mapping(controllers,
+ nb_controllers,
+ intf,
+ controller);
+}
+
int sfc_mae_switch_domain_get_intf(uint16_t switch_domain_id,
int controller,
efx_pcie_interface_t *intf)
@@ -350,6 +391,30 @@ sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
return NULL;
}
+/* This function expects to be called only when the lock is held */
+static int
+sfc_mae_find_switch_port_id_by_entity(uint16_t switch_domain_id,
+ const efx_mport_sel_t *entity_mportp,
+ enum sfc_mae_switch_port_type type,
+ uint16_t *switch_port_id)
+{
+ struct sfc_mae_switch_domain *domain;
+ struct sfc_mae_switch_port *port;
+
+ SFC_ASSERT(rte_spinlock_is_locked(&sfc_mae_switch.lock));
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL)
+ return EINVAL;
+
+ port = sfc_mae_find_switch_port_by_entity(domain, entity_mportp, type);
+ if (port == NULL)
+ return ENOENT;
+
+ *switch_port_id = port->id;
+ return 0;
+}
+
int
sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
@@ -455,3 +520,20 @@ sfc_mae_switch_port_by_ethdev(uint16_t switch_domain_id,
return rc;
}
+
+int
+sfc_mae_switch_port_id_by_entity(uint16_t switch_domain_id,
+ const efx_mport_sel_t *entity_mportp,
+ enum sfc_mae_switch_port_type type,
+ uint16_t *switch_port_id)
+{
+ int rc;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+ rc = sfc_mae_find_switch_port_id_by_entity(switch_domain_id,
+ entity_mportp, type,
+ switch_port_id);
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+
+ return rc;
+}
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index d187c6dbbb..a77d2e6f28 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -52,6 +52,19 @@ struct sfc_mae_switch_port_request {
union sfc_mae_switch_port_data port_data;
};
+typedef void (sfc_mae_switch_port_iterator_cb)(
+ enum sfc_mae_switch_port_type type,
+ const efx_mport_sel_t *ethdev_mportp,
+ uint16_t ethdev_port_id,
+ const efx_mport_sel_t *entity_mportp,
+ uint16_t switch_port_id,
+ union sfc_mae_switch_port_data *port_datap,
+ void *user_datap);
+
+int sfc_mae_switch_ports_iterate(uint16_t switch_domain_id,
+ sfc_mae_switch_port_iterator_cb *cb,
+ void *data);
+
int sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
uint16_t *switch_domain_id);
@@ -63,6 +76,12 @@ int sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
efx_pcie_interface_t *controllers,
size_t nb_controllers);
+int sfc_mae_switch_controller_from_mapping(
+ const efx_pcie_interface_t *controllers,
+ size_t nb_controllers,
+ efx_pcie_interface_t intf,
+ int *controller);
+
int sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
efx_pcie_interface_t intf,
int *controller);
@@ -79,6 +98,11 @@ int sfc_mae_switch_port_by_ethdev(uint16_t switch_domain_id,
uint16_t ethdev_port_id,
efx_mport_sel_t *mport_sel);
+int sfc_mae_switch_port_id_by_entity(uint16_t switch_domain_id,
+ const efx_mport_sel_t *entity_mportp,
+ enum sfc_mae_switch_port_type type,
+ uint16_t *switch_port_id);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH 38/38] net/sfc: update comment about representor support
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (36 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 37/38] net/sfc: implement the representor info API Andrew Rybchenko
@ 2021-08-27 6:57 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-08-27 6:57 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, stable, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
The representor support has been implemented to some extent, and the fact
that ethdev mport is equivalent to entity mport is by design.
Fixes: 1fb65e4dae8 ("net/sfc: support flow action port ID in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_mae.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 7be77054ab..fa60c948ca 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -228,10 +228,7 @@ sfc_mae_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "assign RTE switch port");
switch_port_request.type = SFC_MAE_SWITCH_PORT_INDEPENDENT;
switch_port_request.entity_mportp = &entity_mport;
- /*
- * As of now, the driver does not support representors, so
- * RTE ethdev MPORT simply matches that of the entity.
- */
+ /* RTE ethdev MPORT matches that of the entity for independent ports. */
switch_port_request.ethdev_mportp = &entity_mport;
switch_port_request.ethdev_port_id = sas->port_id;
rc = sfc_mae_assign_switch_port(mae->switch_domain_id,
--
2.30.2
* [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
` (37 preceding siblings ...)
2021-08-27 6:57 ` [dpdk-dev] [PATCH 38/38] net/sfc: update comment about representor support Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
` (38 more replies)
38 siblings, 39 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev
Support port representors on SN1000 SmartNICs including:
- new syntax with controller, PF and VF specification
- PF representors
- two controllers: host and embedded SoC
The patch series depends on [1] (including build dependency) since it
provides representors info on admin PF only.
[1] https://patches.dpdk.org/project/dpdk/list/?series=18373
v2:
- rebase on top of release callback prototype changes
- improve switch mode auto-detection
Andrew Rybchenko (2):
common/sfc_efx/base: update MCDI headers
common/sfc_efx/base: update EF100 registers definitions
Igor Romanov (23):
net/sfc: add switch mode device argument
net/sfc: insert switchdev mode MAE rules
common/sfc_efx/base: add an API to get mport ID by selector
net/sfc: support EF100 Tx override prefix
net/sfc: add representors proxy infrastructure
net/sfc: reserve TxQ and RxQ for port representors
net/sfc: move adapter state enum to separate header
net/sfc: add port representors infrastructure
common/sfc_efx/base: add filter ingress mport matching field
common/sfc_efx/base: add API to get mport selector by ID
common/sfc_efx/base: add mport alias MCDI wrappers
net/sfc: add representor proxy port API
net/sfc: implement representor queue setup and release
net/sfc: implement representor RxQ start/stop
net/sfc: implement representor TxQ start/stop
net/sfc: implement port representor start and stop
net/sfc: implement port representor link update
net/sfc: support multiple device probe
net/sfc: implement representor Tx routine
net/sfc: use xword type for EF100 Rx prefix
net/sfc: handle ingress m-port in EF100 Rx prefix
net/sfc: implement representor Rx routine
net/sfc: add simple port representor statistics
Viacheslav Galaktionov (13):
common/sfc_efx/base: allow creating invalid mport selectors
net/sfc: free MAE lock once switch domain is assigned
common/sfc_efx/base: add multi-host function M-port selector
common/sfc_efx/base: retrieve function interfaces for VNICs
common/sfc_efx/base: add a means to read MAE mport journal
common/sfc_efx/base: allow getting VNIC MCDI client handles
net/sfc: maintain controller to EFX interface mapping
net/sfc: store PCI address for represented entities
net/sfc: include controller and port in representor name
net/sfc: support new representor parameter syntax
net/sfc: use switch port ID as representor ID
net/sfc: implement the representor info API
net/sfc: update comment about representor support
doc/guides/nics/sfc_efx.rst | 24 +
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/common/sfc_efx/base/ef10_filter.c | 11 +-
drivers/common/sfc_efx/base/ef10_impl.h | 3 +-
drivers/common/sfc_efx/base/ef10_nic.c | 4 +-
drivers/common/sfc_efx/base/efx.h | 155 ++
drivers/common/sfc_efx/base/efx_impl.h | 6 +
drivers/common/sfc_efx/base/efx_mae.c | 506 +++++-
drivers/common/sfc_efx/base/efx_mcdi.c | 128 +-
drivers/common/sfc_efx/base/efx_mcdi.h | 54 +
drivers/common/sfc_efx/base/efx_regs_ef100.h | 106 +-
drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++-
drivers/common/sfc_efx/base/rhead_rx.c | 2 +-
drivers/common/sfc_efx/version.map | 9 +
drivers/net/sfc/meson.build | 2 +
drivers/net/sfc/sfc.c | 151 +-
drivers/net/sfc/sfc.h | 77 +-
drivers/net/sfc/sfc_dp.c | 46 +
drivers/net/sfc/sfc_dp.h | 25 +
drivers/net/sfc/sfc_ef100_rx.c | 36 +-
drivers/net/sfc/sfc_ef100_tx.c | 25 +
drivers/net/sfc/sfc_ethdev.c | 809 ++++++++-
drivers/net/sfc/sfc_ethdev_state.h | 72 +
drivers/net/sfc/sfc_ev.h | 56 +-
drivers/net/sfc/sfc_flow.c | 10 +-
drivers/net/sfc/sfc_intr.c | 12 +-
drivers/net/sfc/sfc_kvargs.c | 2 +
drivers/net/sfc/sfc_kvargs.h | 10 +
drivers/net/sfc/sfc_mae.c | 218 ++-
drivers/net/sfc/sfc_mae.h | 56 +
drivers/net/sfc/sfc_port.c | 2 +-
drivers/net/sfc/sfc_repr.c | 1085 ++++++++++++
drivers/net/sfc/sfc_repr.h | 44 +
drivers/net/sfc/sfc_repr_proxy.c | 1661 ++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 147 ++
drivers/net/sfc/sfc_repr_proxy_api.h | 47 +
drivers/net/sfc/sfc_sriov.c | 9 +-
drivers/net/sfc/sfc_switch.c | 207 ++-
drivers/net/sfc/sfc_switch.h | 56 +
drivers/net/sfc/sfc_tx.c | 42 +-
drivers/net/sfc/sfc_tx.h | 1 +
41 files changed, 6899 insertions(+), 234 deletions(-)
create mode 100644 drivers/net/sfc/sfc_ethdev_state.h
create mode 100644 drivers/net/sfc/sfc_repr.c
create mode 100644 drivers/net/sfc/sfc_repr.h
create mode 100644 drivers/net/sfc/sfc_repr_proxy.c
create mode 100644 drivers/net/sfc/sfc_repr_proxy.h
create mode 100644 drivers/net/sfc/sfc_repr_proxy_api.h
--
2.30.2
* [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 02/38] common/sfc_efx/base: update EF100 registers definitions Andrew Rybchenko
` (37 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev
Pick up new FW interface definitions.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++++++++-
1 file changed, 1176 insertions(+), 35 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx_regs_mcdi.h b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
index a3c9f076ec..2daf825a36 100644
--- a/drivers/common/sfc_efx/base/efx_regs_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
@@ -492,6 +492,24 @@
*/
#define MAE_FIELD_SUPPORTED_MATCH_MASK 0x5
+/* MAE_CT_VNI_MODE enum: Controls the layout of the VNI input to the conntrack
+ * lookup. (Values are not arbitrary - constrained by table access ABI.)
+ */
+/* enum: The VNI input to the conntrack lookup will be zero. */
+#define MAE_CT_VNI_MODE_ZERO 0x0
+/* enum: The VNI input to the conntrack lookup will be the VNI (VXLAN/Geneve)
+ * or VSID (NVGRE) field from the packet.
+ */
+#define MAE_CT_VNI_MODE_VNI 0x1
+/* enum: The VNI input to the conntrack lookup will be the VLAN ID from the
+ * outermost VLAN tag (in bottom 12 bits; top 12 bits zero).
+ */
+#define MAE_CT_VNI_MODE_1VLAN 0x2
+/* enum: The VNI input to the conntrack lookup will be the VLAN IDs from both
+ * VLAN tags (outermost in bottom 12 bits, innermost in top 12 bits).
+ */
+#define MAE_CT_VNI_MODE_2VLAN 0x3
+
/* MAE_FIELD enum: NB: this enum shares namespace with the support status enum.
*/
/* enum: Source mport upon entering the MAE. */
@@ -617,7 +635,8 @@
/* MAE_MCDI_ENCAP_TYPE enum: Encapsulation type. Defines how the payload will
* be parsed to an inner frame. Other values are reserved. Unknown values
- * should be treated same as NONE.
+ * should be treated same as NONE. (Values are not arbitrary - constrained by
+ * table access ABI.)
*/
#define MAE_MCDI_ENCAP_TYPE_NONE 0x0 /* enum */
/* enum: Don't assume enum aligns with support bitmask... */
@@ -634,6 +653,18 @@
/* enum: Selects the virtual NIC plugged into the MAE switch */
#define MAE_MPORT_END_VNIC 0x2
+/* MAE_COUNTER_TYPE enum: The datapath maintains several sets of counters, each
+ * being associated with a different table. Note that the same counter ID may
+ * be allocated by different counter blocks, so e.g. AR counter 42 is different
+ * from CT counter 42. Generation counts are also type-specific. This value is
+ * also present in the header of streaming counter packets, in the IDENTIFIER
+ * field (see packetiser packet format definitions).
+ */
+/* enum: Action Rule counters - can be referenced in AR response. */
+#define MAE_COUNTER_TYPE_AR 0x0
+/* enum: Conntrack counters - can be referenced in CT response. */
+#define MAE_COUNTER_TYPE_CT 0x1
+
/* MCDI_EVENT structuredef: The structure of an MCDI_EVENT on Siena/EF10/EF100
* platforms
*/
@@ -4547,6 +4578,8 @@
#define MC_CMD_MEDIA_BASE_T 0x6
/* enum: QSFP+. */
#define MC_CMD_MEDIA_QSFP_PLUS 0x7
+/* enum: DSFP. */
+#define MC_CMD_MEDIA_DSFP 0x8
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_OFST 48
#define MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_LEN 4
/* enum: Native clause 22 */
@@ -7823,11 +7856,16 @@
/***********************************/
/* MC_CMD_GET_PHY_MEDIA_INFO
* Read media-specific data from PHY (e.g. SFP/SFP+ module ID information for
- * SFP+ PHYs). The 'media type' can be found via GET_PHY_CFG
- * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid 'page number' input values, and the
- * output data, are interpreted on a per-type basis. For SFP+: PAGE=0 or 1
+ * SFP+ PHYs). The "media type" can be found via GET_PHY_CFG
+ * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid "page number" input values, and the
+ * output data, are interpreted on a per-type basis. For SFP+, PAGE=0 or 1
* returns a 128-byte block read from module I2C address 0xA0 offset 0 or 0x80.
- * Anything else: currently undefined. Locks required: None. Return code: 0.
+ * For QSFP, PAGE=-1 is the lower (unbanked) page. PAGE=2 is the EEPROM and
+ * PAGE=3 is the module limits. For DSFP, module addressing requires a
+ * "BANK:PAGE". Not every bank has the same number of pages. See the Common
+ * Management Interface Specification (CMIS) for further details. A BANK:PAGE
+ * of "0xffff:0xffff" retrieves the lower (unbanked) page. Locks required -
+ * None. Return code - 0.
*/
#define MC_CMD_GET_PHY_MEDIA_INFO 0x4b
#define MC_CMD_GET_PHY_MEDIA_INFO_MSGSET 0x4b
@@ -7839,6 +7877,12 @@
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_LEN 4
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_OFST 0
#define MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_LEN 4
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_LBN 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_WIDTH 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_OFST 0
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN 16
+#define MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_WIDTH 16
/* MC_CMD_GET_PHY_MEDIA_INFO_OUT msgresponse */
#define MC_CMD_GET_PHY_MEDIA_INFO_OUT_LENMIN 5
@@ -9350,6 +9394,8 @@
#define NVRAM_PARTITION_TYPE_FPGA_JUMP 0xb08
/* enum: FPGA Validate XCLBIN */
#define NVRAM_PARTITION_TYPE_FPGA_XCLBIN_VALIDATE 0xb09
+/* enum: FPGA XOCL Configuration information */
+#define NVRAM_PARTITION_TYPE_FPGA_XOCL_CONFIG 0xb0a
/* enum: MUM firmware partition */
#define NVRAM_PARTITION_TYPE_MUM_FIRMWARE 0xc00
/* enum: SUC firmware partition (this is intentionally an alias of
@@ -9427,6 +9473,8 @@
#define NVRAM_PARTITION_TYPE_BUNDLE_LOG 0x1e02
/* enum: Partition for Solarflare gPXE bootrom installed via Bundle update. */
#define NVRAM_PARTITION_TYPE_EXPANSION_ROM_INTERNAL 0x1e03
+/* enum: Partition to store ASN.1 format Bundle Signature for checking. */
+#define NVRAM_PARTITION_TYPE_BUNDLE_SIGNATURE 0x1e04
/* enum: Test partition on SmartNIC system microcontroller (SUC) */
#define NVRAM_PARTITION_TYPE_SUC_TEST 0x1f00
/* enum: System microcontroller access to primary FPGA flash. */
@@ -10051,6 +10099,158 @@
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
#define MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+/* MC_CMD_INIT_EVQ_V3_IN msgrequest: Extended request to specify per-queue
+ * event merge timeouts.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_LEN 556
+/* Size, in entries */
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_OFST 0
+#define MC_CMD_INIT_EVQ_V3_IN_SIZE_LEN 4
+/* Desired instance. Must be set to a specific instance, which is a function
+ * local queue index. The calling client must be the currently-assigned user of
+ * this VI (see MC_CMD_SET_VI_USER).
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_IN_INSTANCE_LEN 4
+/* The initial timer value. The load value is ignored if the timer mode is DIS.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_OFST 8
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_LEN 4
+/* The reload value is ignored in one-shot modes */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_OFST 12
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_LEN 4
+/* tbd */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_LBN 0
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_LBN 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_LBN 2
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_LBN 3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_LBN 4
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_LBN 5
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_LBN 6
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LBN 7
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_WIDTH 4
+/* enum: All initialisation flags specified by host. */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_MANUAL 0x0
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the lowest latency achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LOW_LATENCY 0x1
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the best throughput achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_THROUGHPUT 0x2
+/* enum: MEDFORD only. Certain initialisation flags may be over-ridden by
+ * firmware based on licenses and firmware variant. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_AUTO 0x3
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_OFST 16
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_LBN 11
+#define MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_OFST 20
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_DIS 0x0
+/* enum: Immediate */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_IMMED_START 0x1
+/* enum: Triggered */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_TRIG_START 0x2
+/* enum: Hold-off */
+#define MC_CMD_INIT_EVQ_V3_IN_TMR_INT_HLDOFF 0x3
+/* Target EVQ for wakeups if in wakeup mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_LEN 4
+/* Target interrupt if in interrupting mode (note union with target EVQ). Use
+ * MC_CMD_RESOURCE_INSTANCE_ANY unless a specific one required for test
+ * purposes.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_OFST 24
+#define MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_LEN 4
+/* Event Counter Mode. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_OFST 28
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_LEN 4
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_DIS 0x0
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RX 0x1
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_TX 0x2
+/* enum: Disabled */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RXTX 0x3
+/* Event queue packet count threshold. */
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_OFST 32
+#define MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_LEN 4
+/* 64-bit address of 4k of 4k-aligned host memory buffer */
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LEN 8
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_OFST 36
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LBN 288
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_OFST 40
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LEN 4
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LBN 320
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_WIDTH 32
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
+/* Receive event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_RX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST 548
+#define MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_LEN 4
+/* Transmit event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_TX_MERGE is not set.
+ */
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST 552
+#define MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_LEN 4
+
+/* MC_CMD_INIT_EVQ_V3_OUT msgresponse */
+#define MC_CMD_INIT_EVQ_V3_OUT_LEN 8
+/* Only valid if INTRFLAG was true */
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_OFST 0
+#define MC_CMD_INIT_EVQ_V3_OUT_IRQ_LEN 4
+/* Actual configuration applied on the card */
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAGS_LEN 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_LBN 0
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_LBN 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_LBN 2
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_WIDTH 1
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_OFST 4
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
+#define MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+
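
Usage sketch (editor's illustration, not part of the patch): a driver could program the new per-queue event merge timeouts through the offsets defined above, assuming the usual little-endian MCDI payload layout; put_le32() is a local helper, not an existing MCDI API.

#include <stdint.h>

/* Write a 32-bit little-endian value at a byte offset in an MCDI buffer. */
static void put_le32(uint8_t *buf, unsigned int ofst, uint32_t val)
{
    buf[ofst + 0] = (uint8_t)(val & 0xff);
    buf[ofst + 1] = (uint8_t)((val >> 8) & 0xff);
    buf[ofst + 2] = (uint8_t)((val >> 16) & 0xff);
    buf[ofst + 3] = (uint8_t)((val >> 24) & 0xff);
}

/* 'req' must be sized for the whole INIT_EVQ_V3 request. 0 keeps the
 * firmware default; both fields are ignored unless the corresponding
 * RX/TX merge flags are set elsewhere in the request.
 */
static void init_evq_v3_set_merge_timeouts(uint8_t *req, uint32_t rx_ns,
                                           uint32_t tx_ns)
{
    put_le32(req, MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST, rx_ns);
    put_le32(req, MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST, tx_ns);
}
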
/* QUEUE_CRC_MODE structuredef */
#define QUEUE_CRC_MODE_LEN 1
#define QUEUE_CRC_MODE_MODE_LBN 0
@@ -10256,7 +10456,9 @@
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10360,7 +10562,9 @@
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10493,7 +10697,9 @@
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10639,7 +10845,9 @@
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_NUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MINNUM 0
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM 64
+#define MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_OFST 540
#define MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10878,7 +11086,7 @@
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LEN 4
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LBN 256
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 1
+#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 0
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM 64
#define MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
/* Flags related to Qbb flow control mode. */
@@ -12228,6 +12436,8 @@
* rules inserted by MC_CMD_VNIC_ENCAP_RULE_ADD. (ef100 and later)
*/
#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_MATCHES 0x5
+/* enum: read the supported encapsulation types for the VNIC */
+#define MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_TYPES 0x6
/* MC_CMD_GET_PARSER_DISP_INFO_OUT msgresponse */
#define MC_CMD_GET_PARSER_DISP_INFO_OUT_LENMIN 8
@@ -12336,6 +12546,30 @@
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM 61
#define MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM_MCDI2 253
+/* MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT msgresponse: Returns
+ * the supported encapsulation types for the VNIC
+ */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_LEN 8
+/* The op code OP_GET_SUPPORTED_VNIC_ENCAP_TYPES is returned */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_OFST 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_GET_PARSER_DISP_INFO_IN/OP */
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+
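
As a decoding sketch (assumptions: little-endian MCDI payload, local get_le32() helper), a caller could test individual bits of the returned encapsulation-type mask like this:

#include <stdbool.h>
#include <stdint.h>

/* Read a 32-bit little-endian value at a byte offset in an MCDI buffer. */
static uint32_t get_le32(const uint8_t *buf, unsigned int ofst)
{
    return (uint32_t)buf[ofst] |
           ((uint32_t)buf[ofst + 1] << 8) |
           ((uint32_t)buf[ofst + 2] << 16) |
           ((uint32_t)buf[ofst + 3] << 24);
}

static bool vnic_encap_vxlan_supported(const uint8_t *resp)
{
    uint32_t mask = get_le32(resp,
        MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST);

    return (mask >>
            MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN) & 1;
}
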
/***********************************/
/* MC_CMD_PARSER_DISP_RW
@@ -16236,6 +16470,9 @@
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* MC_CMD_GET_CAPABILITIES_V8_OUT msgresponse */
#define MC_CMD_GET_CAPABILITIES_V8_OUT_LEN 160
@@ -16734,6 +16971,9 @@
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17246,6 +17486,9 @@
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -17793,6 +18036,9 @@
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
#define MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
/* These bits are reserved for communicating test-specific capabilities to
* host-side test software. All production drivers should treat this field as
* opaque.
@@ -19900,6 +20146,18 @@
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_OFST 4
#define MC_CMD_GET_FUNCTION_INFO_OUT_VF_LEN 4
+/* MC_CMD_GET_FUNCTION_INFO_OUT_V2 msgresponse */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN 12
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST 0
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_LEN 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST 4
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_LEN 4
+/* Values from PCIE_INTERFACE enumeration. For NICs with a single interface, or
+ * in the case of a V1 response, this should be HOST_PRIMARY.
+ */
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST 8
+#define MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_LEN 4
+
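
The V1/V2 distinction above can be keyed off the response length. A sketch, reusing get_le32() from the earlier sketch; struct function_info is a hypothetical holder, not an existing type:

#include <stddef.h>
#include <stdint.h>

uint32_t get_le32(const uint8_t *buf, unsigned int ofst); /* earlier sketch */

struct function_info {
    uint32_t pf;
    uint32_t vf;
    uint32_t intf;  /* PCIE_INTERFACE value, valid only if intf_valid != 0 */
    int intf_valid; /* 0 for a V1-sized response: treat as host primary */
};

static void decode_function_info(const uint8_t *resp, size_t resp_len,
                                 struct function_info *info)
{
    info->pf = get_le32(resp, MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST);
    info->vf = get_le32(resp, MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST);
    info->intf_valid = resp_len >= MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN;
    if (info->intf_valid)
        info->intf = get_le32(resp, MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST);
    else
        info->intf = 0;
}
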
/***********************************/
/* MC_CMD_ENABLE_OFFLINE_BIST
@@ -25682,6 +25940,9 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_LBN 6
#define MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_LBN 7
+#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_WIDTH 1
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_LBN 7
#define MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_WIDTH 1
@@ -25691,6 +25952,12 @@
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_OFST 0
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_LBN 9
#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_LBN 10
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_WIDTH 1
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_OFST 0
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_LBN 11
+#define MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_WIDTH 1
/* MC_CMD_GET_RX_PREFIX_ID_OUT msgresponse */
#define MC_CMD_GET_RX_PREFIX_ID_OUT_LENMIN 8
@@ -25736,9 +26003,12 @@
#define RX_PREFIX_FIELD_INFO_PARTIAL_TSTAMP 0x4 /* enum */
#define RX_PREFIX_FIELD_INFO_RSS_HASH 0x5 /* enum */
#define RX_PREFIX_FIELD_INFO_USER_MARK 0x6 /* enum */
+#define RX_PREFIX_FIELD_INFO_INGRESS_MPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_INGRESS_VPORT 0x7 /* enum */
#define RX_PREFIX_FIELD_INFO_CSUM_FRAME 0x8 /* enum */
#define RX_PREFIX_FIELD_INFO_VLAN_STRIP_TCI 0x9 /* enum */
+#define RX_PREFIX_FIELD_INFO_VLAN_STRIPPED 0xa /* enum */
+#define RX_PREFIX_FIELD_INFO_VSWITCH_STATUS 0xb /* enum */
#define RX_PREFIX_FIELD_INFO_TYPE_LBN 24
#define RX_PREFIX_FIELD_INFO_TYPE_WIDTH 8
@@ -26063,6 +26333,10 @@
#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_LINK 0x5
/* enum: Read internal link configuration. */
#define MC_CMD_FPGA_IN_OP_GET_INTERNAL_LINK 0x6
+/* enum: Get MAC statistics of FPGA external port. */
+#define MC_CMD_FPGA_IN_OP_GET_MAC_STATS 0x7
+/* enum: Set configuration on internal FPGA MAC. */
+#define MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC 0x8
/* MC_CMD_FPGA_OP_GET_VERSION_IN msgrequest: Get the FPGA version string. A
* free-format string is returned in response to this command. Any checks on
@@ -26206,6 +26480,87 @@
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_OFST 4
#define MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_LEN 4
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_IN msgrequest: Get FPGA external port MAC
+ * statistics.
+ */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_LEN 4
+/* Sub-command code. Must be OP_GET_MAC_STATS. */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_LEN 4
+
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_OUT msgresponse */
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX 252
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LEN(num) (4+8*(num))
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(len) (((len)-4)/8)
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_OFST 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LBN 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_OFST 8
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LEN 4
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LBN 64
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_WIDTH 32
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MINNUM 0
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM 31
+#define MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM_MCDI2 127
+#define MC_CMD_FPGA_MAC_TX_TOTAL_PACKETS 0x0 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_BYTES 0x1 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_PACKETS 0x2 /* enum */
+#define MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_BYTES 0x3 /* enum */
+#define MC_CMD_FPGA_MAC_TX_BAD_FCS 0x4 /* enum */
+#define MC_CMD_FPGA_MAC_TX_PAUSE 0x5 /* enum */
+#define MC_CMD_FPGA_MAC_TX_USER_PAUSE 0x6 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_PACKETS 0x7 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_BYTES 0x8 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_PACKETS 0x9 /* enum */
+#define MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_BYTES 0xa /* enum */
+#define MC_CMD_FPGA_MAC_RX_BAD_FCS 0xb /* enum */
+#define MC_CMD_FPGA_MAC_RX_PAUSE 0xc /* enum */
+#define MC_CMD_FPGA_MAC_RX_USER_PAUSE 0xd /* enum */
+#define MC_CMD_FPGA_MAC_RX_UNDERSIZE 0xe /* enum */
+#define MC_CMD_FPGA_MAC_RX_OVERSIZE 0xf /* enum */
+#define MC_CMD_FPGA_MAC_RX_FRAMING_ERR 0x10 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_UNCORRECTED_ERRORS 0x11 /* enum */
+#define MC_CMD_FPGA_MAC_FEC_CORRECTED_ERRORS 0x12 /* enum */
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN msgrequest: Configures the internal port
+ * MAC on the FPGA.
+ */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN 20
+/* Sub-command code. Must be OP_SET_INTERNAL_MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_LEN 4
+/* Select which parameters to configure. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_LEN 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN 0
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_LBN 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_WIDTH 1
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_OFST 4
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_LBN 2
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_WIDTH 1
+/* The MTU to be programmed into the MAC. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST 8
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_LEN 4
+/* Drain Tx FIFO */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_OFST 12
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_LEN 4
+/* flow control configuration. See MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL. */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_OFST 16
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_LEN 4
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT msgresponse */
+#define MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT_LEN 0
+
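
A minimal sketch of building an OP_SET_INTERNAL_MAC request that only reprograms the MTU (put_le32() as in the earlier sketch; the request buffer is assumed to be MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN bytes and zeroed):

#include <stdint.h>

void put_le32(uint8_t *buf, unsigned int ofst, uint32_t val); /* earlier sketch */

/* Only the MTU is reprogrammed because only CFG_MTU is set in CONTROL. */
static void fpga_set_internal_mac_mtu(uint8_t *req, uint32_t mtu)
{
    put_le32(req, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST,
             MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC);
    put_le32(req, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST,
             1u << MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN);
    put_le32(req, MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST, mtu);
}
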
/***********************************/
/* MC_CMD_EXTERNAL_MAE_GET_LINK_MODE
@@ -26483,6 +26838,12 @@
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_OFST 29
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_LBN 0
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_LBN 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_WIDTH 1
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_OFST 29
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_LBN 2
+#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_WIDTH 1
/* Only if MATCH_DST_PORT is set. Port number as bytes in network order. */
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_OFST 30
#define MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_LEN 2
@@ -26544,6 +26905,257 @@
#define UUID_NODE_LBN 80
#define UUID_NODE_WIDTH 48
+
+/***********************************/
+/* MC_CMD_PLUGIN_ALLOC
+ * Create a handle to a datapath plugin's extension. This involves finding a
+ * currently-loaded plugin offering the given functionality (as identified by
+ * the UUID) and allocating a handle to track the usage of it. Plugin
+ * functionality is identified by 'extension' rather than any other identifier
+ * so that a single plugin bitfile may offer more than one piece of independent
+ * functionality. If two bitfiles are loaded which both offer the same
+ * extension, then the metadata is interrogated further to determine which is
+ * the newest and that is the one opened. See SF-123625-SW for architectural
+ * detail on datapath plugins.
+ */
+#define MC_CMD_PLUGIN_ALLOC 0x1ad
+#define MC_CMD_PLUGIN_ALLOC_MSGSET 0x1ad
+#undef MC_CMD_0x1ad_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ad_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_ALLOC_IN msgrequest */
+#define MC_CMD_PLUGIN_ALLOC_IN_LEN 24
+/* The functionality requested of the plugin, as a UUID structure */
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN 16
+/* Additional options for opening the handle */
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_LBN 0
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_WIDTH 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_OFST 16
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_WIDTH 1
+/* Load the extension only if it is in the specified administrative group.
+ * Specify ANY to load the extension wherever it is found (if there are
+ * multiple choices then the extension with the highest MINOR_VER/PATCH_VER
+ * will be loaded). See MC_CMD_PLUGIN_GET_META_GLOBAL for a description of
+ * administrative groups.
+ */
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST 20
+#define MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN 2
+/* enum: Load the extension from any ADMIN_GROUP. */
+#define MC_CMD_PLUGIN_ALLOC_IN_ANY 0xffff
+/* Reserved */
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_OFST 22
+#define MC_CMD_PLUGIN_ALLOC_IN_RESERVED_LEN 2
+
+/* MC_CMD_PLUGIN_ALLOC_OUT msgresponse */
+#define MC_CMD_PLUGIN_ALLOC_OUT_LEN 4
+/* Unique identifier of this usage */
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_LEN 4
+
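
A sketch of filling an MC_CMD_PLUGIN_ALLOC request that accepts any administrative group (put_le32() as before; put_le16() is another local helper; the UUID bytes identify the extension being requested):

#include <stdint.h>
#include <string.h>

void put_le32(uint8_t *buf, unsigned int ofst, uint32_t val); /* earlier sketch */

static void put_le16(uint8_t *buf, unsigned int ofst, uint16_t val)
{
    buf[ofst + 0] = (uint8_t)(val & 0xff);
    buf[ofst + 1] = (uint8_t)(val >> 8);
}

/* 'req' is assumed to be MC_CMD_PLUGIN_ALLOC_IN_LEN bytes and zeroed. */
static void plugin_alloc_build(uint8_t *req, const uint8_t uuid[16])
{
    memcpy(req + MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST, uuid,
           MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN);
    /* No INFO_ONLY/ALLOW_DISABLED flags; load from any administrative group. */
    put_le32(req, MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST, 0);
    put_le16(req, MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST,
             MC_CMD_PLUGIN_ALLOC_IN_ANY);
}
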
+
+/***********************************/
+/* MC_CMD_PLUGIN_FREE
+ * Delete a handle to a plugin's extension.
+ */
+#define MC_CMD_PLUGIN_FREE 0x1ae
+#define MC_CMD_PLUGIN_FREE_MSGSET 0x1ae
+#undef MC_CMD_0x1ae_PRIVILEGE_CTG
+
+#define MC_CMD_0x1ae_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_FREE_IN msgrequest */
+#define MC_CMD_PLUGIN_FREE_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_FREE_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_FREE_OUT msgresponse */
+#define MC_CMD_PLUGIN_FREE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_GLOBAL
+ * Returns the global metadata applying to the whole plugin extension. See the
+ * other metadata calls for subtypes of data.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL 0x1af
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_MSGSET 0x1af
+#undef MC_CMD_0x1af_PRIVILEGE_CTG
+
+#define MC_CMD_0x1af_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_LEN 36
+/* Unique identifier of this plugin extension. This is identical to the value
+ * which was requested when the handle was allocated.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_LEN 16
+/* semver sub-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_OFST 16
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_LEN 2
+/* semver micro-version of this plugin extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_OFST 18
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_LEN 2
+/* Number of different messages which can be sent to this extension */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_OFST 20
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_LEN 4
+/* Byte offset within the VI window of the plugin's mapped CSR window. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_OFST 24
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_LEN 2
+/* Number of bytes mapped through to the plugin's CSRs. 0 if that feature was
+ * not requested by the plugin (in which case MAPPED_CSR_OFFSET and
+ * MAPPED_CSR_FLAGS are ignored).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_OFST 26
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_LEN 2
+/* Flags indicating how to perform the CSR window mapping. */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_LBN 0
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_WIDTH 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_OFST 28
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_LBN 1
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_WIDTH 1
+/* Identifier of the set of extensions which all change state together.
+ * Extensions having the same ADMIN_GROUP will always load and unload at the
+ * same time. ADMIN_GROUP values themselves are arbitrary (but they contain a
+ * generation number as an implementation detail to ensure that they're not
+ * reused rapidly).
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_OFST 32
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_LEN 1
+/* Bitshift in MC_CMD_DEVEL_CLIENT_PRIVILEGE_MODIFY's MASK parameters
+ * corresponding to this extension, i.e. set the bit 1<<PRIVILEGE_BIT to permit
+ * access to this extension.
+ */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_OFST 33
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_LEN 1
+/* Reserved */
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_OFST 34
+#define MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_LEN 2
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER
+ * Returns metadata supplied by the plugin author which describes this
+ * extension in a human-readable way. Contrast with
+ * MC_CMD_PLUGIN_GET_META_GLOBAL, which returns information needed for software
+ * to operate.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER 0x1b0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_MSGSET 0x1b0
+#undef MC_CMD_0x1b0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b0_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_LEN 12
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_LEN 4
+/* Category of data to return */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_LEN 4
+/* enum: Top-level information about the extension. The returned data is an
+ * array of key/value pairs using the keys in RFC5013 (Dublin Core) to describe
+ * the extension. The data is a back-to-back list of zero-terminated strings;
+ * the even-numbered fields (0,2,4,...) are keys and their following odd-
+ * numbered fields are the corresponding values. Both keys and values are
+ * nominally UTF-8. Per RFC5013, the same key may be repeated any number of
+ * times. Note that all information (including the key/value structure itself
+ * and the UTF-8 encoding) may have been provided by the plugin author, so
+ * callers must be cautious about parsing it. Callers should parse only the
+ * top-level structure to separate out the keys and values; the contents of the
+ * values are not expected to be machine-readable.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_EXTENSION_KVS 0x0
+/* Byte position of the data to be returned within the full data block of the
+ * given SUBTYPE.
+ */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_OFST 8
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMIN 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LEN(num) (4+1*(num))
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_NUM(len) (((len)-4)/1)
+/* Full length of the data block of the requested SUBTYPE, in bytes. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_LEN 4
+/* The information requested by SUBTYPE. */
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_OFST 4
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM 248
+#define MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM_MCDI2 1016
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_MSG
+ * Returns the simple metadata for a specific plugin request message. This
+ * supplies information necessary for the host to know how to build an
+ * MC_CMD_PLUGIN_REQ request.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG 0x1b1
+#define MC_CMD_PLUGIN_GET_META_MSG_MSGSET 0x1b1
+#undef MC_CMD_0x1b1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b1_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_MSG_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_LEN 8
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_LEN 4
+/* Unique message ID to obtain */
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_IN_ID_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_MSG_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_LEN 44
+/* Unique message ID. This is the same value as the input parameter; it exists
+ * to allow future MCDI extensions which enumerate all messages.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_OFST 0
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_LEN 4
+/* Packed index number of this message, assigned by the MC to give each message
+ * a unique ID in an array to allow for more efficient storage/management.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_OFST 4
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_LEN 4
+/* Short human-readable codename for this message. This is conventionally
+ * formatted as a C identifier in the basic ASCII character set with any spare
+ * bytes at the end set to 0, however this convention is not enforced by the MC
+ * so consumers must check for all potential malformations before using it for
+ * a trusted purpose.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_OFST 8
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_LEN 32
+/* Number of bytes of data which must be passed from the host kernel to the MC
+ * for this message's payload, and which are passed back again in the response.
+ * The MC's plugin metadata loader will have validated that the number of bytes
+ * specified here will fit in to MC_CMD_PLUGIN_REQ_IN_DATA in a single MCDI
+ * message.
+ */
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_OFST 40
+#define MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_LEN 4
+
/* PLUGIN_EXTENSION structuredef: Used within MC_CMD_PLUGIN_GET_ALL to describe
* an individual extension.
*/
@@ -26561,6 +27173,100 @@
#define PLUGIN_EXTENSION_RESERVED_LBN 137
#define PLUGIN_EXTENSION_RESERVED_WIDTH 23
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_ALL
+ * Returns a list of all plugin extensions currently loaded and available. The
+ * UUIDs returned can be passed to MC_CMD_PLUGIN_ALLOC in order to obtain more
+ * detailed metadata via the MC_CMD_PLUGIN_GET_META_* family of requests. The
+ * ADMIN_GROUP field collects how extensions are grouped in to units which are
+ * loaded/unloaded together; extensions with the same value are in the same
+ * group.
+ */
+#define MC_CMD_PLUGIN_GET_ALL 0x1b2
+#define MC_CMD_PLUGIN_GET_ALL_MSGSET 0x1b2
+#undef MC_CMD_0x1b2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b2_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_ALL_IN msgrequest */
+#define MC_CMD_PLUGIN_GET_ALL_IN_LEN 4
+/* Additional options for querying. Note that if neither FLAG_INCLUDE_ENABLED
+ * nor FLAG_INCLUDE_DISABLED are specified then the result set will be empty.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_LEN 4
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_LBN 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_WIDTH 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_LBN 1
+#define MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_WIDTH 1
+
+/* MC_CMD_PLUGIN_GET_ALL_OUT msgresponse */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX 240
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_GET_ALL_OUT_LEN(num) (0+20*(num))
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_NUM(len) (((len)-0)/20)
+/* The list of available plugin extensions, as an array of PLUGIN_EXTENSION
+ * structs.
+ */
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_OFST 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_LEN 20
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MINNUM 0
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM 12
+#define MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM_MCDI2 51
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_REQ
+ * Send a command to a plugin. A plugin may define an arbitrary number of
+ * 'messages' which it allows applications on the host system to send, each
+ * identified by a 32-bit ID.
+ */
+#define MC_CMD_PLUGIN_REQ 0x1b3
+#define MC_CMD_PLUGIN_REQ_MSGSET 0x1b3
+#undef MC_CMD_0x1b3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1b3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_REQ_IN msgrequest */
+#define MC_CMD_PLUGIN_REQ_IN_LENMIN 8
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_IN_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_IN_LEN(num) (8+1*(num))
+#define MC_CMD_PLUGIN_REQ_IN_DATA_NUM(len) (((len)-8)/1)
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST 0
+#define MC_CMD_PLUGIN_REQ_IN_HANDLE_LEN 4
+/* Message ID defined by the plugin author */
+#define MC_CMD_PLUGIN_REQ_IN_ID_OFST 4
+#define MC_CMD_PLUGIN_REQ_IN_ID_LEN 4
+/* Data blob being the parameter to the message. This must be of the length
+ * specified by
+ * MC_CMD_PLUGIN_GET_META_MSG/MC_CMD_PLUGIN_GET_META_MSG_OUT/DATA_SIZE.
+ */
+#define MC_CMD_PLUGIN_REQ_IN_DATA_OFST 8
+#define MC_CMD_PLUGIN_REQ_IN_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM 244
+#define MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM_MCDI2 1012
+
+/* MC_CMD_PLUGIN_REQ_OUT msgresponse */
+#define MC_CMD_PLUGIN_REQ_OUT_LENMIN 0
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX 252
+#define MC_CMD_PLUGIN_REQ_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_PLUGIN_REQ_OUT_LEN(num) (0+1*(num))
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_NUM(len) (((len)-0)/1)
+/* The input data, as transformed and/or updated by the plugin's eBPF. Will be
+ * the same size as the input DATA parameter.
+ */
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_OFST 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_LEN 1
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MINNUM 0
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM 252
+#define MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM_MCDI2 1020
+
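
A sketch of assembling an MC_CMD_PLUGIN_REQ request, where data_len is the DATA_SIZE reported by MC_CMD_PLUGIN_GET_META_MSG for this message ID (put_le32() as in the earlier sketch):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

void put_le32(uint8_t *buf, unsigned int ofst, uint32_t val); /* earlier sketch */

/* Returns the total request length to pass to the MCDI transport. */
static size_t plugin_req_build(uint8_t *req, uint32_t handle, uint32_t msg_id,
                               const void *data, size_t data_len)
{
    put_le32(req, MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST, handle);
    put_le32(req, MC_CMD_PLUGIN_REQ_IN_ID_OFST, msg_id);
    memcpy(req + MC_CMD_PLUGIN_REQ_IN_DATA_OFST, data, data_len);
    return MC_CMD_PLUGIN_REQ_IN_LEN(data_len);
}
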
/* DESC_ADDR_REGION structuredef: Describes a contiguous region of DESC_ADDR
* space that maps to a contiguous region of TRGT_ADDR space. Addresses
* DESC_ADDR in the range [DESC_ADDR_BASE:DESC_ADDR_BASE + 1 <<
@@ -27219,6 +27925,38 @@
#define MC_CMD_VIRTIO_TEST_FEATURES_OUT_LEN 0
+/***********************************/
+/* MC_CMD_VIRTIO_GET_CAPABILITIES
+ * Get virtio capabilities supported by the device. Returns general virtio
+ * capabilities and limitations of the hardware / firmware implementation
+ * (hardware device as a whole), rather than that of individual configured
+ * virtio devices. At present, only the absolute maximum number of queues
+ * allowed on multi-queue devices is returned. Response is expected to be
+ * extended as necessary in the future.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES 0x1d3
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_MSGSET 0x1d3
+#undef MC_CMD_0x1d3_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_IN msgrequest */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN 4
+/* Type of device to get capabilities for. Matches the device id as defined by
+ * the virtio spec.
+ */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN 4
+/* Enum values, see field(s): */
+/* MC_CMD_VIRTIO_GET_FEATURES/MC_CMD_VIRTIO_GET_FEATURES_IN/DEVICE_ID */
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_OUT msgresponse */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN 4
+/* Maximum number of queues supported for a single device instance */
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST 0
+#define MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_LEN 4
+
+
/***********************************/
/* MC_CMD_VIRTIO_INIT_QUEUE
* Create a virtio virtqueue. Fails with EALREADY if the queue already exists.
@@ -27490,6 +28228,24 @@
#define PCIE_FUNCTION_INTF_LBN 32
#define PCIE_FUNCTION_INTF_WIDTH 32
+/* QUEUE_ID structuredef: Structure representing an absolute queue identifier
+ * (absolute VI number + VI relative queue number). On Keystone, a VI can
+ * contain multiple queues (at present, up to 2), each with separate controls
+ * for direction. This structure is required to uniquely identify the absolute
+ * source queue for descriptor proxy functions.
+ */
+#define QUEUE_ID_LEN 4
+/* Absolute VI number */
+#define QUEUE_ID_ABS_VI_OFST 0
+#define QUEUE_ID_ABS_VI_LEN 2
+#define QUEUE_ID_ABS_VI_LBN 0
+#define QUEUE_ID_ABS_VI_WIDTH 16
+/* Relative queue number within the VI */
+#define QUEUE_ID_REL_QUEUE_LBN 16
+#define QUEUE_ID_REL_QUEUE_WIDTH 1
+#define QUEUE_ID_RESERVED_LBN 17
+#define QUEUE_ID_RESERVED_WIDTH 15
+
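
The LBN/WIDTH pairs above are enough to pack and unpack a QUEUE_ID dword; a small sketch:

#include <stdint.h>

static uint32_t queue_id_pack(unsigned int abs_vi, unsigned int rel_queue)
{
    return ((uint32_t)abs_vi << QUEUE_ID_ABS_VI_LBN) |
           ((uint32_t)rel_queue << QUEUE_ID_REL_QUEUE_LBN);
}

static unsigned int queue_id_rel_queue(uint32_t qid)
{
    return (qid >> QUEUE_ID_REL_QUEUE_LBN) &
           ((1u << QUEUE_ID_REL_QUEUE_WIDTH) - 1);
}
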
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_CREATE
@@ -28088,7 +28844,11 @@
* Enable descriptor proxying for function into target event queue. Returns VI
* allocation info for the proxy source function, so that the caller can map
* absolute VI IDs from descriptor proxy events back to the originating
- * function.
+ * function. This is a legacy function that only supports single queue proxy
+ * devices. It is also limited in that it can only be called after host driver
+ * attach (once VI allocation is known) and will return MC_CMD_ERR_ENOTCONN
+ * otherwise. For new code, see MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE which
+ * supports multi-queue devices and has no dependency on host driver attach.
*/
#define MC_CMD_DESC_PROXY_FUNC_ENABLE 0x178
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_MSGSET 0x178
@@ -28119,9 +28879,46 @@
#define MC_CMD_DESC_PROXY_FUNC_ENABLE_OUT_VI_BASE_LEN 4
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE
+ * Enable descriptor proxying for a source queue on a host function into target
+ * event queue. Source queue number is a relative virtqueue number on the
+ * source function (0 to max_virtqueues-1). For a multi-queue device, the
+ * caller must enable all source queues individually. To retrieve absolute VI
+ * information for the source function (so that VI IDs from descriptor proxy
+ * events can be mapped back to source function / queue) see
+ * MC_CMD_DESC_PROXY_GET_VI_INFO.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE 0x1d0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_MSGSET 0x1d0
+#undef MC_CMD_0x1d0_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d0_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN 12
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to enable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+/* Descriptor proxy sink queue (caller function relative). Must be an
+ * extended width event queue.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST 8
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT_LEN 0
+
+
/***********************************/
/* MC_CMD_DESC_PROXY_FUNC_DISABLE
- * Disable descriptor proxying for function
+ * Disable descriptor proxying for function. For multi-queue functions,
+ * disables all queues.
*/
#define MC_CMD_DESC_PROXY_FUNC_DISABLE 0x179
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_MSGSET 0x179
@@ -28141,6 +28938,77 @@
#define MC_CMD_DESC_PROXY_FUNC_DISABLE_OUT_LEN 0
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE
+ * Disable descriptor proxying for a specific source queue on a function.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE 0x1d1
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_MSGSET 0x1d1
+#undef MC_CMD_0x1d1_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d1_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN msgrequest */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_LEN 8
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to disable proxying on */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_DESC_PROXY_GET_VI_INFO
+ * Returns absolute VI allocation information for the descriptor proxy source
+ * function referenced by HANDLE, so that the caller can map absolute VI IDs
+ * from descriptor proxy events back to the originating function and queue. The
+ * call is only valid after the host driver for the source function has
+ * attached (after receiving a driver attach event for the descriptor proxy
+ * function) and will fail with ENOTCONN otherwise.
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO 0x1d2
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_MSGSET 0x1d2
+#undef MC_CMD_0x1d2_PRIVILEGE_CTG
+
+#define MC_CMD_0x1d2_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_GET_VI_INFO_IN msgrequest */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_LEN 4
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_OFST 0
+#define MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT msgresponse */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMIN 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX 252
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX_MCDI2 1020
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(len) (((len)-0)/4)
+/* VI information (VI ID + VI relative queue number) for each of the source
+ * queues (in order from 0 to max_virtqueues-1), as array of QUEUE_ID
+ * structures.
+ */
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN 4
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MINNUM 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM 63
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM_MCDI2 255
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_OFST 0
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_LEN 2
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN 16
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_WIDTH 1
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_LBN 17
+#define MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_WIDTH 15
+
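
A sketch of walking the returned VI_MAP array (get_le32() as in the earlier sketch; printing is only for illustration):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

uint32_t get_le32(const uint8_t *buf, unsigned int ofst); /* earlier sketch */

static void print_vi_map(const uint8_t *resp, size_t resp_len)
{
    unsigned int n = MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(resp_len);
    unsigned int i;

    for (i = 0; i < n; i++) {
        uint32_t entry = get_le32(resp,
            MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST +
            i * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN);
        unsigned int abs_vi = entry & 0xffff; /* ABS_VI occupies bits 0..15 */
        unsigned int rel_q = (entry >>
            MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN) & 1;

        printf("source queue %u -> absolute VI %u, relative queue %u\n",
               i, abs_vi, rel_q);
    }
}
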
+
/***********************************/
/* MC_CMD_GET_ADDR_SPC_ID
* Get Address space identifier for use in mem2mem descriptors for a given
@@ -29384,9 +30252,12 @@
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_OFST 4
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_LBN 3
#define MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
-/* The total number of counters available to allocate. */
+/* Deprecated alias for AR_COUNTERS. */
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_OFST 8
#define MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_LEN 4
/* The total number of counters lists available to allocate. A value of zero
* indicates that counter lists are not supported by the NIC. (But single
* counters may still be.)
@@ -29429,6 +30300,87 @@
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_OFST 48
#define MC_CMD_MAE_GET_CAPS_OUT_API_VER_LEN 4
+/* MC_CMD_MAE_GET_CAPS_V2_OUT msgresponse */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_LEN 60
+/* The number of field IDs that the NIC supports. Any field with an ID greater
+ * than or equal to the value returned in this field must be treated as having
+ * a support level of MAE_FIELD_UNSUPPORTED in all requests.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_OFST 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+/* Deprecated alias for AR_COUNTERS. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_OFST 8
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_LEN 4
+/* The total number of counter lists available to allocate. A value of zero
+ * indicates that counter lists are not supported by the NIC. (But single
+ * counters may still be.)
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_OFST 12
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_LEN 4
+/* The total number of encap header structures available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_OFST 16
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_LEN 4
+/* Reserved. Should be zero. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_OFST 20
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_LEN 4
+/* The total number of action sets available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_OFST 24
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_LEN 4
+/* The total number of action set lists available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_OFST 28
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_LEN 4
+/* The total number of outer rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_OFST 32
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_LEN 4
+/* The total number of action rules available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_OFST 36
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_LEN 4
+/* The number of priorities available for ACTION_RULE filters. It is invalid to
+ * install a MATCH_ACTION filter with a priority number >= ACTION_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_OFST 40
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_LEN 4
+/* The number of priorities available for OUTER_RULE filters. It is invalid to
+ * install an OUTER_RULE filter with a priority number >= OUTER_PRIOS.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_OFST 44
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_LEN 4
+/* MAE API major version. Currently 1. If this field is not present in the
+ * response (i.e. response shorter than 384 bits), then its value is zero. If
+ * the value does not match the client's expectations, the client should raise
+ * a fatal error.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_OFST 48
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_LEN 4
+/* Mask of supported counter types. Each bit position corresponds to a value of
+ * the MAE_COUNTER_TYPE enum. If this field is missing (i.e. V1 response),
+ * clients must assume that only AR counters are supported (i.e.
+ * COUNTER_TYPES_SUPPORTED==0x1). See also
+ * MC_CMD_MAE_COUNTERS_STREAM_START/COUNTER_TYPES_MASK.
+ */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST 52
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN 4
+/* The total number of conntrack counters available to allocate. */
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_OFST 56
+#define MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_LEN 4
+
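
The fallback described for a missing COUNTER_TYPES_SUPPORTED field can be keyed off the response length; a sketch reusing get_le32():

#include <stddef.h>
#include <stdint.h>

uint32_t get_le32(const uint8_t *buf, unsigned int ofst); /* earlier sketch */

/* Fall back to AR-only (0x1) when the firmware returned a V1-sized response. */
static uint32_t mae_caps_counter_types(const uint8_t *resp, size_t resp_len)
{
    if (resp_len < MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST +
                   MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN)
        return 0x1;

    return get_le32(resp, MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST);
}
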
/***********************************/
/* MC_CMD_MAE_GET_AR_CAPS
@@ -29495,8 +30447,8 @@
/***********************************/
/* MC_CMD_MAE_COUNTER_ALLOC
- * Allocate match-action-engine counters, which can be referenced in Action
- * Rules.
+ * Allocate match-action-engine counters, which can be referenced in various
+ * tables.
*/
#define MC_CMD_MAE_COUNTER_ALLOC 0x143
#define MC_CMD_MAE_COUNTER_ALLOC_MSGSET 0x143
@@ -29504,12 +30456,25 @@
#define MC_CMD_0x143_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_LEN 4
/* The number of counters that the driver would like allocated */
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTER_ALLOC_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_LEN 8
+/* The number of counters that the driver would like allocated */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_LEN 4
+/* Which type of counter to allocate. */
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_OFST 4
+#define MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_ALLOC_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX 252
@@ -29518,7 +30483,8 @@
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. Packets with generation count >= GENERATION_COUNT will
* contain valid counter values for counter IDs allocated in this call, unless
- * the counter values are zero and zero squash is enabled.
+ * the counter values are zero and zero squash is enabled. Note that there is
+ * an independent GENERATION_COUNT object per counter type.
*/
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_LEN 4
@@ -29548,7 +30514,9 @@
#define MC_CMD_0x144_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMIN 8
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX 132
#define MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2 132
@@ -29564,6 +30532,23 @@
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM 32
#define MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* MC_CMD_MAE_COUNTER_FREE_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_LEN 136
+/* The number of counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_LEN 4
+/* An array containing the counter IDs to be freed. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_OFST 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_LEN 4
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MINNUM 1
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM 32
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* Which type of counter to free. */
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_OFST 132
+#define MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_LEN 4
+/* Enum values, see field(s): */
+/* MAE_COUNTER_TYPE */
+
/* MC_CMD_MAE_COUNTER_FREE_OUT msgresponse */
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN 12
#define MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX 136
@@ -29572,11 +30557,13 @@
#define MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_NUM(len) (((len)-8)/4)
/* Generation count. A packet with generation count == GENERATION_COUNT will
* contain the final values for these counter IDs, unless the counter values
- * are zero and zero squash is enabled. Receiving a packet with generation
- * count > GENERATION_COUNT guarantees that no more values will be written for
- * these counters. If values for these counter IDs are present, the counter ID
- * has been reallocated. A counter ID will not be reallocated within a single
- * read cycle as this would merge increments from the 'old' and 'new' counters.
+ * are zero and zero squash is enabled. Note that the GENERATION_COUNT value is
+ * specific to the COUNTER_TYPE (IDENTIFIER field in packet header). Receiving
+ * a packet with generation count > GENERATION_COUNT guarantees that no more
+ * values will be written for these counters. If values for these counter IDs
+ * are present, the counter ID has been reallocated. A counter ID will not be
+ * reallocated within a single read cycle as this would merge increments from
+ * the 'old' and 'new' counters.
*/
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_LEN 4
@@ -29616,7 +30603,9 @@
#define MC_CMD_0x151_PRIVILEGE_CTG SRIOV_CTG_MAE
-/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest */
+/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest: Using V1 is equivalent to V2
+ * with COUNTER_TYPES_MASK=0x1 (i.e. AR counters only).
+ */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN 8
/* The RxQ to write packets to. */
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_QID_OFST 0
@@ -29634,6 +30623,35 @@
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_LBN 1
#define MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_WIDTH 1
+/* MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN msgrequest */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN 12
+/* The RxQ to write packets to. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_LEN 2
+/* Maximum size in bytes of packets that may be written to the RxQ. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST 2
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_LEN 2
+/* Optional flags. */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_LBN 0
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_WIDTH 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_OFST 4
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_LBN 1
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_WIDTH 1
+/* Mask of which counter types should be reported. Each bit position
+ * corresponds to a value of the MAE_COUNTER_TYPE enum. For example a value of
+ * 0x3 requests both AR and CT counters. A value of zero is invalid. Counter
+ * types not selected by the mask value won't be included in the stream. If a
+ * client wishes to change which counter types are reported, it must first call
+ * MAE_COUNTERS_STREAM_STOP, then restart it with the new mask value.
+ * Requesting a counter type which isn't supported by firmware (reported in
+ * MC_CMD_MAE_GET_CAPS/COUNTER_TYPES_SUPPORTED) will result in ENOTSUP.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST 8
+#define MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_LEN 4
+
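
A sketch of building the V2 start request for both AR and CT counters (byte-order helpers as in the earlier sketches; the request buffer is assumed zeroed, so FLAGS stays clear):

#include <stdint.h>

void put_le32(uint8_t *buf, unsigned int ofst, uint32_t val); /* earlier sketches */
void put_le16(uint8_t *buf, unsigned int ofst, uint16_t val); /* earlier sketches */

/* 'req' is assumed to be MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN bytes.
 * types_mask is a bitmask of MAE_COUNTER_TYPE values, e.g. 0x3 for AR + CT.
 */
static void mae_counters_stream_start_v2_build(uint8_t *req, uint16_t rxq,
                                               uint16_t packet_size,
                                               uint32_t types_mask)
{
    put_le16(req, MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST, rxq);
    put_le16(req, MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST,
             packet_size);
    put_le32(req, MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST,
             types_mask);
}
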
/* MC_CMD_MAE_COUNTERS_STREAM_START_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN 4
#define MC_CMD_MAE_COUNTERS_STREAM_START_OUT_FLAGS_OFST 0
@@ -29661,14 +30679,32 @@
/* MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT msgresponse */
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN 4
-/* Generation count. The final set of counter values will be written out in
- * packets with count == GENERATION_COUNT. An empty packet with count >
- * GENERATION_COUNT indicates that no more counter values will be written to
- * this stream.
+/* Generation count for AR counters. The final set of AR counter values will be
+ * written out in packets with count == GENERATION_COUNT. An empty packet with
+ * count > GENERATION_COUNT indicates that no more counter values of this type
+ * will be written to this stream.
*/
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_OFST 0
#define MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_LEN 4
+/* MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT msgresponse */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMIN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX_MCDI2 32
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LEN(num) (0+4*(num))
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_NUM(len) (((len)-0)/4)
+/* Array of generation counts, indexed by MAE_COUNTER_TYPE. Note that since
+ * MAE_COUNTER_TYPE_AR==0, this response is backwards-compatible with V1. The
+ * final set of counter values will be written out in packets with count ==
+ * GENERATION_COUNT. An empty packet with count > GENERATION_COUNT indicates
+ * that no more counter values of this type will be written to this stream.
+ */
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_OFST 0
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_LEN 4
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MINNUM 1
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM 8
+#define MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM_MCDI2 8
+
/***********************************/
/* MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS
@@ -29941,9 +30977,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_LEN 4
@@ -30021,9 +31058,10 @@
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_LIST_ID_LEN 4
/* If a driver only wished to update one counter within this action set, then
* it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
*/
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_OFST 28
#define MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_LEN 4
@@ -30352,7 +31390,8 @@
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_LBN 64
#define MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_WIDTH 32
/* Counter ID to increment if DO_CT or DO_RECIRC is set. Must be set to
- * COUNTER_ID_NULL otherwise.
+ * COUNTER_ID_NULL otherwise. Counter ID must have been allocated with
+ * COUNTER_TYPE=AR.
*/
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_OFST 12
#define MAE_ACTION_RULE_RESPONSE_COUNTER_ID_LEN 4
@@ -30710,6 +31749,108 @@
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_LBN 352
#define MAE_MPORT_DESC_VNIC_PLUGIN_TBD_WIDTH 32
+/* MAE_MPORT_DESC_V2 structuredef */
+#define MAE_MPORT_DESC_V2_LEN 56
+#define MAE_MPORT_DESC_V2_MPORT_ID_OFST 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_MPORT_ID_LBN 0
+#define MAE_MPORT_DESC_V2_MPORT_ID_WIDTH 32
+/* Reserved for future purposes, contains information independent of caller */
+#define MAE_MPORT_DESC_V2_FLAGS_OFST 4
+#define MAE_MPORT_DESC_V2_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_FLAGS_LBN 32
+#define MAE_MPORT_DESC_V2_FLAGS_WIDTH 32
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_OFST 8
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LEN 4
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_LBN 0
+#define MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_LBN 1
+#define MAE_MPORT_DESC_V2_CAN_DELIVER_TO_WIDTH 1
+#define MAE_MPORT_DESC_V2_CAN_DELETE_OFST 8
+#define MAE_MPORT_DESC_V2_CAN_DELETE_LBN 2
+#define MAE_MPORT_DESC_V2_CAN_DELETE_WIDTH 1
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_OFST 8
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_LBN 3
+#define MAE_MPORT_DESC_V2_IS_ZOMBIE_WIDTH 1
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_LBN 64
+#define MAE_MPORT_DESC_V2_CALLER_FLAGS_WIDTH 32
+/* Not the ideal name; it's really the type of thing connected to the m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_OFST 12
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LEN 4
+/* enum: Connected to a MAC... */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT 0x0
+/* enum: Adds metadata and delivers to another m-port */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS 0x1
+/* enum: Connected to a VNIC. */
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC 0x2
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_LBN 96
+#define MAE_MPORT_DESC_V2_MPORT_TYPE_WIDTH 32
+/* 128-bit value available to drivers for m-port identification. */
+#define MAE_MPORT_DESC_V2_UUID_OFST 16
+#define MAE_MPORT_DESC_V2_UUID_LEN 16
+#define MAE_MPORT_DESC_V2_UUID_LBN 128
+#define MAE_MPORT_DESC_V2_UUID_WIDTH 128
+/* Big wadge of space reserved for other common properties */
+#define MAE_MPORT_DESC_V2_RESERVED_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LEN 8
+#define MAE_MPORT_DESC_V2_RESERVED_LO_OFST 32
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_LO_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_LO_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_HI_OFST 36
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LEN 4
+#define MAE_MPORT_DESC_V2_RESERVED_HI_LBN 288
+#define MAE_MPORT_DESC_V2_RESERVED_HI_WIDTH 32
+#define MAE_MPORT_DESC_V2_RESERVED_LBN 256
+#define MAE_MPORT_DESC_V2_RESERVED_WIDTH 64
+/* Logical port index. Only valid when type NET Port. */
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_OFST 40
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LEN 4
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_LBN 320
+#define MAE_MPORT_DESC_V2_NET_PORT_IDX_WIDTH 32
+/* The m-port delivered to */
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_OFST 40
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LEN 4
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LBN 320
+#define MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_WIDTH 32
+/* The type of thing that owns the VNIC */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST 40
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION 0x1 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN 0x2 /* enum */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LBN 320
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_WIDTH 32
+/* The PCIe interface on which the function lives. CJK: We need an enumeration
+ * of interfaces that we extend as new interface (types) appear. This belongs
+ * elsewhere and should be referenced from here
+ */
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_WIDTH 32
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST 48
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LEN 2
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LBN 384
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_WIDTH 16
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST 50
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LEN 2
+/* enum: Indicates that the function is a PF */
+#define MAE_MPORT_DESC_V2_VF_IDX_NULL 0xffff
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LBN 400
+#define MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_WIDTH 16
+/* Reserved. Should be ignored for now. */
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_OFST 44
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LBN 352
+#define MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_WIDTH 32
+/* A client handle for the VNIC's owner. Only valid for type VNIC. */
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST 52
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN 4
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LBN 416
+#define MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_WIDTH 32
+
/***********************************/
/* MC_CMD_MAE_MPORT_ENUMERATE
--
2.30.2
* [dpdk-dev] [PATCH v2 02/38] common/sfc_efx/base: update EF100 registers definitions
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 03/38] net/sfc: add switch mode device argument Andrew Rybchenko
` (36 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev
Pick up all changes and extra definitions.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx_regs_ef100.h | 106 +++++++++++++++----
drivers/common/sfc_efx/base/rhead_rx.c | 2 +-
2 files changed, 85 insertions(+), 23 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx_regs_ef100.h b/drivers/common/sfc_efx/base/efx_regs_ef100.h
index 2b766aabdd..0446377f64 100644
--- a/drivers/common/sfc_efx/base/efx_regs_ef100.h
+++ b/drivers/common/sfc_efx/base/efx_regs_ef100.h
@@ -323,12 +323,6 @@ extern "C" {
/* ES_RHEAD_BASE_EVENT */
#define ESF_GZ_E_TYPE_LBN 60
#define ESF_GZ_E_TYPE_WIDTH 4
-#define ESE_GZ_EF100_EV_DRIVER 5
-#define ESE_GZ_EF100_EV_MCDI 4
-#define ESE_GZ_EF100_EV_CONTROL 3
-#define ESE_GZ_EF100_EV_TX_TIMESTAMP 2
-#define ESE_GZ_EF100_EV_TX_COMPLETION 1
-#define ESE_GZ_EF100_EV_RX_PKTS 0
#define ESF_GZ_EV_EVQ_PHASE_LBN 59
#define ESF_GZ_EV_EVQ_PHASE_WIDTH 1
#define ESE_GZ_RHEAD_BASE_EVENT_STRUCT_SIZE 64
@@ -467,6 +461,23 @@ extern "C" {
#define ESE_GZ_XIL_CFGBAR_VSEC_STRUCT_SIZE 96
+/* ES_addr_spc */
+#define ESF_GZ_ADDR_SPC_FORMAT_1_FUNCTION_LBN 28
+#define ESF_GZ_ADDR_SPC_FORMAT_1_FUNCTION_WIDTH 8
+#define ESF_GZ_ADDR_SPC_FORMAT_2_FUNCTION_LBN 24
+#define ESF_GZ_ADDR_SPC_FORMAT_2_FUNCTION_WIDTH 12
+#define ESF_GZ_ADDR_SPC_FORMAT_1_PROFILE_ID_LBN 24
+#define ESF_GZ_ADDR_SPC_FORMAT_1_PROFILE_ID_WIDTH 4
+#define ESF_GZ_ADDR_SPC_PASID_LBN 2
+#define ESF_GZ_ADDR_SPC_PASID_WIDTH 22
+#define ESF_GZ_ADDR_SPC_FORMAT_LBN 0
+#define ESF_GZ_ADDR_SPC_FORMAT_WIDTH 2
+#define ESE_GZ_ADDR_SPC_FORMAT_1 3
+#define ESF_GZ_ADDR_SPC_FORMAT_2_PROFILE_ID_IDX_LBN 0
+#define ESF_GZ_ADDR_SPC_FORMAT_2_PROFILE_ID_IDX_WIDTH 2
+#define ESE_GZ_ADDR_SPC_STRUCT_SIZE 36
+
+
/* ES_rh_egres_hclass */
#define ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM_LBN 15
#define ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM_WIDTH 1
@@ -560,14 +571,18 @@ extern "C" {
#define ESF_GZ_RX_PREFIX_VLAN_STRIP_TCI_WIDTH 16
#define ESF_GZ_RX_PREFIX_CSUM_FRAME_LBN 144
#define ESF_GZ_RX_PREFIX_CSUM_FRAME_WIDTH 16
-#define ESF_GZ_RX_PREFIX_INGRESS_VPORT_LBN 128
-#define ESF_GZ_RX_PREFIX_INGRESS_VPORT_WIDTH 16
+#define ESF_GZ_RX_PREFIX_INGRESS_MPORT_LBN 128
+#define ESF_GZ_RX_PREFIX_INGRESS_MPORT_WIDTH 16
#define ESF_GZ_RX_PREFIX_USER_MARK_LBN 96
#define ESF_GZ_RX_PREFIX_USER_MARK_WIDTH 32
#define ESF_GZ_RX_PREFIX_RSS_HASH_LBN 64
#define ESF_GZ_RX_PREFIX_RSS_HASH_WIDTH 32
-#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN 32
-#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_WIDTH 32
+#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_LBN 34
+#define ESF_GZ_RX_PREFIX_PARTIAL_TSTAMP_WIDTH 30
+#define ESF_GZ_RX_PREFIX_VSWITCH_STATUS_LBN 33
+#define ESF_GZ_RX_PREFIX_VSWITCH_STATUS_WIDTH 1
+#define ESF_GZ_RX_PREFIX_VLAN_STRIPPED_LBN 32
+#define ESF_GZ_RX_PREFIX_VLAN_STRIPPED_WIDTH 1
#define ESF_GZ_RX_PREFIX_CLASS_LBN 16
#define ESF_GZ_RX_PREFIX_CLASS_WIDTH 16
#define ESF_GZ_RX_PREFIX_USER_FLAG_LBN 15
@@ -674,12 +689,12 @@ extern "C" {
#define ESF_GZ_M2M_TRANSLATE_ADDR_WIDTH 1
#define ESF_GZ_M2M_RSVD_LBN 120
#define ESF_GZ_M2M_RSVD_WIDTH 2
-#define ESF_GZ_M2M_ADDR_SPC_LBN 108
-#define ESF_GZ_M2M_ADDR_SPC_WIDTH 12
-#define ESF_GZ_M2M_ADDR_SPC_PASID_LBN 86
-#define ESF_GZ_M2M_ADDR_SPC_PASID_WIDTH 22
-#define ESF_GZ_M2M_ADDR_SPC_MODE_LBN 84
-#define ESF_GZ_M2M_ADDR_SPC_MODE_WIDTH 2
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW0_LBN 84
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW0_WIDTH 32
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW1_LBN 116
+#define ESF_GZ_M2M_ADDR_SPC_ID_DW1_WIDTH 4
+#define ESF_GZ_M2M_ADDR_SPC_ID_LBN 84
+#define ESF_GZ_M2M_ADDR_SPC_ID_WIDTH 36
#define ESF_GZ_M2M_LEN_MINUS_1_LBN 64
#define ESF_GZ_M2M_LEN_MINUS_1_WIDTH 20
#define ESF_GZ_M2M_ADDR_DW0_LBN 0
@@ -722,12 +737,12 @@ extern "C" {
#define ESF_GZ_TX_SEG_TRANSLATE_ADDR_WIDTH 1
#define ESF_GZ_TX_SEG_RSVD2_LBN 120
#define ESF_GZ_TX_SEG_RSVD2_WIDTH 2
-#define ESF_GZ_TX_SEG_ADDR_SPC_LBN 108
-#define ESF_GZ_TX_SEG_ADDR_SPC_WIDTH 12
-#define ESF_GZ_TX_SEG_ADDR_SPC_PASID_LBN 86
-#define ESF_GZ_TX_SEG_ADDR_SPC_PASID_WIDTH 22
-#define ESF_GZ_TX_SEG_ADDR_SPC_MODE_LBN 84
-#define ESF_GZ_TX_SEG_ADDR_SPC_MODE_WIDTH 2
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW0_LBN 84
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW0_WIDTH 32
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW1_LBN 116
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_DW1_WIDTH 4
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_LBN 84
+#define ESF_GZ_TX_SEG_ADDR_SPC_ID_WIDTH 36
#define ESF_GZ_TX_SEG_RSVD_LBN 80
#define ESF_GZ_TX_SEG_RSVD_WIDTH 4
#define ESF_GZ_TX_SEG_LEN_LBN 64
@@ -824,6 +839,12 @@ extern "C" {
+/* Enum D2VIO_MSG_OP */
+#define ESE_GZ_QUE_JBDNE 3
+#define ESE_GZ_QUE_EVICT 2
+#define ESE_GZ_QUE_EMPTY 1
+#define ESE_GZ_NOP 0
+
/* Enum DESIGN_PARAMS */
#define ESE_EF100_DP_GZ_RX_MAX_RUNT 17
#define ESE_EF100_DP_GZ_VI_STRIDES 16
@@ -871,6 +892,19 @@ extern "C" {
#define ESE_GZ_PCI_BASE_CONFIG_SPACE_SIZE 256
#define ESE_GZ_PCI_EXPRESS_XCAP_HDR_SIZE 4
+/* Enum RH_DSC_TYPE */
+#define ESE_GZ_TX_TOMB 0xF
+#define ESE_GZ_TX_VIO 0xE
+#define ESE_GZ_TX_TSO_OVRRD 0x8
+#define ESE_GZ_TX_D2CMP 0x7
+#define ESE_GZ_TX_DATA 0x6
+#define ESE_GZ_TX_D2M 0x5
+#define ESE_GZ_TX_M2M 0x4
+#define ESE_GZ_TX_SEG 0x3
+#define ESE_GZ_TX_TSO 0x2
+#define ESE_GZ_TX_OVRRD 0x1
+#define ESE_GZ_TX_SEND 0x0
+
/* Enum RH_HCLASS_L2_CLASS */
#define ESE_GZ_RH_HCLASS_L2_CLASS_E2_0123VLAN 1
#define ESE_GZ_RH_HCLASS_L2_CLASS_OTHER 0
@@ -907,6 +941,25 @@ extern "C" {
#define ESE_GZ_RH_HCLASS_TUNNEL_CLASS_VXLAN 1
#define ESE_GZ_RH_HCLASS_TUNNEL_CLASS_NONE 0
+/* Enum SF_CTL_EVENT_SUBTYPE */
+#define ESE_GZ_EF100_CTL_EV_EVQ_TIMEOUT 0x3
+#define ESE_GZ_EF100_CTL_EV_FLUSH 0x2
+#define ESE_GZ_EF100_CTL_EV_TIME_SYNC 0x1
+#define ESE_GZ_EF100_CTL_EV_UNSOL_OVERFLOW 0x0
+
+/* Enum SF_EVENT_TYPE */
+#define ESE_GZ_EF100_EV_DRIVER 0x5
+#define ESE_GZ_EF100_EV_MCDI 0x4
+#define ESE_GZ_EF100_EV_CONTROL 0x3
+#define ESE_GZ_EF100_EV_TX_TIMESTAMP 0x2
+#define ESE_GZ_EF100_EV_TX_COMPLETION 0x1
+#define ESE_GZ_EF100_EV_RX_PKTS 0x0
+
+/* Enum SF_EW_EVENT_TYPE */
+#define ESE_GZ_EF100_EWEV_VIRTQ_DESC 0x2
+#define ESE_GZ_EF100_EWEV_TXQ_DESC 0x1
+#define ESE_GZ_EF100_EWEV_64BIT 0x0
+
/* Enum TX_DESC_CSO_PARTIAL_EN */
#define ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP 2
#define ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP 1
@@ -922,6 +975,15 @@ extern "C" {
#define ESE_GZ_TX_DESC_IP4_ID_INC_MOD16 2
#define ESE_GZ_TX_DESC_IP4_ID_INC_MOD15 1
#define ESE_GZ_TX_DESC_IP4_ID_NO_OP 0
+
+/* Enum VIRTIO_NET_HDR_F */
+#define ESE_GZ_NEEDS_CSUM 0x1
+
+/* Enum VIRTIO_NET_HDR_GSO */
+#define ESE_GZ_TCPV6 0x4
+#define ESE_GZ_UDP 0x3
+#define ESE_GZ_TCPV4 0x1
+#define ESE_GZ_NONE 0x0
/*************************************************************************
* NOTE: the comment line above marks the end of the autogenerated section
*/
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index 76b8ce302a..692c3e1d49 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -37,7 +37,7 @@ static const efx_rx_prefix_layout_t rhead_default_rx_prefix_layout = {
RHEAD_RX_PREFIX_FIELD(PARTIAL_TSTAMP, B_FALSE),
RHEAD_RX_PREFIX_FIELD(RSS_HASH, B_FALSE),
RHEAD_RX_PREFIX_FIELD(USER_MARK, B_FALSE),
- RHEAD_RX_PREFIX_FIELD(INGRESS_VPORT, B_FALSE),
+ RHEAD_RX_PREFIX_FIELD(INGRESS_MPORT, B_FALSE),
RHEAD_RX_PREFIX_FIELD(CSUM_FRAME, B_TRUE),
RHEAD_RX_PREFIX_FIELD(VLAN_STRIP_TCI, B_TRUE),
--
2.30.2
* [dpdk-dev] [PATCH v2 03/38] net/sfc: add switch mode device argument
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 02/38] common/sfc_efx/base: update EF100 registers definitions Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 04/38] net/sfc: insert switchdev mode MAE rules Andrew Rybchenko
` (35 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Add a device argument that allows the user to choose either switchdev or
legacy mode. Legacy mode enables switching by means of the Ethernet virtual
bridging (EVB) API. In switchdev mode, VF traffic goes via a port
representor (if any) on the PF, and a software virtual switch (for example,
Open vSwitch) steers the traffic.
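For illustration only, a PF could be hot-plugged with the new devarg from
application code roughly as follows (the PCI address and the helper name are
assumptions, not part of the patch; the usual way is to pass the same devarg
string on the EAL command line):

#include <rte_dev.h>

/* Hypothetical helper: probe the PF with switchdev mode requested. */
static int
sfc_probe_switchdev_example(void)
{
        /* The BDF below is an example; any valid device address works. */
        return rte_dev_probe("0000:01:00.0,switch_mode=switchdev");
}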
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
doc/guides/nics/sfc_efx.rst | 13 ++++++++
drivers/net/sfc/sfc.h | 2 ++
drivers/net/sfc/sfc_ethdev.c | 60 ++++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_kvargs.c | 1 +
drivers/net/sfc/sfc_kvargs.h | 8 +++++
drivers/net/sfc/sfc_sriov.c | 9 ++++--
6 files changed, 91 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 163bc2533f..d66cb76dab 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -371,6 +371,19 @@ boolean parameters value.
If this parameter is not specified then ef100 device will operate as
network device.
+- ``switch_mode`` [legacy|switchdev] (see below for default)
+
+ In legacy mode, NIC firmware provides an Ethernet virtual bridging (EVB) API
+ to configure switching inside the NIC to deliver traffic to physical (PF) and
+ virtual (VF) PCI functions. The PF driver is responsible for building the
+ infrastructure for VFs, and traffic goes to/from a VF by default in accordance
+ with the assigned MAC address, permissions and filters installed by VF drivers.
+ In switchdev mode, VF traffic goes via a port representor (if any) on the PF,
+ and a software virtual switch (for example, Open vSwitch) makes the forwarding
+ decision. The software virtual switch may install MAE rules to pass established
+ traffic flows via hardware and thereby offload the software datapath.
+ Default is legacy.
+
- ``rx_datapath`` [auto|efx|ef10|ef10_essb] (default **auto**)
Choose receive datapath implementation.
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 331e06bac6..b045baca9e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -313,6 +313,8 @@ struct sfc_adapter {
boolean_t tso_encap;
uint32_t rxd_wait_timeout_ns;
+
+ bool switchdev;
};
static inline struct sfc_adapter_shared *
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 9dc5e5b3a3..b353bfe358 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2189,6 +2189,46 @@ sfc_register_dp(void)
}
}
+static int
+sfc_parse_switch_mode(struct sfc_adapter *sa)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ const char *switch_mode = NULL;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rc = sfc_kvargs_process(sa, SFC_KVARG_SWITCH_MODE,
+ sfc_kvarg_string_handler, &switch_mode);
+ if (rc != 0)
+ goto fail_kvargs;
+
+ if (switch_mode == NULL) {
+ sa->switchdev = encp->enc_mae_supported &&
+ !encp->enc_datapath_cap_evb;
+ } else if (strcasecmp(switch_mode, SFC_KVARG_SWITCH_MODE_LEGACY) == 0) {
+ sa->switchdev = false;
+ } else if (strcasecmp(switch_mode,
+ SFC_KVARG_SWITCH_MODE_SWITCHDEV) == 0) {
+ sa->switchdev = true;
+ } else {
+ sfc_err(sa, "invalid switch mode device argument '%s'",
+ switch_mode);
+ rc = EINVAL;
+ goto fail_mode;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_mode:
+fail_kvargs:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+ return rc;
+}
+
static int
sfc_eth_dev_init(struct rte_eth_dev *dev)
{
@@ -2276,6 +2316,14 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
if (rc != 0)
goto fail_probe;
+ /*
+ * Selecting a default switch mode requires the NIC to be probed and
+ * to have its capabilities filled in.
+ */
+ rc = sfc_parse_switch_mode(sa);
+ if (rc != 0)
+ goto fail_switch_mode;
+
sfc_log_init(sa, "set device ops");
rc = sfc_eth_dev_set_ops(dev);
if (rc != 0)
@@ -2286,6 +2334,13 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
if (rc != 0)
goto fail_attach;
+ if (sa->switchdev && sa->mae.status != SFC_MAE_STATUS_SUPPORTED) {
+ sfc_err(sa,
+ "failed to enable switchdev mode without MAE support");
+ rc = ENOTSUP;
+ goto fail_switchdev_no_mae;
+ }
+
encp = efx_nic_cfg_get(sa->nic);
/*
@@ -2300,10 +2355,14 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
sfc_log_init(sa, "done");
return 0;
+fail_switchdev_no_mae:
+ sfc_detach(sa);
+
fail_attach:
sfc_eth_dev_clear_ops(dev);
fail_set_ops:
+fail_switch_mode:
sfc_unprobe(sa);
fail_probe:
@@ -2371,6 +2430,7 @@ RTE_PMD_REGISTER_PCI(net_sfc_efx, sfc_efx_pmd);
RTE_PMD_REGISTER_PCI_TABLE(net_sfc_efx, pci_id_sfc_efx_map);
RTE_PMD_REGISTER_KMOD_DEP(net_sfc_efx, "* igb_uio | uio_pci_generic | vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_sfc_efx,
+ SFC_KVARG_SWITCH_MODE "=" SFC_KVARG_VALUES_SWITCH_MODE " "
SFC_KVARG_RX_DATAPATH "=" SFC_KVARG_VALUES_RX_DATAPATH " "
SFC_KVARG_TX_DATAPATH "=" SFC_KVARG_VALUES_TX_DATAPATH " "
SFC_KVARG_PERF_PROFILE "=" SFC_KVARG_VALUES_PERF_PROFILE " "
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index 974c05e68e..cd16213637 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -22,6 +22,7 @@ sfc_kvargs_parse(struct sfc_adapter *sa)
struct rte_eth_dev *eth_dev = (sa)->eth_dev;
struct rte_devargs *devargs = eth_dev->device->devargs;
const char **params = (const char *[]){
+ SFC_KVARG_SWITCH_MODE,
SFC_KVARG_STATS_UPDATE_PERIOD_MS,
SFC_KVARG_PERF_PROFILE,
SFC_KVARG_RX_DATAPATH,
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index ff76e7d9fc..8e34ec92a2 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -18,6 +18,14 @@ extern "C" {
#define SFC_KVARG_VALUES_BOOL "[1|y|yes|on|0|n|no|off]"
+#define SFC_KVARG_SWITCH_MODE_LEGACY "legacy"
+#define SFC_KVARG_SWITCH_MODE_SWITCHDEV "switchdev"
+
+#define SFC_KVARG_SWITCH_MODE "switch_mode"
+#define SFC_KVARG_VALUES_SWITCH_MODE \
+ "[" SFC_KVARG_SWITCH_MODE_LEGACY "|" \
+ SFC_KVARG_SWITCH_MODE_SWITCHDEV "]"
+
#define SFC_KVARG_PERF_PROFILE "perf_profile"
#define SFC_KVARG_PERF_PROFILE_AUTO "auto"
diff --git a/drivers/net/sfc/sfc_sriov.c b/drivers/net/sfc/sfc_sriov.c
index baa0242433..385b172e2e 100644
--- a/drivers/net/sfc/sfc_sriov.c
+++ b/drivers/net/sfc/sfc_sriov.c
@@ -53,7 +53,7 @@ sfc_sriov_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
sriov->num_vfs = pci_dev->max_vfs;
- if (sriov->num_vfs == 0)
+ if (sa->switchdev || sriov->num_vfs == 0)
goto done;
vport_config = calloc(sriov->num_vfs + 1, sizeof(*vport_config));
@@ -110,6 +110,11 @@ sfc_sriov_vswitch_create(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
+ if (sa->switchdev) {
+ sfc_log_init(sa, "don't create vswitch in switchdev mode");
+ goto done;
+ }
+
if (sriov->num_vfs == 0) {
sfc_log_init(sa, "no VFs enabled");
goto done;
@@ -152,7 +157,7 @@ sfc_sriov_vswitch_destroy(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- if (sriov->num_vfs == 0)
+ if (sa->switchdev || sriov->num_vfs == 0)
goto done;
rc = efx_evb_vswitch_destroy(sa->nic, sriov->vswitch);
--
2.30.2
* [dpdk-dev] [PATCH v2 04/38] net/sfc: insert switchdev mode MAE rules
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (2 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 03/38] net/sfc: add switch mode device argument Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 05/38] common/sfc_efx/base: add an API to get mport ID by selector Andrew Rybchenko
` (34 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
By default, the firmware is in EVB mode, but insertion of the first MAE
rule resets it to switchdev mode automatically and removes all automatic
MAE rules added by EVB support. On initialisation, insert MAE rules that
forward traffic between PHY and PF.
Add an API for the creation and insertion of driver-internal MAE
rules (flows).
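A minimal sketch of how the new internal rule API is intended to be used,
mirroring the PF-to-PHY default rule inserted below (the wrapper function name
is an assumption; all types and constants come from sfc_mae.h in this patch):

#include "sfc.h"
#include "sfc_mae.h"

/* Sketch: forward traffic matched on one m-port to another at the
 * lowest priority, as the switchdev default rules do.
 */
static int
sfc_add_forward_rule(struct sfc_adapter *sa,
                     const efx_mport_sel_t *match,
                     const efx_mport_sel_t *deliver,
                     struct sfc_mae_rule **rulep)
{
        return sfc_mae_rule_add_mport_match_deliver(sa, match, deliver,
                                                    SFC_MAE_RULE_PRIO_LOWEST,
                                                    rulep);
}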
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 8 ++
drivers/net/sfc/sfc_mae.c | 211 ++++++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_mae.h | 49 +++++++++
3 files changed, 268 insertions(+)
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 274a98e228..cd2c97f3b2 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -895,6 +895,10 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_mae_attach;
+ rc = sfc_mae_switchdev_init(sa);
+ if (rc != 0)
+ goto fail_mae_switchdev_init;
+
sfc_log_init(sa, "fini nic");
efx_nic_fini(enp);
@@ -923,6 +927,9 @@ sfc_attach(struct sfc_adapter *sa)
fail_sw_xstats_init:
sfc_flow_fini(sa);
+ sfc_mae_switchdev_fini(sa);
+
+fail_mae_switchdev_init:
sfc_mae_detach(sa);
fail_mae_attach:
@@ -969,6 +976,7 @@ sfc_detach(struct sfc_adapter *sa)
sfc_flow_fini(sa);
+ sfc_mae_switchdev_fini(sa);
sfc_mae_detach(sa);
sfc_mae_counter_rxq_detach(sa);
sfc_filter_detach(sa);
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 4b520bc619..b3607a178b 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -44,6 +44,139 @@ sfc_mae_counter_registry_fini(struct sfc_mae_counter_registry *registry)
sfc_mae_counters_fini(&registry->counters);
}
+static int
+sfc_mae_internal_rule_find_empty_slot(struct sfc_adapter *sa,
+ struct sfc_mae_rule **rule)
+{
+ struct sfc_mae *mae = &sa->mae;
+ struct sfc_mae_internal_rules *internal_rules = &mae->internal_rules;
+ unsigned int entry;
+ int rc;
+
+ for (entry = 0; entry < SFC_MAE_NB_RULES_MAX; entry++) {
+ if (internal_rules->rules[entry].spec == NULL)
+ break;
+ }
+
+ if (entry == SFC_MAE_NB_RULES_MAX) {
+ rc = ENOSPC;
+ sfc_err(sa, "failed too many rules (%u rules used)", entry);
+ goto fail_too_many_rules;
+ }
+
+ *rule = &internal_rules->rules[entry];
+
+ return 0;
+
+fail_too_many_rules:
+ return rc;
+}
+
+int
+sfc_mae_rule_add_mport_match_deliver(struct sfc_adapter *sa,
+ const efx_mport_sel_t *mport_match,
+ const efx_mport_sel_t *mport_deliver,
+ int prio, struct sfc_mae_rule **rulep)
+{
+ struct sfc_mae *mae = &sa->mae;
+ struct sfc_mae_rule *rule;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (prio > 0 && (unsigned int)prio >= mae->nb_action_rule_prios_max) {
+ rc = EINVAL;
+ sfc_err(sa, "failed: invalid priority %d (max %u)", prio,
+ mae->nb_action_rule_prios_max);
+ goto fail_invalid_prio;
+ }
+ if (prio < 0)
+ prio = mae->nb_action_rule_prios_max - 1;
+
+ rc = sfc_mae_internal_rule_find_empty_slot(sa, &rule);
+ if (rc != 0)
+ goto fail_find_empty_slot;
+
+ sfc_log_init(sa, "init MAE match spec");
+ rc = efx_mae_match_spec_init(sa->nic, EFX_MAE_RULE_ACTION,
+ (uint32_t)prio, &rule->spec);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init MAE match spec");
+ goto fail_match_init;
+ }
+
+ rc = efx_mae_match_spec_mport_set(rule->spec, mport_match, NULL);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get MAE match mport selector");
+ goto fail_mport_set;
+ }
+
+ rc = efx_mae_action_set_spec_init(sa->nic, &rule->actions);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init MAE action set");
+ goto fail_action_init;
+ }
+
+ rc = efx_mae_action_set_populate_deliver(rule->actions,
+ mport_deliver);
+ if (rc != 0) {
+ sfc_err(sa, "failed to populate deliver action");
+ goto fail_populate_deliver;
+ }
+
+ rc = efx_mae_action_set_alloc(sa->nic, rule->actions,
+ &rule->action_set);
+ if (rc != 0) {
+ sfc_err(sa, "failed to allocate action set");
+ goto fail_action_set_alloc;
+ }
+
+ rc = efx_mae_action_rule_insert(sa->nic, rule->spec, NULL,
+ &rule->action_set,
+ &rule->rule_id);
+ if (rc != 0) {
+ sfc_err(sa, "failed to insert action rule");
+ goto fail_rule_insert;
+ }
+
+ *rulep = rule;
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_rule_insert:
+ efx_mae_action_set_free(sa->nic, &rule->action_set);
+
+fail_action_set_alloc:
+fail_populate_deliver:
+ efx_mae_action_set_spec_fini(sa->nic, rule->actions);
+
+fail_action_init:
+fail_mport_set:
+ efx_mae_match_spec_fini(sa->nic, rule->spec);
+
+fail_match_init:
+fail_find_empty_slot:
+fail_invalid_prio:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_mae_rule_del(struct sfc_adapter *sa, struct sfc_mae_rule *rule)
+{
+ if (rule == NULL || rule->spec == NULL)
+ return;
+
+ efx_mae_action_rule_remove(sa->nic, &rule->rule_id);
+ efx_mae_action_set_free(sa->nic, &rule->action_set);
+ efx_mae_action_set_spec_fini(sa->nic, rule->actions);
+ efx_mae_match_spec_fini(sa->nic, rule->spec);
+
+ rule->spec = NULL;
+}
+
int
sfc_mae_attach(struct sfc_adapter *sa)
{
@@ -3443,3 +3576,81 @@ sfc_mae_flow_query(struct rte_eth_dev *dev,
"Query for action of this type is not supported");
}
}
+
+int
+sfc_mae_switchdev_init(struct sfc_adapter *sa)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ struct sfc_mae *mae = &sa->mae;
+ efx_mport_sel_t pf;
+ efx_mport_sel_t phy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sa->switchdev) {
+ sfc_log_init(sa, "switchdev is not enabled - skip");
+ return 0;
+ }
+
+ if (mae->status != SFC_MAE_STATUS_SUPPORTED) {
+ rc = ENOTSUP;
+ sfc_err(sa, "failed to init switchdev - no MAE support");
+ goto fail_no_mae;
+ }
+
+ rc = efx_mae_mport_by_pcie_function(encp->enc_pf, EFX_PCI_VF_INVALID,
+ &pf);
+ if (rc != 0) {
+ sfc_err(sa, "failed get PF mport");
+ goto fail_pf_get;
+ }
+
+ rc = efx_mae_mport_by_phy_port(encp->enc_assigned_port, &phy);
+ if (rc != 0) {
+ sfc_err(sa, "failed get PHY mport");
+ goto fail_phy_get;
+ }
+
+ rc = sfc_mae_rule_add_mport_match_deliver(sa, &pf, &phy,
+ SFC_MAE_RULE_PRIO_LOWEST,
+ &mae->switchdev_rule_pf_to_ext);
+ if (rc != 0) {
+ sfc_err(sa, "failed add MAE rule to forward from PF to PHY");
+ goto fail_pf_add;
+ }
+
+ rc = sfc_mae_rule_add_mport_match_deliver(sa, &phy, &pf,
+ SFC_MAE_RULE_PRIO_LOWEST,
+ &mae->switchdev_rule_ext_to_pf);
+ if (rc != 0) {
+ sfc_err(sa, "failed add MAE rule to forward from PHY to PF");
+ goto fail_phy_add;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_phy_add:
+ sfc_mae_rule_del(sa, mae->switchdev_rule_pf_to_ext);
+
+fail_pf_add:
+fail_phy_get:
+fail_pf_get:
+fail_no_mae:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_mae_switchdev_fini(struct sfc_adapter *sa)
+{
+ struct sfc_mae *mae = &sa->mae;
+
+ if (!sa->switchdev)
+ return;
+
+ sfc_mae_rule_del(sa, mae->switchdev_rule_pf_to_ext);
+ sfc_mae_rule_del(sa, mae->switchdev_rule_ext_to_pf);
+}
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 7e3b6a7a97..684f0daf7a 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -139,6 +139,26 @@ struct sfc_mae_counter_registry {
uint32_t service_id;
};
+/** Rules to forward traffic from PHY port to PF and from PF to PHY port */
+#define SFC_MAE_NB_SWITCHDEV_RULES (2)
+/** Maximum required internal MAE rules */
+#define SFC_MAE_NB_RULES_MAX (SFC_MAE_NB_SWITCHDEV_RULES)
+
+struct sfc_mae_rule {
+ efx_mae_match_spec_t *spec;
+ efx_mae_actions_t *actions;
+ efx_mae_aset_id_t action_set;
+ efx_mae_rule_id_t rule_id;
+};
+
+struct sfc_mae_internal_rules {
+ /*
+ * Rules required to sustain switchdev mode or to provide
+ * port representor functionality.
+ */
+ struct sfc_mae_rule rules[SFC_MAE_NB_RULES_MAX];
+};
+
struct sfc_mae {
/** Assigned switch domain identifier */
uint16_t switch_domain_id;
@@ -164,6 +184,14 @@ struct sfc_mae {
bool counter_rxq_running;
/** Counter registry */
struct sfc_mae_counter_registry counter_registry;
+ /** Driver-internal flow rules */
+ struct sfc_mae_internal_rules internal_rules;
+ /**
+ * Switchdev default rules. They forward traffic from PHY port
+ * to PF and vice versa.
+ */
+ struct sfc_mae_rule *switchdev_rule_pf_to_ext;
+ struct sfc_mae_rule *switchdev_rule_ext_to_pf;
};
struct sfc_adapter;
@@ -306,6 +334,27 @@ sfc_flow_insert_cb_t sfc_mae_flow_insert;
sfc_flow_remove_cb_t sfc_mae_flow_remove;
sfc_flow_query_cb_t sfc_mae_flow_query;
+/**
+ * The value used to represent the lowest priority.
+ * Used in MAE rule API.
+ */
+#define SFC_MAE_RULE_PRIO_LOWEST (-1)
+
+/**
+ * Insert a driver-internal flow rule that matches traffic originating from
+ * some m-port selector and redirects it to another one
+ * (eg. PF --> PHY, PHY --> PF).
+ *
+ * If requested priority is negative, use the lowest priority.
+ */
+int sfc_mae_rule_add_mport_match_deliver(struct sfc_adapter *sa,
+ const efx_mport_sel_t *mport_match,
+ const efx_mport_sel_t *mport_deliver,
+ int prio, struct sfc_mae_rule **rulep);
+void sfc_mae_rule_del(struct sfc_adapter *sa, struct sfc_mae_rule *rule);
+int sfc_mae_switchdev_init(struct sfc_adapter *sa);
+void sfc_mae_switchdev_fini(struct sfc_adapter *sa);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH v2 05/38] common/sfc_efx/base: add an API to get mport ID by selector
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (3 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 04/38] net/sfc: insert switchdev mode MAE rules Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 06/38] net/sfc: support EF100 Tx override prefix Andrew Rybchenko
` (33 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The mport ID is required to set the appropriate egress mport ID
in the Tx prefix on a port representor TxQ.
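A hedged sketch of the intended usage: a static m-port selector for a PCIe
function is translated into the dynamic m-port ID (the helper name is an
assumption; in the driver, the PF/VF numbers come from the NIC configuration):

#include "efx.h"

/* Sketch: resolve the m-port ID of a given PF/VF for later use. */
static efx_rc_t
sfc_lookup_vf_mport_id(efx_nic_t *enp, uint32_t pf, uint32_t vf,
                       efx_mport_id_t *mport_idp)
{
        efx_mport_sel_t selector;
        efx_rc_t rc;

        rc = efx_mae_mport_by_pcie_function(pf, vf, &selector);
        if (rc != 0)
                return rc;

        return efx_mae_mport_id_by_selector(enp, &selector, mport_idp);
}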
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx.h | 21 +++++++++
drivers/common/sfc_efx/base/efx_mae.c | 64 +++++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 86 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 24e1314cc3..94803815ac 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4181,6 +4181,19 @@ typedef struct efx_mport_sel_s {
uint32_t sel;
} efx_mport_sel_t;
+/*
+ * MPORT ID. Used to refer dynamically to a specific MPORT.
+ * The difference between MPORT selector and MPORT ID is that
+ * selector can specify an exact MPORT ID or it can specify a
+ * pattern by which an exact MPORT ID can be selected. For example,
+ * static MPORT selector can specify MPORT of a current PF, which
+ * will be translated to the dynamic MPORT ID based on which PF is
+ * using that MPORT selector.
+ */
+typedef struct efx_mport_id_s {
+ uint32_t id;
+} efx_mport_id_t;
+
#define EFX_MPORT_NULL (0U)
/*
@@ -4210,6 +4223,14 @@ efx_mae_mport_by_pcie_function(
__in uint32_t vf,
__out efx_mport_sel_t *mportp);
+/* Get MPORT ID by an MPORT selector */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_id_by_selector(
+ __in efx_nic_t *enp,
+ __in const efx_mport_sel_t *mport_selectorp,
+ __out efx_mport_id_t *mport_idp);
+
/*
* Fields which have BE postfix in their named constants are expected
* to be passed by callers in big-endian byte order. They will appear
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index c22206e227..b38b1143d6 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -731,6 +731,70 @@ efx_mae_mport_by_pcie_function(
return (0);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+static __checkReturn efx_rc_t
+efx_mcdi_mae_mport_lookup(
+ __in efx_nic_t *enp,
+ __in const efx_mport_sel_t *mport_selectorp,
+ __out efx_mport_id_t *mport_idp)
+{
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_LOOKUP_IN_LEN,
+ MC_CMD_MAE_MPORT_LOOKUP_OUT_LEN);
+ efx_rc_t rc;
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_LOOKUP;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_LOOKUP_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_LOOKUP_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_LOOKUP_IN_MPORT_SELECTOR,
+ mport_selectorp->sel);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail1;
+ }
+
+ mport_idp->id = MCDI_OUT_DWORD(req, MAE_MPORT_LOOKUP_OUT_MPORT_ID);
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_mport_id_by_selector(
+ __in efx_nic_t *enp,
+ __in const efx_mport_sel_t *mport_selectorp,
+ __out efx_mport_id_t *mport_idp)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ rc = efx_mcdi_mae_mport_lookup(enp, mport_selectorp, mport_idp);
+ if (rc != 0)
+ goto fail2;
+
+ return (0);
+
fail2:
EFSYS_PROBE(fail2);
fail1:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 0c5bcdfa84..3dc21878c0 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -126,6 +126,7 @@ INTERNAL {
efx_mae_match_specs_equal;
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
+ efx_mae_mport_id_by_selector;
efx_mae_outer_rule_insert;
efx_mae_outer_rule_remove;
--
2.30.2
* [dpdk-dev] [PATCH v2 06/38] net/sfc: support EF100 Tx override prefix
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (4 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 05/38] common/sfc_efx/base: add an API to get mport ID by selector Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 07/38] net/sfc: add representors proxy infrastructure Andrew Rybchenko
` (32 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Add an internal mbuf dynamic flag and field to request the EF100 native
Tx datapath to use a Tx prefix descriptor that overrides the egress m-port.
Overriding the egress m-port is necessary on representor Tx burst
so that the packet reaches the corresponding VF.
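For illustration, a transmitter (such as the representor proxy added by later
patches) would mark an mbuf roughly as below. This is a sketch only (the
function name is assumed); it relies on sfc_dp_mport_register() having
succeeded so that the dynamic field offset and flag are valid:

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

#include "efx.h"
#include "sfc_dp.h"

/* Sketch: request a Tx override prefix so the packet egresses via
 * the given m-port instead of the default one.
 */
static void
sfc_mbuf_set_egress_mport(struct rte_mbuf *m, const efx_mport_id_t *mport_id)
{
        *RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset, efx_mport_id_t *) =
                *mport_id;
        m->ol_flags |= sfc_dp_mport_override;
}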
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_dp.c | 46 ++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_dp.h | 25 ++++++++++++++++++
drivers/net/sfc/sfc_ef100_tx.c | 25 ++++++++++++++++++
drivers/net/sfc/sfc_ethdev.c | 4 +++
4 files changed, 100 insertions(+)
diff --git a/drivers/net/sfc/sfc_dp.c b/drivers/net/sfc/sfc_dp.c
index 24ed0898c8..66a84c99c8 100644
--- a/drivers/net/sfc/sfc_dp.c
+++ b/drivers/net/sfc/sfc_dp.c
@@ -12,6 +12,9 @@
#include <errno.h>
#include <rte_log.h>
+#include <rte_mbuf_dyn.h>
+
+#include "efx.h"
#include "sfc_dp.h"
#include "sfc_log.h"
@@ -77,3 +80,46 @@ sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry)
return 0;
}
+
+uint64_t sfc_dp_mport_override;
+int sfc_dp_mport_offset = -1;
+
+int
+sfc_dp_mport_register(void)
+{
+ static const struct rte_mbuf_dynfield mport = {
+ .name = "rte_net_sfc_dynfield_mport",
+ .size = sizeof(efx_mport_id_t),
+ .align = __alignof__(efx_mport_id_t),
+ };
+ static const struct rte_mbuf_dynflag mport_override = {
+ .name = "rte_net_sfc_dynflag_mport_override",
+ };
+
+ int field_offset;
+ int flag;
+
+ if (sfc_dp_mport_override != 0) {
+ SFC_GENERIC_LOG(INFO, "%s() already registered", __func__);
+ return 0;
+ }
+
+ field_offset = rte_mbuf_dynfield_register(&mport);
+ if (field_offset < 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to register mport dynfield",
+ __func__);
+ return -1;
+ }
+
+ flag = rte_mbuf_dynflag_register(&mport_override);
+ if (flag < 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to register mport dynflag",
+ __func__);
+ return -1;
+ }
+
+ sfc_dp_mport_offset = field_offset;
+ sfc_dp_mport_override = UINT64_C(1) << flag;
+
+ return 0;
+}
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 7fd8f34b0f..f3c6892426 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -126,6 +126,31 @@ struct sfc_dp *sfc_dp_find_by_caps(struct sfc_dp_list *head,
unsigned int avail_caps);
int sfc_dp_register(struct sfc_dp_list *head, struct sfc_dp *entry);
+/**
+ * Dynamically registered mbuf flag "mport_override" (as a bitmask).
+ *
+ * If this flag is set in an mbuf then the dynamically registered
+ * mbuf field "mport" holds a valid value. This is used to direct
+ * port representor transmit traffic to the correct target port.
+ */
+extern uint64_t sfc_dp_mport_override;
+
+/**
+ * Dynamically registered mbuf field "mport" (mbuf byte offset).
+ *
+ * If the dynamically registered "mport_override" flag is set in
+ * an mbuf then the mbuf "mport" field holds a valid value. This
+ * is used to direct port representor transmit traffic to the
+ * correct target port.
+ */
+extern int sfc_dp_mport_offset;
+
+/**
+ * Register dynamic mbuf flag and field which can be used to require Tx override
+ * prefix descriptor with egress mport set.
+ */
+int sfc_dp_mport_register(void);
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d34..51eecbe832 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -10,6 +10,7 @@
#include <stdbool.h>
#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
#include <rte_io.h>
#include <rte_net.h>
@@ -309,6 +310,19 @@ sfc_ef100_tx_reap(struct sfc_ef100_txq *txq)
sfc_ef100_tx_reap_num_descs(txq, sfc_ef100_tx_process_events(txq));
}
+static void
+sfc_ef100_tx_qdesc_prefix_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
+{
+ efx_mport_id_t *mport_id =
+ RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset, efx_mport_id_t *);
+
+ EFX_POPULATE_OWORD_3(*tx_desc,
+ ESF_GZ_TX_PREFIX_EGRESS_MPORT,
+ mport_id->id,
+ ESF_GZ_TX_PREFIX_EGRESS_MPORT_EN, 1,
+ ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_PREFIX);
+}
+
static uint8_t
sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
{
@@ -525,6 +539,11 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
SFC_MBUF_SEG_LEN_MAX));
}
+ if (m->ol_flags & sfc_dp_mport_override) {
+ /* Tx override prefix descriptor will be used */
+ extra_descs++;
+ }
+
/*
* Any segment of scattered packet cannot be bigger than maximum
* segment length. Make sure that subsequent segments do not need
@@ -671,6 +690,12 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
break;
}
+ if (m_seg->ol_flags & sfc_dp_mport_override) {
+ id = added++ & txq->ptr_mask;
+ sfc_ef100_tx_qdesc_prefix_create(m_seg,
+ &txq->txq_hw_ring[id]);
+ }
+
if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
} else {
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index b353bfe358..7f5212c3fd 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2248,6 +2248,10 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
return 1;
}
+ rc = sfc_dp_mport_register();
+ if (rc != 0)
+ return rc;
+
sfc_register_dp();
logtype_main = sfc_register_logtype(&pci_dev->addr,
--
2.30.2
* [dpdk-dev] [PATCH v2 07/38] net/sfc: add representors proxy infrastructure
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (5 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 06/38] net/sfc: support EF100 Tx override prefix Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 08/38] net/sfc: reserve TxQ and RxQ for port representors Andrew Rybchenko
` (31 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The representor proxy is a mediator between virtual functions and port
representors. It forwards traffic between them, (de-)multiplexing it over
the base PF ethdev and the VF representors. The implementation will be
provided by later patches.
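The proxy runs as a DPDK service, so a service lcore must be available when
the PF is probed. As a sketch under that assumption (the helper name and lcore
ID are illustrative; service cores are more commonly reserved via EAL options),
an application could make one available like this:

#include <errno.h>

#include <rte_service.h>

/* Sketch: turn an idle worker lcore into a running service lcore. */
static int
reserve_service_lcore(uint32_t lcore_id)
{
        int rc;

        rc = rte_service_lcore_add(lcore_id);
        if (rc != 0 && rc != -EALREADY)
                return rc;

        rc = rte_service_lcore_start(lcore_id);
        return (rc == -EALREADY) ? 0 : rc;
}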
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/meson.build | 1 +
drivers/net/sfc/sfc.c | 35 ++++++
drivers/net/sfc/sfc.h | 5 +
drivers/net/sfc/sfc_repr_proxy.c | 210 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 34 +++++
5 files changed, 285 insertions(+)
create mode 100644 drivers/net/sfc/sfc_repr_proxy.c
create mode 100644 drivers/net/sfc/sfc_repr_proxy.h
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 948c65968a..4fc2063f7a 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -97,4 +97,5 @@ sources = files(
'sfc_ef100_rx.c',
'sfc_ef100_tx.c',
'sfc_service.c',
+ 'sfc_repr_proxy.c',
)
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index cd2c97f3b2..591b8971b3 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -27,6 +27,25 @@
#include "sfc_sw_stats.h"
+bool
+sfc_repr_supported(const struct sfc_adapter *sa)
+{
+ if (!sa->switchdev)
+ return false;
+
+ /*
+ * Representor proxy should use service lcore on PF's socket
+ * (sa->socket_id) to be efficient. But the proxy will fall back
+ * to any socket if it is not possible to get the service core
+ * on the same socket. Check that at least service core on any
+ * socket is available.
+ */
+ if (sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE)
+ return false;
+
+ return true;
+}
+
int
sfc_dma_alloc(const struct sfc_adapter *sa, const char *name, uint16_t id,
size_t len, int socket_id, efsys_mem_t *esmp)
@@ -434,9 +453,16 @@ sfc_try_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_flows_insert;
+ rc = sfc_repr_proxy_start(sa);
+ if (rc != 0)
+ goto fail_repr_proxy_start;
+
sfc_log_init(sa, "done");
return 0;
+fail_repr_proxy_start:
+ sfc_flow_stop(sa);
+
fail_flows_insert:
sfc_tx_stop(sa);
@@ -540,6 +566,7 @@ sfc_stop(struct sfc_adapter *sa)
sa->state = SFC_ADAPTER_STOPPING;
+ sfc_repr_proxy_stop(sa);
sfc_flow_stop(sa);
sfc_tx_stop(sa);
sfc_rx_stop(sa);
@@ -899,6 +926,10 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_mae_switchdev_init;
+ rc = sfc_repr_proxy_attach(sa);
+ if (rc != 0)
+ goto fail_repr_proxy_attach;
+
sfc_log_init(sa, "fini nic");
efx_nic_fini(enp);
@@ -927,6 +958,9 @@ sfc_attach(struct sfc_adapter *sa)
fail_sw_xstats_init:
sfc_flow_fini(sa);
+ sfc_repr_proxy_detach(sa);
+
+fail_repr_proxy_attach:
sfc_mae_switchdev_fini(sa);
fail_mae_switchdev_init:
@@ -976,6 +1010,7 @@ sfc_detach(struct sfc_adapter *sa)
sfc_flow_fini(sa);
+ sfc_repr_proxy_detach(sa);
sfc_mae_switchdev_fini(sa);
sfc_mae_detach(sa);
sfc_mae_counter_rxq_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index b045baca9e..8f65857f65 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -30,6 +30,8 @@
#include "sfc_sriov.h"
#include "sfc_mae.h"
#include "sfc_dp.h"
+#include "sfc_repr_proxy.h"
+#include "sfc_service.h"
#ifdef __cplusplus
extern "C" {
@@ -260,6 +262,7 @@ struct sfc_adapter {
struct sfc_sw_xstats sw_xstats;
struct sfc_filter filter;
struct sfc_mae mae;
+ struct sfc_repr_proxy repr_proxy;
struct sfc_flow_list flow_list;
@@ -388,6 +391,8 @@ sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
return sas->counters_rxq_allocated ? 1 : 0;
}
+bool sfc_repr_supported(const struct sfc_adapter *sa);
+
/** Get the number of milliseconds since boot from the default timer */
static inline uint64_t
sfc_get_system_msecs(void)
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
new file mode 100644
index 0000000000..eb29376988
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -0,0 +1,210 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <rte_service.h>
+#include <rte_service_component.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_repr_proxy.h"
+#include "sfc.h"
+
+static int32_t
+sfc_repr_proxy_routine(void *arg)
+{
+ struct sfc_repr_proxy *rp = arg;
+
+ /* Representor proxy boilerplate will be here */
+ RTE_SET_USED(rp);
+
+ return 0;
+}
+
+int
+sfc_repr_proxy_attach(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct rte_service_spec service;
+ uint32_t cid;
+ uint32_t sid;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ cid = sfc_get_service_lcore(sa->socket_id);
+ if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
+ /* Warn and try to allocate on any NUMA node */
+ sfc_warn(sa,
+ "repr proxy: unable to get service lcore at socket %d",
+ sa->socket_id);
+
+ cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+ }
+ if (cid == RTE_MAX_LCORE) {
+ rc = ENOTSUP;
+ sfc_err(sa, "repr proxy: failed to get service lcore");
+ goto fail_get_service_lcore;
+ }
+
+ memset(&service, 0, sizeof(service));
+ snprintf(service.name, sizeof(service.name),
+ "net_sfc_%hu_repr_proxy", sfc_sa2shared(sa)->port_id);
+ service.socket_id = rte_lcore_to_socket_id(cid);
+ service.callback = sfc_repr_proxy_routine;
+ service.callback_userdata = rp;
+
+ rc = rte_service_component_register(&service, &sid);
+ if (rc != 0) {
+ rc = ENOEXEC;
+ sfc_err(sa, "repr proxy: failed to register service component");
+ goto fail_register;
+ }
+
+ rc = rte_service_map_lcore_set(sid, cid, 1);
+ if (rc != 0) {
+ rc = -rc;
+ sfc_err(sa, "repr proxy: failed to map lcore");
+ goto fail_map_lcore;
+ }
+
+ rp->service_core_id = cid;
+ rp->service_id = sid;
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_map_lcore:
+ rte_service_component_unregister(sid);
+
+fail_register:
+ /*
+ * No need to rollback service lcore get since
+ * it just makes socket_id based search and remembers it.
+ */
+
+fail_get_service_lcore:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_repr_proxy_detach(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ rte_service_map_lcore_set(rp->service_id, rp->service_core_id, 0);
+ rte_service_component_unregister(rp->service_id);
+
+ sfc_log_init(sa, "done");
+}
+
+int
+sfc_repr_proxy_start(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ /*
+ * The condition to start the proxy is insufficient. It will be
+ * complemented with representor port start/stop support.
+ */
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ /* Service core may be in "stopped" state, start it */
+ rc = rte_service_lcore_start(rp->service_core_id);
+ if (rc != 0 && rc != -EALREADY) {
+ rc = -rc;
+ sfc_err(sa, "failed to start service core for %s: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(rc));
+ goto fail_start_core;
+ }
+
+ /* Run the service */
+ rc = rte_service_component_runstate_set(rp->service_id, 1);
+ if (rc < 0) {
+ rc = -rc;
+ sfc_err(sa, "failed to run %s component: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(rc));
+ goto fail_component_runstate_set;
+ }
+ rc = rte_service_runstate_set(rp->service_id, 1);
+ if (rc < 0) {
+ rc = -rc;
+ sfc_err(sa, "failed to run %s: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(rc));
+ goto fail_runstate_set;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_runstate_set:
+ rte_service_component_runstate_set(rp->service_id, 0);
+
+fail_component_runstate_set:
+ /* Service lcore may be shared and we never stop it */
+
+fail_start_core:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_repr_proxy_stop(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_supported(sa)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ rc = rte_service_runstate_set(rp->service_id, 0);
+ if (rc < 0) {
+ sfc_err(sa, "failed to stop %s: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(-rc));
+ }
+
+ rc = rte_service_component_runstate_set(rp->service_id, 0);
+ if (rc < 0) {
+ sfc_err(sa, "failed to stop %s component: %s",
+ rte_service_get_name(rp->service_id),
+ rte_strerror(-rc));
+ }
+
+ /* Service lcore may be shared and we never stop it */
+
+ sfc_log_init(sa, "done");
+}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
new file mode 100644
index 0000000000..40ce352335
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_REPR_PROXY_H
+#define _SFC_REPR_PROXY_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct sfc_repr_proxy {
+ uint32_t service_core_id;
+ uint32_t service_id;
+};
+
+struct sfc_adapter;
+
+int sfc_repr_proxy_attach(struct sfc_adapter *sa);
+void sfc_repr_proxy_detach(struct sfc_adapter *sa);
+int sfc_repr_proxy_start(struct sfc_adapter *sa);
+void sfc_repr_proxy_stop(struct sfc_adapter *sa);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_REPR_PROXY_H */
--
2.30.2
* [dpdk-dev] [PATCH v2 08/38] net/sfc: reserve TxQ and RxQ for port representors
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (6 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 07/38] net/sfc: add representors proxy infrastructure Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 09/38] net/sfc: move adapter state enum to separate header Andrew Rybchenko
` (30 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
A Tx/Rx queue pair is required to forward traffic between
port representors and virtual functions.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 51 ++++++++++++++++++++++++++++++--
drivers/net/sfc/sfc.h | 15 ++++++++++
drivers/net/sfc/sfc_ev.h | 40 ++++++++++++++++++-------
drivers/net/sfc/sfc_repr_proxy.c | 12 +++++---
drivers/net/sfc/sfc_repr_proxy.h | 8 +++++
drivers/net/sfc/sfc_tx.c | 29 ++++++++++--------
6 files changed, 124 insertions(+), 31 deletions(-)
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 591b8971b3..9abd6d600b 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -46,6 +46,12 @@ sfc_repr_supported(const struct sfc_adapter *sa)
return true;
}
+bool
+sfc_repr_available(const struct sfc_adapter_shared *sas)
+{
+ return sas->nb_repr_rxq > 0 && sas->nb_repr_txq > 0;
+}
+
int
sfc_dma_alloc(const struct sfc_adapter *sa, const char *name, uint16_t id,
size_t len, int socket_id, efsys_mem_t *esmp)
@@ -296,6 +302,41 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
sas->counters_rxq_allocated = false;
}
+ if (sfc_repr_supported(sa) &&
+ evq_allocated >= SFC_REPR_PROXY_NB_RXQ_MIN +
+ SFC_REPR_PROXY_NB_TXQ_MIN &&
+ rxq_allocated >= SFC_REPR_PROXY_NB_RXQ_MIN &&
+ txq_allocated >= SFC_REPR_PROXY_NB_TXQ_MIN) {
+ unsigned int extra;
+
+ txq_allocated -= SFC_REPR_PROXY_NB_TXQ_MIN;
+ rxq_allocated -= SFC_REPR_PROXY_NB_RXQ_MIN;
+ evq_allocated -= SFC_REPR_PROXY_NB_RXQ_MIN +
+ SFC_REPR_PROXY_NB_TXQ_MIN;
+
+ sas->nb_repr_rxq = SFC_REPR_PROXY_NB_RXQ_MIN;
+ sas->nb_repr_txq = SFC_REPR_PROXY_NB_TXQ_MIN;
+
+ /* Allocate extra representor RxQs up to the maximum */
+ extra = MIN(evq_allocated, rxq_allocated);
+ extra = MIN(extra,
+ SFC_REPR_PROXY_NB_RXQ_MAX - sas->nb_repr_rxq);
+ evq_allocated -= extra;
+ rxq_allocated -= extra;
+ sas->nb_repr_rxq += extra;
+
+ /* Allocate extra representor TxQs up to the maximum */
+ extra = MIN(evq_allocated, txq_allocated);
+ extra = MIN(extra,
+ SFC_REPR_PROXY_NB_TXQ_MAX - sas->nb_repr_txq);
+ evq_allocated -= extra;
+ txq_allocated -= extra;
+ sas->nb_repr_txq += extra;
+ } else {
+ sas->nb_repr_rxq = 0;
+ sas->nb_repr_txq = 0;
+ }
+
/* Add remaining allocated queues */
sa->rxq_max += MIN(rxq_allocated, evq_allocated / 2);
sa->txq_max += MIN(txq_allocated, evq_allocated - sa->rxq_max);
@@ -313,8 +354,10 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
static int
sfc_set_drv_limits(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
const struct rte_eth_dev_data *data = sa->eth_dev->data;
- uint32_t rxq_reserved = sfc_nb_reserved_rxq(sfc_sa2shared(sa));
+ uint32_t rxq_reserved = sfc_nb_reserved_rxq(sas);
+ uint32_t txq_reserved = sfc_nb_txq_reserved(sas);
efx_drv_limits_t lim;
memset(&lim, 0, sizeof(lim));
@@ -325,10 +368,12 @@ sfc_set_drv_limits(struct sfc_adapter *sa)
* sfc_estimate_resource_limits().
*/
lim.edl_min_evq_count = lim.edl_max_evq_count =
- 1 + data->nb_rx_queues + data->nb_tx_queues + rxq_reserved;
+ 1 + data->nb_rx_queues + data->nb_tx_queues +
+ rxq_reserved + txq_reserved;
lim.edl_min_rxq_count = lim.edl_max_rxq_count =
data->nb_rx_queues + rxq_reserved;
- lim.edl_min_txq_count = lim.edl_max_txq_count = data->nb_tx_queues;
+ lim.edl_min_txq_count = lim.edl_max_txq_count =
+ data->nb_tx_queues + txq_reserved;
return efx_nic_set_drv_limits(sa->nic, &lim);
}
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 8f65857f65..79f9d7979e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -191,6 +191,8 @@ struct sfc_adapter_shared {
char *dp_tx_name;
bool counters_rxq_allocated;
+ unsigned int nb_repr_rxq;
+ unsigned int nb_repr_txq;
};
/* Adapter process private data */
@@ -392,6 +394,19 @@ sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
}
bool sfc_repr_supported(const struct sfc_adapter *sa);
+bool sfc_repr_available(const struct sfc_adapter_shared *sas);
+
+static inline unsigned int
+sfc_repr_nb_rxq(const struct sfc_adapter_shared *sas)
+{
+ return sas->nb_repr_rxq;
+}
+
+static inline unsigned int
+sfc_repr_nb_txq(const struct sfc_adapter_shared *sas)
+{
+ return sas->nb_repr_txq;
+}
/** Get the number of milliseconds since boot from the default timer */
static inline uint64_t
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index b2a0380205..590cfb1694 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -70,14 +70,21 @@ sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
static inline unsigned int
sfc_nb_reserved_rxq(const struct sfc_adapter_shared *sas)
{
- return sfc_nb_counter_rxq(sas);
+ return sfc_nb_counter_rxq(sas) + sfc_repr_nb_rxq(sas);
+}
+
+/* Return the number of Tx queues reserved for driver's internal use */
+static inline unsigned int
+sfc_nb_txq_reserved(const struct sfc_adapter_shared *sas)
+{
+ return sfc_repr_nb_txq(sas);
}
static inline unsigned int
sfc_nb_reserved_evq(const struct sfc_adapter_shared *sas)
{
- /* An EvQ is required for each reserved RxQ */
- return 1 + sfc_nb_reserved_rxq(sas);
+ /* An EvQ is required for each reserved Rx/Tx queue */
+ return 1 + sfc_nb_reserved_rxq(sas) + sfc_nb_txq_reserved(sas);
}
/*
@@ -112,6 +119,7 @@ sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
* Own event queue is allocated for management, each Rx and each Tx queue.
* Zero event queue is used for management events.
* When counters are supported, one Rx event queue is reserved.
+ * When representors are supported, Rx and Tx event queues are reserved.
* Rx event queues follow reserved event queues.
* Tx event queues follow Rx event queues.
*/
@@ -150,27 +158,37 @@ sfc_evq_sw_index_by_rxq_sw_index(struct sfc_adapter *sa,
}
static inline sfc_ethdev_qid_t
-sfc_ethdev_tx_qid_by_txq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+sfc_ethdev_tx_qid_by_txq_sw_index(struct sfc_adapter_shared *sas,
sfc_sw_index_t txq_sw_index)
{
- /* Only ethdev queues are present for now */
- return txq_sw_index;
+ if (txq_sw_index < sfc_nb_txq_reserved(sas))
+ return SFC_ETHDEV_QID_INVALID;
+
+ return txq_sw_index - sfc_nb_txq_reserved(sas);
}
static inline sfc_sw_index_t
-sfc_txq_sw_index_by_ethdev_tx_qid(__rte_unused struct sfc_adapter_shared *sas,
+sfc_txq_sw_index_by_ethdev_tx_qid(struct sfc_adapter_shared *sas,
sfc_ethdev_qid_t ethdev_qid)
{
- /* Only ethdev queues are present for now */
- return ethdev_qid;
+ return sfc_nb_txq_reserved(sas) + ethdev_qid;
}
static inline sfc_sw_index_t
sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
sfc_sw_index_t txq_sw_index)
{
- return sfc_nb_reserved_evq(sfc_sa2shared(sa)) +
- sa->eth_dev->data->nb_rx_queues + txq_sw_index;
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+ sfc_ethdev_qid_t ethdev_qid;
+
+ ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, txq_sw_index);
+ if (ethdev_qid == SFC_ETHDEV_QID_INVALID) {
+ return sfc_nb_reserved_evq(sas) - sfc_nb_txq_reserved(sas) +
+ txq_sw_index;
+ }
+
+ return sfc_nb_reserved_evq(sas) + sa->eth_dev->data->nb_rx_queues +
+ ethdev_qid;
}
int sfc_ev_attach(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index eb29376988..6d3962304f 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -29,6 +29,7 @@ sfc_repr_proxy_routine(void *arg)
int
sfc_repr_proxy_attach(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
struct rte_service_spec service;
uint32_t cid;
@@ -37,7 +38,7 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return 0;
}
@@ -102,11 +103,12 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
void
sfc_repr_proxy_detach(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
sfc_log_init(sa, "entry");
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return;
}
@@ -120,6 +122,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
int
sfc_repr_proxy_start(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
int rc;
@@ -129,7 +132,7 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
* The condition to start the proxy is insufficient. It will be
* complemented with representor port start/stop support.
*/
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return 0;
}
@@ -180,12 +183,13 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
void
sfc_repr_proxy_stop(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
int rc;
sfc_log_init(sa, "entry");
- if (!sfc_repr_supported(sa)) {
+ if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return;
}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index 40ce352335..953b9922c8 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -16,6 +16,14 @@
extern "C" {
#endif
+/* Number of supported RxQs with different mbuf memory pools */
+#define SFC_REPR_PROXY_NB_RXQ_MIN (1)
+#define SFC_REPR_PROXY_NB_RXQ_MAX (1)
+
+/* One TxQ is required and sufficient for port representors support */
+#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
+#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+
struct sfc_repr_proxy {
uint32_t service_core_id;
uint32_t service_id;
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d2..c1b2e964f8 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -376,6 +376,8 @@ sfc_tx_configure(struct sfc_adapter *sa)
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
const struct rte_eth_conf *dev_conf = &sa->eth_dev->data->dev_conf;
const unsigned int nb_tx_queues = sa->eth_dev->data->nb_tx_queues;
+ const unsigned int nb_rsvd_tx_queues = sfc_nb_txq_reserved(sas);
+ const unsigned int nb_txq_total = nb_tx_queues + nb_rsvd_tx_queues;
int rc = 0;
sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
@@ -395,11 +397,11 @@ sfc_tx_configure(struct sfc_adapter *sa)
if (rc != 0)
goto fail_check_mode;
- if (nb_tx_queues == sas->txq_count)
+ if (nb_txq_total == sas->txq_count)
goto done;
if (sas->txq_info == NULL) {
- sas->txq_info = rte_calloc_socket("sfc-txqs", nb_tx_queues,
+ sas->txq_info = rte_calloc_socket("sfc-txqs", nb_txq_total,
sizeof(sas->txq_info[0]), 0,
sa->socket_id);
if (sas->txq_info == NULL)
@@ -410,7 +412,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
* since it should not be shared.
*/
rc = ENOMEM;
- sa->txq_ctrl = calloc(nb_tx_queues, sizeof(sa->txq_ctrl[0]));
+ sa->txq_ctrl = calloc(nb_txq_total, sizeof(sa->txq_ctrl[0]));
if (sa->txq_ctrl == NULL)
goto fail_txqs_ctrl_alloc;
} else {
@@ -422,23 +424,23 @@ sfc_tx_configure(struct sfc_adapter *sa)
new_txq_info =
rte_realloc(sas->txq_info,
- nb_tx_queues * sizeof(sas->txq_info[0]), 0);
- if (new_txq_info == NULL && nb_tx_queues > 0)
+ nb_txq_total * sizeof(sas->txq_info[0]), 0);
+ if (new_txq_info == NULL && nb_txq_total > 0)
goto fail_txqs_realloc;
new_txq_ctrl = realloc(sa->txq_ctrl,
- nb_tx_queues * sizeof(sa->txq_ctrl[0]));
- if (new_txq_ctrl == NULL && nb_tx_queues > 0)
+ nb_txq_total * sizeof(sa->txq_ctrl[0]));
+ if (new_txq_ctrl == NULL && nb_txq_total > 0)
goto fail_txqs_ctrl_realloc;
sas->txq_info = new_txq_info;
sa->txq_ctrl = new_txq_ctrl;
- if (nb_tx_queues > sas->ethdev_txq_count) {
- memset(&sas->txq_info[sas->ethdev_txq_count], 0,
- (nb_tx_queues - sas->ethdev_txq_count) *
+ if (nb_txq_total > sas->txq_count) {
+ memset(&sas->txq_info[sas->txq_count], 0,
+ (nb_txq_total - sas->txq_count) *
sizeof(sas->txq_info[0]));
- memset(&sa->txq_ctrl[sas->ethdev_txq_count], 0,
- (nb_tx_queues - sas->ethdev_txq_count) *
+ memset(&sa->txq_ctrl[sas->txq_count], 0,
+ (nb_txq_total - sas->txq_count) *
sizeof(sa->txq_ctrl[0]));
}
}
@@ -455,7 +457,8 @@ sfc_tx_configure(struct sfc_adapter *sa)
sas->ethdev_txq_count++;
}
- sas->txq_count = sas->ethdev_txq_count;
+ /* TODO: initialize reserved queues when supported. */
+ sas->txq_count = sas->ethdev_txq_count + nb_rsvd_tx_queues;
done:
return 0;
--
2.30.2
* [dpdk-dev] [PATCH v2 09/38] net/sfc: move adapter state enum to separate header
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (7 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 08/38] net/sfc: reserve TxQ and RxQ for port representors Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 10/38] common/sfc_efx/base: allow creating invalid mport selectors Andrew Rybchenko
` (29 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The adapter state will be reused by representors, which will have
a separate adapter. Rename the adapter state to ethdev state
so that its meaning is clearer.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 47 ++++++++++---------
drivers/net/sfc/sfc.h | 54 +---------------------
drivers/net/sfc/sfc_ethdev.c | 40 ++++++++---------
drivers/net/sfc/sfc_ethdev_state.h | 72 ++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_flow.c | 10 ++---
drivers/net/sfc/sfc_intr.c | 12 ++---
drivers/net/sfc/sfc_mae.c | 2 +-
drivers/net/sfc/sfc_port.c | 2 +-
8 files changed, 130 insertions(+), 109 deletions(-)
create mode 100644 drivers/net/sfc/sfc_ethdev_state.h
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 9abd6d600b..152234cb61 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -26,7 +26,6 @@
#include "sfc_tweak.h"
#include "sfc_sw_stats.h"
-
bool
sfc_repr_supported(const struct sfc_adapter *sa)
{
@@ -440,7 +439,7 @@ sfc_try_start(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
SFC_ASSERT(sfc_adapter_is_locked(sa));
- SFC_ASSERT(sa->state == SFC_ADAPTER_STARTING);
+ SFC_ASSERT(sa->state == SFC_ETHDEV_STARTING);
sfc_log_init(sa, "set FW subvariant");
rc = sfc_set_fw_subvariant(sa);
@@ -545,9 +544,9 @@ sfc_start(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
switch (sa->state) {
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
break;
- case SFC_ADAPTER_STARTED:
+ case SFC_ETHDEV_STARTED:
sfc_notice(sa, "already started");
return 0;
default:
@@ -555,7 +554,7 @@ sfc_start(struct sfc_adapter *sa)
goto fail_bad_state;
}
- sa->state = SFC_ADAPTER_STARTING;
+ sa->state = SFC_ETHDEV_STARTING;
rc = 0;
do {
@@ -578,13 +577,13 @@ sfc_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_try_start;
- sa->state = SFC_ADAPTER_STARTED;
+ sa->state = SFC_ETHDEV_STARTED;
sfc_log_init(sa, "done");
return 0;
fail_try_start:
fail_sriov_vswitch_create:
- sa->state = SFC_ADAPTER_CONFIGURED;
+ sa->state = SFC_ETHDEV_CONFIGURED;
fail_bad_state:
sfc_log_init(sa, "failed %d", rc);
return rc;
@@ -598,9 +597,9 @@ sfc_stop(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
switch (sa->state) {
- case SFC_ADAPTER_STARTED:
+ case SFC_ETHDEV_STARTED:
break;
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
sfc_notice(sa, "already stopped");
return;
default:
@@ -609,7 +608,7 @@ sfc_stop(struct sfc_adapter *sa)
return;
}
- sa->state = SFC_ADAPTER_STOPPING;
+ sa->state = SFC_ETHDEV_STOPPING;
sfc_repr_proxy_stop(sa);
sfc_flow_stop(sa);
@@ -620,7 +619,7 @@ sfc_stop(struct sfc_adapter *sa)
sfc_intr_stop(sa);
efx_nic_fini(sa->nic);
- sa->state = SFC_ADAPTER_CONFIGURED;
+ sa->state = SFC_ETHDEV_CONFIGURED;
sfc_log_init(sa, "done");
}
@@ -631,7 +630,7 @@ sfc_restart(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return EINVAL;
sfc_stop(sa);
@@ -652,7 +651,7 @@ sfc_restart_if_required(void *arg)
if (rte_atomic32_cmpset((volatile uint32_t *)&sa->restart_required,
1, 0)) {
sfc_adapter_lock(sa);
- if (sa->state == SFC_ADAPTER_STARTED)
+ if (sa->state == SFC_ETHDEV_STARTED)
(void)sfc_restart(sa);
sfc_adapter_unlock(sa);
}
@@ -685,9 +684,9 @@ sfc_configure(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- SFC_ASSERT(sa->state == SFC_ADAPTER_INITIALIZED ||
- sa->state == SFC_ADAPTER_CONFIGURED);
- sa->state = SFC_ADAPTER_CONFIGURING;
+ SFC_ASSERT(sa->state == SFC_ETHDEV_INITIALIZED ||
+ sa->state == SFC_ETHDEV_CONFIGURED);
+ sa->state = SFC_ETHDEV_CONFIGURING;
rc = sfc_check_conf(sa);
if (rc != 0)
@@ -713,7 +712,7 @@ sfc_configure(struct sfc_adapter *sa)
if (rc != 0)
goto fail_sw_xstats_configure;
- sa->state = SFC_ADAPTER_CONFIGURED;
+ sa->state = SFC_ETHDEV_CONFIGURED;
sfc_log_init(sa, "done");
return 0;
@@ -731,7 +730,7 @@ sfc_configure(struct sfc_adapter *sa)
fail_intr_configure:
fail_check_conf:
- sa->state = SFC_ADAPTER_INITIALIZED;
+ sa->state = SFC_ETHDEV_INITIALIZED;
sfc_log_init(sa, "failed %d", rc);
return rc;
}
@@ -743,8 +742,8 @@ sfc_close(struct sfc_adapter *sa)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- SFC_ASSERT(sa->state == SFC_ADAPTER_CONFIGURED);
- sa->state = SFC_ADAPTER_CLOSING;
+ SFC_ASSERT(sa->state == SFC_ETHDEV_CONFIGURED);
+ sa->state = SFC_ETHDEV_CLOSING;
sfc_sw_xstats_close(sa);
sfc_tx_close(sa);
@@ -752,7 +751,7 @@ sfc_close(struct sfc_adapter *sa)
sfc_port_close(sa);
sfc_intr_close(sa);
- sa->state = SFC_ADAPTER_INITIALIZED;
+ sa->state = SFC_ETHDEV_INITIALIZED;
sfc_log_init(sa, "done");
}
@@ -993,7 +992,7 @@ sfc_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_sriov_vswitch_create;
- sa->state = SFC_ADAPTER_INITIALIZED;
+ sa->state = SFC_ETHDEV_INITIALIZED;
sfc_log_init(sa, "done");
return 0;
@@ -1067,7 +1066,7 @@ sfc_detach(struct sfc_adapter *sa)
efx_tunnel_fini(sa->nic);
sfc_sriov_detach(sa);
- sa->state = SFC_ADAPTER_UNINITIALIZED;
+ sa->state = SFC_ETHDEV_UNINITIALIZED;
}
static int
@@ -1325,7 +1324,7 @@ sfc_unprobe(struct sfc_adapter *sa)
sfc_mem_bar_fini(sa);
sfc_flow_fini(sa);
- sa->state = SFC_ADAPTER_UNINITIALIZED;
+ sa->state = SFC_ETHDEV_UNINITIALIZED;
}
uint32_t
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 79f9d7979e..628f32c13f 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -32,62 +32,12 @@
#include "sfc_dp.h"
#include "sfc_repr_proxy.h"
#include "sfc_service.h"
+#include "sfc_ethdev_state.h"
#ifdef __cplusplus
extern "C" {
#endif
-/*
- * +---------------+
- * | UNINITIALIZED |<-----------+
- * +---------------+ |
- * |.eth_dev_init |.eth_dev_uninit
- * V |
- * +---------------+------------+
- * | INITIALIZED |
- * +---------------+<-----------<---------------+
- * |.dev_configure | |
- * V |failed |
- * +---------------+------------+ |
- * | CONFIGURING | |
- * +---------------+----+ |
- * |success | |
- * | | +---------------+
- * | | | CLOSING |
- * | | +---------------+
- * | | ^
- * V |.dev_configure |
- * +---------------+----+ |.dev_close
- * | CONFIGURED |----------------------------+
- * +---------------+<-----------+
- * |.dev_start |
- * V |
- * +---------------+ |
- * | STARTING |------------^
- * +---------------+ failed |
- * |success |
- * | +---------------+
- * | | STOPPING |
- * | +---------------+
- * | ^
- * V |.dev_stop
- * +---------------+------------+
- * | STARTED |
- * +---------------+
- */
-enum sfc_adapter_state {
- SFC_ADAPTER_UNINITIALIZED = 0,
- SFC_ADAPTER_INITIALIZED,
- SFC_ADAPTER_CONFIGURING,
- SFC_ADAPTER_CONFIGURED,
- SFC_ADAPTER_CLOSING,
- SFC_ADAPTER_STARTING,
- SFC_ADAPTER_STARTED,
- SFC_ADAPTER_STOPPING,
-
- SFC_ADAPTER_NSTATES
-};
-
enum sfc_dev_filter_mode {
SFC_DEV_FILTER_MODE_PROMISC = 0,
SFC_DEV_FILTER_MODE_ALLMULTI,
@@ -245,7 +195,7 @@ struct sfc_adapter {
* change its state should acquire the lock.
*/
rte_spinlock_t lock;
- enum sfc_adapter_state state;
+ enum sfc_ethdev_state state;
struct rte_eth_dev *eth_dev;
struct rte_kvargs *kvargs;
int socket_id;
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 7f5212c3fd..1e9a31c937 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -213,9 +213,9 @@ sfc_dev_configure(struct rte_eth_dev *dev)
sfc_adapter_lock(sa);
switch (sa->state) {
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
/* FALLTHROUGH */
- case SFC_ADAPTER_INITIALIZED:
+ case SFC_ETHDEV_INITIALIZED:
rc = sfc_configure(sa);
break;
default:
@@ -257,7 +257,7 @@ sfc_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
sfc_log_init(sa, "entry");
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &current_link);
} else if (wait_to_complete) {
efx_link_mode_t link_mode;
@@ -346,15 +346,15 @@ sfc_dev_close(struct rte_eth_dev *dev)
sfc_adapter_lock(sa);
switch (sa->state) {
- case SFC_ADAPTER_STARTED:
+ case SFC_ETHDEV_STARTED:
sfc_stop(sa);
- SFC_ASSERT(sa->state == SFC_ADAPTER_CONFIGURED);
+ SFC_ASSERT(sa->state == SFC_ETHDEV_CONFIGURED);
/* FALLTHROUGH */
- case SFC_ADAPTER_CONFIGURED:
+ case SFC_ETHDEV_CONFIGURED:
sfc_close(sa);
- SFC_ASSERT(sa->state == SFC_ADAPTER_INITIALIZED);
+ SFC_ASSERT(sa->state == SFC_ETHDEV_INITIALIZED);
/* FALLTHROUGH */
- case SFC_ADAPTER_INITIALIZED:
+ case SFC_ETHDEV_INITIALIZED:
break;
default:
sfc_err(sa, "unexpected adapter state %u on close", sa->state);
@@ -410,7 +410,7 @@ sfc_dev_filter_set(struct rte_eth_dev *dev, enum sfc_dev_filter_mode mode,
sfc_warn(sa, "the change is to be applied on the next "
"start provided that isolated mode is "
"disabled prior the next start");
- } else if ((sa->state == SFC_ADAPTER_STARTED) &&
+ } else if ((sa->state == SFC_ETHDEV_STARTED) &&
((rc = sfc_set_rx_mode(sa)) != 0)) {
*toggle = !(enabled);
sfc_warn(sa, "Failed to %s %s mode, rc = %d",
@@ -704,7 +704,7 @@ sfc_stats_reset(struct rte_eth_dev *dev)
sfc_adapter_lock(sa);
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
/*
* The operation cannot be done if port is not started; it
* will be scheduled to be done during the next port start
@@ -906,7 +906,7 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
sfc_adapter_lock(sa);
- if (sa->state == SFC_ADAPTER_STARTED)
+ if (sa->state == SFC_ETHDEV_STARTED)
efx_mac_fcntl_get(sa->nic, &wanted_fc, &link_fc);
else
link_fc = sa->port.flow_ctrl;
@@ -972,7 +972,7 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
sfc_adapter_lock(sa);
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = efx_mac_fcntl_set(sa->nic, fcntl, fc_conf->autoneg);
if (rc != 0)
goto fail_mac_fcntl_set;
@@ -1052,7 +1052,7 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
goto fail_check_scatter;
if (pdu != sa->port.pdu) {
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
sfc_stop(sa);
old_pdu = sa->port.pdu;
@@ -1129,7 +1129,7 @@ sfc_mac_addr_set(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
goto unlock;
}
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
sfc_notice(sa, "the port is not started");
sfc_notice(sa, "the new MAC address will be set on port start");
@@ -1216,7 +1216,7 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
port->nb_mcast_addrs = nb_mc_addr;
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return 0;
rc = efx_mac_multicast_list_set(sa->nic, port->mcast_addrs,
@@ -1357,7 +1357,7 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
sfc_adapter_lock(sa);
rc = EINVAL;
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
goto fail_not_started;
rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
@@ -1421,7 +1421,7 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
sfc_adapter_lock(sa);
rc = EINVAL;
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
goto fail_not_started;
txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
@@ -1529,7 +1529,7 @@ sfc_dev_udp_tunnel_op(struct rte_eth_dev *dev,
if (rc != 0)
goto fail_op;
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = efx_tunnel_reconfigure(sa->nic);
if (rc == EAGAIN) {
/*
@@ -1665,7 +1665,7 @@ sfc_dev_rss_hash_update(struct rte_eth_dev *dev,
}
if (rss_conf->rss_key != NULL) {
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
for (key_i = 0; key_i < n_contexts; key_i++) {
rc = efx_rx_scale_key_set(sa->nic,
contexts[key_i],
@@ -1792,7 +1792,7 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
}
}
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = efx_rx_scale_tbl_set(sa->nic, EFX_RSS_CONTEXT_DEFAULT,
rss_tbl_new, EFX_RSS_TBL_SIZE);
if (rc != 0)
diff --git a/drivers/net/sfc/sfc_ethdev_state.h b/drivers/net/sfc/sfc_ethdev_state.h
new file mode 100644
index 0000000000..51fb51e20e
--- /dev/null
+++ b/drivers/net/sfc/sfc_ethdev_state.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_ETHDEV_STATE_H
+#define _SFC_ETHDEV_STATE_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * +---------------+
+ * | UNINITIALIZED |<-----------+
+ * +---------------+ |
+ * |.eth_dev_init |.eth_dev_uninit
+ * V |
+ * +---------------+------------+
+ * | INITIALIZED |
+ * +---------------+<-----------<---------------+
+ * |.dev_configure | |
+ * V |failed |
+ * +---------------+------------+ |
+ * | CONFIGURING | |
+ * +---------------+----+ |
+ * |success | |
+ * | | +---------------+
+ * | | | CLOSING |
+ * | | +---------------+
+ * | | ^
+ * V |.dev_configure |
+ * +---------------+----+ |.dev_close
+ * | CONFIGURED |----------------------------+
+ * +---------------+<-----------+
+ * |.dev_start |
+ * V |
+ * +---------------+ |
+ * | STARTING |------------^
+ * +---------------+ failed |
+ * |success |
+ * | +---------------+
+ * | | STOPPING |
+ * | +---------------+
+ * | ^
+ * V |.dev_stop
+ * +---------------+------------+
+ * | STARTED |
+ * +---------------+
+ */
+enum sfc_ethdev_state {
+ SFC_ETHDEV_UNINITIALIZED = 0,
+ SFC_ETHDEV_INITIALIZED,
+ SFC_ETHDEV_CONFIGURING,
+ SFC_ETHDEV_CONFIGURED,
+ SFC_ETHDEV_CLOSING,
+ SFC_ETHDEV_STARTING,
+ SFC_ETHDEV_STARTED,
+ SFC_ETHDEV_STOPPING,
+
+ SFC_ETHDEV_NSTATES
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _SFC_ETHDEV_STATE_H */
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 4f5993a68d..36ee79f331 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -2724,7 +2724,7 @@ sfc_flow_create(struct rte_eth_dev *dev,
TAILQ_INSERT_TAIL(&sa->flow_list, flow, entries);
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
rc = sfc_flow_insert(sa, flow, error);
if (rc != 0)
goto fail_flow_insert;
@@ -2767,7 +2767,7 @@ sfc_flow_destroy(struct rte_eth_dev *dev,
goto fail_bad_value;
}
- if (sa->state == SFC_ADAPTER_STARTED)
+ if (sa->state == SFC_ETHDEV_STARTED)
rc = sfc_flow_remove(sa, flow, error);
TAILQ_REMOVE(&sa->flow_list, flow, entries);
@@ -2790,7 +2790,7 @@ sfc_flow_flush(struct rte_eth_dev *dev,
sfc_adapter_lock(sa);
while ((flow = TAILQ_FIRST(&sa->flow_list)) != NULL) {
- if (sa->state == SFC_ADAPTER_STARTED) {
+ if (sa->state == SFC_ETHDEV_STARTED) {
int rc;
rc = sfc_flow_remove(sa, flow, error);
@@ -2828,7 +2828,7 @@ sfc_flow_query(struct rte_eth_dev *dev,
goto fail_no_backend;
}
- if (sa->state != SFC_ADAPTER_STARTED) {
+ if (sa->state != SFC_ETHDEV_STARTED) {
ret = rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"Can't query the flow: the adapter is not started");
@@ -2858,7 +2858,7 @@ sfc_flow_isolate(struct rte_eth_dev *dev, int enable,
int ret = 0;
sfc_adapter_lock(sa);
- if (sa->state != SFC_ADAPTER_INITIALIZED) {
+ if (sa->state != SFC_ETHDEV_INITIALIZED) {
rte_flow_error_set(error, EBUSY,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
NULL, "please close the port first");
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..69414fd839 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -60,9 +60,9 @@ sfc_intr_line_handler(void *cb_arg)
sfc_log_init(sa, "entry");
- if (sa->state != SFC_ADAPTER_STARTED &&
- sa->state != SFC_ADAPTER_STARTING &&
- sa->state != SFC_ADAPTER_STOPPING) {
+ if (sa->state != SFC_ETHDEV_STARTED &&
+ sa->state != SFC_ETHDEV_STARTING &&
+ sa->state != SFC_ETHDEV_STOPPING) {
sfc_log_init(sa,
"interrupt on stopped adapter, don't reenable");
goto exit;
@@ -106,9 +106,9 @@ sfc_intr_message_handler(void *cb_arg)
sfc_log_init(sa, "entry");
- if (sa->state != SFC_ADAPTER_STARTED &&
- sa->state != SFC_ADAPTER_STARTING &&
- sa->state != SFC_ADAPTER_STOPPING) {
+ if (sa->state != SFC_ETHDEV_STARTED &&
+ sa->state != SFC_ETHDEV_STARTING &&
+ sa->state != SFC_ETHDEV_STOPPING) {
sfc_log_init(sa, "adapter not-started, don't reenable");
goto exit;
}
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index b3607a178b..7be77054ab 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -3414,7 +3414,7 @@ sfc_mae_flow_verify(struct sfc_adapter *sa,
SFC_ASSERT(sfc_adapter_is_locked(sa));
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return EAGAIN;
if (outer_rule != NULL) {
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb81..7a3f59a112 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -48,7 +48,7 @@ sfc_port_update_mac_stats(struct sfc_adapter *sa, boolean_t force_upload)
SFC_ASSERT(sfc_adapter_is_locked(sa));
- if (sa->state != SFC_ADAPTER_STARTED)
+ if (sa->state != SFC_ETHDEV_STARTED)
return 0;
/*
--
2.30.2
* [dpdk-dev] [PATCH v2 10/38] common/sfc_efx/base: allow creating invalid mport selectors
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (8 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 09/38] net/sfc: move adapter state enum to separate header Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 11/38] net/sfc: add port representors infrastructure Andrew Rybchenko
` (28 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
There isn't always a valid mport that can be used. For these cases,
special invalid selectors can be generated. Requests that use such
selectors in any way will be rejected.
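A minimal usage sketch (not part of the patch): a caller that does not
yet know a valid m-port can generate an opaque placeholder selector.
#include "efx.h"
/* Illustration only, not from the patch */
static efx_rc_t
example_placeholder_mport(efx_mport_sel_t *mportp)
{
	/* Fails with EINVAL only if the output pointer is NULL */
	return (efx_mae_mport_invalid(mportp));
}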
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 11 +++++++++++
drivers/common/sfc_efx/base/efx_mae.c | 25 +++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 37 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 94803815ac..c0d1535017 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4196,6 +4196,17 @@ typedef struct efx_mport_id_s {
#define EFX_MPORT_NULL (0U)
+/*
+ * Generate an invalid MPORT selector.
+ *
+ * The resulting MPORT selector is opaque to the caller. Requests
+ * that attempt to use it will be rejected.
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_invalid(
+ __out efx_mport_sel_t *mportp);
+
/*
* Get MPORT selector of a physical port.
*
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index b38b1143d6..b7afe8fdc8 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -660,6 +660,31 @@ static const efx_mae_mv_bit_desc_t __efx_mae_action_rule_mv_bit_desc_set[] = {
#undef EFX_MAE_MV_BIT_DESC
};
+ __checkReturn efx_rc_t
+efx_mae_mport_invalid(
+ __out efx_mport_sel_t *mportp)
+{
+ efx_dword_t dword;
+ efx_rc_t rc;
+
+ if (mportp == NULL) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ EFX_POPULATE_DWORD_1(dword,
+ MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_INVALID);
+
+ memset(mportp, 0, sizeof (*mportp));
+ mportp->sel = dword.ed_u32[0];
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
__checkReturn efx_rc_t
efx_mae_mport_by_phy_port(
__in uint32_t phy_port,
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 3dc21878c0..611757ccde 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -127,6 +127,7 @@ INTERNAL {
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
efx_mae_mport_id_by_selector;
+ efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
efx_mae_outer_rule_remove;
--
2.30.2
* [dpdk-dev] [PATCH v2 11/38] net/sfc: add port representors infrastructure
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (9 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 10/38] common/sfc_efx/base: allow creating invalid mport selectors Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 12/38] common/sfc_efx/base: add filter ingress mport matching field Andrew Rybchenko
` (27 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Provide a minimal implementation for port representors that can only be
configured and report device information.
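A hypothetical invocation (the PCI address is made up) that asks the
PMD to instantiate representors for VFs 0-2 using the standard devargs
syntax:
  dpdk-testpmd -a 0000:01:00.0,representor=[0-2] -- -i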
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
doc/guides/nics/sfc_efx.rst | 13 +-
drivers/net/sfc/meson.build | 1 +
drivers/net/sfc/sfc_ethdev.c | 153 +++++++++++-
drivers/net/sfc/sfc_kvargs.c | 1 +
drivers/net/sfc/sfc_kvargs.h | 2 +
drivers/net/sfc/sfc_repr.c | 458 +++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr.h | 36 +++
drivers/net/sfc/sfc_switch.h | 5 +
8 files changed, 662 insertions(+), 7 deletions(-)
create mode 100644 drivers/net/sfc/sfc_repr.c
create mode 100644 drivers/net/sfc/sfc_repr.h
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index d66cb76dab..4719031508 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -74,6 +74,8 @@ SFC EFX PMD has support for:
- SR-IOV PF
+- Port representors (see :ref:`switch_representation`)
+
Non-supported Features
----------------------
@@ -382,7 +384,16 @@ boolean parameters value.
software virtual switch (for example, Open vSwitch) makes the decision.
Software virtual switch may install MAE rules to pass established traffic
flows via hardware and offload software datapath as the result.
- Default is legacy.
+ Default is legacy, unless representors are specified, in which case switchdev
+ is chosen.
+
+- ``representor`` parameter [list]
+
+ Instantiate port representor Ethernet devices for specified Virtual
+ Functions list.
+
+ It is a standard parameter whose format is described in
+ :ref:`ethernet_device_standard_device_arguments`.
- ``rx_datapath`` [auto|efx|ef10|ef10_essb] (default **auto**)
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 4fc2063f7a..98365e9e73 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -98,4 +98,5 @@ sources = files(
'sfc_ef100_tx.c',
'sfc_service.c',
'sfc_repr_proxy.c',
+ 'sfc_repr.c',
)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 1e9a31c937..e270b5cefd 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -28,6 +28,7 @@
#include "sfc_flow.h"
#include "sfc_dp.h"
#include "sfc_dp_rx.h"
+#include "sfc_repr.h"
#include "sfc_sw_stats.h"
#define SFC_XSTAT_ID_INVALID_VAL UINT64_MAX
@@ -1909,6 +1910,10 @@ static const struct eth_dev_ops sfc_eth_dev_ops = {
.pool_ops_supported = sfc_pool_ops_supported,
};
+struct sfc_ethdev_init_data {
+ uint16_t nb_representors;
+};
+
/**
* Duplicate a string in potentially shared memory required for
* multi-process support.
@@ -2190,7 +2195,7 @@ sfc_register_dp(void)
}
static int
-sfc_parse_switch_mode(struct sfc_adapter *sa)
+sfc_parse_switch_mode(struct sfc_adapter *sa, bool has_representors)
{
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
const char *switch_mode = NULL;
@@ -2205,7 +2210,8 @@ sfc_parse_switch_mode(struct sfc_adapter *sa)
if (switch_mode == NULL) {
sa->switchdev = encp->enc_mae_supported &&
- !encp->enc_datapath_cap_evb;
+ (!encp->enc_datapath_cap_evb ||
+ has_representors);
} else if (strcasecmp(switch_mode, SFC_KVARG_SWITCH_MODE_LEGACY) == 0) {
sa->switchdev = false;
} else if (strcasecmp(switch_mode,
@@ -2230,10 +2236,11 @@ sfc_parse_switch_mode(struct sfc_adapter *sa)
}
static int
-sfc_eth_dev_init(struct rte_eth_dev *dev)
+sfc_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
{
struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+ struct sfc_ethdev_init_data *init_data = init_params;
uint32_t logtype_main;
struct sfc_adapter *sa;
int rc;
@@ -2324,7 +2331,7 @@ sfc_eth_dev_init(struct rte_eth_dev *dev)
* Selecting a default switch mode requires the NIC to be probed and
* to have its capabilities filled in.
*/
- rc = sfc_parse_switch_mode(sa);
+ rc = sfc_parse_switch_mode(sa, init_data->nb_representors > 0);
if (rc != 0)
goto fail_switch_mode;
@@ -2409,11 +2416,145 @@ static const struct rte_pci_id pci_id_sfc_efx_map[] = {
{ .vendor_id = 0 /* sentinel */ }
};
+static int
+sfc_parse_rte_devargs(const char *args, struct rte_eth_devargs *devargs)
+{
+ struct rte_eth_devargs eth_da = { .nb_representor_ports = 0 };
+ int rc;
+
+ if (args != NULL) {
+ rc = rte_eth_devargs_parse(args, &eth_da);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR,
+ "Failed to parse generic devargs '%s'",
+ args);
+ return rc;
+ }
+ }
+
+ *devargs = eth_da;
+
+ return 0;
+}
+
+static int
+sfc_eth_dev_create(struct rte_pci_device *pci_dev,
+ struct sfc_ethdev_init_data *init_data,
+ struct rte_eth_dev **devp)
+{
+ struct rte_eth_dev *dev;
+ int rc;
+
+ rc = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+ sizeof(struct sfc_adapter_shared),
+ eth_dev_pci_specific_init, pci_dev,
+ sfc_eth_dev_init, init_data);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR, "Failed to create sfc ethdev '%s'",
+ pci_dev->device.name);
+ return rc;
+ }
+
+ dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (dev == NULL) {
+ SFC_GENERIC_LOG(ERR, "Failed to find allocated sfc ethdev '%s'",
+ pci_dev->device.name);
+ return -ENODEV;
+ }
+
+ *devp = dev;
+
+ return 0;
+}
+
+static int
+sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
+ const struct rte_eth_devargs *eth_da)
+{
+ struct sfc_adapter *sa;
+ unsigned int i;
+ int rc;
+
+ if (eth_da->nb_representor_ports == 0)
+ return 0;
+
+ sa = sfc_adapter_by_eth_dev(dev);
+
+ if (!sa->switchdev) {
+ sfc_err(sa, "cannot create representors in non-switchdev mode");
+ return -EINVAL;
+ }
+
+ if (!sfc_repr_available(sfc_sa2shared(sa))) {
+ sfc_err(sa, "cannot create representors: unsupported");
+
+ return -ENOTSUP;
+ }
+
+ for (i = 0; i < eth_da->nb_representor_ports; ++i) {
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ efx_mport_sel_t mport_sel;
+
+ rc = efx_mae_mport_by_pcie_function(encp->enc_pf,
+ eth_da->representor_ports[i], &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to get representor %u m-port: %s - ignore",
+ eth_da->representor_ports[i],
+ rte_strerror(-rc));
+ continue;
+ }
+
+ rc = sfc_repr_create(dev, eth_da->representor_ports[i],
+ sa->mae.switch_domain_id, &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa, "cannot create representor %u: %s - ignore",
+ eth_da->representor_ports[i],
+ rte_strerror(-rc));
+ }
+ }
+
+ return 0;
+}
+
static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
{
- return rte_eth_dev_pci_generic_probe(pci_dev,
- sizeof(struct sfc_adapter_shared), sfc_eth_dev_init);
+ struct sfc_ethdev_init_data init_data;
+ struct rte_eth_devargs eth_da;
+ struct rte_eth_dev *dev;
+ int rc;
+
+ if (pci_dev->device.devargs != NULL) {
+ rc = sfc_parse_rte_devargs(pci_dev->device.devargs->args,
+ &eth_da);
+ if (rc != 0)
+ return rc;
+ } else {
+ memset(&eth_da, 0, sizeof(eth_da));
+ }
+
+ init_data.nb_representors = eth_da.nb_representor_ports;
+
+ if (eth_da.nb_representor_ports > 0 &&
+ rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ SFC_GENERIC_LOG(ERR,
+ "Create representors from secondary process not supported, dev '%s'",
+ pci_dev->device.name);
+ return -ENOTSUP;
+ }
+
+ rc = sfc_eth_dev_create(pci_dev, &init_data, &dev);
+ if (rc != 0)
+ return rc;
+
+ rc = sfc_eth_dev_create_representors(dev, &eth_da);
+ if (rc != 0) {
+ (void)rte_eth_dev_destroy(dev, sfc_eth_dev_uninit);
+ return rc;
+ }
+
+ return 0;
}
static int sfc_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
diff --git a/drivers/net/sfc/sfc_kvargs.c b/drivers/net/sfc/sfc_kvargs.c
index cd16213637..783cb43ae6 100644
--- a/drivers/net/sfc/sfc_kvargs.c
+++ b/drivers/net/sfc/sfc_kvargs.c
@@ -23,6 +23,7 @@ sfc_kvargs_parse(struct sfc_adapter *sa)
struct rte_devargs *devargs = eth_dev->device->devargs;
const char **params = (const char *[]){
SFC_KVARG_SWITCH_MODE,
+ SFC_KVARG_REPRESENTOR,
SFC_KVARG_STATS_UPDATE_PERIOD_MS,
SFC_KVARG_PERF_PROFILE,
SFC_KVARG_RX_DATAPATH,
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index 8e34ec92a2..2226f2b3d9 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -26,6 +26,8 @@ extern "C" {
"[" SFC_KVARG_SWITCH_MODE_LEGACY "|" \
SFC_KVARG_SWITCH_MODE_SWITCHDEV "]"
+#define SFC_KVARG_REPRESENTOR "representor"
+
#define SFC_KVARG_PERF_PROFILE "perf_profile"
#define SFC_KVARG_PERF_PROFILE_AUTO "auto"
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
new file mode 100644
index 0000000000..71eea0e209
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr.c
@@ -0,0 +1,458 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <ethdev_driver.h>
+
+#include "efx.h"
+
+#include "sfc_log.h"
+#include "sfc_debug.h"
+#include "sfc_repr.h"
+#include "sfc_ethdev_state.h"
+#include "sfc_switch.h"
+
+/** Multi-process shared representor private data */
+struct sfc_repr_shared {
+ uint16_t pf_port_id;
+ uint16_t repr_id;
+ uint16_t switch_domain_id;
+ uint16_t switch_port_id;
+};
+
+/** Primary process representor private data */
+struct sfc_repr {
+ /**
+ * PMD setup and configuration is not thread safe. Since it is not
+ * performance sensitive, it is better to guarantee thread-safety
+ * and add device level lock. Adapter control operations which
+ * change its state should acquire the lock.
+ */
+ rte_spinlock_t lock;
+ enum sfc_ethdev_state state;
+};
+
+#define sfcr_err(sr, ...) \
+ do { \
+ const struct sfc_repr *_sr = (sr); \
+ \
+ (void)_sr; \
+ SFC_GENERIC_LOG(ERR, __VA_ARGS__); \
+ } while (0)
+
+#define sfcr_info(sr, ...) \
+ do { \
+ const struct sfc_repr *_sr = (sr); \
+ \
+ (void)_sr; \
+ SFC_GENERIC_LOG(INFO, \
+ RTE_FMT("%s() " \
+ RTE_FMT_HEAD(__VA_ARGS__ ,), \
+ __func__, \
+ RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+ } while (0)
+
+static inline struct sfc_repr_shared *
+sfc_repr_shared_by_eth_dev(struct rte_eth_dev *eth_dev)
+{
+ struct sfc_repr_shared *srs = eth_dev->data->dev_private;
+
+ return srs;
+}
+
+static inline struct sfc_repr *
+sfc_repr_by_eth_dev(struct rte_eth_dev *eth_dev)
+{
+ struct sfc_repr *sr = eth_dev->process_private;
+
+ return sr;
+}
+
+/*
+ * Add wrapper functions to acquire/release lock to be able to remove or
+ * change the lock in one place.
+ */
+
+static inline void
+sfc_repr_lock_init(struct sfc_repr *sr)
+{
+ rte_spinlock_init(&sr->lock);
+}
+
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+
+static inline int
+sfc_repr_lock_is_locked(struct sfc_repr *sr)
+{
+ return rte_spinlock_is_locked(&sr->lock);
+}
+
+#endif
+
+static inline void
+sfc_repr_lock(struct sfc_repr *sr)
+{
+ rte_spinlock_lock(&sr->lock);
+}
+
+static inline void
+sfc_repr_unlock(struct sfc_repr *sr)
+{
+ rte_spinlock_unlock(&sr->lock);
+}
+
+static inline void
+sfc_repr_lock_fini(__rte_unused struct sfc_repr *sr)
+{
+ /* Just for symmetry of the API */
+}
+
+static int
+sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
+ const struct rte_eth_conf *conf)
+{
+ const struct rte_eth_rss_conf *rss_conf;
+ int ret = 0;
+
+ sfcr_info(sr, "entry");
+
+ if (conf->link_speeds != 0) {
+ sfcr_err(sr, "specific link speeds not supported");
+ ret = -EINVAL;
+ }
+
+ switch (conf->rxmode.mq_mode) {
+ case ETH_MQ_RX_RSS:
+ if (nb_rx_queues != 1) {
+ sfcr_err(sr, "Rx RSS is not supported with %u queues",
+ nb_rx_queues);
+ ret = -EINVAL;
+ break;
+ }
+
+ rss_conf = &conf->rx_adv_conf.rss_conf;
+ if (rss_conf->rss_key != NULL || rss_conf->rss_key_len != 0 ||
+ rss_conf->rss_hf != 0) {
+ sfcr_err(sr, "Rx RSS configuration is not supported");
+ ret = -EINVAL;
+ }
+ break;
+ case ETH_MQ_RX_NONE:
+ break;
+ default:
+ sfcr_err(sr, "Rx mode MQ modes other than RSS not supported");
+ ret = -EINVAL;
+ break;
+ }
+
+ if (conf->txmode.mq_mode != ETH_MQ_TX_NONE) {
+ sfcr_err(sr, "Tx mode MQ modes not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->lpbk_mode != 0) {
+ sfcr_err(sr, "loopback not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->dcb_capability_en != 0) {
+ sfcr_err(sr, "priority-based flow control not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
+ sfcr_err(sr, "Flow Director not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->intr_conf.lsc != 0) {
+ sfcr_err(sr, "link status change interrupt not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->intr_conf.rxq != 0) {
+ sfcr_err(sr, "receive queue interrupt not supported");
+ ret = -EINVAL;
+ }
+
+ if (conf->intr_conf.rmv != 0) {
+ sfcr_err(sr, "remove interrupt not supported");
+ ret = -EINVAL;
+ }
+
+ sfcr_info(sr, "done %d", ret);
+
+ return ret;
+}
+
+
+static int
+sfc_repr_configure(struct sfc_repr *sr, uint16_t nb_rx_queues,
+ const struct rte_eth_conf *conf)
+{
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ ret = sfc_repr_check_conf(sr, nb_rx_queues, conf);
+ if (ret != 0)
+ goto fail_check_conf;
+
+ sr->state = SFC_ETHDEV_CONFIGURED;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_check_conf:
+ sfcr_info(sr, "failed %s", rte_strerror(-ret));
+ return ret;
+}
+
+static int
+sfc_repr_dev_configure(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct rte_eth_dev_data *dev_data = dev->data;
+ int ret;
+
+ sfcr_info(sr, "entry n_rxq=%u n_txq=%u",
+ dev_data->nb_rx_queues, dev_data->nb_tx_queues);
+
+ sfc_repr_lock(sr);
+ switch (sr->state) {
+ case SFC_ETHDEV_CONFIGURED:
+ /* FALLTHROUGH */
+ case SFC_ETHDEV_INITIALIZED:
+ ret = sfc_repr_configure(sr, dev_data->nb_rx_queues,
+ &dev_data->dev_conf);
+ break;
+ default:
+ sfcr_err(sr, "unexpected adapter state %u to configure",
+ sr->state);
+ ret = -EINVAL;
+ break;
+ }
+ sfc_repr_unlock(sr);
+
+ sfcr_info(sr, "done %s", rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_dev_infos_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+
+ dev_info->device = dev->device;
+
+ dev_info->max_rx_queues = SFC_REPR_RXQ_MAX;
+ dev_info->max_tx_queues = SFC_REPR_TXQ_MAX;
+ dev_info->default_rxconf.rx_drop_en = 1;
+ dev_info->switch_info.domain_id = srs->switch_domain_id;
+ dev_info->switch_info.port_id = srs->switch_port_id;
+
+ return 0;
+}
+
+static void
+sfc_repr_close(struct sfc_repr *sr)
+{
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ SFC_ASSERT(sr->state == SFC_ETHDEV_CONFIGURED);
+ sr->state = SFC_ETHDEV_CLOSING;
+
+ /* Put representor close actions here */
+
+ sr->state = SFC_ETHDEV_INITIALIZED;
+}
+
+static int
+sfc_repr_dev_close(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+
+ sfcr_info(sr, "entry");
+
+ sfc_repr_lock(sr);
+ switch (sr->state) {
+ case SFC_ETHDEV_CONFIGURED:
+ sfc_repr_close(sr);
+ SFC_ASSERT(sr->state == SFC_ETHDEV_INITIALIZED);
+ /* FALLTHROUGH */
+ case SFC_ETHDEV_INITIALIZED:
+ break;
+ default:
+ sfcr_err(sr, "unexpected adapter state %u on close", sr->state);
+ break;
+ }
+
+ /*
+ * Cleanup all resources.
+ * Rollback primary process sfc_repr_eth_dev_init() below.
+ */
+
+ dev->dev_ops = NULL;
+
+ sfc_repr_unlock(sr);
+ sfc_repr_lock_fini(sr);
+
+ sfcr_info(sr, "done");
+
+ free(sr);
+
+ return 0;
+}
+
+static const struct eth_dev_ops sfc_repr_dev_ops = {
+ .dev_configure = sfc_repr_dev_configure,
+ .dev_close = sfc_repr_dev_close,
+ .dev_infos_get = sfc_repr_dev_infos_get,
+};
+
+
+struct sfc_repr_init_data {
+ uint16_t pf_port_id;
+ uint16_t repr_id;
+ uint16_t switch_domain_id;
+ efx_mport_sel_t mport_sel;
+};
+
+static int
+sfc_repr_assign_mae_switch_port(uint16_t switch_domain_id,
+ const struct sfc_mae_switch_port_request *req,
+ uint16_t *switch_port_id)
+{
+ int rc;
+
+ rc = sfc_mae_assign_switch_port(switch_domain_id, req, switch_port_id);
+
+ SFC_ASSERT(rc >= 0);
+ return -rc;
+}
+
+static int
+sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
+{
+ const struct sfc_repr_init_data *repr_data = init_params;
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_mae_switch_port_request switch_port_request;
+ efx_mport_sel_t ethdev_mport_sel;
+ struct sfc_repr *sr;
+ int ret;
+
+ /*
+ * Currently there is no mport we can use for representor's
+ * ethdev. Use an invalid one for now. This way representors
+ * can be instantiated.
+ */
+ efx_mae_mport_invalid(&ethdev_mport_sel);
+
+ memset(&switch_port_request, 0, sizeof(switch_port_request));
+ switch_port_request.type = SFC_MAE_SWITCH_PORT_REPRESENTOR;
+ switch_port_request.ethdev_mportp = &ethdev_mport_sel;
+ switch_port_request.entity_mportp = &repr_data->mport_sel;
+ switch_port_request.ethdev_port_id = dev->data->port_id;
+
+ ret = sfc_repr_assign_mae_switch_port(repr_data->switch_domain_id,
+ &switch_port_request,
+ &srs->switch_port_id);
+ if (ret != 0) {
+ SFC_GENERIC_LOG(ERR,
+ "%s() failed to assign MAE switch port (domain id %u)",
+ __func__, repr_data->switch_domain_id);
+ goto fail_mae_assign_switch_port;
+ }
+
+ /*
+ * Allocate process private data from heap, since it should not
+ * be located in shared memory allocated using rte_malloc() API.
+ */
+ sr = calloc(1, sizeof(*sr));
+ if (sr == NULL) {
+ ret = -ENOMEM;
+ goto fail_alloc_sr;
+ }
+
+ sfc_repr_lock_init(sr);
+ sfc_repr_lock(sr);
+
+ dev->process_private = sr;
+
+ srs->pf_port_id = repr_data->pf_port_id;
+ srs->repr_id = repr_data->repr_id;
+ srs->switch_domain_id = repr_data->switch_domain_id;
+
+ dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
+ dev->data->representor_id = srs->repr_id;
+ dev->data->backer_port_id = srs->pf_port_id;
+
+ dev->data->mac_addrs = rte_zmalloc("sfcr", RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ ret = -ENOMEM;
+ goto fail_mac_addrs;
+ }
+
+ dev->dev_ops = &sfc_repr_dev_ops;
+
+ sr->state = SFC_ETHDEV_INITIALIZED;
+ sfc_repr_unlock(sr);
+
+ return 0;
+
+fail_mac_addrs:
+ sfc_repr_unlock(sr);
+ free(sr);
+
+fail_alloc_sr:
+fail_mae_assign_switch_port:
+ SFC_GENERIC_LOG(ERR, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+int
+sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
+ uint16_t switch_domain_id, const efx_mport_sel_t *mport_sel)
+{
+ struct sfc_repr_init_data repr_data;
+ char name[RTE_ETH_NAME_MAX_LEN];
+ int ret;
+
+ if (snprintf(name, sizeof(name), "net_%s_representor_%u",
+ parent->device->name, representor_id) >=
+ (int)sizeof(name)) {
+ SFC_GENERIC_LOG(ERR, "%s() failed name too long", __func__);
+ return -ENAMETOOLONG;
+ }
+
+ memset(&repr_data, 0, sizeof(repr_data));
+ repr_data.pf_port_id = parent->data->port_id;
+ repr_data.repr_id = representor_id;
+ repr_data.switch_domain_id = switch_domain_id;
+ repr_data.mport_sel = *mport_sel;
+
+ ret = rte_eth_dev_create(parent->device, name,
+ sizeof(struct sfc_repr_shared),
+ NULL, NULL,
+ sfc_repr_eth_dev_init, &repr_data);
+ if (ret != 0)
+ SFC_GENERIC_LOG(ERR, "%s() failed to create device", __func__);
+
+ SFC_GENERIC_LOG(INFO, "%s() done: %s", __func__, rte_strerror(-ret));
+
+ return ret;
+}
diff --git a/drivers/net/sfc/sfc_repr.h b/drivers/net/sfc/sfc_repr.h
new file mode 100644
index 0000000000..1347206006
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_REPR_H
+#define _SFC_REPR_H
+
+#include <stdint.h>
+
+#include <rte_ethdev.h>
+
+#include "efx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** Max count of the representor Rx queues */
+#define SFC_REPR_RXQ_MAX 1
+
+/** Max count of the representor Tx queues */
+#define SFC_REPR_TXQ_MAX 1
+
+int sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
+ uint16_t switch_domain_id,
+ const efx_mport_sel_t *mport_sel);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_REPR_H */
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index 84a02a61f8..a1a2ab9848 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -27,6 +27,11 @@ enum sfc_mae_switch_port_type {
* and thus refers to its underlying PCIe function
*/
SFC_MAE_SWITCH_PORT_INDEPENDENT = 0,
+ /**
+ * The switch port is operated by a representor RTE ethdev
+ * and thus refers to the represented PCIe function
+ */
+ SFC_MAE_SWITCH_PORT_REPRESENTOR,
};
struct sfc_mae_switch_port_request {
--
2.30.2
* [dpdk-dev] [PATCH v2 12/38] common/sfc_efx/base: add filter ingress mport matching field
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (10 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 11/38] net/sfc: add port representors infrastructure Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 13/38] common/sfc_efx/base: add API to get mport selector by ID Andrew Rybchenko
` (26 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The field changes the mport for which the filter is created.
It is required to filter traffic from a VF on an alias mport.
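A minimal sketch (the helper is hypothetical, not part of the patch) of
how a client fills a filter spec to match on a specific ingress m-port
using the new flag and field:
#include "efx.h"
/* Illustration only: match on the given ingress m-port instead of
 * the adapter's own vport */
static void
example_match_ingress_mport(efx_filter_spec_t *spec, uint32_t mport)
{
	spec->efs_match_flags |= EFX_FILTER_MATCH_MPORT;
	spec->efs_ingress_mport = mport;
}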
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/ef10_filter.c | 11 +++++++++--
drivers/common/sfc_efx/base/efx.h | 3 +++
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_filter.c b/drivers/common/sfc_efx/base/ef10_filter.c
index ac6006c9b4..6d19797d16 100644
--- a/drivers/common/sfc_efx/base/ef10_filter.c
+++ b/drivers/common/sfc_efx/base/ef10_filter.c
@@ -171,6 +171,7 @@ efx_mcdi_filter_op_add(
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_FILTER_OP_V3_IN_LEN,
MC_CMD_FILTER_OP_EXT_OUT_LEN);
efx_filter_match_flags_t match_flags;
+ uint32_t port_id;
efx_rc_t rc;
req.emr_cmd = MC_CMD_FILTER_OP;
@@ -180,10 +181,11 @@ efx_mcdi_filter_op_add(
req.emr_out_length = MC_CMD_FILTER_OP_EXT_OUT_LEN;
/*
- * Remove match flag for encapsulated filters that does not correspond
+ * Remove EFX match flags that do not correspond
* to the MCDI match flags
*/
match_flags = spec->efs_match_flags & ~EFX_FILTER_MATCH_ENCAP_TYPE;
+ match_flags &= ~EFX_FILTER_MATCH_MPORT;
switch (filter_op) {
case MC_CMD_FILTER_OP_IN_OP_REPLACE:
@@ -202,7 +204,12 @@ efx_mcdi_filter_op_add(
goto fail1;
}
- MCDI_IN_SET_DWORD(req, FILTER_OP_EXT_IN_PORT_ID, enp->en_vport_id);
+ if (spec->efs_match_flags & EFX_FILTER_MATCH_MPORT)
+ port_id = spec->efs_ingress_mport;
+ else
+ port_id = enp->en_vport_id;
+
+ MCDI_IN_SET_DWORD(req, FILTER_OP_EXT_IN_PORT_ID, port_id);
MCDI_IN_SET_DWORD(req, FILTER_OP_EXT_IN_MATCH_FIELDS,
match_flags);
if (spec->efs_dmaq_id == EFX_FILTER_SPEC_RX_DMAQ_ID_DROP) {
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index c0d1535017..7f04b42bae 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -3389,6 +3389,8 @@ typedef uint8_t efx_filter_flags_t;
#define EFX_FILTER_MATCH_OUTER_VID 0x00000100
/* Match by IP transport protocol */
#define EFX_FILTER_MATCH_IP_PROTO 0x00000200
+/* Match by ingress MPORT */
+#define EFX_FILTER_MATCH_MPORT 0x00000400
/* Match by VNI or VSID */
#define EFX_FILTER_MATCH_VNI_OR_VSID 0x00000800
/* For encapsulated packets, match by inner frame local MAC address */
@@ -3451,6 +3453,7 @@ typedef struct efx_filter_spec_s {
efx_oword_t efs_loc_host;
uint8_t efs_vni_or_vsid[EFX_VNI_OR_VSID_LEN];
uint8_t efs_ifrm_loc_mac[EFX_MAC_ADDR_LEN];
+ uint32_t efs_ingress_mport;
} efx_filter_spec_t;
--
2.30.2
* [dpdk-dev] [PATCH v2 13/38] common/sfc_efx/base: add API to get mport selector by ID
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (11 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 12/38] common/sfc_efx/base: add filter ingress mport matching field Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 14/38] common/sfc_efx/base: add mport alias MCDI wrappers Andrew Rybchenko
` (25 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The conversion is required when an mport ID is received via mport
allocation and an mport selector is needed for filter creation.
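As a hedged usage sketch (not from the patch; the wrapper name is a
placeholder), the conversion is a single call, and the resulting selector
can then be passed to efx_mae_match_spec_mport_set() or
efx_mae_action_set_populate_deliver() as the header comment notes.
#include "efx.h"

/*
 * Sketch only: turn an m-port ID (e.g. one returned by m-port
 * allocation) into an opaque selector usable with MAE match specs
 * and deliver actions.
 */
static efx_rc_t
example_mport_id_to_selector(const efx_mport_id_t *mport_idp,
			     efx_mport_sel_t *mport_selp)
{
	return (efx_mae_mport_by_id(mport_idp, mport_selp));
}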
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx.h | 13 +++++++++++++
drivers/common/sfc_efx/base/efx_mae.c | 17 +++++++++++++++++
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 31 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 7f04b42bae..a59c2e47ef 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4237,6 +4237,19 @@ efx_mae_mport_by_pcie_function(
__in uint32_t vf,
__out efx_mport_sel_t *mportp);
+/*
+ * Get MPORT selector by an MPORT ID
+ *
+ * The resulting MPORT selector is opaque to the caller and can be
+ * passed as an argument to efx_mae_match_spec_mport_set()
+ * and efx_mae_action_set_populate_deliver().
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_by_id(
+ __in const efx_mport_id_t *mport_idp,
+ __out efx_mport_sel_t *mportp);
+
/* Get MPORT ID by an MPORT selector */
LIBEFX_API
extern __checkReturn efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index b7afe8fdc8..f5d981f973 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -827,6 +827,23 @@ efx_mae_mport_id_by_selector(
return (rc);
}
+ __checkReturn efx_rc_t
+efx_mae_mport_by_id(
+ __in const efx_mport_id_t *mport_idp,
+ __out efx_mport_sel_t *mportp)
+{
+ efx_dword_t dword;
+
+ EFX_POPULATE_DWORD_2(dword,
+ MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_MPORT_ID,
+ MAE_MPORT_SELECTOR_MPORT_ID, mport_idp->id);
+
+ memset(mportp, 0, sizeof (*mportp));
+ mportp->sel = __LE_TO_CPU_32(dword.ed_u32[0]);
+
+ return (0);
+}
+
__checkReturn efx_rc_t
efx_mae_match_spec_field_set(
__in efx_mae_match_spec_t *spec,
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 611757ccde..8c5d813c19 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -126,6 +126,7 @@ INTERNAL {
efx_mae_match_specs_equal;
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
+ efx_mae_mport_by_id;
efx_mae_mport_id_by_selector;
efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
--
2.30.2
* [dpdk-dev] [PATCH v2 14/38] common/sfc_efx/base: add mport alias MCDI wrappers
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (12 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 13/38] common/sfc_efx/base: add API to get mport selector by ID Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 15/38] net/sfc: add representor proxy port API Andrew Rybchenko
` (24 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The APIs allow allocation and freeing of alias mports used for port
representor traffic filtering.
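A minimal usage sketch (assumptions: 'enp' is an initialised NIC handle,
error handling is reduced to early returns, and the function name is a
placeholder):
#include "efx.h"

/* Sketch only: allocate an alias m-port and release it again. */
static efx_rc_t
example_alias_mport_lifecycle(efx_nic_t *enp)
{
	efx_mport_id_t mport_id;
	uint32_t label;
	efx_rc_t rc;

	rc = efx_mcdi_mport_alloc_alias(enp, &mport_id, &label);
	if (rc != 0)
		return (rc);

	/* ... use the alias m-port for representor traffic filtering ... */

	return (efx_mae_mport_free(enp, &mport_id));
}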
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/common/sfc_efx/base/efx.h | 13 ++++
drivers/common/sfc_efx/base/efx_mae.c | 90 +++++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 2 +
3 files changed, 105 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index a59c2e47ef..0a178128ba 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4599,6 +4599,19 @@ efx_mae_action_rule_remove(
__in efx_nic_t *enp,
__in const efx_mae_rule_id_t *ar_idp);
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mcdi_mport_alloc_alias(
+ __in efx_nic_t *enp,
+ __out efx_mport_id_t *mportp,
+ __out_opt uint32_t *labelp);
+
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_free(
+ __in efx_nic_t *enp,
+ __in const efx_mport_id_t *mportp);
+
#endif /* EFSYS_OPT_MAE */
#if EFSYS_OPT_VIRTIO
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index f5d981f973..3f498fe189 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -3142,4 +3142,94 @@ efx_mae_action_rule_remove(
return (rc);
}
+ __checkReturn efx_rc_t
+efx_mcdi_mport_alloc_alias(
+ __in efx_nic_t *enp,
+ __out efx_mport_id_t *mportp,
+ __out_opt uint32_t *labelp)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_ALLOC_ALIAS_IN_LEN,
+ MC_CMD_MAE_MPORT_ALLOC_ALIAS_OUT_LEN);
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_ALLOC;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_ALLOC_ALIAS_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_ALLOC_ALIAS_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_ALLOC_IN_TYPE,
+ MC_CMD_MAE_MPORT_ALLOC_IN_MPORT_TYPE_ALIAS);
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_ALLOC_ALIAS_IN_DELIVER_MPORT,
+ MAE_MPORT_SELECTOR_ASSIGNED);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ mportp->id = MCDI_OUT_DWORD(req, MAE_MPORT_ALLOC_OUT_MPORT_ID);
+ if (labelp != NULL)
+ *labelp = MCDI_OUT_DWORD(req, MAE_MPORT_ALLOC_ALIAS_OUT_LABEL);
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_mport_free(
+ __in efx_nic_t *enp,
+ __in const efx_mport_id_t *mportp)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_FREE_IN_LEN,
+ MC_CMD_MAE_MPORT_FREE_OUT_LEN);
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_FREE;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_FREE_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_FREE_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_FREE_IN_MPORT_ID, mportp->id);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
#endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 8c5d813c19..3488367f68 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -127,6 +127,7 @@ INTERNAL {
efx_mae_mport_by_pcie_function;
efx_mae_mport_by_phy_port;
efx_mae_mport_by_id;
+ efx_mae_mport_free;
efx_mae_mport_id_by_selector;
efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
@@ -136,6 +137,7 @@ INTERNAL {
efx_mcdi_get_proxy_handle;
efx_mcdi_get_timeout;
efx_mcdi_init;
+ efx_mcdi_mport_alloc_alias;
efx_mcdi_new_epoch;
efx_mcdi_reboot;
efx_mcdi_request_abort;
--
2.30.2
* [dpdk-dev] [PATCH v2 15/38] net/sfc: add representor proxy port API
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (13 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 14/38] common/sfc_efx/base: add mport alias MCDI wrappers Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 16/38] net/sfc: implement representor queue setup and release Andrew Rybchenko
` (23 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The API is required to create and destroy the representor proxy port
assigned to a representor.
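A hedged sketch of how a caller would use the API (identifiers are
caller-supplied placeholders and the wrapper is hypothetical; the API
returns positive errno values, matching the driver convention):
#include "sfc_repr_proxy_api.h"

/* Sketch only: register a representor with its proxy and remove it. */
static int
example_repr_proxy_port_lifecycle(uint16_t pf_port_id, uint16_t repr_id,
				  uint16_t rte_port_id,
				  const efx_mport_sel_t *mport_sel)
{
	int rc;

	rc = sfc_repr_proxy_add_port(pf_port_id, repr_id, rte_port_id,
				     mport_sel);
	if (rc != 0)
		return rc;

	/* ... representor lifetime: queue setup, start, traffic ... */

	return sfc_repr_proxy_del_port(pf_port_id, repr_id);
}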
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc.c | 12 +
drivers/net/sfc/sfc.h | 1 +
drivers/net/sfc/sfc_ethdev.c | 2 +
drivers/net/sfc/sfc_repr.c | 20 ++
drivers/net/sfc/sfc_repr_proxy.c | 320 ++++++++++++++++++++++++++-
drivers/net/sfc/sfc_repr_proxy.h | 30 +++
drivers/net/sfc/sfc_repr_proxy_api.h | 29 +++
7 files changed, 412 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/sfc/sfc_repr_proxy_api.h
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 152234cb61..f79f4d5ffc 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -1043,6 +1043,18 @@ sfc_attach(struct sfc_adapter *sa)
return rc;
}
+void
+sfc_pre_detach(struct sfc_adapter *sa)
+{
+ sfc_log_init(sa, "entry");
+
+ SFC_ASSERT(!sfc_adapter_is_locked(sa));
+
+ sfc_repr_proxy_pre_detach(sa);
+
+ sfc_log_init(sa, "done");
+}
+
void
sfc_detach(struct sfc_adapter *sa)
{
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 628f32c13f..c3e92f3ab6 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -376,6 +376,7 @@ uint32_t sfc_register_logtype(const struct rte_pci_addr *pci_addr,
int sfc_probe(struct sfc_adapter *sa);
void sfc_unprobe(struct sfc_adapter *sa);
int sfc_attach(struct sfc_adapter *sa);
+void sfc_pre_detach(struct sfc_adapter *sa);
void sfc_detach(struct sfc_adapter *sa);
int sfc_start(struct sfc_adapter *sa);
void sfc_stop(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index e270b5cefd..efd5e6b1ab 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -345,6 +345,8 @@ sfc_dev_close(struct rte_eth_dev *dev)
return 0;
}
+ sfc_pre_detach(sa);
+
sfc_adapter_lock(sa);
switch (sa->state) {
case SFC_ETHDEV_STARTED:
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 71eea0e209..ff5ea0d1ed 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -19,6 +19,7 @@
#include "sfc_debug.h"
#include "sfc_repr.h"
#include "sfc_ethdev_state.h"
+#include "sfc_repr_proxy_api.h"
#include "sfc_switch.h"
/** Multi-process shared representor private data */
@@ -285,6 +286,7 @@ static int
sfc_repr_dev_close(struct rte_eth_dev *dev)
{
struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
sfcr_info(sr, "entry");
@@ -306,6 +308,8 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
* Rollback primary process sfc_repr_eth_dev_init() below.
*/
+ (void)sfc_repr_proxy_del_port(srs->pf_port_id, srs->repr_id);
+
dev->dev_ops = NULL;
sfc_repr_unlock(sr);
@@ -378,6 +382,18 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
goto fail_mae_assign_switch_port;
}
+ ret = sfc_repr_proxy_add_port(repr_data->pf_port_id,
+ repr_data->repr_id,
+ dev->data->port_id,
+ &repr_data->mport_sel);
+ if (ret != 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to add repr proxy port",
+ __func__);
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ goto fail_create_port;
+ }
+
/*
* Allocate process private data from heap, since it should not
* be located in shared memory allocated using rte_malloc() API.
@@ -419,6 +435,10 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
free(sr);
fail_alloc_sr:
+ (void)sfc_repr_proxy_del_port(repr_data->pf_port_id,
+ repr_data->repr_id);
+
+fail_create_port:
fail_mae_assign_switch_port:
SFC_GENERIC_LOG(ERR, "%s() failed: %s", __func__, rte_strerror(-ret));
return ret;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index 6d3962304f..f64fa2efc7 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -13,17 +13,191 @@
#include "sfc_log.h"
#include "sfc_service.h"
#include "sfc_repr_proxy.h"
+#include "sfc_repr_proxy_api.h"
#include "sfc.h"
+/**
+ * Amount of time to wait for the representor proxy routine (which is
+ * running on a service core) to handle a request sent via mbox.
+ */
+#define SFC_REPR_PROXY_MBOX_POLL_TIMEOUT_MS 1000
+
+static struct sfc_repr_proxy *
+sfc_repr_proxy_by_adapter(struct sfc_adapter *sa)
+{
+ return &sa->repr_proxy;
+}
+
+static struct sfc_adapter *
+sfc_get_adapter_by_pf_port_id(uint16_t pf_port_id)
+{
+ struct rte_eth_dev *dev;
+ struct sfc_adapter *sa;
+
+ SFC_ASSERT(pf_port_id < RTE_MAX_ETHPORTS);
+
+ dev = &rte_eth_devices[pf_port_id];
+ sa = sfc_adapter_by_eth_dev(dev);
+
+ sfc_adapter_lock(sa);
+
+ return sa;
+}
+
+static void
+sfc_put_adapter(struct sfc_adapter *sa)
+{
+ sfc_adapter_unlock(sa);
+}
+
+static int
+sfc_repr_proxy_mbox_send(struct sfc_repr_proxy_mbox *mbox,
+ struct sfc_repr_proxy_port *port,
+ enum sfc_repr_proxy_mbox_op op)
+{
+ const unsigned int wait_ms = SFC_REPR_PROXY_MBOX_POLL_TIMEOUT_MS;
+ unsigned int i;
+
+ mbox->op = op;
+ mbox->port = port;
+ mbox->ack = false;
+
+ /*
+ * Release ordering enforces marker set after data is populated.
+ * Paired with acquire ordering in sfc_repr_proxy_mbox_handle().
+ */
+ __atomic_store_n(&mbox->write_marker, true, __ATOMIC_RELEASE);
+
+ /*
+ * Wait for the representor routine to process the request.
+ * Give up on timeout.
+ */
+ for (i = 0; i < wait_ms; i++) {
+ /*
+ * Paired with release ordering in sfc_repr_proxy_mbox_handle()
+ * on acknowledge write.
+ */
+ if (__atomic_load_n(&mbox->ack, __ATOMIC_ACQUIRE))
+ break;
+
+ rte_delay_ms(1);
+ }
+
+ if (i == wait_ms) {
+ SFC_GENERIC_LOG(ERR,
+ "%s() failed to wait for representor proxy routine ack",
+ __func__);
+ return ETIMEDOUT;
+ }
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_mbox_handle(struct sfc_repr_proxy *rp)
+{
+ struct sfc_repr_proxy_mbox *mbox = &rp->mbox;
+
+ /*
+ * Paired with release ordering in sfc_repr_proxy_mbox_send()
+ * on marker set.
+ */
+ if (!__atomic_load_n(&mbox->write_marker, __ATOMIC_ACQUIRE))
+ return;
+
+ mbox->write_marker = false;
+
+ switch (mbox->op) {
+ case SFC_REPR_PROXY_MBOX_ADD_PORT:
+ TAILQ_INSERT_TAIL(&rp->ports, mbox->port, entries);
+ break;
+ case SFC_REPR_PROXY_MBOX_DEL_PORT:
+ TAILQ_REMOVE(&rp->ports, mbox->port, entries);
+ break;
+ default:
+ SFC_ASSERT(0);
+ return;
+ }
+
+ /*
+ * Paired with acquire ordering in sfc_repr_proxy_mbox_send()
+ * on acknowledge read.
+ */
+ __atomic_store_n(&mbox->ack, true, __ATOMIC_RELEASE);
+}
+
static int32_t
sfc_repr_proxy_routine(void *arg)
{
struct sfc_repr_proxy *rp = arg;
- /* Representor proxy boilerplate will be here */
- RTE_SET_USED(rp);
+ sfc_repr_proxy_mbox_handle(rp);
+
+ return 0;
+}
+
+static int
+sfc_repr_proxy_ports_init(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rc = efx_mcdi_mport_alloc_alias(sa->nic, &rp->mport_alias, NULL);
+ if (rc != 0) {
+ sfc_err(sa, "failed to alloc mport alias: %s",
+ rte_strerror(rc));
+ goto fail_alloc_mport_alias;
+ }
+
+ TAILQ_INIT(&rp->ports);
+
+ sfc_log_init(sa, "done");
return 0;
+
+fail_alloc_mport_alias:
+
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+void
+sfc_repr_proxy_pre_detach(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ bool close_ports[RTE_MAX_ETHPORTS] = {0};
+ struct sfc_repr_proxy_port *port;
+ unsigned int i;
+
+ SFC_ASSERT(!sfc_adapter_is_locked(sa));
+
+ sfc_adapter_lock(sa);
+
+ if (sfc_repr_available(sfc_sa2shared(sa))) {
+ TAILQ_FOREACH(port, &rp->ports, entries)
+ close_ports[port->rte_port_id] = true;
+ } else {
+ sfc_log_init(sa, "representors not supported - skip");
+ }
+
+ sfc_adapter_unlock(sa);
+
+ for (i = 0; i < RTE_DIM(close_ports); i++) {
+ if (close_ports[i]) {
+ rte_eth_dev_stop(i);
+ rte_eth_dev_close(i);
+ }
+ }
+}
+
+static void
+sfc_repr_proxy_ports_fini(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+
+ efx_mae_mport_free(sa->nic, &rp->mport_alias);
}
int
@@ -43,6 +217,10 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
return 0;
}
+ rc = sfc_repr_proxy_ports_init(sa);
+ if (rc != 0)
+ goto fail_ports_init;
+
cid = sfc_get_service_lcore(sa->socket_id);
if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
/* Warn and try to allocate on any NUMA node */
@@ -96,6 +274,9 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
*/
fail_get_service_lcore:
+ sfc_repr_proxy_ports_fini(sa);
+
+fail_ports_init:
sfc_log_init(sa, "failed: %s", rte_strerror(rc));
return rc;
}
@@ -115,6 +296,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
rte_service_map_lcore_set(rp->service_id, rp->service_core_id, 0);
rte_service_component_unregister(rp->service_id);
+ sfc_repr_proxy_ports_fini(sa);
sfc_log_init(sa, "done");
}
@@ -165,6 +347,8 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
goto fail_runstate_set;
}
+ rp->started = true;
+
sfc_log_init(sa, "done");
return 0;
@@ -210,5 +394,137 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
+ rp->started = false;
+
+ sfc_log_init(sa, "done");
+}
+
+static struct sfc_repr_proxy_port *
+sfc_repr_proxy_find_port(struct sfc_repr_proxy *rp, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->repr_id == repr_id)
+ return port;
+ }
+
+ return NULL;
+}
+
+int
+sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t rte_port_id, const efx_mport_sel_t *mport_sel)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->rte_port_id == rte_port_id) {
+ rc = EEXIST;
+ sfc_err(sa, "%s() failed: port exists", __func__);
+ goto fail_port_exists;
+ }
+ }
+
+ port = rte_zmalloc("sfc-repr-proxy-port", sizeof(*port),
+ sa->socket_id);
+ if (port == NULL) {
+ rc = ENOMEM;
+ sfc_err(sa, "failed to alloc memory for proxy port");
+ goto fail_alloc_port;
+ }
+
+ rc = efx_mae_mport_id_by_selector(sa->nic, mport_sel,
+ &port->egress_mport);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed get MAE mport id by selector (repr_id %u): %s",
+ repr_id, rte_strerror(rc));
+ goto fail_mport_id;
+ }
+
+ port->rte_port_id = rte_port_id;
+ port->repr_id = repr_id;
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_ADD_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to add proxy port %u",
+ port->repr_id);
+ goto fail_port_add;
+ }
+ } else {
+ TAILQ_INSERT_TAIL(&rp->ports, port, entries);
+ }
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+
+fail_port_add:
+fail_mport_id:
+ rte_free(port);
+fail_alloc_port:
+fail_port_exists:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ sfc_put_adapter(sa);
+
+ return rc;
+}
+
+int
+sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "failed: no such port");
+ rc = ENOENT;
+ goto fail_no_port;
+ }
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_DEL_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to remove proxy port %u",
+ port->repr_id);
+ goto fail_port_remove;
+ }
+ } else {
+ TAILQ_REMOVE(&rp->ports, port, entries);
+ }
+
+ rte_free(port);
+
sfc_log_init(sa, "done");
+
+ sfc_put_adapter(sa);
+
+ return 0;
+
+fail_port_remove:
+fail_no_port:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ sfc_put_adapter(sa);
+
+ return rc;
}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index 953b9922c8..e4a6213c10 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -12,6 +12,8 @@
#include <stdint.h>
+#include "efx.h"
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -24,14 +26,42 @@ extern "C" {
#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+struct sfc_repr_proxy_port {
+ TAILQ_ENTRY(sfc_repr_proxy_port) entries;
+ uint16_t repr_id;
+ uint16_t rte_port_id;
+ efx_mport_id_t egress_mport;
+};
+
+enum sfc_repr_proxy_mbox_op {
+ SFC_REPR_PROXY_MBOX_ADD_PORT,
+ SFC_REPR_PROXY_MBOX_DEL_PORT,
+};
+
+struct sfc_repr_proxy_mbox {
+ struct sfc_repr_proxy_port *port;
+ enum sfc_repr_proxy_mbox_op op;
+
+ bool write_marker;
+ bool ack;
+};
+
+TAILQ_HEAD(sfc_repr_proxy_ports, sfc_repr_proxy_port);
+
struct sfc_repr_proxy {
uint32_t service_core_id;
uint32_t service_id;
+ efx_mport_id_t mport_alias;
+ struct sfc_repr_proxy_ports ports;
+ bool started;
+
+ struct sfc_repr_proxy_mbox mbox;
};
struct sfc_adapter;
int sfc_repr_proxy_attach(struct sfc_adapter *sa);
+void sfc_repr_proxy_pre_detach(struct sfc_adapter *sa);
void sfc_repr_proxy_detach(struct sfc_adapter *sa);
int sfc_repr_proxy_start(struct sfc_adapter *sa);
void sfc_repr_proxy_stop(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_repr_proxy_api.h b/drivers/net/sfc/sfc_repr_proxy_api.h
new file mode 100644
index 0000000000..af9009ca3c
--- /dev/null
+++ b/drivers/net/sfc/sfc_repr_proxy_api.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_REPR_PROXY_API_H
+#define _SFC_REPR_PROXY_API_H
+
+#include <stdint.h>
+
+#include "efx.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t rte_port_id,
+ const efx_mport_sel_t *mport_set);
+int sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_REPR_PROXY_API_H */
--
2.30.2
* [dpdk-dev] [PATCH v2 16/38] net/sfc: implement representor queue setup and release
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (14 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 15/38] net/sfc: add representor proxy port API Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 17/38] net/sfc: implement representor RxQ start/stop Andrew Rybchenko
` (22 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement queue creation and destruction both in port representors
and the representor proxy.
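For illustration (not part of the patch), the essential Rx side of the flow
boils down to creating a single-producer/single-consumer ring and
registering it with the proxy; the ring name, wrapper name and parameters
below are placeholders.
#include <rte_errno.h>
#include <rte_mempool.h>
#include <rte_ring.h>

#include "sfc_repr_proxy_api.h"

/* Sketch only: hand an SP/SC ring over to the representor proxy. */
static int
example_repr_rxq_setup(uint16_t pf_port_id, uint16_t repr_id,
		       uint16_t queue_id, uint16_t nb_desc,
		       unsigned int socket_id, struct rte_mempool *mp)
{
	struct rte_ring *ring;
	int rc;

	ring = rte_ring_create("example_repr_rxq", nb_desc, socket_id,
			       RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (ring == NULL)
		return -rte_errno;

	rc = sfc_repr_proxy_add_rxq(pf_port_id, repr_id, queue_id, ring, mp);
	if (rc != 0) {
		rte_ring_free(ring);
		return -rc;	/* the proxy API returns positive errno */
	}

	return 0;
}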
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 257 +++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.c | 132 ++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 22 +++
drivers/net/sfc/sfc_repr_proxy_api.h | 15 ++
4 files changed, 426 insertions(+)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index ff5ea0d1ed..ddd848466c 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -30,6 +30,17 @@ struct sfc_repr_shared {
uint16_t switch_port_id;
};
+struct sfc_repr_rxq {
+ /* Datapath members */
+ struct rte_ring *ring;
+};
+
+struct sfc_repr_txq {
+ /* Datapath members */
+ struct rte_ring *ring;
+ efx_mport_id_t egress_mport;
+};
+
/** Primary process representor private data */
struct sfc_repr {
/**
@@ -50,6 +61,14 @@ struct sfc_repr {
SFC_GENERIC_LOG(ERR, __VA_ARGS__); \
} while (0)
+#define sfcr_warn(sr, ...) \
+ do { \
+ const struct sfc_repr *_sr = (sr); \
+ \
+ (void)_sr; \
+ SFC_GENERIC_LOG(WARNING, __VA_ARGS__); \
+ } while (0)
+
#define sfcr_info(sr, ...) \
do { \
const struct sfc_repr *_sr = (sr); \
@@ -269,6 +288,229 @@ sfc_repr_dev_infos_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+sfc_repr_ring_create(uint16_t pf_port_id, uint16_t repr_id,
+ const char *type_name, uint16_t qid, uint16_t nb_desc,
+ unsigned int socket_id, struct rte_ring **ring)
+{
+ char ring_name[RTE_RING_NAMESIZE];
+ int ret;
+
+ ret = snprintf(ring_name, sizeof(ring_name), "sfc_%u_repr_%u_%sq%u",
+ pf_port_id, repr_id, type_name, qid);
+ if (ret >= (int)sizeof(ring_name))
+ return -ENAMETOOLONG;
+
+ /*
+ * Single producer/consumer rings are used since the API for Tx/Rx
+ * packet burst for representors are guaranteed to be called from
+ * a single thread, and the user of the other end (representor proxy)
+ * is also single-threaded.
+ */
+ *ring = rte_ring_create(ring_name, nb_desc, socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (*ring == NULL)
+ return -rte_errno;
+
+ return 0;
+}
+
+static int
+sfc_repr_rx_qcheck_conf(struct sfc_repr *sr,
+ const struct rte_eth_rxconf *rx_conf)
+{
+ int ret = 0;
+
+ sfcr_info(sr, "entry");
+
+ if (rx_conf->rx_thresh.pthresh != 0 ||
+ rx_conf->rx_thresh.hthresh != 0 ||
+ rx_conf->rx_thresh.wthresh != 0) {
+ sfcr_warn(sr,
+ "RxQ prefetch/host/writeback thresholds are not supported");
+ }
+
+ if (rx_conf->rx_free_thresh != 0)
+ sfcr_warn(sr, "RxQ free threshold is not supported");
+
+ if (rx_conf->rx_drop_en == 0)
+ sfcr_warn(sr, "RxQ drop disable is not supported");
+
+ if (rx_conf->rx_deferred_start) {
+ sfcr_err(sr, "Deferred start is not supported");
+ ret = -EINVAL;
+ }
+
+ sfcr_info(sr, "done: %s", rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+ uint16_t nb_rx_desc, unsigned int socket_id,
+ __rte_unused const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mb_pool)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_rxq *rxq;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ ret = sfc_repr_rx_qcheck_conf(sr, rx_conf);
+ if (ret != 0)
+ goto fail_check_conf;
+
+ ret = -ENOMEM;
+ rxq = rte_zmalloc_socket("sfc-repr-rxq", sizeof(*rxq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (rxq == NULL) {
+ sfcr_err(sr, "%s() failed to alloc RxQ", __func__);
+ goto fail_rxq_alloc;
+ }
+
+ ret = sfc_repr_ring_create(srs->pf_port_id, srs->repr_id,
+ "rx", rx_queue_id, nb_rx_desc,
+ socket_id, &rxq->ring);
+ if (ret != 0) {
+ sfcr_err(sr, "%s() failed to create ring", __func__);
+ goto fail_ring_create;
+ }
+
+ ret = sfc_repr_proxy_add_rxq(srs->pf_port_id, srs->repr_id,
+ rx_queue_id, rxq->ring, mb_pool);
+ if (ret != 0) {
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ sfcr_err(sr, "%s() failed to add proxy RxQ", __func__);
+ goto fail_proxy_add_rxq;
+ }
+
+ dev->data->rx_queues[rx_queue_id] = rxq;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_proxy_add_rxq:
+ rte_ring_free(rxq->ring);
+
+fail_ring_create:
+ rte_free(rxq);
+
+fail_rxq_alloc:
+fail_check_conf:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static void
+sfc_repr_rx_queue_release(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_repr_rxq *rxq = dev->data->rx_queues[rx_queue_id];
+
+ sfc_repr_proxy_del_rxq(srs->pf_port_id, srs->repr_id, rx_queue_id);
+ rte_ring_free(rxq->ring);
+ rte_free(rxq);
+}
+
+static int
+sfc_repr_tx_qcheck_conf(struct sfc_repr *sr,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int ret = 0;
+
+ sfcr_info(sr, "entry");
+
+ if (tx_conf->tx_rs_thresh != 0)
+ sfcr_warn(sr, "RS bit in transmit descriptor is not supported");
+
+ if (tx_conf->tx_free_thresh != 0)
+ sfcr_warn(sr, "TxQ free threshold is not supported");
+
+ if (tx_conf->tx_thresh.pthresh != 0 ||
+ tx_conf->tx_thresh.hthresh != 0 ||
+ tx_conf->tx_thresh.wthresh != 0) {
+ sfcr_warn(sr,
+ "prefetch/host/writeback thresholds are not supported");
+ }
+
+ if (tx_conf->tx_deferred_start) {
+ sfcr_err(sr, "Deferred start is not supported");
+ ret = -EINVAL;
+ }
+
+ sfcr_info(sr, "done: %s", rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+ uint16_t nb_tx_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_txq *txq;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ ret = sfc_repr_tx_qcheck_conf(sr, tx_conf);
+ if (ret != 0)
+ goto fail_check_conf;
+
+ ret = -ENOMEM;
+ txq = rte_zmalloc_socket("sfc-repr-txq", sizeof(*txq),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (txq == NULL)
+ goto fail_txq_alloc;
+
+ ret = sfc_repr_ring_create(srs->pf_port_id, srs->repr_id,
+ "tx", tx_queue_id, nb_tx_desc,
+ socket_id, &txq->ring);
+ if (ret != 0)
+ goto fail_ring_create;
+
+ ret = sfc_repr_proxy_add_txq(srs->pf_port_id, srs->repr_id,
+ tx_queue_id, txq->ring,
+ &txq->egress_mport);
+ if (ret != 0)
+ goto fail_proxy_add_txq;
+
+ dev->data->tx_queues[tx_queue_id] = txq;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_proxy_add_txq:
+ rte_ring_free(txq->ring);
+
+fail_ring_create:
+ rte_free(txq);
+
+fail_txq_alloc:
+fail_check_conf:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static void
+sfc_repr_tx_queue_release(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ struct sfc_repr_txq *txq = dev->data->tx_queues[tx_queue_id];
+
+ sfc_repr_proxy_del_txq(srs->pf_port_id, srs->repr_id, tx_queue_id);
+ rte_ring_free(txq->ring);
+ rte_free(txq);
+}
+
static void
sfc_repr_close(struct sfc_repr *sr)
{
@@ -287,6 +529,7 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
{
struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
struct sfc_repr_shared *srs = sfc_repr_shared_by_eth_dev(dev);
+ unsigned int i;
sfcr_info(sr, "entry");
@@ -303,6 +546,16 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
break;
}
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ sfc_repr_rx_queue_release(dev, i);
+ dev->data->rx_queues[i] = NULL;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ sfc_repr_tx_queue_release(dev, i);
+ dev->data->tx_queues[i] = NULL;
+ }
+
/*
* Cleanup all resources.
* Rollback primary process sfc_repr_eth_dev_init() below.
@@ -326,6 +579,10 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_configure = sfc_repr_dev_configure,
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
+ .rx_queue_setup = sfc_repr_rx_queue_setup,
+ .rx_queue_release = sfc_repr_rx_queue_release,
+ .tx_queue_setup = sfc_repr_tx_queue_setup,
+ .tx_queue_release = sfc_repr_tx_queue_release,
};
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index f64fa2efc7..6a89cca40a 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -528,3 +528,135 @@ sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id)
return rc;
}
+
+int
+sfc_repr_proxy_add_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *rx_ring,
+ struct rte_mempool *mp)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_rxq *rxq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return ENOENT;
+ }
+
+ rxq = &port->rxq[queue_id];
+ if (rp->dp_rxq[queue_id].mp != NULL && rp->dp_rxq[queue_id].mp != mp) {
+ sfc_err(sa, "multiple mempools per queue are not supported");
+ sfc_put_adapter(sa);
+ return ENOTSUP;
+ }
+
+ rxq->ring = rx_ring;
+ rxq->mb_pool = mp;
+ rp->dp_rxq[queue_id].mp = mp;
+ rp->dp_rxq[queue_id].ref_count++;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+}
+
+void
+sfc_repr_proxy_del_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_rxq *rxq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return;
+ }
+
+ rxq = &port->rxq[queue_id];
+
+ rxq->ring = NULL;
+ rxq->mb_pool = NULL;
+ rp->dp_rxq[queue_id].ref_count--;
+ if (rp->dp_rxq[queue_id].ref_count == 0)
+ rp->dp_rxq[queue_id].mp = NULL;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+}
+
+int
+sfc_repr_proxy_add_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *tx_ring,
+ efx_mport_id_t *egress_mport)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_txq *txq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return ENOENT;
+ }
+
+ txq = &port->txq[queue_id];
+
+ txq->ring = tx_ring;
+
+ *egress_mport = port->egress_mport;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+}
+
+void
+sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_txq *txq;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return;
+ }
+
+ txq = &port->txq[queue_id];
+
+ txq->ring = NULL;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index e4a6213c10..bd7ad7148a 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -12,8 +12,13 @@
#include <stdint.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
#include "efx.h"
+#include "sfc_repr.h"
+
#ifdef __cplusplus
extern "C" {
#endif
@@ -26,11 +31,27 @@ extern "C" {
#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+struct sfc_repr_proxy_rxq {
+ struct rte_ring *ring;
+ struct rte_mempool *mb_pool;
+};
+
+struct sfc_repr_proxy_txq {
+ struct rte_ring *ring;
+};
+
struct sfc_repr_proxy_port {
TAILQ_ENTRY(sfc_repr_proxy_port) entries;
uint16_t repr_id;
uint16_t rte_port_id;
efx_mport_id_t egress_mport;
+ struct sfc_repr_proxy_rxq rxq[SFC_REPR_RXQ_MAX];
+ struct sfc_repr_proxy_txq txq[SFC_REPR_TXQ_MAX];
+};
+
+struct sfc_repr_proxy_dp_rxq {
+ struct rte_mempool *mp;
+ unsigned int ref_count;
};
enum sfc_repr_proxy_mbox_op {
@@ -54,6 +75,7 @@ struct sfc_repr_proxy {
efx_mport_id_t mport_alias;
struct sfc_repr_proxy_ports ports;
bool started;
+ struct sfc_repr_proxy_dp_rxq dp_rxq[SFC_REPR_PROXY_NB_RXQ_MAX];
struct sfc_repr_proxy_mbox mbox;
};
diff --git a/drivers/net/sfc/sfc_repr_proxy_api.h b/drivers/net/sfc/sfc_repr_proxy_api.h
index af9009ca3c..d1c0760efa 100644
--- a/drivers/net/sfc/sfc_repr_proxy_api.h
+++ b/drivers/net/sfc/sfc_repr_proxy_api.h
@@ -12,6 +12,9 @@
#include <stdint.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+
#include "efx.h"
#ifdef __cplusplus
@@ -23,6 +26,18 @@ int sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
const efx_mport_sel_t *mport_set);
int sfc_repr_proxy_del_port(uint16_t pf_port_id, uint16_t repr_id);
+int sfc_repr_proxy_add_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *rx_ring,
+ struct rte_mempool *mp);
+void sfc_repr_proxy_del_rxq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id);
+
+int sfc_repr_proxy_add_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id, struct rte_ring *tx_ring,
+ efx_mport_id_t *egress_mport);
+void sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
+ uint16_t queue_id);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH v2 17/38] net/sfc: implement representor RxQ start/stop
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (15 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 16/38] net/sfc: implement representor queue setup and release Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 18/38] net/sfc: implement representor TxQ start/stop Andrew Rybchenko
` (21 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Add extra libefx flags to the Rx queue information initialization
interface so that the ingress m-port flag can be specified for a
representor RxQ. The Rx prefix of packets on that queue will then
carry the ingress m-port field required for packet forwarding in the
representor proxy.
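For illustration only (driver-internal context assumed; 'sa', the software
index and the wrapper name are placeholders), a representor proxy RxQ would
request the ingress m-port in its Rx prefix at queue info initialisation
time:
#include "sfc.h"
#include "sfc_rx.h"

/*
 * Sketch only: initialise RxQ info for a representor proxy queue,
 * asking libefx to deliver the ingress m-port in the EF100 Rx prefix
 * of received packets.
 */
static int
example_repr_proxy_rxq_info_init(struct sfc_adapter *sa,
				 sfc_sw_index_t sw_index)
{
	return sfc_rx_qinit_info(sa, sw_index, EFX_RXQ_FLAG_INGRESS_MPORT);
}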
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ev.h | 8 ++
drivers/net/sfc/sfc_repr_proxy.c | 194 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 7 ++
3 files changed, 209 insertions(+)
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 590cfb1694..bcb7fbe466 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -110,6 +110,14 @@ sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
return sas->counters_rxq_allocated ? 0 : SFC_SW_INDEX_INVALID;
}
+static inline sfc_sw_index_t
+sfc_repr_rxq_sw_index(const struct sfc_adapter_shared *sas,
+ unsigned int repr_queue_id)
+{
+ return sfc_counters_rxq_sw_index(sas) + sfc_repr_nb_rxq(sas) +
+ repr_queue_id;
+}
+
/*
* Functions below define event queue to transmit/receive queue and vice
* versa mapping.
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index 6a89cca40a..03b6421b04 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -15,6 +15,8 @@
#include "sfc_repr_proxy.h"
#include "sfc_repr_proxy_api.h"
#include "sfc.h"
+#include "sfc_ev.h"
+#include "sfc_rx.h"
/**
* Amount of time to wait for the representor proxy routine (which is
@@ -136,6 +138,181 @@ sfc_repr_proxy_routine(void *arg)
return 0;
}
+static int
+sfc_repr_proxy_rxq_attach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++) {
+ sfc_sw_index_t sw_index = sfc_repr_rxq_sw_index(sas, i);
+
+ rp->dp_rxq[i].sw_index = sw_index;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_rxq_detach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++)
+ rp->dp_rxq[i].sw_index = 0;
+
+ sfc_log_init(sa, "done");
+}
+
+static int
+sfc_repr_proxy_rxq_init(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_dp_rxq *rxq)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ uint16_t nb_rx_desc = SFC_REPR_PROXY_RX_DESC_COUNT;
+ struct sfc_rxq_info *rxq_info;
+ struct rte_eth_rxconf rxconf = {
+ .rx_free_thresh = SFC_REPR_PROXY_RXQ_REFILL_LEVEL,
+ .rx_drop_en = 1,
+ };
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rxq_info = &sas->rxq_info[rxq->sw_index];
+ if (rxq_info->state & SFC_RXQ_INITIALIZED) {
+ sfc_log_init(sa, "RxQ is already initialized - skip");
+ return 0;
+ }
+
+ nb_rx_desc = RTE_MIN(nb_rx_desc, sa->rxq_max_entries);
+ nb_rx_desc = RTE_MAX(nb_rx_desc, sa->rxq_min_entries);
+
+ rc = sfc_rx_qinit_info(sa, rxq->sw_index, EFX_RXQ_FLAG_INGRESS_MPORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy RxQ info");
+ goto fail_repr_rxq_init_info;
+ }
+
+ rc = sfc_rx_qinit(sa, rxq->sw_index, nb_rx_desc, sa->socket_id, &rxconf,
+ rxq->mp);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy RxQ");
+ goto fail_repr_rxq_init;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_repr_rxq_init:
+fail_repr_rxq_init_info:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+ return rc;
+}
+
+static void
+sfc_repr_proxy_rxq_fini(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_rxq_info *rxq_info;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++) {
+ struct sfc_repr_proxy_dp_rxq *rxq = &rp->dp_rxq[i];
+
+ rxq_info = &sas->rxq_info[rxq->sw_index];
+ if (rxq_info->state != SFC_RXQ_INITIALIZED) {
+ sfc_log_init(sa,
+ "representor RxQ %u is already finalized - skip",
+ i);
+ continue;
+ }
+
+ sfc_rx_qfini(sa, rxq->sw_index);
+ }
+
+ sfc_log_init(sa, "done");
+}
+
+static void
+sfc_repr_proxy_rxq_stop(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++)
+ sfc_rx_qstop(sa, sa->repr_proxy.dp_rxq[i].sw_index);
+
+ sfc_repr_proxy_rxq_fini(sa);
+
+ sfc_log_init(sa, "done");
+}
+
+static int
+sfc_repr_proxy_rxq_start(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ for (i = 0; i < sfc_repr_nb_rxq(sas); i++) {
+ struct sfc_repr_proxy_dp_rxq *rxq = &rp->dp_rxq[i];
+
+ rc = sfc_repr_proxy_rxq_init(sa, rxq);
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy RxQ %u",
+ i);
+ goto fail_init;
+ }
+
+ rc = sfc_rx_qstart(sa, rxq->sw_index);
+ if (rc != 0) {
+ sfc_err(sa, "failed to start representor proxy RxQ %u",
+ i);
+ goto fail_start;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_start:
+fail_init:
+ sfc_repr_proxy_rxq_stop(sa);
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
static int
sfc_repr_proxy_ports_init(struct sfc_adapter *sa)
{
@@ -217,6 +394,10 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
return 0;
}
+ rc = sfc_repr_proxy_rxq_attach(sa);
+ if (rc != 0)
+ goto fail_rxq_attach;
+
rc = sfc_repr_proxy_ports_init(sa);
if (rc != 0)
goto fail_ports_init;
@@ -277,6 +458,9 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
sfc_repr_proxy_ports_fini(sa);
fail_ports_init:
+ sfc_repr_proxy_rxq_detach(sa);
+
+fail_rxq_attach:
sfc_log_init(sa, "failed: %s", rte_strerror(rc));
return rc;
}
@@ -297,6 +481,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
rte_service_map_lcore_set(rp->service_id, rp->service_core_id, 0);
rte_service_component_unregister(rp->service_id);
sfc_repr_proxy_ports_fini(sa);
+ sfc_repr_proxy_rxq_detach(sa);
sfc_log_init(sa, "done");
}
@@ -319,6 +504,10 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
return 0;
}
+ rc = sfc_repr_proxy_rxq_start(sa);
+ if (rc != 0)
+ goto fail_rxq_start;
+
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
if (rc != 0 && rc != -EALREADY) {
@@ -360,6 +549,9 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
fail_start_core:
+ sfc_repr_proxy_rxq_stop(sa);
+
+fail_rxq_start:
sfc_log_init(sa, "failed: %s", rte_strerror(rc));
return rc;
}
@@ -394,6 +586,8 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
+ sfc_repr_proxy_rxq_stop(sa);
+
rp->started = false;
sfc_log_init(sa, "done");
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index bd7ad7148a..dca3fca2b9 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -18,6 +18,7 @@
#include "efx.h"
#include "sfc_repr.h"
+#include "sfc_dp.h"
#ifdef __cplusplus
extern "C" {
@@ -31,6 +32,10 @@ extern "C" {
#define SFC_REPR_PROXY_NB_TXQ_MIN (1)
#define SFC_REPR_PROXY_NB_TXQ_MAX (1)
+#define SFC_REPR_PROXY_RX_DESC_COUNT 256
+#define SFC_REPR_PROXY_RXQ_REFILL_LEVEL (SFC_REPR_PROXY_RX_DESC_COUNT / 4)
+#define SFC_REPR_PROXY_RX_BURST 32
+
struct sfc_repr_proxy_rxq {
struct rte_ring *ring;
struct rte_mempool *mb_pool;
@@ -52,6 +57,8 @@ struct sfc_repr_proxy_port {
struct sfc_repr_proxy_dp_rxq {
struct rte_mempool *mp;
unsigned int ref_count;
+
+ sfc_sw_index_t sw_index;
};
enum sfc_repr_proxy_mbox_op {
--
2.30.2
* [dpdk-dev] [PATCH v2 18/38] net/sfc: implement representor TxQ start/stop
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (16 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 17/38] net/sfc: implement representor RxQ start/stop Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 19/38] net/sfc: implement port representor start and stop Andrew Rybchenko
` (20 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement Tx queue start and stop in the port representor proxy.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ev.h | 8 ++
drivers/net/sfc/sfc_repr_proxy.c | 166 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 11 ++
drivers/net/sfc/sfc_tx.c | 15 ++-
drivers/net/sfc/sfc_tx.h | 1 +
5 files changed, 199 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index bcb7fbe466..a4ababc2bc 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -118,6 +118,14 @@ sfc_repr_rxq_sw_index(const struct sfc_adapter_shared *sas,
repr_queue_id;
}
+static inline sfc_sw_index_t
+sfc_repr_txq_sw_index(const struct sfc_adapter_shared *sas,
+ unsigned int repr_queue_id)
+{
+ /* Reserved TxQ for representors is the first reserved TxQ */
+ return sfc_repr_available(sas) ? repr_queue_id : SFC_SW_INDEX_INVALID;
+}
+
/*
* Functions below define event queue to transmit/receive queue and vice
* versa mapping.
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index 03b6421b04..a5be8fa270 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -17,6 +17,7 @@
#include "sfc.h"
#include "sfc_ev.h"
#include "sfc_rx.h"
+#include "sfc_tx.h"
/**
* Amount of time to wait for the representor proxy routine (which is
@@ -138,6 +139,155 @@ sfc_repr_proxy_routine(void *arg)
return 0;
}
+static int
+sfc_repr_proxy_txq_attach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++) {
+ sfc_sw_index_t sw_index = sfc_repr_txq_sw_index(sas, i);
+
+ rp->dp_txq[i].sw_index = sw_index;
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_txq_detach(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++)
+ rp->dp_txq[i].sw_index = 0;
+
+ sfc_log_init(sa, "done");
+}
+
+int
+sfc_repr_proxy_txq_init(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ const struct rte_eth_txconf tx_conf = {
+ .tx_free_thresh = SFC_REPR_PROXY_TXQ_FREE_THRESH,
+ };
+ struct sfc_txq_info *txq_info;
+ unsigned int init_i;
+ unsigned int i;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return 0;
+ }
+
+ for (init_i = 0; init_i < sfc_repr_nb_txq(sas); init_i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[init_i];
+
+ txq_info = &sfc_sa2shared(sa)->txq_info[txq->sw_index];
+ if (txq_info->state == SFC_TXQ_INITIALIZED) {
+ sfc_log_init(sa,
+ "representor proxy TxQ %u is already initialized - skip",
+ init_i);
+ continue;
+ }
+
+ sfc_tx_qinit_info(sa, txq->sw_index);
+
+ rc = sfc_tx_qinit(sa, txq->sw_index,
+ SFC_REPR_PROXY_TX_DESC_COUNT, sa->socket_id,
+ &tx_conf);
+
+ if (rc != 0) {
+ sfc_err(sa, "failed to init representor proxy TxQ %u",
+ init_i);
+ goto fail_init;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_init:
+ for (i = 0; i < init_i; i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[i];
+
+ txq_info = &sfc_sa2shared(sa)->txq_info[txq->sw_index];
+ if (txq_info->state == SFC_TXQ_INITIALIZED)
+ sfc_tx_qfini(sa, txq->sw_index);
+ }
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+ return rc;
+}
+
+void
+sfc_repr_proxy_txq_fini(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_txq_info *txq_info;
+ unsigned int i;
+
+ sfc_log_init(sa, "entry");
+
+ if (!sfc_repr_available(sas)) {
+ sfc_log_init(sa, "representors not supported - skip");
+ return;
+ }
+
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[i];
+
+ txq_info = &sfc_sa2shared(sa)->txq_info[txq->sw_index];
+ if (txq_info->state != SFC_TXQ_INITIALIZED) {
+ sfc_log_init(sa,
+ "representor proxy TxQ %u is already finalized - skip",
+ i);
+ continue;
+ }
+
+ sfc_tx_qfini(sa, txq->sw_index);
+ }
+
+ sfc_log_init(sa, "done");
+}
+
+static int
+sfc_repr_proxy_txq_start(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+
+ sfc_log_init(sa, "entry");
+
+ RTE_SET_USED(rp);
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+}
+
+static void
+sfc_repr_proxy_txq_stop(struct sfc_adapter *sa)
+{
+ sfc_log_init(sa, "entry");
+ sfc_log_init(sa, "done");
+}
+
static int
sfc_repr_proxy_rxq_attach(struct sfc_adapter *sa)
{
@@ -398,6 +548,10 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
if (rc != 0)
goto fail_rxq_attach;
+ rc = sfc_repr_proxy_txq_attach(sa);
+ if (rc != 0)
+ goto fail_txq_attach;
+
rc = sfc_repr_proxy_ports_init(sa);
if (rc != 0)
goto fail_ports_init;
@@ -458,6 +612,9 @@ sfc_repr_proxy_attach(struct sfc_adapter *sa)
sfc_repr_proxy_ports_fini(sa);
fail_ports_init:
+ sfc_repr_proxy_txq_detach(sa);
+
+fail_txq_attach:
sfc_repr_proxy_rxq_detach(sa);
fail_rxq_attach:
@@ -482,6 +639,7 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
rte_service_component_unregister(rp->service_id);
sfc_repr_proxy_ports_fini(sa);
sfc_repr_proxy_rxq_detach(sa);
+ sfc_repr_proxy_txq_detach(sa);
sfc_log_init(sa, "done");
}
@@ -508,6 +666,10 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_rxq_start;
+ rc = sfc_repr_proxy_txq_start(sa);
+ if (rc != 0)
+ goto fail_txq_start;
+
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
if (rc != 0 && rc != -EALREADY) {
@@ -549,6 +711,9 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
fail_start_core:
+ sfc_repr_proxy_txq_stop(sa);
+
+fail_txq_start:
sfc_repr_proxy_rxq_stop(sa);
fail_rxq_start:
@@ -587,6 +752,7 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
sfc_repr_proxy_rxq_stop(sa);
+ sfc_repr_proxy_txq_stop(sa);
rp->started = false;
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index dca3fca2b9..1fe7ff3695 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -36,6 +36,10 @@ extern "C" {
#define SFC_REPR_PROXY_RXQ_REFILL_LEVEL (SFC_REPR_PROXY_RX_DESC_COUNT / 4)
#define SFC_REPR_PROXY_RX_BURST 32
+#define SFC_REPR_PROXY_TX_DESC_COUNT 256
+#define SFC_REPR_PROXY_TXQ_FREE_THRESH (SFC_REPR_PROXY_TX_DESC_COUNT / 4)
+#define SFC_REPR_PROXY_TX_BURST 32
+
struct sfc_repr_proxy_rxq {
struct rte_ring *ring;
struct rte_mempool *mb_pool;
@@ -61,6 +65,10 @@ struct sfc_repr_proxy_dp_rxq {
sfc_sw_index_t sw_index;
};
+struct sfc_repr_proxy_dp_txq {
+ sfc_sw_index_t sw_index;
+};
+
enum sfc_repr_proxy_mbox_op {
SFC_REPR_PROXY_MBOX_ADD_PORT,
SFC_REPR_PROXY_MBOX_DEL_PORT,
@@ -83,6 +91,7 @@ struct sfc_repr_proxy {
struct sfc_repr_proxy_ports ports;
bool started;
struct sfc_repr_proxy_dp_rxq dp_rxq[SFC_REPR_PROXY_NB_RXQ_MAX];
+ struct sfc_repr_proxy_dp_txq dp_txq[SFC_REPR_PROXY_NB_TXQ_MAX];
struct sfc_repr_proxy_mbox mbox;
};
@@ -92,6 +101,8 @@ struct sfc_adapter;
int sfc_repr_proxy_attach(struct sfc_adapter *sa);
void sfc_repr_proxy_pre_detach(struct sfc_adapter *sa);
void sfc_repr_proxy_detach(struct sfc_adapter *sa);
+int sfc_repr_proxy_txq_init(struct sfc_adapter *sa);
+void sfc_repr_proxy_txq_fini(struct sfc_adapter *sa);
int sfc_repr_proxy_start(struct sfc_adapter *sa);
void sfc_repr_proxy_stop(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index c1b2e964f8..13392cdd5a 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -290,7 +290,7 @@ sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
txq->evq = NULL;
}
-static int
+int
sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
@@ -378,6 +378,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
const unsigned int nb_tx_queues = sa->eth_dev->data->nb_tx_queues;
const unsigned int nb_rsvd_tx_queues = sfc_nb_txq_reserved(sas);
const unsigned int nb_txq_total = nb_tx_queues + nb_rsvd_tx_queues;
+ bool reconfigure;
int rc = 0;
sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
@@ -401,6 +402,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
goto done;
if (sas->txq_info == NULL) {
+ reconfigure = false;
sas->txq_info = rte_calloc_socket("sfc-txqs", nb_txq_total,
sizeof(sas->txq_info[0]), 0,
sa->socket_id);
@@ -419,6 +421,8 @@ sfc_tx_configure(struct sfc_adapter *sa)
struct sfc_txq_info *new_txq_info;
struct sfc_txq *new_txq_ctrl;
+ reconfigure = true;
+
if (nb_tx_queues < sas->ethdev_txq_count)
sfc_tx_fini_queues(sa, nb_tx_queues);
@@ -457,12 +461,18 @@ sfc_tx_configure(struct sfc_adapter *sa)
sas->ethdev_txq_count++;
}
- /* TODO: initialize reserved queues when supported. */
sas->txq_count = sas->ethdev_txq_count + nb_rsvd_tx_queues;
+ if (!reconfigure) {
+ rc = sfc_repr_proxy_txq_init(sa);
+ if (rc != 0)
+ goto fail_repr_proxy_txq_init;
+ }
+
done:
return 0;
+fail_repr_proxy_txq_init:
fail_tx_qinit_info:
fail_txqs_ctrl_realloc:
fail_txqs_realloc:
@@ -480,6 +490,7 @@ void
sfc_tx_close(struct sfc_adapter *sa)
{
sfc_tx_fini_queues(sa, 0);
+ sfc_repr_proxy_txq_fini(sa);
free(sa->txq_ctrl);
sa->txq_ctrl = NULL;
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index f1700b13ca..1a33199fdc 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -108,6 +108,7 @@ struct sfc_txq_info *sfc_txq_info_by_dp_txq(const struct sfc_dp_txq *dp_txq);
int sfc_tx_configure(struct sfc_adapter *sa);
void sfc_tx_close(struct sfc_adapter *sa);
+int sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
int sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
uint16_t nb_tx_desc, unsigned int socket_id,
const struct rte_eth_txconf *tx_conf);
--
2.30.2
* [dpdk-dev] [PATCH v2 19/38] net/sfc: implement port representor start and stop
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (17 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 18/38] net/sfc: implement representor TxQ start/stop Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 20/38] net/sfc: implement port representor link update Andrew Rybchenko
` (19 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement start and stop operations for port representors and the
representor proxy.
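As a minimal illustration (assumed application code, not part of the patch),
a representor is driven through the standard ethdev API; the port id used
here is a placeholder:

#include <rte_ethdev.h>

/* Sketch: start and later stop a representor ethdev port. */
static int
repr_start_stop_example(uint16_t repr_port_id)
{
	int rc;

	rc = rte_eth_dev_start(repr_port_id);	/* ends up in sfc_repr_dev_start() */
	if (rc != 0)
		return rc;

	/* ... Rx/Tx bursts may run while the representor is started ... */

	return rte_eth_dev_stop(repr_port_id);	/* ends up in sfc_repr_dev_stop() */
}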
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_mae.h | 9 +-
drivers/net/sfc/sfc_repr.c | 181 +++++++++++
drivers/net/sfc/sfc_repr_proxy.c | 453 ++++++++++++++++++++++++++-
drivers/net/sfc/sfc_repr_proxy.h | 16 +
drivers/net/sfc/sfc_repr_proxy_api.h | 3 +
5 files changed, 644 insertions(+), 18 deletions(-)
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 684f0daf7a..d835056aef 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -139,10 +139,17 @@ struct sfc_mae_counter_registry {
uint32_t service_id;
};
+/**
+ * MAE rules used to capture traffic generated by VFs and direct it to
+ * representors (one for each VF).
+ */
+#define SFC_MAE_NB_REPR_RULES_MAX (64)
+
/** Rules to forward traffic from PHY port to PF and from PF to PHY port */
#define SFC_MAE_NB_SWITCHDEV_RULES (2)
/** Maximum required internal MAE rules */
-#define SFC_MAE_NB_RULES_MAX (SFC_MAE_NB_SWITCHDEV_RULES)
+#define SFC_MAE_NB_RULES_MAX (SFC_MAE_NB_SWITCHDEV_RULES + \
+ SFC_MAE_NB_REPR_RULES_MAX)
struct sfc_mae_rule {
efx_mae_match_spec_t *spec;
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index ddd848466c..f60106c196 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -9,6 +9,7 @@
#include <stdint.h>
+#include <rte_mbuf.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <ethdev_driver.h>
@@ -21,6 +22,7 @@
#include "sfc_ethdev_state.h"
#include "sfc_repr_proxy_api.h"
#include "sfc_switch.h"
+#include "sfc_dp_tx.h"
/** Multi-process shared representor private data */
struct sfc_repr_shared {
@@ -136,6 +138,179 @@ sfc_repr_lock_fini(__rte_unused struct sfc_repr *sr)
/* Just for symmetry of the API */
}
+static void
+sfc_repr_rx_queue_stop(void *queue)
+{
+ struct sfc_repr_rxq *rxq = queue;
+
+ if (rxq == NULL)
+ return;
+
+ rte_ring_reset(rxq->ring);
+}
+
+static void
+sfc_repr_tx_queue_stop(void *queue)
+{
+ struct sfc_repr_txq *txq = queue;
+
+ if (txq == NULL)
+ return;
+
+ rte_ring_reset(txq->ring);
+}
+
+static int
+sfc_repr_start(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_shared *srs;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ switch (sr->state) {
+ case SFC_ETHDEV_CONFIGURED:
+ break;
+ case SFC_ETHDEV_STARTED:
+ sfcr_info(sr, "already started");
+ return 0;
+ default:
+ ret = -EINVAL;
+ goto fail_bad_state;
+ }
+
+ sr->state = SFC_ETHDEV_STARTING;
+
+ srs = sfc_repr_shared_by_eth_dev(dev);
+ ret = sfc_repr_proxy_start_repr(srs->pf_port_id, srs->repr_id);
+ if (ret != 0) {
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ goto fail_start;
+ }
+
+ sr->state = SFC_ETHDEV_STARTED;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_start:
+ sr->state = SFC_ETHDEV_CONFIGURED;
+
+fail_bad_state:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static int
+sfc_repr_dev_start(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ sfc_repr_lock(sr);
+ ret = sfc_repr_start(dev);
+ sfc_repr_unlock(sr);
+
+ if (ret != 0)
+ goto fail_start;
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_start:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+ return ret;
+}
+
+static int
+sfc_repr_stop(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct sfc_repr_shared *srs;
+ unsigned int i;
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ SFC_ASSERT(sfc_repr_lock_is_locked(sr));
+
+ switch (sr->state) {
+ case SFC_ETHDEV_STARTED:
+ break;
+ case SFC_ETHDEV_CONFIGURED:
+ sfcr_info(sr, "already stopped");
+ return 0;
+ default:
+ sfcr_err(sr, "stop in unexpected state %u", sr->state);
+ SFC_ASSERT(B_FALSE);
+ ret = -EINVAL;
+ goto fail_bad_state;
+ }
+
+ srs = sfc_repr_shared_by_eth_dev(dev);
+ ret = sfc_repr_proxy_stop_repr(srs->pf_port_id, srs->repr_id);
+ if (ret != 0) {
+ SFC_ASSERT(ret > 0);
+ ret = -ret;
+ goto fail_stop;
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ sfc_repr_rx_queue_stop(dev->data->rx_queues[i]);
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ sfc_repr_tx_queue_stop(dev->data->tx_queues[i]);
+
+ sr->state = SFC_ETHDEV_CONFIGURED;
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_bad_state:
+fail_stop:
+ sfcr_err(sr, "%s() failed: %s", __func__, rte_strerror(-ret));
+
+ return ret;
+}
+
+static int
+sfc_repr_dev_stop(struct rte_eth_dev *dev)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ int ret;
+
+ sfcr_info(sr, "entry");
+
+ sfc_repr_lock(sr);
+
+ ret = sfc_repr_stop(dev);
+ if (ret != 0) {
+ sfcr_err(sr, "%s() failed to stop representor", __func__);
+ goto fail_stop;
+ }
+
+ sfc_repr_unlock(sr);
+
+ sfcr_info(sr, "done");
+
+ return 0;
+
+fail_stop:
+ sfc_repr_unlock(sr);
+
+ sfcr_err(sr, "%s() failed %s", __func__, rte_strerror(-ret));
+
+ return ret;
+}
+
static int
sfc_repr_check_conf(struct sfc_repr *sr, uint16_t nb_rx_queues,
const struct rte_eth_conf *conf)
@@ -535,6 +710,10 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
sfc_repr_lock(sr);
switch (sr->state) {
+ case SFC_ETHDEV_STARTED:
+ sfc_repr_stop(dev);
+ SFC_ASSERT(sr->state == SFC_ETHDEV_CONFIGURED);
+ /* FALLTHROUGH */
case SFC_ETHDEV_CONFIGURED:
sfc_repr_close(sr);
SFC_ASSERT(sr->state == SFC_ETHDEV_INITIALIZED);
@@ -577,6 +756,8 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_configure = sfc_repr_dev_configure,
+ .dev_start = sfc_repr_dev_start,
+ .dev_stop = sfc_repr_dev_stop,
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
.rx_queue_setup = sfc_repr_rx_queue_setup,
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index a5be8fa270..ea03d5afdd 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -53,6 +53,19 @@ sfc_put_adapter(struct sfc_adapter *sa)
sfc_adapter_unlock(sa);
}
+static struct sfc_repr_proxy_port *
+sfc_repr_proxy_find_port(struct sfc_repr_proxy *rp, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->repr_id == repr_id)
+ return port;
+ }
+
+ return NULL;
+}
+
static int
sfc_repr_proxy_mbox_send(struct sfc_repr_proxy_mbox *mbox,
struct sfc_repr_proxy_port *port,
@@ -117,6 +130,12 @@ sfc_repr_proxy_mbox_handle(struct sfc_repr_proxy *rp)
case SFC_REPR_PROXY_MBOX_DEL_PORT:
TAILQ_REMOVE(&rp->ports, mbox->port, entries);
break;
+ case SFC_REPR_PROXY_MBOX_START_PORT:
+ mbox->port->started = true;
+ break;
+ case SFC_REPR_PROXY_MBOX_STOP_PORT:
+ mbox->port->started = false;
+ break;
default:
SFC_ASSERT(0);
return;
@@ -463,6 +482,158 @@ sfc_repr_proxy_rxq_start(struct sfc_adapter *sa)
return rc;
}
+static int
+sfc_repr_proxy_mae_rule_insert(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ efx_mport_sel_t mport_alias_selector;
+ efx_mport_sel_t mport_vf_selector;
+ struct sfc_mae_rule *mae_rule;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ rc = efx_mae_mport_by_id(&port->egress_mport,
+ &mport_vf_selector);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get VF mport for repr %u",
+ port->repr_id);
+ goto fail_get_vf;
+ }
+
+ rc = efx_mae_mport_by_id(&rp->mport_alias, &mport_alias_selector);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get mport selector for repr %u",
+ port->repr_id);
+ goto fail_get_alias;
+ }
+
+ rc = sfc_mae_rule_add_mport_match_deliver(sa, &mport_vf_selector,
+ &mport_alias_selector, -1,
+ &mae_rule);
+ if (rc != 0) {
+ sfc_err(sa, "failed to insert MAE rule for repr %u",
+ port->repr_id);
+ goto fail_rule_add;
+ }
+
+ port->mae_rule = mae_rule;
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_rule_add:
+fail_get_alias:
+fail_get_vf:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+static void
+sfc_repr_proxy_mae_rule_remove(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ struct sfc_mae_rule *mae_rule = port->mae_rule;
+
+ sfc_mae_rule_del(sa, mae_rule);
+}
+
+static int
+sfc_repr_proxy_mport_filter_insert(struct sfc_adapter *sa)
+{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_rxq *rxq_ctrl;
+ struct sfc_repr_proxy_filter *filter = &rp->mport_filter;
+ efx_mport_sel_t mport_alias_selector;
+ static const efx_filter_match_flags_t flags[RTE_DIM(filter->specs)] = {
+ EFX_FILTER_MATCH_UNKNOWN_UCAST_DST,
+ EFX_FILTER_MATCH_UNKNOWN_MCAST_DST };
+ unsigned int i;
+ int rc;
+
+ sfc_log_init(sa, "entry");
+
+ if (sfc_repr_nb_rxq(sas) == 1) {
+ rxq_ctrl = &sa->rxq_ctrl[rp->dp_rxq[0].sw_index];
+ } else {
+ sfc_err(sa, "multiple representor proxy RxQs not supported");
+ rc = ENOTSUP;
+ goto fail_multiple_queues;
+ }
+
+ rc = efx_mae_mport_by_id(&rp->mport_alias, &mport_alias_selector);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get repr proxy mport by ID");
+ goto fail_get_selector;
+ }
+
+ memset(filter->specs, 0, sizeof(filter->specs));
+ for (i = 0; i < RTE_DIM(filter->specs); i++) {
+ filter->specs[i].efs_priority = EFX_FILTER_PRI_MANUAL;
+ filter->specs[i].efs_flags = EFX_FILTER_FLAG_RX;
+ filter->specs[i].efs_dmaq_id = rxq_ctrl->hw_index;
+ filter->specs[i].efs_match_flags = flags[i] |
+ EFX_FILTER_MATCH_MPORT;
+ filter->specs[i].efs_ingress_mport = mport_alias_selector.sel;
+
+ rc = efx_filter_insert(sa->nic, &filter->specs[i]);
+ if (rc != 0) {
+ sfc_err(sa, "failed to insert repr proxy filter");
+ goto fail_insert;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+
+ return 0;
+
+fail_insert:
+ while (i-- > 0)
+ efx_filter_remove(sa->nic, &filter->specs[i]);
+
+fail_get_selector:
+fail_multiple_queues:
+ sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+ return rc;
+}
+
+static void
+sfc_repr_proxy_mport_filter_remove(struct sfc_adapter *sa)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_repr_proxy_filter *filter = &rp->mport_filter;
+ unsigned int i;
+
+ for (i = 0; i < RTE_DIM(filter->specs); i++)
+ efx_filter_remove(sa->nic, &filter->specs[i]);
+}
+
+static int
+sfc_repr_proxy_port_rule_insert(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ int rc;
+
+ rc = sfc_repr_proxy_mae_rule_insert(sa, port);
+ if (rc != 0)
+ goto fail_mae_rule_insert;
+
+ return 0;
+
+fail_mae_rule_insert:
+ return rc;
+}
+
+static void
+sfc_repr_proxy_port_rule_remove(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ sfc_repr_proxy_mae_rule_remove(sa, port);
+}
+
static int
sfc_repr_proxy_ports_init(struct sfc_adapter *sa)
{
@@ -644,24 +815,105 @@ sfc_repr_proxy_detach(struct sfc_adapter *sa)
sfc_log_init(sa, "done");
}
+static int
+sfc_repr_proxy_do_start_port(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ rc = sfc_repr_proxy_port_rule_insert(sa, port);
+ if (rc != 0)
+ goto fail_filter_insert;
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_START_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to start proxy port %u",
+ port->repr_id);
+ goto fail_port_start;
+ }
+ } else {
+ port->started = true;
+ }
+
+ return 0;
+
+fail_port_start:
+ sfc_repr_proxy_port_rule_remove(sa, port);
+fail_filter_insert:
+ sfc_err(sa, "%s() failed %s", __func__, rte_strerror(rc));
+
+ return rc;
+}
+
+static int
+sfc_repr_proxy_do_stop_port(struct sfc_adapter *sa,
+ struct sfc_repr_proxy_port *port)
+
+{
+ struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ int rc;
+
+ if (rp->started) {
+ rc = sfc_repr_proxy_mbox_send(&rp->mbox, port,
+ SFC_REPR_PROXY_MBOX_STOP_PORT);
+ if (rc != 0) {
+ sfc_err(sa, "failed to stop proxy port %u: %s",
+ port->repr_id, rte_strerror(rc));
+ return rc;
+ }
+ } else {
+ port->started = false;
+ }
+
+ sfc_repr_proxy_port_rule_remove(sa, port);
+
+ return 0;
+}
+
+static bool
+sfc_repr_proxy_port_enabled(struct sfc_repr_proxy_port *port)
+{
+ return port->rte_port_id != RTE_MAX_ETHPORTS && port->enabled;
+}
+
+static bool
+sfc_repr_proxy_ports_disabled(struct sfc_repr_proxy *rp)
+{
+ struct sfc_repr_proxy_port *port;
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port))
+ return false;
+ }
+
+ return true;
+}
+
int
sfc_repr_proxy_start(struct sfc_adapter *sa)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_repr_proxy_port *last_port = NULL;
+ struct sfc_repr_proxy_port *port;
int rc;
sfc_log_init(sa, "entry");
- /*
- * The condition to start the proxy is insufficient. It will be
- * complemented with representor port start/stop support.
- */
+ /* Representor proxy is not started when no representors are started */
if (!sfc_repr_available(sas)) {
sfc_log_init(sa, "representors not supported - skip");
return 0;
}
+ if (sfc_repr_proxy_ports_disabled(rp)) {
+ sfc_log_init(sa, "no started representor ports - skip");
+ return 0;
+ }
+
rc = sfc_repr_proxy_rxq_start(sa);
if (rc != 0)
goto fail_rxq_start;
@@ -698,12 +950,40 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
goto fail_runstate_set;
}
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port)) {
+ rc = sfc_repr_proxy_do_start_port(sa, port);
+ if (rc != 0)
+ goto fail_start_id;
+
+ last_port = port;
+ }
+ }
+
+ rc = sfc_repr_proxy_mport_filter_insert(sa);
+ if (rc != 0)
+ goto fail_mport_filter_insert;
+
rp->started = true;
sfc_log_init(sa, "done");
return 0;
+fail_mport_filter_insert:
+fail_start_id:
+ if (last_port != NULL) {
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port)) {
+ (void)sfc_repr_proxy_do_stop_port(sa, port);
+ if (port == last_port)
+ break;
+ }
+ }
+ }
+
+ rte_service_runstate_set(rp->service_id, 0);
+
fail_runstate_set:
rte_service_component_runstate_set(rp->service_id, 0);
@@ -726,6 +1006,7 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
{
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ struct sfc_repr_proxy_port *port;
int rc;
sfc_log_init(sa, "entry");
@@ -735,6 +1016,24 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
return;
}
+ if (sfc_repr_proxy_ports_disabled(rp)) {
+ sfc_log_init(sa, "no started representor ports - skip");
+ return;
+ }
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (sfc_repr_proxy_port_enabled(port)) {
+ rc = sfc_repr_proxy_do_stop_port(sa, port);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to stop representor proxy port %u: %s",
+ port->repr_id, rte_strerror(rc));
+ }
+ }
+ }
+
+ sfc_repr_proxy_mport_filter_remove(sa);
+
rc = rte_service_runstate_set(rp->service_id, 0);
if (rc < 0) {
sfc_err(sa, "failed to stop %s: %s",
@@ -759,19 +1058,6 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
sfc_log_init(sa, "done");
}
-static struct sfc_repr_proxy_port *
-sfc_repr_proxy_find_port(struct sfc_repr_proxy *rp, uint16_t repr_id)
-{
- struct sfc_repr_proxy_port *port;
-
- TAILQ_FOREACH(port, &rp->ports, entries) {
- if (port->repr_id == repr_id)
- return port;
- }
-
- return NULL;
-}
-
int
sfc_repr_proxy_add_port(uint16_t pf_port_id, uint16_t repr_id,
uint16_t rte_port_id, const efx_mport_sel_t *mport_sel)
@@ -1020,3 +1306,136 @@ sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
sfc_log_init(sa, "done");
sfc_put_adapter(sa);
}
+
+int
+sfc_repr_proxy_start_repr(uint16_t pf_port_id, uint16_t repr_id)
+{
+ bool proxy_start_required = false;
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ rc = ENOENT;
+ goto fail_not_found;
+ }
+
+ if (port->enabled) {
+ rc = EALREADY;
+ sfc_err(sa, "failed: repr %u proxy port already started",
+ repr_id);
+ goto fail_already_started;
+ }
+
+ if (sa->state == SFC_ETHDEV_STARTED) {
+ if (sfc_repr_proxy_ports_disabled(rp)) {
+ proxy_start_required = true;
+ } else {
+ rc = sfc_repr_proxy_do_start_port(sa, port);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to start repr %u proxy port",
+ repr_id);
+ goto fail_start_id;
+ }
+ }
+ }
+
+ port->enabled = true;
+
+ if (proxy_start_required) {
+ rc = sfc_repr_proxy_start(sa);
+ if (rc != 0) {
+ sfc_err(sa, "failed to start proxy");
+ goto fail_proxy_start;
+ }
+ }
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+
+fail_proxy_start:
+ port->enabled = false;
+
+fail_start_id:
+fail_already_started:
+fail_not_found:
+ sfc_err(sa, "failed to start repr %u proxy port: %s", repr_id,
+ rte_strerror(rc));
+ sfc_put_adapter(sa);
+
+ return rc;
+}
+
+int
+sfc_repr_proxy_stop_repr(uint16_t pf_port_id, uint16_t repr_id)
+{
+ struct sfc_repr_proxy_port *port;
+ struct sfc_repr_proxy_port *p;
+ struct sfc_repr_proxy *rp;
+ struct sfc_adapter *sa;
+ int rc;
+
+ sa = sfc_get_adapter_by_pf_port_id(pf_port_id);
+ rp = sfc_repr_proxy_by_adapter(sa);
+
+ sfc_log_init(sa, "entry");
+
+ port = sfc_repr_proxy_find_port(rp, repr_id);
+ if (port == NULL) {
+ sfc_err(sa, "%s() failed: no such port", __func__);
+ return ENOENT;
+ }
+
+ if (!port->enabled) {
+ sfc_log_init(sa, "repr %u proxy port is not started - skip",
+ repr_id);
+ sfc_put_adapter(sa);
+ return 0;
+ }
+
+ if (sa->state == SFC_ETHDEV_STARTED) {
+ bool last_enabled = true;
+
+ TAILQ_FOREACH(p, &rp->ports, entries) {
+ if (p == port)
+ continue;
+
+ if (sfc_repr_proxy_port_enabled(p)) {
+ last_enabled = false;
+ break;
+ }
+ }
+
+ rc = 0;
+ if (last_enabled)
+ sfc_repr_proxy_stop(sa);
+ else
+ rc = sfc_repr_proxy_do_stop_port(sa, port);
+
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to stop representor proxy TxQ %u: %s",
+ repr_id, rte_strerror(rc));
+ sfc_put_adapter(sa);
+ return rc;
+ }
+ }
+
+ port->enabled = false;
+
+ sfc_log_init(sa, "done");
+ sfc_put_adapter(sa);
+
+ return 0;
+}
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index 1fe7ff3695..c350713a55 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -19,6 +19,8 @@
#include "sfc_repr.h"
#include "sfc_dp.h"
+#include "sfc_flow.h"
+#include "sfc_mae.h"
#ifdef __cplusplus
extern "C" {
@@ -49,6 +51,14 @@ struct sfc_repr_proxy_txq {
struct rte_ring *ring;
};
+struct sfc_repr_proxy_filter {
+ /*
+ * 2 filters are required to match all incoming traffic, unknown
+ * unicast and unknown multicast.
+ */
+ efx_filter_spec_t specs[2];
+};
+
struct sfc_repr_proxy_port {
TAILQ_ENTRY(sfc_repr_proxy_port) entries;
uint16_t repr_id;
@@ -56,6 +66,9 @@ struct sfc_repr_proxy_port {
efx_mport_id_t egress_mport;
struct sfc_repr_proxy_rxq rxq[SFC_REPR_RXQ_MAX];
struct sfc_repr_proxy_txq txq[SFC_REPR_TXQ_MAX];
+ struct sfc_mae_rule *mae_rule;
+ bool enabled;
+ bool started;
};
struct sfc_repr_proxy_dp_rxq {
@@ -72,6 +85,8 @@ struct sfc_repr_proxy_dp_txq {
enum sfc_repr_proxy_mbox_op {
SFC_REPR_PROXY_MBOX_ADD_PORT,
SFC_REPR_PROXY_MBOX_DEL_PORT,
+ SFC_REPR_PROXY_MBOX_START_PORT,
+ SFC_REPR_PROXY_MBOX_STOP_PORT,
};
struct sfc_repr_proxy_mbox {
@@ -92,6 +107,7 @@ struct sfc_repr_proxy {
bool started;
struct sfc_repr_proxy_dp_rxq dp_rxq[SFC_REPR_PROXY_NB_RXQ_MAX];
struct sfc_repr_proxy_dp_txq dp_txq[SFC_REPR_PROXY_NB_TXQ_MAX];
+ struct sfc_repr_proxy_filter mport_filter;
struct sfc_repr_proxy_mbox mbox;
};
diff --git a/drivers/net/sfc/sfc_repr_proxy_api.h b/drivers/net/sfc/sfc_repr_proxy_api.h
index d1c0760efa..95b065801d 100644
--- a/drivers/net/sfc/sfc_repr_proxy_api.h
+++ b/drivers/net/sfc/sfc_repr_proxy_api.h
@@ -38,6 +38,9 @@ int sfc_repr_proxy_add_txq(uint16_t pf_port_id, uint16_t repr_id,
void sfc_repr_proxy_del_txq(uint16_t pf_port_id, uint16_t repr_id,
uint16_t queue_id);
+int sfc_repr_proxy_start_repr(uint16_t pf_port_id, uint16_t repr_id);
+int sfc_repr_proxy_stop_repr(uint16_t pf_port_id, uint16_t repr_id);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH v2 20/38] net/sfc: implement port representor link update
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (18 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 19/38] net/sfc: implement port representor start and stop Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 21/38] net/sfc: support multiple device probe Andrew Rybchenko
` (18 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement the callback by reporting link down if the representor is not
started; otherwise, report link up with undefined link speed.
The link speed is undefined since representors can pass traffic to each
other even if the PF link is down.
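A minimal sketch (assumed application code) of reading the link reported by
this callback; the printed fields follow the behaviour described above:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
repr_link_example(uint16_t repr_port_id)
{
	struct rte_eth_link link;

	/* While the representor is started, link_status is up and
	 * link_speed is reported as ETH_SPEED_NUM_UNKNOWN.
	 */
	if (rte_eth_link_get_nowait(repr_port_id, &link) == 0)
		printf("status=%u speed=%" PRIu32 "\n",
		       (unsigned int)link.link_status, link.link_speed);
}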
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index f60106c196..9b70a3be76 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -463,6 +463,24 @@ sfc_repr_dev_infos_get(struct rte_eth_dev *dev,
return 0;
}
+static int
+sfc_repr_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete)
+{
+ struct sfc_repr *sr = sfc_repr_by_eth_dev(dev);
+ struct rte_eth_link link;
+
+ if (sr->state != SFC_ETHDEV_STARTED) {
+ sfc_port_link_mode_to_info(EFX_LINK_UNKNOWN, &link);
+ } else {
+ memset(&link, 0, sizeof(link));
+ link.link_status = ETH_LINK_UP;
+ link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+ }
+
+ return rte_eth_linkstatus_set(dev, &link);
+}
+
static int
sfc_repr_ring_create(uint16_t pf_port_id, uint16_t repr_id,
const char *type_name, uint16_t qid, uint16_t nb_desc,
@@ -760,6 +778,7 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_stop = sfc_repr_dev_stop,
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
+ .link_update = sfc_repr_dev_link_update,
.rx_queue_setup = sfc_repr_rx_queue_setup,
.rx_queue_release = sfc_repr_rx_queue_release,
.tx_queue_setup = sfc_repr_tx_queue_setup,
--
2.30.2
* [dpdk-dev] [PATCH v2 21/38] net/sfc: support multiple device probe
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (19 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 20/38] net/sfc: implement port representor link update Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 22/38] net/sfc: implement representor Tx routine Andrew Rybchenko
` (17 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Support probing the device multiple times so that additional port
representors can be created with the hotplug EAL API. To hotplug a
representor, the PF must be hotplugged again with a different
representor device argument.
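For illustration, a hedged sketch of the hotplug flow this enables (the PCI
address and representor list below are assumed examples, not taken from the
patch):

#include <rte_dev.h>

static int
repr_hotplug_example(void)
{
	/* Re-plug an already probed PF with an extended representor list
	 * so that an additional representor ethdev is created.
	 */
	return rte_eal_hotplug_add("pci", "0000:01:00.0", "representor=[0-2]");
}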
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ethdev.c | 55 ++++++++++++++++++++++++------------
drivers/net/sfc/sfc_repr.c | 35 +++++++++++++----------
2 files changed, 57 insertions(+), 33 deletions(-)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index efd5e6b1ab..f69bbde11a 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2440,31 +2440,40 @@ sfc_parse_rte_devargs(const char *args, struct rte_eth_devargs *devargs)
}
static int
-sfc_eth_dev_create(struct rte_pci_device *pci_dev,
- struct sfc_ethdev_init_data *init_data,
- struct rte_eth_dev **devp)
+sfc_eth_dev_find_or_create(struct rte_pci_device *pci_dev,
+ struct sfc_ethdev_init_data *init_data,
+ struct rte_eth_dev **devp,
+ bool *dev_created)
{
struct rte_eth_dev *dev;
+ bool created = false;
int rc;
- rc = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
- sizeof(struct sfc_adapter_shared),
- eth_dev_pci_specific_init, pci_dev,
- sfc_eth_dev_init, init_data);
- if (rc != 0) {
- SFC_GENERIC_LOG(ERR, "Failed to create sfc ethdev '%s'",
- pci_dev->device.name);
- return rc;
- }
-
dev = rte_eth_dev_allocated(pci_dev->device.name);
if (dev == NULL) {
- SFC_GENERIC_LOG(ERR, "Failed to find allocated sfc ethdev '%s'",
+ rc = rte_eth_dev_create(&pci_dev->device, pci_dev->device.name,
+ sizeof(struct sfc_adapter_shared),
+ eth_dev_pci_specific_init, pci_dev,
+ sfc_eth_dev_init, init_data);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR, "Failed to create sfc ethdev '%s'",
+ pci_dev->device.name);
+ return rc;
+ }
+
+ created = true;
+
+ dev = rte_eth_dev_allocated(pci_dev->device.name);
+ if (dev == NULL) {
+ SFC_GENERIC_LOG(ERR,
+ "Failed to find allocated sfc ethdev '%s'",
pci_dev->device.name);
- return -ENODEV;
+ return -ENODEV;
+ }
}
*devp = dev;
+ *dev_created = created;
return 0;
}
@@ -2525,6 +2534,7 @@ static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct sfc_ethdev_init_data init_data;
struct rte_eth_devargs eth_da;
struct rte_eth_dev *dev;
+ bool dev_created;
int rc;
if (pci_dev->device.devargs != NULL) {
@@ -2546,13 +2556,21 @@ static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
return -ENOTSUP;
}
- rc = sfc_eth_dev_create(pci_dev, &init_data, &dev);
+ /*
+ * Driver supports RTE_PCI_DRV_PROBE_AGAIN. Hence create device only
+ * if it does not already exist. Re-probing an existing device is
+ * expected to allow additional representors to be configured.
+ */
+ rc = sfc_eth_dev_find_or_create(pci_dev, &init_data, &dev,
+ &dev_created);
if (rc != 0)
return rc;
rc = sfc_eth_dev_create_representors(dev, &eth_da);
if (rc != 0) {
- (void)rte_eth_dev_destroy(dev, sfc_eth_dev_uninit);
+ if (dev_created)
+ (void)rte_eth_dev_destroy(dev, sfc_eth_dev_uninit);
+
return rc;
}
@@ -2568,7 +2586,8 @@ static struct rte_pci_driver sfc_efx_pmd = {
.id_table = pci_id_sfc_efx_map,
.drv_flags =
RTE_PCI_DRV_INTR_LSC |
- RTE_PCI_DRV_NEED_MAPPING,
+ RTE_PCI_DRV_NEED_MAPPING |
+ RTE_PCI_DRV_PROBE_AGAIN,
.probe = sfc_eth_dev_pci_probe,
.remove = sfc_eth_dev_pci_remove,
};
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 9b70a3be76..922f4da432 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -908,6 +908,7 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
struct sfc_repr_init_data repr_data;
char name[RTE_ETH_NAME_MAX_LEN];
int ret;
+ struct rte_eth_dev *dev;
if (snprintf(name, sizeof(name), "net_%s_representor_%u",
parent->device->name, representor_id) >=
@@ -916,20 +917,24 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
return -ENAMETOOLONG;
}
- memset(&repr_data, 0, sizeof(repr_data));
- repr_data.pf_port_id = parent->data->port_id;
- repr_data.repr_id = representor_id;
- repr_data.switch_domain_id = switch_domain_id;
- repr_data.mport_sel = *mport_sel;
-
- ret = rte_eth_dev_create(parent->device, name,
- sizeof(struct sfc_repr_shared),
- NULL, NULL,
- sfc_repr_eth_dev_init, &repr_data);
- if (ret != 0)
- SFC_GENERIC_LOG(ERR, "%s() failed to create device", __func__);
-
- SFC_GENERIC_LOG(INFO, "%s() done: %s", __func__, rte_strerror(-ret));
+ dev = rte_eth_dev_allocated(name);
+ if (dev == NULL) {
+ memset(&repr_data, 0, sizeof(repr_data));
+ repr_data.pf_port_id = parent->data->port_id;
+ repr_data.repr_id = representor_id;
+ repr_data.switch_domain_id = switch_domain_id;
+ repr_data.mport_sel = *mport_sel;
+
+ ret = rte_eth_dev_create(parent->device, name,
+ sizeof(struct sfc_repr_shared),
+ NULL, NULL,
+ sfc_repr_eth_dev_init, &repr_data);
+ if (ret != 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to create device",
+ __func__);
+ return ret;
+ }
+ }
- return ret;
+ return 0;
}
--
2.30.2
* [dpdk-dev] [PATCH v2 22/38] net/sfc: implement representor Tx routine
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (20 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 21/38] net/sfc: support multiple device probe Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 23/38] net/sfc: use xword type for EF100 Rx prefix Andrew Rybchenko
` (16 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Forward traffic that is transmitted from a port representor to the
corresponding virtual function using the dedicated TxQ.
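A minimal sketch (assumed application code) of the resulting datapath usage:
transmitting on a representor only enqueues mbufs onto the per-port ring,
and the proxy service core later sends them to the VF via the dedicated TxQ
shown below:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
repr_tx_example(uint16_t repr_port_id, struct rte_mbuf **pkts, uint16_t n)
{
	/* Ends up in sfc_repr_tx_burst(); ownership of enqueued mbufs is
	 * transferred to the representor proxy.
	 */
	return rte_eth_tx_burst(repr_port_id, 0, pkts, n);
}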
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 45 ++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.c | 88 +++++++++++++++++++++++++++++++-
drivers/net/sfc/sfc_repr_proxy.h | 8 +++
3 files changed, 140 insertions(+), 1 deletion(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 922f4da432..3e74313b12 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -160,6 +160,49 @@ sfc_repr_tx_queue_stop(void *queue)
rte_ring_reset(txq->ring);
}
+static uint16_t
+sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+ struct sfc_repr_txq *txq = tx_queue;
+ unsigned int n_tx;
+ void **objs;
+ uint16_t i;
+
+ /*
+ * mbuf is likely cache-hot. Set flag and egress m-port here instead of
+ * doing that in representors proxy. Also, it should help to avoid
+ * cache bounce. Moreover, potentially, it allows to use one
+ * multi-producer single-consumer ring for all representors.
+ *
+ * The only potential problem is doing so many times if enqueue
+ * fails and sender retries.
+ */
+ for (i = 0; i < nb_pkts; ++i) {
+ struct rte_mbuf *m = tx_pkts[i];
+
+ m->ol_flags |= sfc_dp_mport_override;
+ *RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset,
+ efx_mport_id_t *) = txq->egress_mport;
+ }
+
+ objs = (void *)&tx_pkts[0];
+ n_tx = rte_ring_sp_enqueue_burst(txq->ring, objs, nb_pkts, NULL);
+
+ /*
+ * Remove m-port override flag from packets that were not enqueued
+ * Setting the flag only for enqueued packets after the burst is
+ * not possible since the ownership of enqueued packets is
+ * transferred to representor proxy.
+ */
+ for (i = n_tx; i < nb_pkts; ++i) {
+ struct rte_mbuf *m = tx_pkts[i];
+
+ m->ol_flags &= ~sfc_dp_mport_override;
+ }
+
+ return n_tx;
+}
+
static int
sfc_repr_start(struct rte_eth_dev *dev)
{
@@ -760,6 +803,7 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
(void)sfc_repr_proxy_del_port(srs->pf_port_id, srs->repr_id);
+ dev->tx_pkt_burst = NULL;
dev->dev_ops = NULL;
sfc_repr_unlock(sr);
@@ -880,6 +924,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
goto fail_mac_addrs;
}
+ dev->tx_pkt_burst = sfc_repr_tx_burst;
dev->dev_ops = &sfc_repr_dev_ops;
sr->state = SFC_ETHDEV_INITIALIZED;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index ea03d5afdd..d8934bab65 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -25,6 +25,12 @@
*/
#define SFC_REPR_PROXY_MBOX_POLL_TIMEOUT_MS 1000
+/**
+ * Amount of time to wait for the representor proxy routine (which is
+ * running on a service core) to terminate after service core is stopped.
+ */
+#define SFC_REPR_PROXY_ROUTINE_TERMINATE_TIMEOUT_MS 10000
+
static struct sfc_repr_proxy *
sfc_repr_proxy_by_adapter(struct sfc_adapter *sa)
{
@@ -148,16 +154,71 @@ sfc_repr_proxy_mbox_handle(struct sfc_repr_proxy *rp)
__atomic_store_n(&mbox->ack, true, __ATOMIC_RELEASE);
}
+static void
+sfc_repr_proxy_handle_tx(struct sfc_repr_proxy_dp_txq *rp_txq,
+ struct sfc_repr_proxy_txq *repr_txq)
+{
+ /*
+ * With multiple representor proxy queues configured it is
+ * possible that not all of the corresponding representor
+ * queues were created. Skip the queues that do not exist.
+ */
+ if (repr_txq->ring == NULL)
+ return;
+
+ if (rp_txq->available < RTE_DIM(rp_txq->tx_pkts)) {
+ rp_txq->available +=
+ rte_ring_sc_dequeue_burst(repr_txq->ring,
+ (void **)(&rp_txq->tx_pkts[rp_txq->available]),
+ RTE_DIM(rp_txq->tx_pkts) - rp_txq->available,
+ NULL);
+
+ if (rp_txq->available == rp_txq->transmitted)
+ return;
+ }
+
+ rp_txq->transmitted += rp_txq->pkt_burst(rp_txq->dp,
+ &rp_txq->tx_pkts[rp_txq->transmitted],
+ rp_txq->available - rp_txq->transmitted);
+
+ if (rp_txq->available == rp_txq->transmitted) {
+ rp_txq->available = 0;
+ rp_txq->transmitted = 0;
+ }
+}
+
static int32_t
sfc_repr_proxy_routine(void *arg)
{
+ struct sfc_repr_proxy_port *port;
struct sfc_repr_proxy *rp = arg;
+ unsigned int i;
sfc_repr_proxy_mbox_handle(rp);
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (!port->started)
+ continue;
+
+ for (i = 0; i < rp->nb_txq; i++)
+ sfc_repr_proxy_handle_tx(&rp->dp_txq[i], &port->txq[i]);
+ }
+
return 0;
}
+static struct sfc_txq_info *
+sfc_repr_proxy_txq_info_get(struct sfc_adapter *sa, unsigned int repr_queue_id)
+{
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy_dp_txq *dp_txq;
+
+ SFC_ASSERT(repr_queue_id < sfc_repr_nb_txq(sas));
+ dp_txq = &sa->repr_proxy.dp_txq[repr_queue_id];
+
+ return &sas->txq_info[dp_txq->sw_index];
+}
+
static int
sfc_repr_proxy_txq_attach(struct sfc_adapter *sa)
{
@@ -289,11 +350,20 @@ sfc_repr_proxy_txq_fini(struct sfc_adapter *sa)
static int
sfc_repr_proxy_txq_start(struct sfc_adapter *sa)
{
+ struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
+ unsigned int i;
sfc_log_init(sa, "entry");
- RTE_SET_USED(rp);
+ for (i = 0; i < sfc_repr_nb_txq(sas); i++) {
+ struct sfc_repr_proxy_dp_txq *txq = &rp->dp_txq[i];
+
+ txq->dp = sfc_repr_proxy_txq_info_get(sa, i)->dp;
+ txq->pkt_burst = sa->eth_dev->tx_pkt_burst;
+ txq->available = 0;
+ txq->transmitted = 0;
+ }
sfc_log_init(sa, "done");
@@ -922,6 +992,8 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
if (rc != 0)
goto fail_txq_start;
+ rp->nb_txq = sfc_repr_nb_txq(sas);
+
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
if (rc != 0 && rc != -EALREADY) {
@@ -1007,6 +1079,9 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
struct sfc_repr_proxy *rp = &sa->repr_proxy;
struct sfc_repr_proxy_port *port;
+ const unsigned int wait_ms_total =
+ SFC_REPR_PROXY_ROUTINE_TERMINATE_TIMEOUT_MS;
+ unsigned int i;
int rc;
sfc_log_init(sa, "entry");
@@ -1050,6 +1125,17 @@ sfc_repr_proxy_stop(struct sfc_adapter *sa)
/* Service lcore may be shared and we never stop it */
+ /*
+ * Wait for the representor proxy routine to finish the last iteration.
+ * Give up on timeout.
+ */
+ for (i = 0; i < wait_ms_total; i++) {
+ if (rte_service_may_be_active(rp->service_id) == 0)
+ break;
+
+ rte_delay_ms(1);
+ }
+
sfc_repr_proxy_rxq_stop(sa);
sfc_repr_proxy_txq_stop(sa);
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index c350713a55..d47e0a431a 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -79,6 +79,13 @@ struct sfc_repr_proxy_dp_rxq {
};
struct sfc_repr_proxy_dp_txq {
+ eth_tx_burst_t pkt_burst;
+ struct sfc_dp_txq *dp;
+
+ unsigned int available;
+ unsigned int transmitted;
+ struct rte_mbuf *tx_pkts[SFC_REPR_PROXY_TX_BURST];
+
sfc_sw_index_t sw_index;
};
@@ -110,6 +117,7 @@ struct sfc_repr_proxy {
struct sfc_repr_proxy_filter mport_filter;
struct sfc_repr_proxy_mbox mbox;
+ unsigned int nb_txq;
};
struct sfc_adapter;
--
2.30.2
* [dpdk-dev] [PATCH v2 23/38] net/sfc: use xword type for EF100 Rx prefix
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (21 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 22/38] net/sfc: implement representor Tx routine Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 24/38] net/sfc: handle ingress m-port in " Andrew Rybchenko
` (15 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
The layout of the EF100 Rx prefix is defined in terms of a 32-byte
value type (xword). Replace oword with xword to avoid truncation.
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ef100_rx.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 1bf04f565a..6e58b8c243 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -379,7 +379,7 @@ static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
static bool
sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
- const efx_oword_t *rx_prefix,
+ const efx_xword_t *rx_prefix,
struct rte_mbuf *m)
{
const efx_word_t *class;
@@ -399,19 +399,19 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
m->packet_type = sfc_ef100_rx_class_decode(*class, &ol_flags);
if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
- EFX_TEST_OWORD_BIT(rx_prefix[0],
+ EFX_TEST_XWORD_BIT(rx_prefix[0],
ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
ol_flags |= PKT_RX_RSS_HASH;
- /* EFX_OWORD_FIELD converts little-endian to CPU */
- m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
+ /* EFX_XWORD_FIELD converts little-endian to CPU */
+ m->hash.rss = EFX_XWORD_FIELD(rx_prefix[0],
ESF_GZ_RX_PREFIX_RSS_HASH);
}
if (rxq->flags & SFC_EF100_RXQ_USER_MARK) {
uint32_t user_mark;
- /* EFX_OWORD_FIELD converts little-endian to CPU */
- user_mark = EFX_OWORD_FIELD(rx_prefix[0],
+ /* EFX_XWORD_FIELD converts little-endian to CPU */
+ user_mark = EFX_XWORD_FIELD(rx_prefix[0],
ESF_GZ_RX_PREFIX_USER_MARK);
if (user_mark != SFC_EF100_USER_MARK_INVALID) {
ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
@@ -480,7 +480,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
while (rxq->ready_pkts > 0 && rx_pkts != rx_pkts_end) {
struct rte_mbuf *pkt;
struct rte_mbuf *lastseg;
- const efx_oword_t *rx_prefix;
+ const efx_xword_t *rx_prefix;
uint16_t pkt_len;
uint16_t seg_len;
bool deliver;
@@ -495,9 +495,9 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
pkt->rearm_data[0] = rxq->rearm_data;
/* data_off already moved past Rx prefix */
- rx_prefix = (const efx_oword_t *)sfc_ef100_rx_pkt_prefix(pkt);
+ rx_prefix = (const efx_xword_t *)sfc_ef100_rx_pkt_prefix(pkt);
- pkt_len = EFX_OWORD_FIELD(rx_prefix[0],
+ pkt_len = EFX_XWORD_FIELD(rx_prefix[0],
ESF_GZ_RX_PREFIX_LENGTH);
SFC_ASSERT(pkt_len > 0);
rte_pktmbuf_pkt_len(pkt) = pkt_len;
--
2.30.2
* [dpdk-dev] [PATCH v2 24/38] net/sfc: handle ingress m-port in EF100 Rx prefix
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (22 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 23/38] net/sfc: use xword type for EF100 Rx prefix Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 25/38] net/sfc: implement representor Rx routine Andrew Rybchenko
` (14 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Set the ingress m-port dynamic mbuf field on EF100.
For a given PF, the Rx queues of its representor
devices are served by the single Rx queue operated
by the PF representor proxy facility. This field is
the means to demultiplex traffic hitting that queue.
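For context, a hedged sketch of the dynamic mbuf field mechanism used here;
the field name and accessors below are illustrative, while the driver
registers its own field and keeps its offset in sfc_dp_mport_offset:

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static const struct rte_mbuf_dynfield mport_field_desc = {
	.name = "example_ingress_mport",	/* assumed name */
	.size = sizeof(uint32_t),
	.align = __alignof__(uint32_t),
};

static int mport_offset = -1;

static int
mport_field_register(void)
{
	mport_offset = rte_mbuf_dynfield_register(&mport_field_desc);
	return (mport_offset < 0) ? -1 : 0;
}

static uint32_t
mport_field_read(const struct rte_mbuf *m)
{
	/* Valid only when the corresponding ol_flags bit is set. */
	return *RTE_MBUF_DYNFIELD(m, mport_offset, uint32_t *);
}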
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_ef100_rx.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 6e58b8c243..378c0314ae 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -62,6 +62,7 @@ struct sfc_ef100_rxq {
#define SFC_EF100_RXQ_RSS_HASH 0x10
#define SFC_EF100_RXQ_USER_MARK 0x20
#define SFC_EF100_RXQ_FLAG_INTR_EN 0x40
+#define SFC_EF100_RXQ_INGRESS_MPORT 0x80
unsigned int ptr_mask;
unsigned int evq_phase_bit_shift;
unsigned int ready_pkts;
@@ -370,6 +371,8 @@ static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
SFC_EF100_RX_PREFIX_FIELD(LENGTH, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(RSS_HASH_VALID, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(CLASS, B_FALSE),
+ EFX_RX_PREFIX_FIELD(INGRESS_MPORT,
+ ESF_GZ_RX_PREFIX_INGRESS_MPORT, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(RSS_HASH, B_FALSE),
SFC_EF100_RX_PREFIX_FIELD(USER_MARK, B_FALSE),
@@ -419,6 +422,15 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
}
}
+ if (rxq->flags & SFC_EF100_RXQ_INGRESS_MPORT) {
+ ol_flags |= sfc_dp_mport_override;
+ *RTE_MBUF_DYNFIELD(m,
+ sfc_dp_mport_offset,
+ typeof(&((efx_mport_id_t *)0)->id)) =
+ EFX_XWORD_FIELD(rx_prefix[0],
+ ESF_GZ_RX_PREFIX_INGRESS_MPORT);
+ }
+
m->ol_flags = ol_flags;
return true;
}
@@ -806,6 +818,12 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
else
rxq->flags &= ~SFC_EF100_RXQ_USER_MARK;
+ if ((unsup_rx_prefix_fields &
+ (1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT)) == 0)
+ rxq->flags |= SFC_EF100_RXQ_INGRESS_MPORT;
+ else
+ rxq->flags &= ~SFC_EF100_RXQ_INGRESS_MPORT;
+
rxq->prefix_size = pinfo->erpl_length;
rxq->rearm_data = sfc_ef100_mk_mbuf_rearm_data(rxq->dp.dpq.port_id,
rxq->prefix_size);
--
2.30.2
* [dpdk-dev] [PATCH v2 25/38] net/sfc: implement representor Rx routine
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (23 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 24/38] net/sfc: handle ingress m-port in " Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 26/38] net/sfc: add simple port representor statistics Andrew Rybchenko
` (13 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Implement traffic forwarding from virtual functions to representor
Rx queues in the representor and the representor proxy.
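As a minimal illustration (assumed application code), receiving on a
representor uses the standard API; the proxy has already routed each mbuf by
its ingress m-port, so m->port identifies the representor:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
repr_rx_example(uint16_t repr_port_id, struct rte_mbuf **pkts, uint16_t n)
{
	/* Ends up in sfc_repr_rx_burst(), which dequeues mbufs that the
	 * proxy routine has already demultiplexed to this port's ring.
	 */
	return rte_eth_rx_burst(repr_port_id, 0, pkts, n);
}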
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 12 +++
drivers/net/sfc/sfc_repr_proxy.c | 134 +++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_repr_proxy.h | 11 +++
3 files changed, 157 insertions(+)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 3e74313b12..067496c5f0 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -160,6 +160,16 @@ sfc_repr_tx_queue_stop(void *queue)
rte_ring_reset(txq->ring);
}
+static uint16_t
+sfc_repr_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+ struct sfc_repr_rxq *rxq = rx_queue;
+ void **objs = (void *)&rx_pkts[0];
+
+ /* mbufs port is already filled correctly by representors proxy */
+ return rte_ring_sc_dequeue_burst(rxq->ring, objs, nb_pkts, NULL);
+}
+
static uint16_t
sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
@@ -803,6 +813,7 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
(void)sfc_repr_proxy_del_port(srs->pf_port_id, srs->repr_id);
+ dev->rx_pkt_burst = NULL;
dev->tx_pkt_burst = NULL;
dev->dev_ops = NULL;
@@ -924,6 +935,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
goto fail_mac_addrs;
}
+ dev->rx_pkt_burst = sfc_repr_rx_burst;
dev->tx_pkt_burst = sfc_repr_tx_burst;
dev->dev_ops = &sfc_repr_dev_ops;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index d8934bab65..535b07ea52 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -18,6 +18,7 @@
#include "sfc_ev.h"
#include "sfc_rx.h"
#include "sfc_tx.h"
+#include "sfc_dp_rx.h"
/**
* Amount of time to wait for the representor proxy routine (which is
@@ -31,6 +32,8 @@
*/
#define SFC_REPR_PROXY_ROUTINE_TERMINATE_TIMEOUT_MS 10000
+#define SFC_REPR_INVALID_ROUTE_PORT_ID (UINT16_MAX)
+
static struct sfc_repr_proxy *
sfc_repr_proxy_by_adapter(struct sfc_adapter *sa)
{
@@ -187,6 +190,113 @@ sfc_repr_proxy_handle_tx(struct sfc_repr_proxy_dp_txq *rp_txq,
}
}
+static struct sfc_repr_proxy_port *
+sfc_repr_proxy_rx_route_mbuf(struct sfc_repr_proxy *rp, struct rte_mbuf *m)
+{
+ struct sfc_repr_proxy_port *port;
+ efx_mport_id_t mport_id;
+
+ mport_id.id = *RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset,
+ typeof(&((efx_mport_id_t *)0)->id));
+
+ TAILQ_FOREACH(port, &rp->ports, entries) {
+ if (port->egress_mport.id == mport_id.id) {
+ m->port = port->rte_port_id;
+ m->ol_flags &= ~sfc_dp_mport_override;
+ return port;
+ }
+ }
+
+ return NULL;
+}
+
+/*
+ * Returns true if a packet is encountered which should be forwarded to a
+ * port which is different from the one that is currently routed.
+ */
+static bool
+sfc_repr_proxy_rx_route(struct sfc_repr_proxy *rp,
+ struct sfc_repr_proxy_dp_rxq *rp_rxq)
+{
+ unsigned int i;
+
+ for (i = rp_rxq->routed;
+ i < rp_rxq->available && !rp_rxq->stop_route;
+ i++, rp_rxq->routed++) {
+ struct sfc_repr_proxy_port *port;
+ struct rte_mbuf *m = rp_rxq->pkts[i];
+
+ port = sfc_repr_proxy_rx_route_mbuf(rp, m);
+ /* Cannot find destination representor */
+ if (port == NULL) {
+ /* Effectively drop the packet */
+ rp_rxq->forwarded++;
+ continue;
+ }
+
+ /* Currently routed packets are mapped to a different port */
+ if (port->repr_id != rp_rxq->route_port_id &&
+ rp_rxq->route_port_id != SFC_REPR_INVALID_ROUTE_PORT_ID)
+ return true;
+
+ rp_rxq->route_port_id = port->repr_id;
+ }
+
+ return false;
+}
+
+static void
+sfc_repr_proxy_rx_forward(struct sfc_repr_proxy *rp,
+ struct sfc_repr_proxy_dp_rxq *rp_rxq)
+{
+ struct sfc_repr_proxy_port *port;
+
+ if (rp_rxq->route_port_id != SFC_REPR_INVALID_ROUTE_PORT_ID) {
+ port = sfc_repr_proxy_find_port(rp, rp_rxq->route_port_id);
+
+ if (port != NULL && port->started) {
+ rp_rxq->forwarded +=
+ rte_ring_sp_enqueue_burst(port->rxq[0].ring,
+ (void **)(&rp_rxq->pkts[rp_rxq->forwarded]),
+ rp_rxq->routed - rp_rxq->forwarded, NULL);
+ } else {
+ /* Drop all routed packets if the port is not started */
+ rp_rxq->forwarded = rp_rxq->routed;
+ }
+ }
+
+ if (rp_rxq->forwarded == rp_rxq->routed) {
+ rp_rxq->route_port_id = SFC_REPR_INVALID_ROUTE_PORT_ID;
+ rp_rxq->stop_route = false;
+ } else {
+ /* Stall packet routing if not all packets were forwarded */
+ rp_rxq->stop_route = true;
+ }
+
+ if (rp_rxq->available == rp_rxq->forwarded)
+ rp_rxq->available = rp_rxq->forwarded = rp_rxq->routed = 0;
+}
+
+static void
+sfc_repr_proxy_handle_rx(struct sfc_repr_proxy *rp,
+ struct sfc_repr_proxy_dp_rxq *rp_rxq)
+{
+ bool route_again;
+
+ if (rp_rxq->available < RTE_DIM(rp_rxq->pkts)) {
+ rp_rxq->available += rp_rxq->pkt_burst(rp_rxq->dp,
+ &rp_rxq->pkts[rp_rxq->available],
+ RTE_DIM(rp_rxq->pkts) - rp_rxq->available);
+ if (rp_rxq->available == rp_rxq->forwarded)
+ return;
+ }
+
+ do {
+ route_again = sfc_repr_proxy_rx_route(rp, rp_rxq);
+ sfc_repr_proxy_rx_forward(rp, rp_rxq);
+ } while (route_again && !rp_rxq->stop_route);
+}
+
static int32_t
sfc_repr_proxy_routine(void *arg)
{
@@ -204,6 +314,9 @@ sfc_repr_proxy_routine(void *arg)
sfc_repr_proxy_handle_tx(&rp->dp_txq[i], &port->txq[i]);
}
+ for (i = 0; i < rp->nb_rxq; i++)
+ sfc_repr_proxy_handle_rx(rp, &rp->dp_rxq[i]);
+
return 0;
}
@@ -412,6 +525,18 @@ sfc_repr_proxy_rxq_detach(struct sfc_adapter *sa)
sfc_log_init(sa, "done");
}
+static struct sfc_rxq_info *
+sfc_repr_proxy_rxq_info_get(struct sfc_adapter *sa, unsigned int repr_queue_id)
+{
+ struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+ struct sfc_repr_proxy_dp_rxq *dp_rxq;
+
+ SFC_ASSERT(repr_queue_id < sfc_repr_nb_rxq(sas));
+ dp_rxq = &sa->repr_proxy.dp_rxq[repr_queue_id];
+
+ return &sas->rxq_info[dp_rxq->sw_index];
+}
+
static int
sfc_repr_proxy_rxq_init(struct sfc_adapter *sa,
struct sfc_repr_proxy_dp_rxq *rxq)
@@ -539,6 +664,14 @@ sfc_repr_proxy_rxq_start(struct sfc_adapter *sa)
i);
goto fail_start;
}
+
+ rxq->dp = sfc_repr_proxy_rxq_info_get(sa, i)->dp;
+ rxq->pkt_burst = sa->eth_dev->rx_pkt_burst;
+ rxq->available = 0;
+ rxq->routed = 0;
+ rxq->forwarded = 0;
+ rxq->stop_route = false;
+ rxq->route_port_id = SFC_REPR_INVALID_ROUTE_PORT_ID;
}
sfc_log_init(sa, "done");
@@ -993,6 +1126,7 @@ sfc_repr_proxy_start(struct sfc_adapter *sa)
goto fail_txq_start;
rp->nb_txq = sfc_repr_nb_txq(sas);
+ rp->nb_rxq = sfc_repr_nb_rxq(sas);
/* Service core may be in "stopped" state, start it */
rc = rte_service_lcore_start(rp->service_core_id);
diff --git a/drivers/net/sfc/sfc_repr_proxy.h b/drivers/net/sfc/sfc_repr_proxy.h
index d47e0a431a..b49b1a2a96 100644
--- a/drivers/net/sfc/sfc_repr_proxy.h
+++ b/drivers/net/sfc/sfc_repr_proxy.h
@@ -75,6 +75,16 @@ struct sfc_repr_proxy_dp_rxq {
struct rte_mempool *mp;
unsigned int ref_count;
+ eth_rx_burst_t pkt_burst;
+ struct sfc_dp_rxq *dp;
+
+ uint16_t route_port_id;
+ bool stop_route;
+ unsigned int available;
+ unsigned int forwarded;
+ unsigned int routed;
+ struct rte_mbuf *pkts[SFC_REPR_PROXY_TX_BURST];
+
sfc_sw_index_t sw_index;
};
@@ -118,6 +128,7 @@ struct sfc_repr_proxy {
struct sfc_repr_proxy_mbox mbox;
unsigned int nb_txq;
+ unsigned int nb_rxq;
};
struct sfc_adapter;
--
2.30.2
* [dpdk-dev] [PATCH v2 26/38] net/sfc: add simple port representor statistics
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (24 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 25/38] net/sfc: implement representor Rx routine Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 27/38] net/sfc: free MAE lock once switch domain is assigned Andrew Rybchenko
` (12 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov
From: Igor Romanov <igor.romanov@oktetlabs.ru>
Gather statistics of enqueued and dequeued packets in the Rx and Tx burst
callbacks and report them in the stats_get callback.
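A minimal sketch (assumed application code) of retrieving the counters
gathered by these callbacks:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
repr_stats_example(uint16_t repr_port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(repr_port_id, &stats) == 0)
		printf("rx %" PRIu64 "/%" PRIu64 " tx %" PRIu64 "/%" PRIu64 "\n",
		       stats.ipackets, stats.ibytes,
		       stats.opackets, stats.obytes);
}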
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
drivers/net/sfc/sfc_repr.c | 60 ++++++++++++++++++++++++++++++++++++--
1 file changed, 58 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 067496c5f0..87f10092c3 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -32,15 +32,21 @@ struct sfc_repr_shared {
uint16_t switch_port_id;
};
+struct sfc_repr_queue_stats {
+ union sfc_pkts_bytes packets_bytes;
+};
+
struct sfc_repr_rxq {
/* Datapath members */
struct rte_ring *ring;
+ struct sfc_repr_queue_stats stats;
};
struct sfc_repr_txq {
/* Datapath members */
struct rte_ring *ring;
efx_mport_id_t egress_mport;
+ struct sfc_repr_queue_stats stats;
};
/** Primary process representor private data */
@@ -165,15 +171,30 @@ sfc_repr_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
struct sfc_repr_rxq *rxq = rx_queue;
void **objs = (void *)&rx_pkts[0];
+ unsigned int n_rx;
/* mbufs port is already filled correctly by representors proxy */
- return rte_ring_sc_dequeue_burst(rxq->ring, objs, nb_pkts, NULL);
+ n_rx = rte_ring_sc_dequeue_burst(rxq->ring, objs, nb_pkts, NULL);
+
+ if (n_rx > 0) {
+ unsigned int n_bytes = 0;
+ unsigned int i = 0;
+
+ do {
+ n_bytes += rx_pkts[i]->pkt_len;
+ } while (++i < n_rx);
+
+ sfc_pkts_bytes_add(&rxq->stats.packets_bytes, n_rx, n_bytes);
+ }
+
+ return n_rx;
}
static uint16_t
sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
struct sfc_repr_txq *txq = tx_queue;
+ unsigned int n_bytes = 0;
unsigned int n_tx;
void **objs;
uint16_t i;
@@ -193,6 +214,7 @@ sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
m->ol_flags |= sfc_dp_mport_override;
*RTE_MBUF_DYNFIELD(m, sfc_dp_mport_offset,
efx_mport_id_t *) = txq->egress_mport;
+ n_bytes += tx_pkts[i]->pkt_len;
}
objs = (void *)&tx_pkts[0];
@@ -202,14 +224,18 @@ sfc_repr_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
* Remove m-port override flag from packets that were not enqueued
* Setting the flag only for enqueued packets after the burst is
* not possible since the ownership of enqueued packets is
- * transferred to representor proxy.
+ * transferred to representor proxy. The same logic applies to
+ * counting the enqueued packets' bytes.
*/
for (i = n_tx; i < nb_pkts; ++i) {
struct rte_mbuf *m = tx_pkts[i];
m->ol_flags &= ~sfc_dp_mport_override;
+ n_bytes -= m->pkt_len;
}
+ sfc_pkts_bytes_add(&txq->stats.packets_bytes, n_tx, n_bytes);
+
return n_tx;
}
@@ -827,6 +853,35 @@ sfc_repr_dev_close(struct rte_eth_dev *dev)
return 0;
}
+static int
+sfc_repr_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+ union sfc_pkts_bytes queue_stats;
+ uint16_t i;
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ struct sfc_repr_rxq *rxq = dev->data->rx_queues[i];
+
+ sfc_pkts_bytes_get(&rxq->stats.packets_bytes,
+ &queue_stats);
+
+ stats->ipackets += queue_stats.pkts;
+ stats->ibytes += queue_stats.bytes;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ struct sfc_repr_txq *txq = dev->data->tx_queues[i];
+
+ sfc_pkts_bytes_get(&txq->stats.packets_bytes,
+ &queue_stats);
+
+ stats->opackets += queue_stats.pkts;
+ stats->obytes += queue_stats.bytes;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_configure = sfc_repr_dev_configure,
.dev_start = sfc_repr_dev_start,
@@ -834,6 +889,7 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
.dev_close = sfc_repr_dev_close,
.dev_infos_get = sfc_repr_dev_infos_get,
.link_update = sfc_repr_dev_link_update,
+ .stats_get = sfc_repr_stats_get,
.rx_queue_setup = sfc_repr_rx_queue_setup,
.rx_queue_release = sfc_repr_rx_queue_release,
.tx_queue_setup = sfc_repr_tx_queue_setup,
--
2.30.2
* [dpdk-dev] [PATCH v2 27/38] net/sfc: free MAE lock once switch domain is assigned
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (25 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 26/38] net/sfc: add simple port representor statistics Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 28/38] common/sfc_efx/base: add multi-host function M-port selector Andrew Rybchenko
` (11 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, stable, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
If for some reason the hardware switch ID initialization function fails,
the MAE lock is still held after the function finishes. This patch
releases the lock on that error path as well.
Fixes: 1e7fbdf0ba19 ("net/sfc: support concept of switch domains/ports")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_switch.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index c37cdf4a61..80c884a599 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -214,9 +214,9 @@ sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
fail_mem_alloc:
sfc_hw_switch_id_fini(sa, hw_switch_id);
- rte_spinlock_unlock(&sfc_mae_switch.lock);
fail_hw_switch_id_init:
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
return rc;
}
--
2.30.2
* [dpdk-dev] [PATCH v2 28/38] common/sfc_efx/base: add multi-host function M-port selector
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (26 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 27/38] net/sfc: free MAE lock once switch domain is assigned Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs Andrew Rybchenko
` (10 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Provide a helper function to compose a multi-host aware PCIe function
M-port selector.
The firmware uses different sets of values to represent a PCIe
interface in M-port selectors and elsewhere. To avoid having the user
perform the conversion themselves, it is now done automatically when a
selector is constructed.
In addition, a type has been added to libefx to enumerate the possible
PCIe interfaces. This abstracts the different representations away from
the users.
This allows matching traffic coming from an arbitrary PCIe endpoint of
the NIC and redirecting traffic to it.
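For illustration, a minimal usage sketch (hypothetical caller code, not
part of the patch) composing a selector for PF 0 on the NIC-embedded
interface; the result can then be passed to
efx_mae_match_spec_mport_set() or efx_mae_action_set_populate_deliver():

#include "efx.h"

/*
 * Hypothetical caller, for illustration only.  EFX_PCI_VF_INVALID
 * selects the PF itself rather than one of its VFs.
 */
static __checkReturn efx_rc_t
get_embedded_pf0_mport(
	__out efx_mport_sel_t *mportp)
{
	return (efx_mae_mport_by_pcie_mh_function(
	    EFX_PCIE_INTERFACE_NIC_EMBEDDED, 0, EFX_PCI_VF_INVALID, mportp));
}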
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 22 +++++++
drivers/common/sfc_efx/base/efx_mae.c | 86 +++++++++++++++++++++++----
drivers/common/sfc_efx/version.map | 1 +
3 files changed, 96 insertions(+), 13 deletions(-)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 0a178128ba..159e7957a3 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -82,6 +82,13 @@ efx_family(
#if EFSYS_OPT_PCI
+/* PCIe interface numbers for multi-host configurations. */
+typedef enum efx_pcie_interface_e {
+ EFX_PCIE_INTERFACE_CALLER = 1000,
+ EFX_PCIE_INTERFACE_HOST_PRIMARY,
+ EFX_PCIE_INTERFACE_NIC_EMBEDDED,
+} efx_pcie_interface_t;
+
typedef struct efx_pci_ops_s {
/*
* Function for reading PCIe configuration space.
@@ -4237,6 +4244,21 @@ efx_mae_mport_by_pcie_function(
__in uint32_t vf,
__out efx_mport_sel_t *mportp);
+/*
+ * Get MPORT selector of a multi-host PCIe function.
+ *
+ * The resulting MPORT selector is opaque to the caller and can be
+ * passed as an argument to efx_mae_match_spec_mport_set()
+ * and efx_mae_action_set_populate_deliver().
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_mport_by_pcie_mh_function(
+ __in efx_pcie_interface_t intf,
+ __in uint32_t pf,
+ __in uint32_t vf,
+ __out efx_mport_sel_t *mportp);
+
/*
* Get MPORT selector by an MPORT ID
*
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 3f498fe189..37cc48eafc 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -727,35 +727,95 @@ efx_mae_mport_by_pcie_function(
efx_dword_t dword;
efx_rc_t rc;
+ rc = efx_mae_mport_by_pcie_mh_function(EFX_PCIE_INTERFACE_CALLER,
+ pf, vf, mportp);
+ if (rc != 0)
+ goto fail1;
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+static __checkReturn efx_rc_t
+efx_mae_intf_to_selector(
+ __in efx_pcie_interface_t intf,
+ __out uint32_t *selector_intfp)
+{
+ efx_rc_t rc;
+
+ switch (intf) {
+ case EFX_PCIE_INTERFACE_HOST_PRIMARY:
+ EFX_STATIC_ASSERT(MAE_MPORT_SELECTOR_HOST_PRIMARY <=
+ EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_INTF_ID));
+ *selector_intfp = MAE_MPORT_SELECTOR_HOST_PRIMARY;
+ break;
+ case EFX_PCIE_INTERFACE_NIC_EMBEDDED:
+ EFX_STATIC_ASSERT(MAE_MPORT_SELECTOR_NIC_EMBEDDED <=
+ EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_INTF_ID));
+ *selector_intfp = MAE_MPORT_SELECTOR_NIC_EMBEDDED;
+ break;
+ case EFX_PCIE_INTERFACE_CALLER:
+ EFX_STATIC_ASSERT(MAE_MPORT_SELECTOR_CALLER_INTF <=
+ EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_INTF_ID));
+ *selector_intfp = MAE_MPORT_SELECTOR_CALLER_INTF;
+ break;
+ default:
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_mport_by_pcie_mh_function(
+ __in efx_pcie_interface_t intf,
+ __in uint32_t pf,
+ __in uint32_t vf,
+ __out efx_mport_sel_t *mportp)
+{
+ uint32_t selector_intf;
+ efx_dword_t dword;
+ efx_rc_t rc;
+
EFX_STATIC_ASSERT(EFX_PCI_VF_INVALID ==
MAE_MPORT_SELECTOR_FUNC_VF_ID_NULL);
- if (pf > EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_PF_ID)) {
- rc = EINVAL;
+ rc = efx_mae_intf_to_selector(intf, &selector_intf);
+ if (rc != 0)
goto fail1;
+
+ if (pf > EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_MH_PF_ID)) {
+ rc = EINVAL;
+ goto fail2;
}
if (vf > EFX_MASK32(MAE_MPORT_SELECTOR_FUNC_VF_ID)) {
rc = EINVAL;
- goto fail2;
+ goto fail3;
}
- EFX_POPULATE_DWORD_3(dword,
- MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_FUNC,
- MAE_MPORT_SELECTOR_FUNC_PF_ID, pf,
+
+ EFX_POPULATE_DWORD_4(dword,
+ MAE_MPORT_SELECTOR_TYPE, MAE_MPORT_SELECTOR_TYPE_MH_FUNC,
+ MAE_MPORT_SELECTOR_FUNC_INTF_ID, selector_intf,
+ MAE_MPORT_SELECTOR_FUNC_MH_PF_ID, pf,
MAE_MPORT_SELECTOR_FUNC_VF_ID, vf);
memset(mportp, 0, sizeof (*mportp));
- /*
- * The constructed DWORD is little-endian,
- * but the resulting value is meant to be
- * passed to MCDIs, where it will undergo
- * host-order to little endian conversion.
- */
- mportp->sel = EFX_DWORD_FIELD(dword, EFX_DWORD_0);
+ mportp->sel = dword.ed_u32[0];
return (0);
+fail3:
+ EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 3488367f68..225909892b 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -125,6 +125,7 @@ INTERNAL {
efx_mae_match_specs_class_cmp;
efx_mae_match_specs_equal;
efx_mae_mport_by_pcie_function;
+ efx_mae_mport_by_pcie_mh_function;
efx_mae_mport_by_phy_port;
efx_mae_mport_by_id;
efx_mae_mport_free;
--
2.30.2
* [dpdk-dev] [PATCH v2 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (27 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 28/38] common/sfc_efx/base: add multi-host function M-port selector Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 30/38] common/sfc_efx/base: add a means to read MAE mport journal Andrew Rybchenko
` (9 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
This information is required to fully identify the function. Add it to
the NIC configuration structure for easy access.
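As a rough illustration (hypothetical helper, not from the patch), the
interface can now be read straight from the NIC configuration alongside
the PF/VF numbers:

#include "efx.h"

/* Hypothetical helper for illustration only. */
static boolean_t
nic_is_on_embedded_soc(
	__in efx_nic_t *enp)
{
	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);

	/* enc_intf holds one of the efx_pcie_interface_t values. */
	return ((encp->enc_intf == EFX_PCIE_INTERFACE_NIC_EMBEDDED) ?
	    B_TRUE : B_FALSE);
}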
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/ef10_impl.h | 3 +-
drivers/common/sfc_efx/base/ef10_nic.c | 4 +-
drivers/common/sfc_efx/base/efx.h | 1 +
drivers/common/sfc_efx/base/efx_impl.h | 6 +++
drivers/common/sfc_efx/base/efx_mcdi.c | 55 +++++++++++++++++++++++--
5 files changed, 64 insertions(+), 5 deletions(-)
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 7c8d51b7a5..d48f238479 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -1372,7 +1372,8 @@ extern __checkReturn efx_rc_t
efx_mcdi_get_function_info(
__in efx_nic_t *enp,
__out uint32_t *pfp,
- __out_opt uint32_t *vfp);
+ __out_opt uint32_t *vfp,
+ __out_opt efx_pcie_interface_t *intfp);
LIBEFX_INTERNAL
extern __checkReturn efx_rc_t
diff --git a/drivers/common/sfc_efx/base/ef10_nic.c b/drivers/common/sfc_efx/base/ef10_nic.c
index eda0ad3068..3cd9ff89d0 100644
--- a/drivers/common/sfc_efx/base/ef10_nic.c
+++ b/drivers/common/sfc_efx/base/ef10_nic.c
@@ -1847,6 +1847,7 @@ efx_mcdi_nic_board_cfg(
efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
ef10_link_state_t els;
efx_port_t *epp = &(enp->en_port);
+ efx_pcie_interface_t intf;
uint32_t board_type = 0;
uint32_t base, nvec;
uint32_t port;
@@ -1875,11 +1876,12 @@ efx_mcdi_nic_board_cfg(
* - PCIe PF: pf = PF number, vf = 0xffff.
* - PCIe VF: pf = parent PF, vf = VF number.
*/
- if ((rc = efx_mcdi_get_function_info(enp, &pf, &vf)) != 0)
+ if ((rc = efx_mcdi_get_function_info(enp, &pf, &vf, &intf)) != 0)
goto fail3;
encp->enc_pf = pf;
encp->enc_vf = vf;
+ encp->enc_intf = intf;
if ((rc = ef10_mcdi_get_pf_count(enp, &encp->enc_hw_pf_count)) != 0)
goto fail4;
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 159e7957a3..996126217e 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1511,6 +1511,7 @@ typedef struct efx_nic_cfg_s {
uint32_t enc_bist_mask;
#endif /* EFSYS_OPT_BIST */
#if EFSYS_OPT_RIVERHEAD || EFX_OPTS_EF10()
+ efx_pcie_interface_t enc_intf;
uint32_t enc_pf;
uint32_t enc_vf;
uint32_t enc_privilege_mask;
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 992edbabe3..e0efbb8cdd 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1529,6 +1529,12 @@ efx_mcdi_get_workarounds(
#if EFSYS_OPT_RIVERHEAD || EFX_OPTS_EF10()
+LIBEFX_INTERNAL
+extern __checkReturn efx_rc_t
+efx_mcdi_intf_from_pcie(
+ __in uint32_t pcie_intf,
+ __out efx_pcie_interface_t *efx_intf);
+
LIBEFX_INTERNAL
extern __checkReturn efx_rc_t
efx_mcdi_init_evq(
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index b68fc0503d..69bf7ce70f 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -2130,6 +2130,36 @@ efx_mcdi_mac_stats_periodic(
#if EFSYS_OPT_RIVERHEAD || EFX_OPTS_EF10()
+ __checkReturn efx_rc_t
+efx_mcdi_intf_from_pcie(
+ __in uint32_t pcie_intf,
+ __out efx_pcie_interface_t *efx_intf)
+{
+ efx_rc_t rc;
+
+ switch (pcie_intf) {
+ case PCIE_INTERFACE_CALLER:
+ *efx_intf = EFX_PCIE_INTERFACE_CALLER;
+ break;
+ case PCIE_INTERFACE_HOST_PRIMARY:
+ *efx_intf = EFX_PCIE_INTERFACE_HOST_PRIMARY;
+ break;
+ case PCIE_INTERFACE_NIC_EMBEDDED:
+ *efx_intf = EFX_PCIE_INTERFACE_NIC_EMBEDDED;
+ break;
+ default:
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ return (0);
+
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+ return (rc);
+}
+
/*
* This function returns the pf and vf number of a function. If it is a pf the
* vf number is 0xffff. The vf number is the index of the vf on that
@@ -2140,18 +2170,21 @@ efx_mcdi_mac_stats_periodic(
efx_mcdi_get_function_info(
__in efx_nic_t *enp,
__out uint32_t *pfp,
- __out_opt uint32_t *vfp)
+ __out_opt uint32_t *vfp,
+ __out_opt efx_pcie_interface_t *intfp)
{
+ efx_pcie_interface_t intf;
efx_mcdi_req_t req;
EFX_MCDI_DECLARE_BUF(payload, MC_CMD_GET_FUNCTION_INFO_IN_LEN,
- MC_CMD_GET_FUNCTION_INFO_OUT_LEN);
+ MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN);
+ uint32_t pcie_intf;
efx_rc_t rc;
req.emr_cmd = MC_CMD_GET_FUNCTION_INFO;
req.emr_in_buf = payload;
req.emr_in_length = MC_CMD_GET_FUNCTION_INFO_IN_LEN;
req.emr_out_buf = payload;
- req.emr_out_length = MC_CMD_GET_FUNCTION_INFO_OUT_LEN;
+ req.emr_out_length = MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN;
efx_mcdi_execute(enp, &req);
@@ -2169,8 +2202,24 @@ efx_mcdi_get_function_info(
if (vfp != NULL)
*vfp = MCDI_OUT_DWORD(req, GET_FUNCTION_INFO_OUT_VF);
+ if (req.emr_out_length < MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN) {
+ intf = EFX_PCIE_INTERFACE_HOST_PRIMARY;
+ } else {
+ pcie_intf = MCDI_OUT_DWORD(req,
+ GET_FUNCTION_INFO_OUT_V2_INTF);
+
+ rc = efx_mcdi_intf_from_pcie(pcie_intf, &intf);
+ if (rc != 0)
+ goto fail3;
+ }
+
+ if (intfp != NULL)
+ *intfp = intf;
+
return (0);
+fail3:
+ EFSYS_PROBE(fail3);
fail2:
EFSYS_PROBE(fail2);
fail1:
--
2.30.2
* [dpdk-dev] [PATCH v2 30/38] common/sfc_efx/base: add a means to read MAE mport journal
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (28 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles Andrew Rybchenko
` (8 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
This is required to provide the driver with the current state of mports.
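A minimal sketch of the intended usage (hypothetical code, not from the
patch): the journal is drained through a callback which sees each
descriptor exactly once, so any state of interest has to be captured
inside it.

#include "efx.h"

/* Hypothetical callback for illustration: count live VNIC m-ports. */
static __checkReturn efx_rc_t
count_vnic_mports_cb(
	__in void *cb_datap,
	__in efx_mport_desc_t *mportp,
	__in size_t mport_len)
{
	unsigned int *countp = cb_datap;

	if (mport_len != sizeof (*mportp))
		return (EINVAL);

	if (mportp->emd_type == EFX_MPORT_TYPE_VNIC &&
	    mportp->emd_zombie == B_FALSE)
		(*countp)++;

	return (0);
}

static __checkReturn efx_rc_t
count_vnic_mports(
	__in efx_nic_t *enp,
	__out unsigned int *countp)
{
	*countp = 0;
	return (efx_mae_read_mport_journal(enp, count_vnic_mports_cb,
	    countp));
}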
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 56 +++++++
drivers/common/sfc_efx/base/efx_mae.c | 224 +++++++++++++++++++++++++
drivers/common/sfc_efx/base/efx_mcdi.h | 54 ++++++
drivers/common/sfc_efx/version.map | 1 +
4 files changed, 335 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 996126217e..e77b297950 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4205,6 +4205,42 @@ typedef struct efx_mport_id_s {
uint32_t id;
} efx_mport_id_t;
+typedef enum efx_mport_type_e {
+ EFX_MPORT_TYPE_NET_PORT = 0,
+ EFX_MPORT_TYPE_ALIAS,
+ EFX_MPORT_TYPE_VNIC,
+} efx_mport_type_t;
+
+typedef enum efx_mport_vnic_client_type_e {
+ EFX_MPORT_VNIC_CLIENT_FUNCTION = 1,
+ EFX_MPORT_VNIC_CLIENT_PLUGIN,
+} efx_mport_vnic_client_type_t;
+
+typedef struct efx_mport_desc_s {
+ efx_mport_id_t emd_id;
+ boolean_t emd_can_receive_on;
+ boolean_t emd_can_deliver_to;
+ boolean_t emd_can_delete;
+ boolean_t emd_zombie;
+ efx_mport_type_t emd_type;
+ union {
+ struct {
+ uint32_t ep_index;
+ } emd_net_port;
+ struct {
+ efx_mport_id_t ea_target_mport_id;
+ } emd_alias;
+ struct {
+ efx_mport_vnic_client_type_t ev_client_type;
+ efx_pcie_interface_t ev_intf;
+ uint16_t ev_pf;
+ uint16_t ev_vf;
+ /* MCDI client handle for this VNIC. */
+ uint32_t ev_handle;
+ } emd_vnic;
+ };
+} efx_mport_desc_t;
+
#define EFX_MPORT_NULL (0U)
/*
@@ -4635,6 +4671,26 @@ efx_mae_mport_free(
__in efx_nic_t *enp,
__in const efx_mport_id_t *mportp);
+typedef __checkReturn efx_rc_t
+(efx_mae_read_mport_journal_cb)(
+ __in void *cb_datap,
+ __in efx_mport_desc_t *mportp,
+ __in size_t mport_len);
+
+/*
+ * Read mport descriptions from the MAE journal (which describes added and
+ * removed mports) and pass them to a user-supplied callback. The user gets
+ * only one chance to process the data it's given. Once the callback function
+ * finishes, that particular mport description will be gone.
+ * The journal will be fully repopulated on PCI reset (efx_nic_reset function).
+ */
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mae_read_mport_journal(
+ __in efx_nic_t *enp,
+ __in efx_mae_read_mport_journal_cb *cbp,
+ __in void *cb_datap);
+
#endif /* EFSYS_OPT_MAE */
#if EFSYS_OPT_VIRTIO
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 37cc48eafc..110addd92d 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -3292,4 +3292,228 @@ efx_mae_mport_free(
return (rc);
}
+static __checkReturn efx_rc_t
+efx_mae_read_mport_journal_single(
+ __in uint8_t *entry_buf,
+ __out efx_mport_desc_t *desc)
+{
+ uint32_t pcie_intf;
+ efx_rc_t rc;
+
+ memset(desc, 0, sizeof (*desc));
+
+ desc->emd_id.id = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_MPORT_ID);
+
+ desc->emd_can_receive_on = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_CAN_RECEIVE_ON);
+
+ desc->emd_can_deliver_to = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_CAN_DELIVER_TO);
+
+ desc->emd_can_delete = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_CAN_DELETE);
+
+ desc->emd_zombie = MCDI_STRUCT_DWORD_FIELD(entry_buf,
+ MAE_MPORT_DESC_V2_FLAGS,
+ MAE_MPORT_DESC_V2_IS_ZOMBIE);
+
+ desc->emd_type = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_MPORT_TYPE);
+
+ /*
+ * We can't check everything here. If some additional checks are
+ * required, they should be performed by the callback function.
+ */
+ switch (desc->emd_type) {
+ case EFX_MPORT_TYPE_NET_PORT:
+ desc->emd_net_port.ep_index =
+ MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_NET_PORT_IDX);
+ break;
+ case EFX_MPORT_TYPE_ALIAS:
+ desc->emd_alias.ea_target_mport_id.id =
+ MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID);
+ break;
+ case EFX_MPORT_TYPE_VNIC:
+ desc->emd_vnic.ev_client_type =
+ MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE);
+ if (desc->emd_vnic.ev_client_type !=
+ EFX_MPORT_VNIC_CLIENT_FUNCTION)
+ break;
+
+ pcie_intf = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE);
+ rc = efx_mcdi_intf_from_pcie(pcie_intf,
+ &desc->emd_vnic.ev_intf);
+ if (rc != 0)
+ goto fail1;
+
+ desc->emd_vnic.ev_pf = MCDI_STRUCT_WORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX);
+ desc->emd_vnic.ev_vf = MCDI_STRUCT_WORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX);
+ desc->emd_vnic.ev_handle = MCDI_STRUCT_DWORD(entry_buf,
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE);
+ break;
+ default:
+ rc = EINVAL;
+ goto fail2;
+ }
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+static __checkReturn efx_rc_t
+efx_mae_read_mport_journal_batch(
+ __in efx_nic_t *enp,
+ __in efx_mae_read_mport_journal_cb *cbp,
+ __in void *cb_datap,
+ __out uint32_t *morep)
+{
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_MAE_MPORT_READ_JOURNAL_IN_LEN,
+ MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMAX_MCDI2);
+ uint32_t n_entries;
+ uint32_t entry_sz;
+ uint8_t *entry_buf;
+ unsigned int i;
+ efx_rc_t rc;
+
+ EFX_STATIC_ASSERT(EFX_MPORT_TYPE_NET_PORT ==
+ MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT);
+ EFX_STATIC_ASSERT(EFX_MPORT_TYPE_ALIAS ==
+ MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS);
+ EFX_STATIC_ASSERT(EFX_MPORT_TYPE_VNIC ==
+ MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC);
+
+ EFX_STATIC_ASSERT(EFX_MPORT_VNIC_CLIENT_FUNCTION ==
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION);
+ EFX_STATIC_ASSERT(EFX_MPORT_VNIC_CLIENT_PLUGIN ==
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN);
+
+ if (cbp == NULL) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_MAE_MPORT_READ_JOURNAL;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_MAE_MPORT_READ_JOURNAL_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMAX_MCDI2;
+
+ MCDI_IN_SET_DWORD(req, MAE_MPORT_READ_JOURNAL_IN_FLAGS, 0);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ if (req.emr_out_length_used <
+ MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMIN) {
+ rc = EMSGSIZE;
+ goto fail3;
+ }
+
+ if (morep != NULL) {
+ *morep = MCDI_OUT_DWORD_FIELD(req,
+ MAE_MPORT_READ_JOURNAL_OUT_FLAGS,
+ MAE_MPORT_READ_JOURNAL_OUT_MORE);
+ }
+ n_entries = MCDI_OUT_DWORD(req,
+ MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_COUNT);
+ entry_sz = MCDI_OUT_DWORD(req,
+ MAE_MPORT_READ_JOURNAL_OUT_SIZEOF_MPORT_DESC);
+ entry_buf = MCDI_OUT2(req, uint8_t,
+ MAE_MPORT_READ_JOURNAL_OUT_MPORT_DESC_DATA);
+
+ if (entry_sz < MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST +
+ MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN) {
+ rc = EINVAL;
+ goto fail4;
+ }
+ if (n_entries * entry_sz / entry_sz != n_entries) {
+ rc = EINVAL;
+ goto fail5;
+ }
+ if (req.emr_out_length_used !=
+ MC_CMD_MAE_MPORT_READ_JOURNAL_OUT_LENMIN + n_entries * entry_sz) {
+ rc = EINVAL;
+ goto fail6;
+ }
+
+ for (i = 0; i < n_entries; i++) {
+ efx_mport_desc_t desc;
+
+ rc = efx_mae_read_mport_journal_single(entry_buf, &desc);
+ if (rc != 0)
+ continue;
+
+ (*cbp)(cb_datap, &desc, sizeof (desc));
+ entry_buf += entry_sz;
+ }
+
+ return (0);
+
+fail6:
+ EFSYS_PROBE(fail6);
+fail5:
+ EFSYS_PROBE(fail5);
+fail4:
+ EFSYS_PROBE(fail4);
+fail3:
+ EFSYS_PROBE(fail3);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mae_read_mport_journal(
+ __in efx_nic_t *enp,
+ __in efx_mae_read_mport_journal_cb *cbp,
+ __in void *cb_datap)
+{
+ const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
+ uint32_t more = 0;
+ efx_rc_t rc;
+
+ if (encp->enc_mae_supported == B_FALSE) {
+ rc = ENOTSUP;
+ goto fail1;
+ }
+
+ do {
+ rc = efx_mae_read_mport_journal_batch(enp, cbp, cb_datap,
+ &more);
+ if (rc != 0)
+ goto fail2;
+ } while (more != 0);
+
+ return (0);
+
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
#endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index 90b70de97b..96f237b1b0 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -462,6 +462,60 @@ efx_mcdi_phy_module_get_info(
EFX_DWORD_FIELD(*(MCDI_OUT2(_emr, efx_dword_t, _ofst) + \
(_idx)), _field)
+#define MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, _type, _arr_ofst, _idx, \
+ _member_ofst) \
+ ((_type *)(MCDI_OUT2(_emr, uint8_t, _arr_ofst) + \
+ _idx * MC_CMD_ ## _arr_ofst ## _LEN + \
+ _member_ofst ## _OFST))
+
+#define MCDI_OUT_INDEXED_MEMBER_DWORD(_emr, _arr_ofst, _idx, \
+ _member_ofst) \
+ EFX_DWORD_FIELD( \
+ *(MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, efx_dword_t, \
+ _arr_ofst, _idx, \
+ _member_ofst)), \
+ EFX_DWORD_0)
+
+#define MCDI_OUT_INDEXED_MEMBER_QWORD(_emr, _arr_ofst, _idx, \
+ _member_ofst) \
+ ((uint64_t)EFX_QWORD_FIELD( \
+ *(MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, efx_qword_t, \
+ _arr_ofst, _idx, \
+ _member_ofst)), \
+ EFX_DWORD_0) | \
+ (uint64_t)EFX_QWORD_FIELD( \
+ *(MCDI_OUT_INDEXED_STRUCT_MEMBER(_emr, efx_qword_t, \
+ _arr_ofst, _idx, \
+ _member_ofst)), \
+ EFX_DWORD_1) << 32)
+
+#define MCDI_STRUCT_MEMBER(_buf, _type, _ofst) \
+ ((_type *)((char *)_buf + _ofst ## _OFST)) \
+
+#define MCDI_STRUCT_BYTE(_buf, _ofst) \
+ EFX_BYTE_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_byte_t, _ofst), \
+ EFX_BYTE_0)
+
+#define MCDI_STRUCT_BYTE_FIELD(_buf, _ofst, _field) \
+ EFX_BYTE_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_byte_t, _ofst), \
+ _field)
+
+#define MCDI_STRUCT_WORD(_buf, _ofst) \
+ EFX_WORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_word_t, _ofst), \
+ EFX_WORD_0)
+
+#define MCDI_STRUCT_WORD_FIELD(_buf, _ofst, _field) \
+ EFX_WORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_word_t, _ofst), \
+ _field)
+
+#define MCDI_STRUCT_DWORD(_buf, _ofst) \
+ EFX_DWORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_dword_t, _ofst), \
+ EFX_DWORD_0)
+
+#define MCDI_STRUCT_DWORD_FIELD(_buf, _ofst, _field) \
+ EFX_DWORD_FIELD(*MCDI_STRUCT_MEMBER(_buf, efx_dword_t, _ofst), \
+ _field)
+
#define MCDI_EV_FIELD(_eqp, _field) \
EFX_QWORD_FIELD(*_eqp, MCDI_EVENT_ ## _field)
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 225909892b..10216bb37d 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -133,6 +133,7 @@ INTERNAL {
efx_mae_mport_invalid;
efx_mae_outer_rule_insert;
efx_mae_outer_rule_remove;
+ efx_mae_read_mport_journal;
efx_mcdi_fini;
efx_mcdi_get_proxy_handle;
--
2.30.2
* [dpdk-dev] [PATCH v2 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (29 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 30/38] common/sfc_efx/base: add a means to read MAE mport journal Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 32/38] net/sfc: maintain controller to EFX interface mapping Andrew Rybchenko
` (7 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Equality checks between VNICs should be done by comparing their client
handles. This means that clients should be able to retrieve client handles
for arbitrary functions and themselves.
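For illustration, a hypothetical snippet (not from the patch) showing
the intended kind of equality check: an m-port journal entry is matched
against the caller's own MCDI client handle instead of PF/VF numbers.

#include "efx.h"

/* Hypothetical helper for illustration only. */
static __checkReturn efx_rc_t
mport_desc_is_self(
	__in efx_nic_t *enp,
	__in const efx_mport_desc_t *mportp,
	__out boolean_t *is_selfp)
{
	uint32_t own_handle;
	efx_rc_t rc;

	rc = efx_mcdi_get_own_client_handle(enp, &own_handle);
	if (rc != 0)
		return (rc);

	*is_selfp = (mportp->emd_type == EFX_MPORT_TYPE_VNIC &&
	    mportp->emd_vnic.ev_handle == own_handle) ? B_TRUE : B_FALSE;

	return (0);
}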
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/common/sfc_efx/base/efx.h | 15 ++++++
drivers/common/sfc_efx/base/efx_mcdi.c | 73 ++++++++++++++++++++++++++
drivers/common/sfc_efx/version.map | 2 +
3 files changed, 90 insertions(+)
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index e77b297950..b61984a8e3 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -391,6 +391,21 @@ extern __checkReturn boolean_t
efx_mcdi_request_abort(
__in efx_nic_t *enp);
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mcdi_get_client_handle(
+ __in efx_nic_t *enp,
+ __in efx_pcie_interface_t intf,
+ __in uint16_t pf,
+ __in uint16_t vf,
+ __out uint32_t *handle);
+
+LIBEFX_API
+extern __checkReturn efx_rc_t
+efx_mcdi_get_own_client_handle(
+ __in efx_nic_t *enp,
+ __out uint32_t *handle);
+
LIBEFX_API
extern void
efx_mcdi_fini(
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index 69bf7ce70f..cdf7181e0d 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -647,6 +647,79 @@ efx_mcdi_request_abort(
return (aborted);
}
+ __checkReturn efx_rc_t
+efx_mcdi_get_client_handle(
+ __in efx_nic_t *enp,
+ __in efx_pcie_interface_t intf,
+ __in uint16_t pf,
+ __in uint16_t vf,
+ __out uint32_t *handle)
+{
+ efx_mcdi_req_t req;
+ EFX_MCDI_DECLARE_BUF(payload,
+ MC_CMD_GET_CLIENT_HANDLE_IN_LEN,
+ MC_CMD_GET_CLIENT_HANDLE_OUT_LEN);
+ efx_rc_t rc;
+
+ if (handle == NULL) {
+ rc = EINVAL;
+ goto fail1;
+ }
+
+ req.emr_cmd = MC_CMD_GET_CLIENT_HANDLE;
+ req.emr_in_buf = payload;
+ req.emr_in_length = MC_CMD_GET_CLIENT_HANDLE_IN_LEN;
+ req.emr_out_buf = payload;
+ req.emr_out_length = MC_CMD_GET_CLIENT_HANDLE_OUT_LEN;
+
+ MCDI_IN_SET_DWORD(req, GET_CLIENT_HANDLE_IN_TYPE,
+ MC_CMD_GET_CLIENT_HANDLE_IN_TYPE_FUNC);
+ MCDI_IN_SET_WORD(req, GET_CLIENT_HANDLE_IN_FUNC_PF, pf);
+ MCDI_IN_SET_WORD(req, GET_CLIENT_HANDLE_IN_FUNC_VF, vf);
+ MCDI_IN_SET_DWORD(req, GET_CLIENT_HANDLE_IN_FUNC_INTF, intf);
+
+ efx_mcdi_execute(enp, &req);
+
+ if (req.emr_rc != 0) {
+ rc = req.emr_rc;
+ goto fail2;
+ }
+
+ if (req.emr_out_length_used < MC_CMD_GET_CLIENT_HANDLE_OUT_LEN) {
+ rc = EMSGSIZE;
+ goto fail3;
+ }
+
+ *handle = MCDI_OUT_DWORD(req, GET_CLIENT_HANDLE_OUT_HANDLE);
+
+ return 0;
+fail3:
+ EFSYS_PROBE(fail3);
+fail2:
+ EFSYS_PROBE(fail2);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
+ __checkReturn efx_rc_t
+efx_mcdi_get_own_client_handle(
+ __in efx_nic_t *enp,
+ __out uint32_t *handle)
+{
+ efx_rc_t rc;
+
+ rc = efx_mcdi_get_client_handle(enp, PCIE_INTERFACE_CALLER,
+ PCIE_FUNCTION_PF_NULL, PCIE_FUNCTION_VF_NULL, handle);
+ if (rc != 0)
+ goto fail1;
+
+ return (0);
+fail1:
+ EFSYS_PROBE1(fail1, efx_rc_t, rc);
+ return (rc);
+}
+
void
efx_mcdi_get_timeout(
__in efx_nic_t *enp,
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 10216bb37d..346deb4b12 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -136,6 +136,8 @@ INTERNAL {
efx_mae_read_mport_journal;
efx_mcdi_fini;
+ efx_mcdi_get_client_handle;
+ efx_mcdi_get_own_client_handle;
efx_mcdi_get_proxy_handle;
efx_mcdi_get_timeout;
efx_mcdi_init;
--
2.30.2
* [dpdk-dev] [PATCH v2 32/38] net/sfc: maintain controller to EFX interface mapping
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (30 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 33/38] net/sfc: store PCI address for represented entities Andrew Rybchenko
` (6 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Newer hardware may have arbitrarily complex controller configurations,
so the mapping has been made dynamic: it is represented by a dynamic
array indexed by controller number, where each element contains an EFX
interface number. Since the number of controllers is expected to be
small, this approach should not hurt performance.
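For illustration, a hypothetical consumer of the mapping (not from the
patch): the array is published once per switch domain with
sfc_mae_switch_domain_map_controllers() and can then be looked up by
DPDK controller number.

#include <errno.h>

#include "sfc_switch.h"

/* Hypothetical helper for illustration only. */
static int
get_controller_intf(uint16_t switch_domain_id, int controller,
		    efx_pcie_interface_t *intfp)
{
	const efx_pcie_interface_t *controllers;
	size_t nb_controllers;
	int rc;

	rc = sfc_mae_switch_domain_controllers(switch_domain_id,
					       &controllers, &nb_controllers);
	if (rc != 0)
		return rc;

	if (controllers == NULL || (size_t)controller >= nb_controllers)
		return ENOENT;

	*intfp = controllers[controller];
	return 0;
}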
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_ethdev.c | 184 +++++++++++++++++++++++++++++++++++
drivers/net/sfc/sfc_switch.c | 57 +++++++++++
drivers/net/sfc/sfc_switch.h | 8 ++
3 files changed, 249 insertions(+)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f69bbde11a..f93b9cc921 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -30,6 +30,7 @@
#include "sfc_dp_rx.h"
#include "sfc_repr.h"
#include "sfc_sw_stats.h"
+#include "sfc_switch.h"
#define SFC_XSTAT_ID_INVALID_VAL UINT64_MAX
#define SFC_XSTAT_ID_INVALID_NAME '\0'
@@ -1863,6 +1864,177 @@ sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
return sap->dp_rx->intr_disable(rxq_info->dp);
}
+struct sfc_mport_journal_ctx {
+ struct sfc_adapter *sa;
+ uint16_t switch_domain_id;
+ uint32_t mcdi_handle;
+ bool controllers_assigned;
+ efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+};
+
+static int
+sfc_journal_ctx_add_controller(struct sfc_mport_journal_ctx *ctx,
+ efx_pcie_interface_t intf)
+{
+ efx_pcie_interface_t *new_controllers;
+ size_t i, target;
+ size_t new_size;
+
+ if (ctx->controllers == NULL) {
+ ctx->controllers = rte_malloc("sfc_controller_mapping",
+ sizeof(ctx->controllers[0]), 0);
+ if (ctx->controllers == NULL)
+ return ENOMEM;
+
+ ctx->controllers[0] = intf;
+ ctx->nb_controllers = 1;
+
+ return 0;
+ }
+
+ for (i = 0; i < ctx->nb_controllers; i++) {
+ if (ctx->controllers[i] == intf)
+ return 0;
+ if (ctx->controllers[i] > intf)
+ break;
+ }
+ target = i;
+
+ ctx->nb_controllers += 1;
+ new_size = ctx->nb_controllers * sizeof(ctx->controllers[0]);
+
+ new_controllers = rte_realloc(ctx->controllers, new_size, 0);
+ if (new_controllers == NULL) {
+ rte_free(ctx->controllers);
+ return ENOMEM;
+ }
+ ctx->controllers = new_controllers;
+
+ for (i = target + 1; i < ctx->nb_controllers; i++)
+ ctx->controllers[i] = ctx->controllers[i - 1];
+
+ ctx->controllers[target] = intf;
+
+ return 0;
+}
+
+static efx_rc_t
+sfc_process_mport_journal_entry(struct sfc_mport_journal_ctx *ctx,
+ efx_mport_desc_t *mport)
+{
+ efx_mport_sel_t ethdev_mport;
+ int rc;
+
+ sfc_dbg(ctx->sa,
+ "processing mport id %u (controller %u pf %u vf %u)",
+ mport->emd_id.id, mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf, mport->emd_vnic.ev_vf);
+ efx_mae_mport_invalid(ðdev_mport);
+
+ if (!ctx->controllers_assigned) {
+ rc = sfc_journal_ctx_add_controller(ctx,
+ mport->emd_vnic.ev_intf);
+ if (rc != 0)
+ return rc;
+ }
+
+ return 0;
+}
+
+static efx_rc_t
+sfc_process_mport_journal_cb(void *data, efx_mport_desc_t *mport,
+ size_t mport_len)
+{
+ struct sfc_mport_journal_ctx *ctx = data;
+
+ if (ctx == NULL || ctx->sa == NULL) {
+ sfc_err(ctx->sa, "received NULL context or SFC adapter");
+ return EINVAL;
+ }
+
+ if (mport_len != sizeof(*mport)) {
+ sfc_err(ctx->sa, "actual and expected mport buffer sizes differ");
+ return EINVAL;
+ }
+
+ SFC_ASSERT(sfc_adapter_is_locked(ctx->sa));
+
+ /*
+ * If a zombie flag is set, it means the mport has been marked for
+ * deletion and cannot be used for any new operations. The mport will
+ * be destroyed completely once all references to it are released.
+ */
+ if (mport->emd_zombie) {
+ sfc_dbg(ctx->sa, "mport is a zombie, skipping");
+ return 0;
+ }
+ if (mport->emd_type != EFX_MPORT_TYPE_VNIC) {
+ sfc_dbg(ctx->sa, "mport is not a VNIC, skipping");
+ return 0;
+ }
+ if (mport->emd_vnic.ev_client_type != EFX_MPORT_VNIC_CLIENT_FUNCTION) {
+ sfc_dbg(ctx->sa, "mport is not a function, skipping");
+ return 0;
+ }
+ if (mport->emd_vnic.ev_handle == ctx->mcdi_handle) {
+ sfc_dbg(ctx->sa, "mport is this driver instance, skipping");
+ return 0;
+ }
+
+ return sfc_process_mport_journal_entry(ctx, mport);
+}
+
+static int
+sfc_process_mport_journal(struct sfc_adapter *sa)
+{
+ struct sfc_mport_journal_ctx ctx;
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ efx_rc_t efx_rc;
+ int rc;
+
+ memset(&ctx, 0, sizeof(ctx));
+ ctx.sa = sa;
+ ctx.switch_domain_id = sa->mae.switch_domain_id;
+
+ efx_rc = efx_mcdi_get_own_client_handle(sa->nic, &ctx.mcdi_handle);
+ if (efx_rc != 0) {
+ sfc_err(sa, "failed to get own MCDI handle");
+ SFC_ASSERT(efx_rc > 0);
+ return efx_rc;
+ }
+
+ rc = sfc_mae_switch_domain_controllers(ctx.switch_domain_id,
+ &controllers, &nb_controllers);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get controller mapping");
+ return rc;
+ }
+
+ ctx.controllers_assigned = controllers != NULL;
+ ctx.controllers = NULL;
+ ctx.nb_controllers = 0;
+
+ efx_rc = efx_mae_read_mport_journal(sa->nic,
+ sfc_process_mport_journal_cb, &ctx);
+ if (efx_rc != 0) {
+ sfc_err(sa, "failed to process MAE mport journal");
+ SFC_ASSERT(efx_rc > 0);
+ return efx_rc;
+ }
+
+ if (controllers == NULL) {
+ rc = sfc_mae_switch_domain_map_controllers(ctx.switch_domain_id,
+ ctx.controllers,
+ ctx.nb_controllers);
+ if (rc != 0)
+ return rc;
+ }
+
+ return 0;
+}
+
static const struct eth_dev_ops sfc_eth_dev_ops = {
.dev_configure = sfc_dev_configure,
.dev_start = sfc_dev_start,
@@ -2502,6 +2674,18 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
return -ENOTSUP;
}
+ /*
+ * This is needed to construct the DPDK controller -> EFX interface
+ * mapping.
+ */
+ sfc_adapter_lock(sa);
+ rc = sfc_process_mport_journal(sa);
+ sfc_adapter_unlock(sa);
+ if (rc != 0) {
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
for (i = 0; i < eth_da->nb_representor_ports; ++i) {
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
efx_mport_sel_t mport_sel;
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 80c884a599..f72f6648b8 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -87,6 +87,10 @@ struct sfc_mae_switch_domain {
struct sfc_mae_switch_ports ports;
/** RTE switch domain ID allocated for a group of devices */
uint16_t id;
+ /** DPDK controller -> EFX interface mapping */
+ efx_pcie_interface_t *controllers;
+ /** Number of DPDK controllers and EFX interfaces */
+ size_t nb_controllers;
};
TAILQ_HEAD(sfc_mae_switch_domains, sfc_mae_switch_domain);
@@ -220,6 +224,59 @@ sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
return rc;
}
+int
+sfc_mae_switch_domain_controllers(uint16_t switch_domain_id,
+ const efx_pcie_interface_t **controllers,
+ size_t *nb_controllers)
+{
+ struct sfc_mae_switch_domain *domain;
+
+ if (controllers == NULL || nb_controllers == NULL)
+ return EINVAL;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ *controllers = domain->controllers;
+ *nb_controllers = domain->nb_controllers;
+
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return 0;
+}
+
+int
+sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
+ efx_pcie_interface_t *controllers,
+ size_t nb_controllers)
+{
+ struct sfc_mae_switch_domain *domain;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ /* Controller mapping may be set only once */
+ if (domain->controllers != NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ domain->controllers = controllers;
+ domain->nb_controllers = nb_controllers;
+
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return 0;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_port *
sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index a1a2ab9848..1eee5fc0b6 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -44,6 +44,14 @@ struct sfc_mae_switch_port_request {
int sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
uint16_t *switch_domain_id);
+int sfc_mae_switch_domain_controllers(uint16_t switch_domain_id,
+ const efx_pcie_interface_t **controllers,
+ size_t *nb_controllers);
+
+int sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
+ efx_pcie_interface_t *controllers,
+ size_t nb_controllers);
+
int sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
uint16_t *switch_port_id);
--
2.30.2
* [dpdk-dev] [PATCH v2 33/38] net/sfc: store PCI address for represented entities
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (31 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 32/38] net/sfc: maintain controller to EFX interface mapping Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 34/38] net/sfc: include controller and port in representor name Andrew Rybchenko
` (5 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
This information will be useful when the representor info API is
implemented.
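A short sketch of the new call convention (hypothetical values, not
from the patch): the caller now describes the represented entity fully
instead of passing a bare VF number.

#include <string.h>

#include "sfc_repr.h"

/* Hypothetical caller for illustration only. */
static int
create_vf_representor(struct rte_eth_dev *parent, uint16_t switch_domain_id,
		      const efx_mport_sel_t *mport_sel)
{
	struct sfc_repr_entity_info entity;

	memset(&entity, 0, sizeof(entity));
	entity.type = RTE_ETH_REPRESENTOR_VF;
	entity.intf = EFX_PCIE_INTERFACE_HOST_PRIMARY;
	entity.pf = 0;
	entity.vf = 2;

	return sfc_repr_create(parent, &entity, switch_domain_id, mport_sel);
}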
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_ethdev.c | 11 +++++++++--
drivers/net/sfc/sfc_repr.c | 20 +++++++++++++++-----
drivers/net/sfc/sfc_repr.h | 10 +++++++++-
drivers/net/sfc/sfc_switch.c | 14 ++++++++++++++
drivers/net/sfc/sfc_switch.h | 11 +++++++++++
5 files changed, 58 insertions(+), 8 deletions(-)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f93b9cc921..53008f477f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2688,6 +2688,7 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
for (i = 0; i < eth_da->nb_representor_ports; ++i) {
const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+ struct sfc_repr_entity_info entity;
efx_mport_sel_t mport_sel;
rc = efx_mae_mport_by_pcie_function(encp->enc_pf,
@@ -2700,8 +2701,14 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
continue;
}
- rc = sfc_repr_create(dev, eth_da->representor_ports[i],
- sa->mae.switch_domain_id, &mport_sel);
+ memset(&entity, 0, sizeof(entity));
+ entity.type = eth_da->type;
+ entity.intf = encp->enc_intf;
+ entity.pf = encp->enc_pf;
+ entity.vf = eth_da->representor_ports[i];
+
+ rc = sfc_repr_create(dev, &entity, sa->mae.switch_domain_id,
+ &mport_sel);
if (rc != 0) {
sfc_err(sa, "cannot create representor %u: %s - ignore",
eth_da->representor_ports[i],
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index 87f10092c3..f87188ed7a 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -902,6 +902,9 @@ struct sfc_repr_init_data {
uint16_t repr_id;
uint16_t switch_domain_id;
efx_mport_sel_t mport_sel;
+ efx_pcie_interface_t intf;
+ uint16_t pf;
+ uint16_t vf;
};
static int
@@ -939,6 +942,9 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
switch_port_request.ethdev_mportp = ðdev_mport_sel;
switch_port_request.entity_mportp = &repr_data->mport_sel;
switch_port_request.ethdev_port_id = dev->data->port_id;
+ switch_port_request.port_data.repr.intf = repr_data->intf;
+ switch_port_request.port_data.repr.pf = repr_data->pf;
+ switch_port_request.port_data.repr.vf = repr_data->vf;
ret = sfc_repr_assign_mae_switch_port(repr_data->switch_domain_id,
&switch_port_request,
@@ -1015,8 +1021,10 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
}
int
-sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
- uint16_t switch_domain_id, const efx_mport_sel_t *mport_sel)
+sfc_repr_create(struct rte_eth_dev *parent,
+ struct sfc_repr_entity_info *entity,
+ uint16_t switch_domain_id,
+ const efx_mport_sel_t *mport_sel)
{
struct sfc_repr_init_data repr_data;
char name[RTE_ETH_NAME_MAX_LEN];
@@ -1024,8 +1032,7 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
struct rte_eth_dev *dev;
if (snprintf(name, sizeof(name), "net_%s_representor_%u",
- parent->device->name, representor_id) >=
- (int)sizeof(name)) {
+ parent->device->name, entity->vf) >= (int)sizeof(name)) {
SFC_GENERIC_LOG(ERR, "%s() failed name too long", __func__);
return -ENAMETOOLONG;
}
@@ -1034,9 +1041,12 @@ sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
if (dev == NULL) {
memset(&repr_data, 0, sizeof(repr_data));
repr_data.pf_port_id = parent->data->port_id;
- repr_data.repr_id = representor_id;
+ repr_data.repr_id = entity->vf;
repr_data.switch_domain_id = switch_domain_id;
repr_data.mport_sel = *mport_sel;
+ repr_data.intf = entity->intf;
+ repr_data.pf = entity->pf;
+ repr_data.vf = entity->vf;
ret = rte_eth_dev_create(parent->device, name,
sizeof(struct sfc_repr_shared),
diff --git a/drivers/net/sfc/sfc_repr.h b/drivers/net/sfc/sfc_repr.h
index 1347206006..2093973761 100644
--- a/drivers/net/sfc/sfc_repr.h
+++ b/drivers/net/sfc/sfc_repr.h
@@ -26,7 +26,15 @@ extern "C" {
/** Max count of the representor Tx queues */
#define SFC_REPR_TXQ_MAX 1
-int sfc_repr_create(struct rte_eth_dev *parent, uint16_t representor_id,
+struct sfc_repr_entity_info {
+ enum rte_eth_representor_type type;
+ efx_pcie_interface_t intf;
+ uint16_t pf;
+ uint16_t vf;
+};
+
+int sfc_repr_create(struct rte_eth_dev *parent,
+ struct sfc_repr_entity_info *entity,
uint16_t switch_domain_id,
const efx_mport_sel_t *mport_sel);
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index f72f6648b8..7a0b332f33 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -63,6 +63,8 @@ struct sfc_mae_switch_port {
enum sfc_mae_switch_port_type type;
/** RTE switch port ID */
uint16_t id;
+
+ union sfc_mae_switch_port_data data;
};
TAILQ_HEAD(sfc_mae_switch_ports, sfc_mae_switch_port);
@@ -335,6 +337,18 @@ sfc_mae_assign_switch_port(uint16_t switch_domain_id,
port->ethdev_mport = *req->ethdev_mportp;
port->ethdev_port_id = req->ethdev_port_id;
+ switch (req->type) {
+ case SFC_MAE_SWITCH_PORT_INDEPENDENT:
+ /* No data */
+ break;
+ case SFC_MAE_SWITCH_PORT_REPRESENTOR:
+ memcpy(&port->data.repr, &req->port_data,
+ sizeof(port->data.repr));
+ break;
+ default:
+ SFC_ASSERT(B_FALSE);
+ }
+
*switch_port_id = port->id;
rte_spinlock_unlock(&sfc_mae_switch.lock);
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index 1eee5fc0b6..a072507375 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -34,11 +34,22 @@ enum sfc_mae_switch_port_type {
SFC_MAE_SWITCH_PORT_REPRESENTOR,
};
+struct sfc_mae_switch_port_repr_data {
+ efx_pcie_interface_t intf;
+ uint16_t pf;
+ uint16_t vf;
+};
+
+union sfc_mae_switch_port_data {
+ struct sfc_mae_switch_port_repr_data repr;
+};
+
struct sfc_mae_switch_port_request {
enum sfc_mae_switch_port_type type;
const efx_mport_sel_t *entity_mportp;
const efx_mport_sel_t *ethdev_mportp;
uint16_t ethdev_port_id;
+ union sfc_mae_switch_port_data port_data;
};
int sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
--
2.30.2
* [dpdk-dev] [PATCH v2 34/38] net/sfc: include controller and port in representor name
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (32 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 33/38] net/sfc: store PCI address for represented entities Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 35/38] net/sfc: support new representor parameter syntax Andrew Rybchenko
` (4 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Make representor names unique in multi-host configurations.
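For illustration (made-up device name and numbers), the resulting names
follow the patterns below; a PF representor simply drops the vf part.

#include <stdio.h>

/*
 * Hypothetical example for illustration only: controller 0, PF 1, VF 2
 * of a device named 0000:01:00.0 yields
 * "net_0000:01:00.0_representor_c0pf1vf2".
 */
static int
format_vf_repr_name(char *name, size_t len)
{
	return snprintf(name, len, "net_%s_representor_c%upf%uvf%u",
			"0000:01:00.0", 0u, 1u, 2u);
}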
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_repr.c | 28 ++++++++++++++++++++++++++--
drivers/net/sfc/sfc_switch.c | 28 ++++++++++++++++++++++++++++
drivers/net/sfc/sfc_switch.h | 4 ++++
3 files changed, 58 insertions(+), 2 deletions(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index f87188ed7a..b4ff4da60a 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -1028,11 +1028,35 @@ sfc_repr_create(struct rte_eth_dev *parent,
{
struct sfc_repr_init_data repr_data;
char name[RTE_ETH_NAME_MAX_LEN];
+ int controller;
int ret;
+ int rc;
struct rte_eth_dev *dev;
- if (snprintf(name, sizeof(name), "net_%s_representor_%u",
- parent->device->name, entity->vf) >= (int)sizeof(name)) {
+ controller = -1;
+ rc = sfc_mae_switch_domain_get_controller(switch_domain_id,
+ entity->intf, &controller);
+ if (rc != 0) {
+ SFC_GENERIC_LOG(ERR, "%s() failed to get DPDK controller for %d",
+ __func__, entity->intf);
+ return -rc;
+ }
+
+ switch (entity->type) {
+ case RTE_ETH_REPRESENTOR_VF:
+ ret = snprintf(name, sizeof(name), "net_%s_representor_c%upf%uvf%u",
+ parent->device->name, controller, entity->pf,
+ entity->vf);
+ break;
+ case RTE_ETH_REPRESENTOR_PF:
+ ret = snprintf(name, sizeof(name), "net_%s_representor_c%upf%u",
+ parent->device->name, controller, entity->pf);
+ break;
+ default:
+ return -ENOTSUP;
+ }
+
+ if (ret >= (int)sizeof(name)) {
SFC_GENERIC_LOG(ERR, "%s() failed name too long", __func__);
return -ENAMETOOLONG;
}
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 7a0b332f33..225d07fa15 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -279,6 +279,34 @@ sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
return 0;
}
+int
+sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
+ efx_pcie_interface_t intf,
+ int *controller)
+{
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ size_t i;
+ int rc;
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
+ &nb_controllers);
+ if (rc != 0)
+ return rc;
+
+ if (controllers == NULL)
+ return ENOENT;
+
+ for (i = 0; i < nb_controllers; i++) {
+ if (controllers[i] == intf) {
+ *controller = i;
+ return 0;
+ }
+ }
+
+ return ENOENT;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_port *
sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index a072507375..294baae9a2 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -63,6 +63,10 @@ int sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
efx_pcie_interface_t *controllers,
size_t nb_controllers);
+int sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
+ efx_pcie_interface_t intf,
+ int *controller);
+
int sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
uint16_t *switch_port_id);
--
2.30.2
* [dpdk-dev] [PATCH v2 35/38] net/sfc: support new representor parameter syntax
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (33 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 34/38] net/sfc: include controller and port in representor name Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 36/38] net/sfc: use switch port ID as representor ID Andrew Rybchenko
` (3 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Allow the user to specify representor entities using structured
parameter values that identify the controller, PF and VF.
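For illustration, a hypothetical snippet (not from the patch) showing
how a devargs string in the structured syntax, e.g.
"representor=c0pf1vf[0-2]", is reflected in the parsed rte_eth_devargs
fields the driver consumes. The example string and the direct use of
rte_eth_devargs_parse() here are assumptions made for the sketch only;
the driver itself receives the already parsed structure.

#include <stdio.h>

#include <ethdev_driver.h>

/* Hypothetical helper for illustration only. */
static void
dump_representor_devargs(const char *args)
{
	struct rte_eth_devargs da;
	uint16_t i;

	/* e.g. args == "representor=c0pf1vf[0-2]" */
	if (rte_eth_devargs_parse(args, &da) != 0)
		return;

	printf("type=%d controllers=%u pfs=%u vfs=%u\n",
	       (int)da.type, (unsigned int)da.nb_mh_controllers,
	       (unsigned int)da.nb_ports,
	       (unsigned int)da.nb_representor_ports);

	for (i = 0; i < da.nb_representor_ports; i++)
		printf("  representor port %u\n",
		       (unsigned int)da.representor_ports[i]);
}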
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_ethdev.c | 181 ++++++++++++++++++++++++++++-------
drivers/net/sfc/sfc_switch.c | 24 +++++
drivers/net/sfc/sfc_switch.h | 4 +
3 files changed, 176 insertions(+), 33 deletions(-)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 53008f477f..69ab2a60b0 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2650,18 +2650,143 @@ sfc_eth_dev_find_or_create(struct rte_pci_device *pci_dev,
return 0;
}
+static int
+sfc_eth_dev_create_repr(struct sfc_adapter *sa,
+ efx_pcie_interface_t controller,
+ uint16_t port,
+ uint16_t repr_port,
+ enum rte_eth_representor_type type)
+{
+ struct sfc_repr_entity_info entity;
+ efx_mport_sel_t mport_sel;
+ int rc;
+
+ switch (type) {
+ case RTE_ETH_REPRESENTOR_NONE:
+ return 0;
+ case RTE_ETH_REPRESENTOR_VF:
+ case RTE_ETH_REPRESENTOR_PF:
+ break;
+ case RTE_ETH_REPRESENTOR_SF:
+ sfc_err(sa, "SF representors are not supported");
+ return ENOTSUP;
+ default:
+ sfc_err(sa, "unknown representor type: %d", type);
+ return ENOTSUP;
+ }
+
+ rc = efx_mae_mport_by_pcie_mh_function(controller,
+ port,
+ repr_port,
+ &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to get m-port selector for controller %u port %u repr_port %u: %s",
+ controller, port, repr_port, rte_strerror(-rc));
+ return rc;
+ }
+
+ memset(&entity, 0, sizeof(entity));
+ entity.type = type;
+ entity.intf = controller;
+ entity.pf = port;
+ entity.vf = repr_port;
+
+ rc = sfc_repr_create(sa->eth_dev, &entity, sa->mae.switch_domain_id,
+ &mport_sel);
+ if (rc != 0) {
+ sfc_err(sa,
+ "failed to create representor for controller %u port %u repr_port %u: %s",
+ controller, port, repr_port, rte_strerror(-rc));
+ return rc;
+ }
+
+ return 0;
+}
+
+static int
+sfc_eth_dev_create_repr_port(struct sfc_adapter *sa,
+ const struct rte_eth_devargs *eth_da,
+ efx_pcie_interface_t controller,
+ uint16_t port)
+{
+ int first_error = 0;
+ uint16_t i;
+ int rc;
+
+ if (eth_da->type == RTE_ETH_REPRESENTOR_PF) {
+ return sfc_eth_dev_create_repr(sa, controller, port,
+ EFX_PCI_VF_INVALID,
+ eth_da->type);
+ }
+
+ for (i = 0; i < eth_da->nb_representor_ports; i++) {
+ rc = sfc_eth_dev_create_repr(sa, controller, port,
+ eth_da->representor_ports[i],
+ eth_da->type);
+ if (rc != 0 && first_error == 0)
+ first_error = rc;
+ }
+
+ return first_error;
+}
+
+static int
+sfc_eth_dev_create_repr_controller(struct sfc_adapter *sa,
+ const struct rte_eth_devargs *eth_da,
+ efx_pcie_interface_t controller)
+{
+ const efx_nic_cfg_t *encp;
+ int first_error = 0;
+ uint16_t default_port;
+ uint16_t i;
+ int rc;
+
+ if (eth_da->nb_ports == 0) {
+ encp = efx_nic_cfg_get(sa->nic);
+ default_port = encp->enc_intf == controller ? encp->enc_pf : 0;
+ return sfc_eth_dev_create_repr_port(sa, eth_da, controller,
+ default_port);
+ }
+
+ for (i = 0; i < eth_da->nb_ports; i++) {
+ rc = sfc_eth_dev_create_repr_port(sa, eth_da, controller,
+ eth_da->ports[i]);
+ if (rc != 0 && first_error == 0)
+ first_error = rc;
+ }
+
+ return first_error;
+}
+
static int
sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
const struct rte_eth_devargs *eth_da)
{
+ efx_pcie_interface_t intf;
+ const efx_nic_cfg_t *encp;
struct sfc_adapter *sa;
- unsigned int i;
+ uint16_t switch_domain_id;
+ uint16_t i;
int rc;
- if (eth_da->nb_representor_ports == 0)
- return 0;
-
sa = sfc_adapter_by_eth_dev(dev);
+ switch_domain_id = sa->mae.switch_domain_id;
+
+ switch (eth_da->type) {
+ case RTE_ETH_REPRESENTOR_NONE:
+ return 0;
+ case RTE_ETH_REPRESENTOR_PF:
+ case RTE_ETH_REPRESENTOR_VF:
+ break;
+ case RTE_ETH_REPRESENTOR_SF:
+ sfc_err(sa, "SF representors are not supported");
+ return -ENOTSUP;
+ default:
+ sfc_err(sa, "unknown representor type: %d",
+ eth_da->type);
+ return -ENOTSUP;
+ }
if (!sa->switchdev) {
sfc_err(sa, "cannot create representors in non-switchdev mode");
@@ -2686,34 +2811,20 @@ sfc_eth_dev_create_representors(struct rte_eth_dev *dev,
return -rc;
}
- for (i = 0; i < eth_da->nb_representor_ports; ++i) {
- const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
- struct sfc_repr_entity_info entity;
- efx_mport_sel_t mport_sel;
-
- rc = efx_mae_mport_by_pcie_function(encp->enc_pf,
- eth_da->representor_ports[i], &mport_sel);
- if (rc != 0) {
- sfc_err(sa,
- "failed to get representor %u m-port: %s - ignore",
- eth_da->representor_ports[i],
- rte_strerror(-rc));
- continue;
- }
-
- memset(&entity, 0, sizeof(entity));
- entity.type = eth_da->type;
- entity.intf = encp->enc_intf;
- entity.pf = encp->enc_pf;
- entity.vf = eth_da->representor_ports[i];
-
- rc = sfc_repr_create(dev, &entity, sa->mae.switch_domain_id,
- &mport_sel);
- if (rc != 0) {
- sfc_err(sa, "cannot create representor %u: %s - ignore",
- eth_da->representor_ports[i],
- rte_strerror(-rc));
+ if (eth_da->nb_mh_controllers > 0) {
+ for (i = 0; i < eth_da->nb_mh_controllers; i++) {
+ rc = sfc_mae_switch_domain_get_intf(switch_domain_id,
+ eth_da->mh_controllers[i],
+ &intf);
+ if (rc != 0) {
+ sfc_err(sa, "failed to get representor");
+ continue;
+ }
+ sfc_eth_dev_create_repr_controller(sa, eth_da, intf);
}
+ } else {
+ encp = efx_nic_cfg_get(sa->nic);
+ sfc_eth_dev_create_repr_controller(sa, eth_da, encp->enc_intf);
}
return 0;
@@ -2737,9 +2848,13 @@ static int sfc_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
memset(&eth_da, 0, sizeof(eth_da));
}
- init_data.nb_representors = eth_da.nb_representor_ports;
+ /* If no VF representors specified, check for PF ones */
+ if (eth_da.nb_representor_ports > 0)
+ init_data.nb_representors = eth_da.nb_representor_ports;
+ else
+ init_data.nb_representors = eth_da.nb_ports;
- if (eth_da.nb_representor_ports > 0 &&
+ if (init_data.nb_representors > 0 &&
rte_eal_process_type() != RTE_PROC_PRIMARY) {
SFC_GENERIC_LOG(ERR,
"Create representors from secondary process not supported, dev '%s'",
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 225d07fa15..5cd9b46d26 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -307,6 +307,30 @@ sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
return ENOENT;
}
+int sfc_mae_switch_domain_get_intf(uint16_t switch_domain_id,
+ int controller,
+ efx_pcie_interface_t *intf)
+{
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ int rc;
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
+ &nb_controllers);
+ if (rc != 0)
+ return rc;
+
+ if (controllers == NULL)
+ return ENOENT;
+
+ if ((size_t)controller > nb_controllers)
+ return EINVAL;
+
+ *intf = controllers[controller];
+
+ return 0;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_port *
sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index 294baae9a2..d187c6dbbb 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -67,6 +67,10 @@ int sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
efx_pcie_interface_t intf,
int *controller);
+int sfc_mae_switch_domain_get_intf(uint16_t switch_domain_id,
+ int controller,
+ efx_pcie_interface_t *intf);
+
int sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
uint16_t *switch_port_id);
--
2.30.2
* [dpdk-dev] [PATCH v2 36/38] net/sfc: use switch port ID as representor ID
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (34 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 35/38] net/sfc: support new representor parameter syntax Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 37/38] net/sfc: implement the representor info API Andrew Rybchenko
` (2 subsequent siblings)
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Representor IDs must be unique per representor. VF numbers, which are
currently used as representor IDs, are not unique: the same VF number
may repeat in combination with different PCI controllers and PFs.
Switch port IDs, on the other hand, are unique, so they are a better
fit for this role.
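To make the collision concrete, here is a minimal, purely illustrative C
sketch (editorial; the struct and values are hypothetical, not driver code).
Deriving the ID from the VF number alone cannot tell apart entities that
share a VF index under different controllers or PFs, whereas switch port IDs
are handed out sequentially within a switch domain and therefore never repeat:

#include <assert.h>
#include <stdint.h>

/* Hypothetical entity description, used only for this illustration. */
struct repr_entity {
	int controller;		/* PCIe controller index */
	uint16_t pf;		/* physical function */
	uint16_t vf;		/* virtual function */
};

int
main(void)
{
	/* Two distinct entities that happen to share a VF number. */
	struct repr_entity a = { .controller = 0, .pf = 0, .vf = 1 };
	struct repr_entity b = { .controller = 1, .pf = 0, .vf = 1 };

	/* Old scheme: representor ID derived from the VF number alone. */
	uint16_t old_id_a = a.vf;
	uint16_t old_id_b = b.vf;
	assert(old_id_a == old_id_b);	/* collision */

	/*
	 * New scheme: representor ID taken from the switch port ID, which
	 * is assigned sequentially as each port registers with the switch
	 * domain, so the two entities end up with distinct IDs.
	 */
	uint16_t next_switch_port_id = 0;
	uint16_t new_id_a = next_switch_port_id++;
	uint16_t new_id_b = next_switch_port_id++;
	assert(new_id_a != new_id_b);

	return 0;
}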
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_repr.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/net/sfc/sfc_repr.c b/drivers/net/sfc/sfc_repr.c
index b4ff4da60a..6ec83873ab 100644
--- a/drivers/net/sfc/sfc_repr.c
+++ b/drivers/net/sfc/sfc_repr.c
@@ -899,7 +899,6 @@ static const struct eth_dev_ops sfc_repr_dev_ops = {
struct sfc_repr_init_data {
uint16_t pf_port_id;
- uint16_t repr_id;
uint16_t switch_domain_id;
efx_mport_sel_t mport_sel;
efx_pcie_interface_t intf;
@@ -957,7 +956,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
}
ret = sfc_repr_proxy_add_port(repr_data->pf_port_id,
- repr_data->repr_id,
+ srs->switch_port_id,
dev->data->port_id,
&repr_data->mport_sel);
if (ret != 0) {
@@ -984,7 +983,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
dev->process_private = sr;
srs->pf_port_id = repr_data->pf_port_id;
- srs->repr_id = repr_data->repr_id;
+ srs->repr_id = srs->switch_port_id;
srs->switch_domain_id = repr_data->switch_domain_id;
dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
@@ -1012,7 +1011,7 @@ sfc_repr_eth_dev_init(struct rte_eth_dev *dev, void *init_params)
fail_alloc_sr:
(void)sfc_repr_proxy_del_port(repr_data->pf_port_id,
- repr_data->repr_id);
+ srs->switch_port_id);
fail_create_port:
fail_mae_assign_switch_port:
@@ -1065,7 +1064,6 @@ sfc_repr_create(struct rte_eth_dev *parent,
if (dev == NULL) {
memset(&repr_data, 0, sizeof(repr_data));
repr_data.pf_port_id = parent->data->port_id;
- repr_data.repr_id = entity->vf;
repr_data.switch_domain_id = switch_domain_id;
repr_data.mport_sel = *mport_sel;
repr_data.intf = entity->intf;
--
2.30.2
* [dpdk-dev] [PATCH v2 37/38] net/sfc: implement the representor info API
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (35 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 36/38] net/sfc: use switch port ID as representor ID Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 38/38] net/sfc: update comment about representor support Andrew Rybchenko
2021-10-12 16:45 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Ferruh Yigit
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Let the driver provide the user with information about available
representors by implementing the representor_info_get operation.
Because representor IDs carry no internal structure that would allow
grouping, every reported ID range describes exactly one representor.
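As a usage reference, here is a hedged application-side sketch (editorial,
not part of the patch) showing how the new operation can be consumed through
the generic rte_eth_representor_info_get() ethdev API. The count-then-fill
calling pattern mirrors the driver implementation below; the helper name and
the abbreviated error handling are assumptions:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_ethdev.h>

/* Hypothetical helper: print every representor range of one ethdev port. */
static void
dump_representor_info(uint16_t port_id)
{
	struct rte_eth_representor_info *info;
	int nb_ranges;
	int i;

	/* First call with NULL only counts the available ranges. */
	nb_ranges = rte_eth_representor_info_get(port_id, NULL);
	if (nb_ranges <= 0)
		return;

	info = calloc(1, sizeof(*info) + nb_ranges * sizeof(info->ranges[0]));
	if (info == NULL)
		return;
	info->nb_ranges_alloc = nb_ranges;

	/* Second call fills the ranges in. */
	if (rte_eth_representor_info_get(port_id, info) < 0) {
		free(info);
		return;
	}

	for (i = 0; i < (int)info->nb_ranges; i++) {
		const struct rte_eth_representor_range *r = &info->ranges[i];

		printf("%s: controller %d pf %d ids [%u-%u]\n",
		       r->name, r->controller, r->pf, r->id_base, r->id_end);
	}

	free(info);
}

int
main(int argc, char **argv)
{
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Representor info is reported by the backing PF ethdev ports. */
	RTE_ETH_FOREACH_DEV(port_id)
		dump_representor_info(port_id);

	return 0;
}

Since this driver assigns one switch port ID per representor, id_base equals
id_end in every range it reports.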
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
doc/guides/rel_notes/release_21_11.rst | 6 +
drivers/net/sfc/sfc_ethdev.c | 229 +++++++++++++++++++++++++
drivers/net/sfc/sfc_switch.c | 104 +++++++++--
drivers/net/sfc/sfc_switch.h | 24 +++
4 files changed, 352 insertions(+), 11 deletions(-)
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 89d4b33ef1..a89dcb2c63 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -101,6 +101,12 @@ New Features
* Default VLAN strip behavior is changed. VLAN tag won't be stripped unless
``DEV_RX_OFFLOAD_VLAN_STRIP`` offload is enabled.
+* **Updated Solarflare network PMD.**
+
+ Updated the Solarflare ``sfc_efx`` driver with changes including:
+
+ * Added port representors support on SN1000 SmartNICs
+
* **Updated Marvell cnxk crypto PMD.**
* Added AES-CBC SHA1-HMAC support in lookaside protocol (IPsec) for CN10K.
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 69ab2a60b0..54711d349f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1923,7 +1923,11 @@ static efx_rc_t
sfc_process_mport_journal_entry(struct sfc_mport_journal_ctx *ctx,
efx_mport_desc_t *mport)
{
+ struct sfc_mae_switch_port_request req;
+ efx_mport_sel_t entity_selector;
efx_mport_sel_t ethdev_mport;
+ uint16_t switch_port_id;
+ efx_rc_t efx_rc;
int rc;
sfc_dbg(ctx->sa,
@@ -1939,6 +1943,63 @@ sfc_process_mport_journal_entry(struct sfc_mport_journal_ctx *ctx,
return rc;
}
+ /* Build Mport selector */
+ efx_rc = efx_mae_mport_by_pcie_mh_function(mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf,
+ &entity_selector);
+ if (efx_rc != 0) {
+ sfc_err(ctx->sa, "failed to build entity mport selector for c%upf%uvf%u",
+ mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf);
+ return efx_rc;
+ }
+
+ rc = sfc_mae_switch_port_id_by_entity(ctx->switch_domain_id,
+ &entity_selector,
+ SFC_MAE_SWITCH_PORT_REPRESENTOR,
+ &switch_port_id);
+ switch (rc) {
+ case 0:
+ /* Already registered */
+ break;
+ case ENOENT:
+ /*
+ * No representor has been created for this entity.
+ * Create a dummy switch registry entry with an invalid ethdev
+ * mport selector. When a corresponding representor is created,
+ * this entry will be updated.
+ */
+ req.type = SFC_MAE_SWITCH_PORT_REPRESENTOR;
+ req.entity_mportp = &entity_selector;
+ req.ethdev_mportp = &ethdev_mport;
+ req.ethdev_port_id = RTE_MAX_ETHPORTS;
+ req.port_data.repr.intf = mport->emd_vnic.ev_intf;
+ req.port_data.repr.pf = mport->emd_vnic.ev_pf;
+ req.port_data.repr.vf = mport->emd_vnic.ev_vf;
+
+ rc = sfc_mae_assign_switch_port(ctx->switch_domain_id,
+ &req, &switch_port_id);
+ if (rc != 0) {
+ sfc_err(ctx->sa,
+ "failed to assign MAE switch port for c%upf%uvf%u: %s",
+ mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf,
+ rte_strerror(rc));
+ return rc;
+ }
+ break;
+ default:
+ sfc_err(ctx->sa, "failed to find MAE switch port for c%upf%uvf%u: %s",
+ mport->emd_vnic.ev_intf,
+ mport->emd_vnic.ev_pf,
+ mport->emd_vnic.ev_vf,
+ rte_strerror(rc));
+ return rc;
+ }
+
return 0;
}
@@ -2035,6 +2096,173 @@ sfc_process_mport_journal(struct sfc_adapter *sa)
return 0;
}
+static void
+sfc_count_representors_cb(enum sfc_mae_switch_port_type type,
+ const efx_mport_sel_t *ethdev_mportp __rte_unused,
+ uint16_t ethdev_port_id __rte_unused,
+ const efx_mport_sel_t *entity_mportp __rte_unused,
+ uint16_t switch_port_id __rte_unused,
+ union sfc_mae_switch_port_data *port_datap
+ __rte_unused,
+ void *user_datap)
+{
+ int *counter = user_datap;
+
+ SFC_ASSERT(counter != NULL);
+
+ if (type == SFC_MAE_SWITCH_PORT_REPRESENTOR)
+ (*counter)++;
+}
+
+struct sfc_get_representors_ctx {
+ struct rte_eth_representor_info *info;
+ struct sfc_adapter *sa;
+ uint16_t switch_domain_id;
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+};
+
+static void
+sfc_get_representors_cb(enum sfc_mae_switch_port_type type,
+ const efx_mport_sel_t *ethdev_mportp __rte_unused,
+ uint16_t ethdev_port_id __rte_unused,
+ const efx_mport_sel_t *entity_mportp __rte_unused,
+ uint16_t switch_port_id,
+ union sfc_mae_switch_port_data *port_datap,
+ void *user_datap)
+{
+ struct sfc_get_representors_ctx *ctx = user_datap;
+ struct rte_eth_representor_range *range;
+ int ret;
+ int rc;
+
+ SFC_ASSERT(ctx != NULL);
+ SFC_ASSERT(ctx->info != NULL);
+ SFC_ASSERT(ctx->sa != NULL);
+
+ if (type != SFC_MAE_SWITCH_PORT_REPRESENTOR) {
+ sfc_dbg(ctx->sa, "not a representor, skipping");
+ return;
+ }
+ if (ctx->info->nb_ranges >= ctx->info->nb_ranges_alloc) {
+ sfc_dbg(ctx->sa, "info structure is full already");
+ return;
+ }
+
+ range = &ctx->info->ranges[ctx->info->nb_ranges];
+ rc = sfc_mae_switch_controller_from_mapping(ctx->controllers,
+ ctx->nb_controllers,
+ port_datap->repr.intf,
+ &range->controller);
+ if (rc != 0) {
+ sfc_err(ctx->sa, "invalid representor controller: %d",
+ port_datap->repr.intf);
+ range->controller = -1;
+ }
+ range->pf = port_datap->repr.pf;
+ range->id_base = switch_port_id;
+ range->id_end = switch_port_id;
+
+ if (port_datap->repr.vf != EFX_PCI_VF_INVALID) {
+ range->type = RTE_ETH_REPRESENTOR_VF;
+ range->vf = port_datap->repr.vf;
+ ret = snprintf(range->name, RTE_DEV_NAME_MAX_LEN,
+ "c%dpf%dvf%d", range->controller, range->pf,
+ range->vf);
+ } else {
+ range->type = RTE_ETH_REPRESENTOR_PF;
+ ret = snprintf(range->name, RTE_DEV_NAME_MAX_LEN,
+ "c%dpf%d", range->controller, range->pf);
+ }
+ if (ret >= RTE_DEV_NAME_MAX_LEN) {
+ sfc_err(ctx->sa, "representor name has been truncated: %s",
+ range->name);
+ }
+
+ ctx->info->nb_ranges++;
+}
+
+static int
+sfc_representor_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_representor_info *info)
+{
+ struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+ struct sfc_get_representors_ctx get_repr_ctx;
+ const efx_nic_cfg_t *nic_cfg;
+ uint16_t switch_domain_id;
+ uint32_t nb_repr;
+ int controller;
+ int rc;
+
+ sfc_adapter_lock(sa);
+
+ if (sa->mae.status != SFC_MAE_STATUS_SUPPORTED) {
+ sfc_adapter_unlock(sa);
+ return -ENOTSUP;
+ }
+
+ rc = sfc_process_mport_journal(sa);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ switch_domain_id = sa->mae.switch_domain_id;
+
+ nb_repr = 0;
+ rc = sfc_mae_switch_ports_iterate(switch_domain_id,
+ sfc_count_representors_cb,
+ &nb_repr);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ if (info == NULL) {
+ sfc_adapter_unlock(sa);
+ return nb_repr;
+ }
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id,
+ &get_repr_ctx.controllers,
+ &get_repr_ctx.nb_controllers);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ nic_cfg = efx_nic_cfg_get(sa->nic);
+
+ rc = sfc_mae_switch_domain_get_controller(switch_domain_id,
+ nic_cfg->enc_intf,
+ &controller);
+ if (rc != 0) {
+ sfc_err(sa, "invalid controller: %d", nic_cfg->enc_intf);
+ controller = -1;
+ }
+
+ info->controller = controller;
+ info->pf = nic_cfg->enc_pf;
+
+ get_repr_ctx.info = info;
+ get_repr_ctx.sa = sa;
+ get_repr_ctx.switch_domain_id = switch_domain_id;
+ rc = sfc_mae_switch_ports_iterate(switch_domain_id,
+ sfc_get_representors_cb,
+ &get_repr_ctx);
+ if (rc != 0) {
+ sfc_adapter_unlock(sa);
+ SFC_ASSERT(rc > 0);
+ return -rc;
+ }
+
+ sfc_adapter_unlock(sa);
+ return nb_repr;
+}
+
static const struct eth_dev_ops sfc_eth_dev_ops = {
.dev_configure = sfc_dev_configure,
.dev_start = sfc_dev_start,
@@ -2082,6 +2310,7 @@ static const struct eth_dev_ops sfc_eth_dev_ops = {
.xstats_get_by_id = sfc_xstats_get_by_id,
.xstats_get_names_by_id = sfc_xstats_get_names_by_id,
.pool_ops_supported = sfc_pool_ops_supported,
+ .representor_info_get = sfc_representor_info_get,
};
struct sfc_ethdev_init_data {
diff --git a/drivers/net/sfc/sfc_switch.c b/drivers/net/sfc/sfc_switch.c
index 5cd9b46d26..dc5b9a676c 100644
--- a/drivers/net/sfc/sfc_switch.c
+++ b/drivers/net/sfc/sfc_switch.c
@@ -151,6 +151,34 @@ sfc_mae_find_switch_domain_by_id(uint16_t switch_domain_id)
return NULL;
}
+int
+sfc_mae_switch_ports_iterate(uint16_t switch_domain_id,
+ sfc_mae_switch_port_iterator_cb *cb,
+ void *data)
+{
+ struct sfc_mae_switch_domain *domain;
+ struct sfc_mae_switch_port *port;
+
+ if (cb == NULL)
+ return EINVAL;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL) {
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return EINVAL;
+ }
+
+ TAILQ_FOREACH(port, &domain->ports, switch_domain_ports) {
+ cb(port->type, &port->ethdev_mport, port->ethdev_port_id,
+ &port->entity_mport, port->id, &port->data, data);
+ }
+
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+ return 0;
+}
+
/* This function expects to be called only when the lock is held */
static struct sfc_mae_switch_domain *
sfc_mae_find_switch_domain_by_hw_switch_id(const struct sfc_hw_switch_id *id)
@@ -280,19 +308,12 @@ sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
}
int
-sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
- efx_pcie_interface_t intf,
- int *controller)
+sfc_mae_switch_controller_from_mapping(const efx_pcie_interface_t *controllers,
+ size_t nb_controllers,
+ efx_pcie_interface_t intf,
+ int *controller)
{
- const efx_pcie_interface_t *controllers;
- size_t nb_controllers;
size_t i;
- int rc;
-
- rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
- &nb_controllers);
- if (rc != 0)
- return rc;
if (controllers == NULL)
return ENOENT;
@@ -307,6 +328,26 @@ sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
return ENOENT;
}
+int
+sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
+ efx_pcie_interface_t intf,
+ int *controller)
+{
+ const efx_pcie_interface_t *controllers;
+ size_t nb_controllers;
+ int rc;
+
+ rc = sfc_mae_switch_domain_controllers(switch_domain_id, &controllers,
+ &nb_controllers);
+ if (rc != 0)
+ return rc;
+
+ return sfc_mae_switch_controller_from_mapping(controllers,
+ nb_controllers,
+ intf,
+ controller);
+}
+
int sfc_mae_switch_domain_get_intf(uint16_t switch_domain_id,
int controller,
efx_pcie_interface_t *intf)
@@ -350,6 +391,30 @@ sfc_mae_find_switch_port_by_entity(const struct sfc_mae_switch_domain *domain,
return NULL;
}
+/* This function expects to be called only when the lock is held */
+static int
+sfc_mae_find_switch_port_id_by_entity(uint16_t switch_domain_id,
+ const efx_mport_sel_t *entity_mportp,
+ enum sfc_mae_switch_port_type type,
+ uint16_t *switch_port_id)
+{
+ struct sfc_mae_switch_domain *domain;
+ struct sfc_mae_switch_port *port;
+
+ SFC_ASSERT(rte_spinlock_is_locked(&sfc_mae_switch.lock));
+
+ domain = sfc_mae_find_switch_domain_by_id(switch_domain_id);
+ if (domain == NULL)
+ return EINVAL;
+
+ port = sfc_mae_find_switch_port_by_entity(domain, entity_mportp, type);
+ if (port == NULL)
+ return ENOENT;
+
+ *switch_port_id = port->id;
+ return 0;
+}
+
int
sfc_mae_assign_switch_port(uint16_t switch_domain_id,
const struct sfc_mae_switch_port_request *req,
@@ -455,3 +520,20 @@ sfc_mae_switch_port_by_ethdev(uint16_t switch_domain_id,
return rc;
}
+
+int
+sfc_mae_switch_port_id_by_entity(uint16_t switch_domain_id,
+ const efx_mport_sel_t *entity_mportp,
+ enum sfc_mae_switch_port_type type,
+ uint16_t *switch_port_id)
+{
+ int rc;
+
+ rte_spinlock_lock(&sfc_mae_switch.lock);
+ rc = sfc_mae_find_switch_port_id_by_entity(switch_domain_id,
+ entity_mportp, type,
+ switch_port_id);
+ rte_spinlock_unlock(&sfc_mae_switch.lock);
+
+ return rc;
+}
diff --git a/drivers/net/sfc/sfc_switch.h b/drivers/net/sfc/sfc_switch.h
index d187c6dbbb..a77d2e6f28 100644
--- a/drivers/net/sfc/sfc_switch.h
+++ b/drivers/net/sfc/sfc_switch.h
@@ -52,6 +52,19 @@ struct sfc_mae_switch_port_request {
union sfc_mae_switch_port_data port_data;
};
+typedef void (sfc_mae_switch_port_iterator_cb)(
+ enum sfc_mae_switch_port_type type,
+ const efx_mport_sel_t *ethdev_mportp,
+ uint16_t ethdev_port_id,
+ const efx_mport_sel_t *entity_mportp,
+ uint16_t switch_port_id,
+ union sfc_mae_switch_port_data *port_datap,
+ void *user_datap);
+
+int sfc_mae_switch_ports_iterate(uint16_t switch_domain_id,
+ sfc_mae_switch_port_iterator_cb *cb,
+ void *data);
+
int sfc_mae_assign_switch_domain(struct sfc_adapter *sa,
uint16_t *switch_domain_id);
@@ -63,6 +76,12 @@ int sfc_mae_switch_domain_map_controllers(uint16_t switch_domain_id,
efx_pcie_interface_t *controllers,
size_t nb_controllers);
+int sfc_mae_switch_controller_from_mapping(
+ const efx_pcie_interface_t *controllers,
+ size_t nb_controllers,
+ efx_pcie_interface_t intf,
+ int *controller);
+
int sfc_mae_switch_domain_get_controller(uint16_t switch_domain_id,
efx_pcie_interface_t intf,
int *controller);
@@ -79,6 +98,11 @@ int sfc_mae_switch_port_by_ethdev(uint16_t switch_domain_id,
uint16_t ethdev_port_id,
efx_mport_sel_t *mport_sel);
+int sfc_mae_switch_port_id_by_entity(uint16_t switch_domain_id,
+ const efx_mport_sel_t *entity_mportp,
+ enum sfc_mae_switch_port_type type,
+ uint16_t *switch_port_id);
+
#ifdef __cplusplus
}
#endif
--
2.30.2
* [dpdk-dev] [PATCH v2 38/38] net/sfc: update comment about representor support
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (36 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 37/38] net/sfc: implement the representor info API Andrew Rybchenko
@ 2021-10-11 14:48 ` Andrew Rybchenko
2021-10-12 16:45 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Ferruh Yigit
38 siblings, 0 replies; 79+ messages in thread
From: Andrew Rybchenko @ 2021-10-11 14:48 UTC (permalink / raw)
To: dev; +Cc: Viacheslav Galaktionov, stable, Andy Moreton
From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
The representor support has been implemented to some extent, and the fact
that ethdev mport is equivalent to entity mport is by design.
Fixes: 1fb65e4dae8 ("net/sfc: support flow action port ID in transfer rules")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
drivers/net/sfc/sfc_mae.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 7be77054ab..fa60c948ca 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -228,10 +228,7 @@ sfc_mae_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "assign RTE switch port");
switch_port_request.type = SFC_MAE_SWITCH_PORT_INDEPENDENT;
switch_port_request.entity_mportp = &entity_mport;
- /*
- * As of now, the driver does not support representors, so
- * RTE ethdev MPORT simply matches that of the entity.
- */
+ /* RTE ethdev MPORT matches that of the entity for independent ports. */
switch_port_request.ethdev_mportp = &entity_mport;
switch_port_request.ethdev_port_id = sas->port_id;
rc = sfc_mae_assign_switch_port(mae->switch_domain_id,
--
2.30.2
* Re: [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
` (37 preceding siblings ...)
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 38/38] net/sfc: update comment about representor support Andrew Rybchenko
@ 2021-10-12 16:45 ` Ferruh Yigit
38 siblings, 0 replies; 79+ messages in thread
From: Ferruh Yigit @ 2021-10-12 16:45 UTC (permalink / raw)
To: Andrew Rybchenko; +Cc: dev
On 10/11/2021 3:48 PM, Andrew Rybchenko wrote:
> Support port representors on SN1000 SmartNICs including:
> - new syntax with controller, PF and VF specification
> - PF representors
> - two controllers: host and embedded SoC
>
> The patch series depends on [1] (including build dependency) since it
> provides representors info on admin PF only.
>
> [1] https://patches.dpdk.org/project/dpdk/list/?series=18373
>
> v2:
> - rebase on top of release callback prototype changes
> - improve switch mode auto-detection
>
> Andrew Rybchenko (2):
> common/sfc_efx/base: update MCDI headers
> common/sfc_efx/base: update EF100 registers definitions
>
> Igor Romanov (23):
> net/sfc: add switch mode device argument
> net/sfc: insert switchdev mode MAE rules
> common/sfc_efx/base: add an API to get mport ID by selector
> net/sfc: support EF100 Tx override prefix
> net/sfc: add representors proxy infrastructure
> net/sfc: reserve TxQ and RxQ for port representors
> net/sfc: move adapter state enum to separate header
> net/sfc: add port representors infrastructure
> common/sfc_efx/base: add filter ingress mport matching field
> common/sfc_efx/base: add API to get mport selector by ID
> common/sfc_efx/base: add mport alias MCDI wrappers
> net/sfc: add representor proxy port API
> net/sfc: implement representor queue setup and release
> net/sfc: implement representor RxQ start/stop
> net/sfc: implement representor TxQ start/stop
> net/sfc: implement port representor start and stop
> net/sfc: implement port representor link update
> net/sfc: support multiple device probe
> net/sfc: implement representor Tx routine
> net/sfc: use xword type for EF100 Rx prefix
> net/sfc: handle ingress m-port in EF100 Rx prefix
> net/sfc: implement representor Rx routine
> net/sfc: add simple port representor statistics
>
> Viacheslav Galaktionov (13):
> common/sfc_efx/base: allow creating invalid mport selectors
> net/sfc: free MAE lock once switch domain is assigned
> common/sfc_efx/base: add multi-host function M-port selector
> common/sfc_efx/base: retrieve function interfaces for VNICs
> common/sfc_efx/base: add a means to read MAE mport journal
> common/sfc_efx/base: allow getting VNIC MCDI client handles
> net/sfc: maintain controller to EFX interface mapping
> net/sfc: store PCI address for represented entities
> net/sfc: include controller and port in representor name
> net/sfc: support new representor parameter syntax
> net/sfc: use switch port ID as representor ID
> net/sfc: implement the representor info API
> net/sfc: update comment about representor support
>
Series applied to dpdk-next-net/main, thanks.
Thread overview: 79+ messages
2021-08-27 6:56 [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 02/38] common/sfc_efx/base: update EF100 registers definitions Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 03/38] net/sfc: add switch mode device argument Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 04/38] net/sfc: insert switchdev mode MAE rules Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 05/38] common/sfc_efx/base: add an API to get mport ID by selector Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 06/38] net/sfc: support EF100 Tx override prefix Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 07/38] net/sfc: add representors proxy infrastructure Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 08/38] net/sfc: reserve TxQ and RxQ for port representors Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 09/38] net/sfc: move adapter state enum to separate header Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 10/38] common/sfc_efx/base: allow creating invalid mport selectors Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 11/38] net/sfc: add port representors infrastructure Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 12/38] common/sfc_efx/base: add filter ingress mport matching field Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 13/38] common/sfc_efx/base: add API to get mport selector by ID Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 14/38] common/sfc_efx/base: add mport alias MCDI wrappers Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 15/38] net/sfc: add representor proxy port API Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 16/38] net/sfc: implement representor queue setup and release Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 17/38] net/sfc: implement representor RxQ start/stop Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 18/38] net/sfc: implement representor TxQ start/stop Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 19/38] net/sfc: implement port representor start and stop Andrew Rybchenko
2021-08-27 6:56 ` [dpdk-dev] [PATCH 20/38] net/sfc: implement port representor link update Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 21/38] net/sfc: support multiple device probe Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 22/38] net/sfc: implement representor Tx routine Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 23/38] net/sfc: use xword type for EF100 Rx prefix Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 24/38] net/sfc: handle ingress m-port in " Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 25/38] net/sfc: implement representor Rx routine Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 26/38] net/sfc: add simple port representor statistics Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 27/38] net/sfc: free MAE lock once switch domain is assigned Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 28/38] common/sfc_efx/base: add multi-host function M-port selector Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 30/38] common/sfc_efx/base: add a means to read MAE mport journal Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 32/38] net/sfc: maintain controller to EFX interface mapping Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 33/38] net/sfc: store PCI address for represented entities Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 34/38] net/sfc: include controller and port in representor name Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 35/38] net/sfc: support new representor parameter syntax Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 36/38] net/sfc: use switch port ID as representor ID Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 37/38] net/sfc: implement the representor info API Andrew Rybchenko
2021-08-27 6:57 ` [dpdk-dev] [PATCH 38/38] net/sfc: update comment about representor support Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 02/38] common/sfc_efx/base: update EF100 registers definitions Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 03/38] net/sfc: add switch mode device argument Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 04/38] net/sfc: insert switchdev mode MAE rules Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 05/38] common/sfc_efx/base: add an API to get mport ID by selector Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 06/38] net/sfc: support EF100 Tx override prefix Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 07/38] net/sfc: add representors proxy infrastructure Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 08/38] net/sfc: reserve TxQ and RxQ for port representors Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 09/38] net/sfc: move adapter state enum to separate header Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 10/38] common/sfc_efx/base: allow creating invalid mport selectors Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 11/38] net/sfc: add port representors infrastructure Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 12/38] common/sfc_efx/base: add filter ingress mport matching field Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 13/38] common/sfc_efx/base: add API to get mport selector by ID Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 14/38] common/sfc_efx/base: add mport alias MCDI wrappers Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 15/38] net/sfc: add representor proxy port API Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 16/38] net/sfc: implement representor queue setup and release Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 17/38] net/sfc: implement representor RxQ start/stop Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 18/38] net/sfc: implement representor TxQ start/stop Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 19/38] net/sfc: implement port representor start and stop Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 20/38] net/sfc: implement port representor link update Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 21/38] net/sfc: support multiple device probe Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 22/38] net/sfc: implement representor Tx routine Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 23/38] net/sfc: use xword type for EF100 Rx prefix Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 24/38] net/sfc: handle ingress m-port in " Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 25/38] net/sfc: implement representor Rx routine Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 26/38] net/sfc: add simple port representor statistics Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 27/38] net/sfc: free MAE lock once switch domain is assigned Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 28/38] common/sfc_efx/base: add multi-host function M-port selector Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 29/38] common/sfc_efx/base: retrieve function interfaces for VNICs Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 30/38] common/sfc_efx/base: add a means to read MAE mport journal Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 31/38] common/sfc_efx/base: allow getting VNIC MCDI client handles Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 32/38] net/sfc: maintain controller to EFX interface mapping Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 33/38] net/sfc: store PCI address for represented entities Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 34/38] net/sfc: include controller and port in representor name Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 35/38] net/sfc: support new representor parameter syntax Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 36/38] net/sfc: use switch port ID as representor ID Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 37/38] net/sfc: implement the representor info API Andrew Rybchenko
2021-10-11 14:48 ` [dpdk-dev] [PATCH v2 38/38] net/sfc: update comment about representor support Andrew Rybchenko
2021-10-12 16:45 ` [dpdk-dev] [PATCH v2 00/38] net/sfc: support port representors Ferruh Yigit