* [RFC 0/9] add support for idpf PMD in DPDK
@ 2022-05-07 7:07 Junfeng Guo
2022-05-07 7:07 ` [RFC 1/9] net/idpf/base: introduce base code Junfeng Guo
` (8 more replies)
0 siblings, 9 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
This is a draft of the idpf (Infrastructure Data Path Function) PMD
in DPDK for the Intel device with device ID 0x1452.
Junfeng Guo (9):
net/idpf/base: introduce base code
net/idpf/base: add OS specific implementation
net/idpf: support device initialization
net/idpf: support queue ops
net/idpf: support getting device information
net/idpf: support packet type getting
net/idpf: support link update
net/idpf: support basic Rx/Tx
net/idpf: support RSS
drivers/net/idpf/base/iecm_alloc.h | 22 +
drivers/net/idpf/base/iecm_common.c | 359 +++
drivers/net/idpf/base/iecm_controlq.c | 662 ++++
drivers/net/idpf/base/iecm_controlq.h | 214 ++
drivers/net/idpf/base/iecm_controlq_api.h | 227 ++
drivers/net/idpf/base/iecm_controlq_setup.c | 179 ++
drivers/net/idpf/base/iecm_devids.h | 17 +
drivers/net/idpf/base/iecm_lan_pf_regs.h | 134 +
drivers/net/idpf/base/iecm_lan_txrx.h | 428 +++
drivers/net/idpf/base/iecm_lan_vf_regs.h | 114 +
drivers/net/idpf/base/iecm_osdep.h | 365 +++
drivers/net/idpf/base/iecm_prototype.h | 45 +
drivers/net/idpf/base/iecm_type.h | 106 +
drivers/net/idpf/base/meson.build | 27 +
drivers/net/idpf/base/siov_regs.h | 41 +
drivers/net/idpf/base/virtchnl.h | 2743 +++++++++++++++++
drivers/net/idpf/base/virtchnl2.h | 1411 +++++++++
drivers/net/idpf/base/virtchnl2_lan_desc.h | 603 ++++
drivers/net/idpf/base/virtchnl_inline_ipsec.h | 567 ++++
drivers/net/idpf/idpf_ethdev.c | 1030 +++++++
drivers/net/idpf/idpf_ethdev.h | 223 ++
drivers/net/idpf/idpf_logs.h | 38 +
drivers/net/idpf/idpf_rxtx.c | 2180 +++++++++++++
drivers/net/idpf/idpf_rxtx.h | 203 ++
drivers/net/idpf/idpf_vchnl.c | 900 ++++++
drivers/net/idpf/meson.build | 19 +
drivers/net/idpf/version.map | 3 +
drivers/net/meson.build | 1 +
28 files changed, 12861 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_alloc.h
create mode 100644 drivers/net/idpf/base/iecm_common.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
create mode 100644 drivers/net/idpf/base/iecm_devids.h
create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_osdep.h
create mode 100644 drivers/net/idpf/base/iecm_prototype.h
create mode 100644 drivers/net/idpf/base/iecm_type.h
create mode 100644 drivers/net/idpf/base/meson.build
create mode 100644 drivers/net/idpf/base/siov_regs.h
create mode 100644 drivers/net/idpf/base/virtchnl.h
create mode 100644 drivers/net/idpf/base/virtchnl2.h
create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
create mode 100644 drivers/net/idpf/idpf_ethdev.c
create mode 100644 drivers/net/idpf/idpf_ethdev.h
create mode 100644 drivers/net/idpf/idpf_logs.h
create mode 100644 drivers/net/idpf/idpf_rxtx.c
create mode 100644 drivers/net/idpf/idpf_rxtx.h
create mode 100644 drivers/net/idpf/idpf_vchnl.c
create mode 100644 drivers/net/idpf/meson.build
create mode 100644 drivers/net/idpf/version.map
--
2.25.1
* [RFC 1/9] net/idpf/base: introduce base code
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-07 7:07 ` [RFC 2/9] net/idpf/base: add OS specific implementation Junfeng Guo
` (7 subsequent siblings)
8 siblings, 1 reply; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Introduce base code for IDPF (Infrastructure Data Path Function) PMD.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/base/iecm_alloc.h | 22 +
drivers/net/idpf/base/iecm_common.c | 359 +++
drivers/net/idpf/base/iecm_controlq.c | 662 ++++
drivers/net/idpf/base/iecm_controlq.h | 214 ++
drivers/net/idpf/base/iecm_controlq_api.h | 227 ++
drivers/net/idpf/base/iecm_controlq_setup.c | 179 ++
drivers/net/idpf/base/iecm_devids.h | 17 +
drivers/net/idpf/base/iecm_lan_pf_regs.h | 134 +
drivers/net/idpf/base/iecm_lan_txrx.h | 428 +++
drivers/net/idpf/base/iecm_lan_vf_regs.h | 114 +
drivers/net/idpf/base/iecm_prototype.h | 45 +
drivers/net/idpf/base/iecm_type.h | 106 +
drivers/net/idpf/base/meson.build | 27 +
drivers/net/idpf/base/siov_regs.h | 41 +
drivers/net/idpf/base/virtchnl.h | 2743 +++++++++++++++++
drivers/net/idpf/base/virtchnl2.h | 1411 +++++++++
drivers/net/idpf/base/virtchnl2_lan_desc.h | 603 ++++
drivers/net/idpf/base/virtchnl_inline_ipsec.h | 567 ++++
18 files changed, 7899 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_alloc.h
create mode 100644 drivers/net/idpf/base/iecm_common.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
create mode 100644 drivers/net/idpf/base/iecm_devids.h
create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_prototype.h
create mode 100644 drivers/net/idpf/base/iecm_type.h
create mode 100644 drivers/net/idpf/base/meson.build
create mode 100644 drivers/net/idpf/base/siov_regs.h
create mode 100644 drivers/net/idpf/base/virtchnl.h
create mode 100644 drivers/net/idpf/base/virtchnl2.h
create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
diff --git a/drivers/net/idpf/base/iecm_alloc.h b/drivers/net/idpf/base/iecm_alloc.h
new file mode 100644
index 0000000000..7ea219c784
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_ALLOC_H_
+#define _IECM_ALLOC_H_
+
+/* Memory types */
+enum iecm_memset_type {
+ IECM_NONDMA_MEM = 0,
+ IECM_DMA_MEM
+};
+
+/* Memcpy types */
+enum iecm_memcpy_type {
+ IECM_NONDMA_TO_NONDMA = 0,
+ IECM_NONDMA_TO_DMA,
+ IECM_DMA_TO_DMA,
+ IECM_DMA_TO_NONDMA
+};
+
+#endif /* _IECM_ALLOC_H_ */
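
A minimal sketch (not part of the patch) of how these copy/set type values
are passed to the iecm_memcpy()/iecm_memset() wrappers added in patch 2
(iecm_osdep.h), so the OS layer knows whether either side of the operation
is DMA-backed; it assumes a DMA region already set up via
iecm_alloc_dma_mem():

	struct iecm_dma_mem ring;	/* DMA-backed memory, ring.va valid */
	u8 shadow[64];			/* plain host memory */

	/* host buffer -> DMA buffer: last argument names the direction */
	iecm_memcpy(ring.va, shadow, sizeof(shadow), IECM_NONDMA_TO_DMA);

	/* clearing DMA-backed memory uses the memset type instead */
	iecm_memset(ring.va, 0, sizeof(shadow), IECM_DMA_MEM);
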
diff --git a/drivers/net/idpf/base/iecm_common.c b/drivers/net/idpf/base/iecm_common.c
new file mode 100644
index 0000000000..418fd99298
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_common.c
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#include "iecm_type.h"
+#include "iecm_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * iecm_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+int iecm_set_mac_type(struct iecm_hw *hw)
+{
+ int status = IECM_SUCCESS;
+
+ DEBUGFUNC("iecm_set_mac_type\n");
+
+ if (hw->vendor_id == IECM_INTEL_VENDOR_ID) {
+ switch (hw->device_id) {
+ case IECM_DEV_ID_PF:
+ hw->mac.type = IECM_MAC_PF;
+ break;
+ default:
+ hw->mac.type = IECM_MAC_GENERIC;
+ break;
+ }
+ } else {
+ status = IECM_ERR_DEVICE_NOT_SUPPORTED;
+ }
+
+ DEBUGOUT2("iecm_set_mac_type found mac: %d, returns: %d\n",
+ hw->mac.type, status);
+ return status;
+}
+
+/**
+ * iecm_init_hw - main initialization routine
+ * @hw: pointer to the hardware structure
+ * @ctlq_size: struct to pass ctlq size data
+ */
+int iecm_init_hw(struct iecm_hw *hw, struct iecm_ctlq_size ctlq_size)
+{
+ struct iecm_ctlq_create_info *q_info;
+ int status = IECM_SUCCESS;
+ struct iecm_ctlq_info *cq = NULL;
+
+ /* Setup initial control queues */
+ q_info = (struct iecm_ctlq_create_info *)
+ iecm_calloc(hw, 2, sizeof(struct iecm_ctlq_create_info));
+ if (!q_info)
+ return IECM_ERR_NO_MEMORY;
+
+ q_info[0].type = IECM_CTLQ_TYPE_MAILBOX_TX;
+ q_info[0].buf_size = ctlq_size.asq_buf_size;
+ q_info[0].len = ctlq_size.asq_ring_size;
+ q_info[0].id = -1; /* default queue */
+
+ if (hw->mac.type == IECM_MAC_PF) {
+ q_info[0].reg.head = PF_FW_ATQH;
+ q_info[0].reg.tail = PF_FW_ATQT;
+ q_info[0].reg.len = PF_FW_ATQLEN;
+ q_info[0].reg.bah = PF_FW_ATQBAH;
+ q_info[0].reg.bal = PF_FW_ATQBAL;
+ q_info[0].reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
+ q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
+ q_info[0].reg.head_mask = PF_FW_ATQH_ATQH_M;
+ } else {
+ q_info[0].reg.head = VF_ATQH;
+ q_info[0].reg.tail = VF_ATQT;
+ q_info[0].reg.len = VF_ATQLEN;
+ q_info[0].reg.bah = VF_ATQBAH;
+ q_info[0].reg.bal = VF_ATQBAL;
+ q_info[0].reg.len_mask = VF_ATQLEN_ATQLEN_M;
+ q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
+ q_info[0].reg.head_mask = VF_ATQH_ATQH_M;
+ }
+
+ q_info[1].type = IECM_CTLQ_TYPE_MAILBOX_RX;
+ q_info[1].buf_size = ctlq_size.arq_buf_size;
+ q_info[1].len = ctlq_size.arq_ring_size;
+ q_info[1].id = -1; /* default queue */
+
+ if (hw->mac.type == IECM_MAC_PF) {
+ q_info[1].reg.head = PF_FW_ARQH;
+ q_info[1].reg.tail = PF_FW_ARQT;
+ q_info[1].reg.len = PF_FW_ARQLEN;
+ q_info[1].reg.bah = PF_FW_ARQBAH;
+ q_info[1].reg.bal = PF_FW_ARQBAL;
+ q_info[1].reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
+ q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
+ q_info[1].reg.head_mask = PF_FW_ARQH_ARQH_M;
+ } else {
+ q_info[1].reg.head = VF_ARQH;
+ q_info[1].reg.tail = VF_ARQT;
+ q_info[1].reg.len = VF_ARQLEN;
+ q_info[1].reg.bah = VF_ARQBAH;
+ q_info[1].reg.bal = VF_ARQBAL;
+ q_info[1].reg.len_mask = VF_ARQLEN_ARQLEN_M;
+ q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
+ q_info[1].reg.head_mask = VF_ARQH_ARQH_M;
+ }
+
+ status = iecm_ctlq_init(hw, 2, q_info);
+ if (status != IECM_SUCCESS) {
+ /* TODO return error */
+ iecm_free(hw, q_info);
+ return status;
+ }
+
+ LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, iecm_ctlq_info, cq_list) {
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ hw->asq = cq;
+ else if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX)
+ hw->arq = cq;
+ }
+
+ /* TODO hardcode a mac addr for now */
+ hw->mac.addr[0] = 0x00;
+ hw->mac.addr[1] = 0x00;
+ hw->mac.addr[2] = 0x00;
+ hw->mac.addr[3] = 0x00;
+ hw->mac.addr[4] = 0x03;
+ hw->mac.addr[5] = 0x14;
+
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_send_msg_to_cp
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Send message to CP. By default, this message is sent asynchronously,
+ * i.e. iecm_ctlq_send() does not wait for completion before returning.
+ */
+int iecm_send_msg_to_cp(struct iecm_hw *hw, enum virtchnl_ops v_opcode,
+ int v_retval, u8 *msg, u16 msglen)
+{
+ struct iecm_ctlq_msg ctlq_msg = { 0 };
+ struct iecm_dma_mem dma_mem = { 0 };
+ int status;
+
+ ctlq_msg.opcode = iecm_mbq_opc_send_msg_to_pf;
+ ctlq_msg.func_id = 0;
+ ctlq_msg.data_len = msglen;
+ ctlq_msg.cookie.mbx.chnl_retval = v_retval;
+ ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
+
+ if (msglen > 0) {
+ dma_mem.va = (struct iecm_dma_mem *)
+ iecm_alloc_dma_mem(hw, &dma_mem, msglen);
+ if (!dma_mem.va)
+ return IECM_ERR_NO_MEMORY;
+
+ iecm_memcpy(dma_mem.va, msg, msglen, IECM_NONDMA_TO_DMA);
+ ctlq_msg.ctx.indirect.payload = &dma_mem;
+ }
+ status = iecm_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
+
+ if (dma_mem.va)
+ iecm_free_dma_mem(hw, &dma_mem);
+
+ return status;
+}
+
+/**
+ * iecm_asq_done - check if FW has processed the Admin Send Queue
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool iecm_asq_done(struct iecm_hw *hw)
+{
+ /* AQ designers suggest use of head for better
+ * timing reliability than DD bit
+ */
+ return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
+}
+
+/**
+ * iecm_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the queue is enabled, false otherwise.
+ */
+bool iecm_check_asq_alive(struct iecm_hw *hw)
+{
+ if (hw->asq->reg.len)
+ return !!(rd32(hw, hw->asq->reg.len) &
+ PF_FW_ATQLEN_ATQENABLE_M);
+
+ return false;
+}
+
+/**
+ * iecm_clean_arq_element
+ * @hw: pointer to the hw struct
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'
+ */
+int iecm_clean_arq_element(struct iecm_hw *hw,
+ struct iecm_arq_event_info *e, u16 *pending)
+{
+ struct iecm_ctlq_msg msg = { 0 };
+ int status;
+
+ *pending = 1;
+
+ status = iecm_ctlq_recv(hw->arq, pending, &msg);
+
+ /* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
+ e->desc.opcode = msg.opcode;
+ e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
+ e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
+ e->desc.ret_val = msg.status;
+ e->desc.datalen = msg.data_len;
+ if (msg.data_len > 0) {
+ e->buf_len = msg.data_len;
+ iecm_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg.data_len,
+ IECM_DMA_TO_NONDMA);
+ }
+ return status;
+}
+
+/**
+ * iecm_deinit_hw - shutdown routine
+ * @hw: pointer to the hardware structure
+ */
+int iecm_deinit_hw(struct iecm_hw *hw)
+{
+ hw->asq = NULL;
+ hw->arq = NULL;
+
+ return iecm_ctlq_deinit(hw);
+}
+
+/**
+ * iecm_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a RESET message to the CPF. Does not wait for response from CPF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the control queue should be shut down and (optionally) reinitialized.
+ */
+int iecm_reset(struct iecm_hw *hw)
+{
+ return iecm_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
+ IECM_SUCCESS, NULL, 0);
+}
+
+/**
+ * iecm_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ */
+STATIC int iecm_get_set_rss_lut(struct iecm_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size,
+ bool set)
+{
+ /* TODO fill out command */
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+int iecm_get_rss_lut(struct iecm_hw *hw, u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size)
+{
+ return iecm_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
+}
+
+/**
+ * iecm_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+int iecm_set_rss_lut(struct iecm_hw *hw, u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size)
+{
+ return iecm_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * iecm_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key per VSI
+ */
+STATIC int iecm_get_set_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key,
+ bool set)
+{
+ /* TODO fill out command */
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * get the RSS key per VSI
+ */
+int iecm_get_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key)
+{
+ return iecm_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * iecm_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+int iecm_set_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key)
+{
+ return iecm_get_set_rss_key(hw, vsi_id, key, true);
+}
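
A minimal usage sketch (not part of the patch) of how a driver init path
might drive the helpers above. It assumes the struct iecm_ctlq_size and
struct iecm_arq_event_info layouts from iecm_type.h (fields inferred from
their use in this file) and the usual virtchnl version-negotiation
definitions from virtchnl.h; the ring depth and buffer handling are
arbitrary choices for illustration:

	struct iecm_ctlq_size ctlq_size = {
		.asq_ring_size = 64,
		.asq_buf_size = IECM_CTLQ_MAX_BUF_LEN,
		.arq_ring_size = 64,
		.arq_buf_size = IECM_CTLQ_MAX_BUF_LEN,
	};
	struct virtchnl_version_info ver = {
		.major = VIRTCHNL_VERSION_MAJOR,
		.minor = VIRTCHNL_VERSION_MINOR,
	};
	struct iecm_arq_event_info event = { 0 };
	u8 resp[IECM_CTLQ_MAX_BUF_LEN];
	u16 pending = 0;
	int err;

	/* creates the default mailbox pair and sets hw->asq / hw->arq */
	err = iecm_init_hw(hw, ctlq_size);
	if (err)
		return err;

	/* fire-and-forget send; completion is not awaited here */
	err = iecm_send_msg_to_cp(hw, VIRTCHNL_OP_VERSION, IECM_SUCCESS,
				  (u8 *)&ver, sizeof(ver));

	/* later, poll the mailbox RX queue for the CP's reply */
	event.msg_buf = resp;
	err = iecm_clean_arq_element(hw, &event, &pending);

	/* on teardown */
	iecm_deinit_hw(hw);
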
diff --git a/drivers/net/idpf/base/iecm_controlq.c b/drivers/net/idpf/base/iecm_controlq.c
new file mode 100644
index 0000000000..3a877bbf74
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq.c
@@ -0,0 +1,662 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#include "iecm_controlq.h"
+
+/**
+ * iecm_ctlq_setup_regs - initialize control queue registers
+ * @cq: pointer to the specific control queue
+ * @q_create_info: structs containing info for each queue to be initialized
+ */
+static void
+iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq,
+ struct iecm_ctlq_create_info *q_create_info)
+{
+ /* set head and tail registers in our local struct */
+ cq->reg.head = q_create_info->reg.head;
+ cq->reg.tail = q_create_info->reg.tail;
+ cq->reg.len = q_create_info->reg.len;
+ cq->reg.bah = q_create_info->reg.bah;
+ cq->reg.bal = q_create_info->reg.bal;
+ cq->reg.len_mask = q_create_info->reg.len_mask;
+ cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask;
+ cq->reg.head_mask = q_create_info->reg.head_mask;
+}
+
+/**
+ * iecm_ctlq_init_regs - Initialize control queue registers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ * @is_rxq: true if receive control queue, false otherwise
+ *
+ * Initialize registers. The caller is expected to have already initialized the
+ * descriptor ring memory and buffer memory
+ */
+static void iecm_ctlq_init_regs(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ bool is_rxq)
+{
+ /* Update tail to post pre-allocated buffers for rx queues */
+ if (is_rxq)
+ wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));
+
+	/* For non-Mailbox control queues only TAIL needs to be set */
+ if (cq->q_id != -1)
+ return;
+
+ /* Clear Head for both send or receive */
+ wr32(hw, cq->reg.head, 0);
+
+ /* set starting point */
+ wr32(hw, cq->reg.bal, IECM_LO_DWORD(cq->desc_ring.pa));
+ wr32(hw, cq->reg.bah, IECM_HI_DWORD(cq->desc_ring.pa));
+ wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));
+}
+
+/**
+ * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
+ * @cq: pointer to the specific Control queue
+ *
+ * Record the address of the receive queue DMA buffers in the descriptors.
+ * The buffers must have been previously allocated.
+ */
+static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
+{
+ int i = 0;
+
+ for (i = 0; i < cq->ring_size; i++) {
+ struct iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i);
+ struct iecm_dma_mem *bi = cq->bi.rx_buff[i];
+
+ /* No buffer to post to descriptor, continue */
+ if (!bi)
+ continue;
+
+ desc->flags =
+ CPU_TO_LE16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+ desc->opcode = 0;
+ desc->datalen = (__le16)CPU_TO_LE16(bi->size);
+ desc->ret_val = 0;
+ desc->cookie_high = 0;
+ desc->cookie_low = 0;
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(bi->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(bi->pa));
+ desc->params.indirect.param0 = 0;
+ desc->params.indirect.param1 = 0;
+ }
+}
+
+/**
+ * iecm_ctlq_shutdown - shutdown the CQ
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for any control queue
+ */
+static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ iecm_acquire_lock(&cq->cq_lock);
+
+ if (!cq->ring_size)
+ goto shutdown_sq_out;
+
+
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_dealloc_ring_res(hw, cq);
+
+ /* Set ring_size to 0 to indicate uninitialized queue */
+ cq->ring_size = 0;
+
+shutdown_sq_out:
+ iecm_release_lock(&cq->cq_lock);
+ iecm_destroy_lock(&cq->cq_lock);
+}
+
+/**
+ * iecm_ctlq_add - add one control queue
+ * @hw: pointer to hardware struct
+ * @qinfo: info for queue to be created
+ * @cq_out: (output) double pointer to control queue to be created
+ *
+ * Allocate and initialize a control queue and add it to the control queue list.
+ * The cq parameter will be allocated/initialized and passed back to the caller
+ * if no errors occur.
+ *
+ * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add
+ */
+int iecm_ctlq_add(struct iecm_hw *hw,
+ struct iecm_ctlq_create_info *qinfo,
+ struct iecm_ctlq_info **cq_out)
+{
+ bool is_rxq = false;
+ int status = IECM_SUCCESS;
+
+ if (!qinfo->len || !qinfo->buf_size ||
+ qinfo->len > IECM_CTLQ_MAX_RING_SIZE ||
+ qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN)
+ return IECM_ERR_CFG;
+
+ *cq_out = (struct iecm_ctlq_info *)
+ iecm_calloc(hw, 1, sizeof(struct iecm_ctlq_info));
+ if (!(*cq_out))
+ return IECM_ERR_NO_MEMORY;
+
+ (*cq_out)->cq_type = qinfo->type;
+ (*cq_out)->q_id = qinfo->id;
+ (*cq_out)->buf_size = qinfo->buf_size;
+ (*cq_out)->ring_size = qinfo->len;
+
+ (*cq_out)->next_to_use = 0;
+ (*cq_out)->next_to_clean = 0;
+ (*cq_out)->next_to_post = (*cq_out)->ring_size - 1;
+
+ switch (qinfo->type) {
+ case IECM_CTLQ_TYPE_MAILBOX_RX:
+ is_rxq = true;
+ fallthrough;
+ case IECM_CTLQ_TYPE_MAILBOX_TX:
+ status = iecm_ctlq_alloc_ring_res(hw, *cq_out);
+ break;
+ default:
+ status = IECM_ERR_PARAM;
+ break;
+ }
+
+ if (status)
+ goto init_free_q;
+
+ if (is_rxq) {
+ iecm_ctlq_init_rxq_bufs(*cq_out);
+ } else {
+ /* Allocate the array of msg pointers for TX queues */
+ (*cq_out)->bi.tx_msg = (struct iecm_ctlq_msg **)
+ iecm_calloc(hw, qinfo->len,
+ sizeof(struct iecm_ctlq_msg *));
+ if (!(*cq_out)->bi.tx_msg) {
+ status = IECM_ERR_NO_MEMORY;
+ goto init_dealloc_q_mem;
+ }
+ }
+
+ iecm_ctlq_setup_regs(*cq_out, qinfo);
+
+ iecm_ctlq_init_regs(hw, *cq_out, is_rxq);
+
+ iecm_init_lock(&(*cq_out)->cq_lock);
+
+ LIST_INSERT_HEAD(&hw->cq_list_head, (*cq_out), cq_list);
+
+ return status;
+
+init_dealloc_q_mem:
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_dealloc_ring_res(hw, *cq_out);
+init_free_q:
+ iecm_free(hw, *cq_out);
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_remove - deallocate and remove specified control queue
+ * @hw: pointer to hardware struct
+ * @cq: pointer to control queue to be removed
+ */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ LIST_REMOVE(cq, cq_list);
+ iecm_ctlq_shutdown(hw, cq);
+ iecm_free(hw, cq);
+}
+
+/**
+ * iecm_ctlq_init - main initialization routine for all control queues
+ * @hw: pointer to hardware struct
+ * @num_q: number of queues to initialize
+ * @q_info: array of structs containing info for each queue to be initialized
+ *
+ * This initializes any number and any type of control queues. This is an all
+ * or nothing routine; if one fails, all previously allocated queues will be
+ * destroyed. This must be called prior to using the individual add/remove
+ * APIs.
+ */
+int iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+ struct iecm_ctlq_create_info *q_info)
+{
+ struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+ int ret_code = IECM_SUCCESS;
+ int i = 0;
+
+ LIST_INIT(&hw->cq_list_head);
+
+ for (i = 0; i < num_q; i++) {
+ struct iecm_ctlq_create_info *qinfo = q_info + i;
+
+ ret_code = iecm_ctlq_add(hw, qinfo, &cq);
+ if (ret_code)
+ goto init_destroy_qs;
+ }
+
+ return ret_code;
+
+init_destroy_qs:
+ LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
+ iecm_ctlq_info, cq_list)
+ iecm_ctlq_remove(hw, cq);
+
+ return ret_code;
+}
+
+/**
+ * iecm_ctlq_deinit - destroy all control queues
+ * @hw: pointer to hw struct
+ */
+int iecm_ctlq_deinit(struct iecm_hw *hw)
+{
+ struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+ int ret_code = IECM_SUCCESS;
+
+ LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
+ iecm_ctlq_info, cq_list)
+ iecm_ctlq_remove(hw, cq);
+
+ return ret_code;
+}
+
+/**
+ * iecm_ctlq_send - send command to Control Queue (CTQ)
+ * @hw: pointer to hw struct
+ * @cq: handle to control queue struct to send on
+ * @num_q_msg: number of messages to send on control queue
+ * @q_msg: pointer to array of queue messages to be sent
+ *
+ * The caller is expected to allocate DMAable buffers and pass them to the
+ * send routine via the q_msg struct / control queue specific data struct.
+ * The control queue will hold a reference to each send message until
+ * the completion for that message has been cleaned.
+ */
+int iecm_ctlq_send(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ u16 num_q_msg, struct iecm_ctlq_msg q_msg[])
+{
+ struct iecm_ctlq_desc *desc;
+ int num_desc_avail = 0;
+ int status = IECM_SUCCESS;
+ int i = 0;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ /* Ensure there are enough descriptors to send all messages */
+ num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq);
+ if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
+ status = IECM_ERR_CTLQ_FULL;
+ goto sq_send_command_out;
+ }
+
+ for (i = 0; i < num_q_msg; i++) {
+ struct iecm_ctlq_msg *msg = &q_msg[i];
+ u64 msg_cookie;
+
+ desc = IECM_CTLQ_DESC(cq, cq->next_to_use);
+
+ desc->opcode = CPU_TO_LE16(msg->opcode);
+ desc->pfid_vfid = CPU_TO_LE16(msg->func_id);
+
+ msg_cookie = *(u64 *)&msg->cookie;
+ desc->cookie_high =
+ CPU_TO_LE32(IECM_HI_DWORD(msg_cookie));
+ desc->cookie_low =
+ CPU_TO_LE32(IECM_LO_DWORD(msg_cookie));
+
+ desc->flags = CPU_TO_LE16((msg->host_id & IECM_HOST_ID_MASK) <<
+ IECM_CTLQ_FLAG_HOST_ID_S);
+ if (msg->data_len) {
+ struct iecm_dma_mem *buff = msg->ctx.indirect.payload;
+
+ desc->datalen |= CPU_TO_LE16(msg->data_len);
+ desc->flags |= CPU_TO_LE16(IECM_CTLQ_FLAG_BUF);
+ desc->flags |= CPU_TO_LE16(IECM_CTLQ_FLAG_RD);
+
+ /* Update the address values in the desc with the pa
+ * value for respective buffer
+ */
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(buff->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(buff->pa));
+
+ iecm_memcpy(&desc->params, msg->ctx.indirect.context,
+ IECM_INDIRECT_CTX_SIZE, IECM_NONDMA_TO_DMA);
+ } else {
+ iecm_memcpy(&desc->params, msg->ctx.direct,
+ IECM_DIRECT_CTX_SIZE, IECM_NONDMA_TO_DMA);
+ }
+
+ /* Store buffer info */
+ cq->bi.tx_msg[cq->next_to_use] = msg;
+
+ (cq->next_to_use)++;
+ if (cq->next_to_use == cq->ring_size)
+ cq->next_to_use = 0;
+ }
+
+ /* Force memory write to complete before letting hardware
+ * know that there are new descriptors to fetch.
+ */
+ iecm_wmb();
+
+ wr32(hw, cq->reg.tail, cq->next_to_use);
+
+sq_send_command_out:
+ iecm_release_lock(&cq->cq_lock);
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors. The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+int iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count,
+ struct iecm_ctlq_msg *msg_status[])
+{
+ struct iecm_ctlq_desc *desc;
+ u16 i = 0, num_to_clean;
+ u16 ntc, desc_err;
+ int ret = IECM_SUCCESS;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ if (*clean_count == 0)
+ return IECM_SUCCESS;
+ if (*clean_count > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *clean_count;
+
+ for (i = 0; i < num_to_clean; i++) {
+ /* Fetch next descriptor and check if marked as done */
+ desc = IECM_CTLQ_DESC(cq, ntc);
+ if (!(LE16_TO_CPU(desc->flags) & IECM_CTLQ_FLAG_DD))
+ break;
+
+ desc_err = LE16_TO_CPU(desc->ret_val);
+ if (desc_err) {
+ /* strip off FW internal code */
+ desc_err &= 0xff;
+ }
+
+ msg_status[i] = cq->bi.tx_msg[ntc];
+ msg_status[i]->status = desc_err;
+
+ cq->bi.tx_msg[ntc] = NULL;
+
+ /* Zero out any stale data */
+ iecm_memset(desc, 0, sizeof(*desc), IECM_DMA_MEM);
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ iecm_release_lock(&cq->cq_lock);
+
+ /* Return number of descriptors actually cleaned */
+ *clean_count = i;
+
+ return ret;
+}
+
+/**
+ * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue handle
+ * @buff_count: (input|output) input is number of buffers caller is trying to
+ * return; output is number of buffers that were not posted
+ * @buffs: array of pointers to dma mem structs to be given to hardware
+ *
+ * Caller uses this function to return DMA buffers to the descriptor ring after
+ * consuming them; buff_count will be the number of buffers.
+ *
+ * Note: this function needs to be called after a receive call even
+ * if there are no DMA buffers to be returned, i.e. buff_count = 0,
+ * buffs = NULL to support direct commands
+ */
+int iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ u16 *buff_count, struct iecm_dma_mem **buffs)
+{
+ struct iecm_ctlq_desc *desc;
+ u16 ntp = cq->next_to_post;
+ bool buffs_avail = false;
+ u16 tbp = ntp + 1;
+ int status = IECM_SUCCESS;
+ int i = 0;
+
+ if (*buff_count > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ if (*buff_count > 0)
+ buffs_avail = true;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ if (tbp == cq->next_to_clean)
+ /* Nothing to do */
+ goto post_buffs_out;
+
+ /* Post buffers for as many as provided or up until the last one used */
+ while (ntp != cq->next_to_clean) {
+ desc = IECM_CTLQ_DESC(cq, ntp);
+
+ if (cq->bi.rx_buff[ntp])
+ goto fill_desc;
+ if (!buffs_avail) {
+ /* If the caller hasn't given us any buffers or
+ * there are none left, search the ring itself
+ * for an available buffer to move to this
+ * entry starting at the next entry in the ring
+ */
+ tbp = ntp + 1;
+
+ /* Wrap ring if necessary */
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ while (tbp != cq->next_to_clean) {
+ if (cq->bi.rx_buff[tbp]) {
+ cq->bi.rx_buff[ntp] =
+ cq->bi.rx_buff[tbp];
+ cq->bi.rx_buff[tbp] = NULL;
+
+ /* Found a buffer, no need to
+ * search anymore
+ */
+ break;
+ }
+
+ /* Wrap ring if necessary */
+ tbp++;
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+ }
+
+ if (tbp == cq->next_to_clean)
+ goto post_buffs_out;
+ } else {
+ /* Give back pointer to DMA buffer */
+ cq->bi.rx_buff[ntp] = buffs[i];
+ i++;
+
+ if (i >= *buff_count)
+ buffs_avail = false;
+ }
+
+fill_desc:
+ desc->flags =
+ CPU_TO_LE16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+
+ /* Post buffers to descriptor */
+ desc->datalen = CPU_TO_LE16(cq->bi.rx_buff[ntp]->size);
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(cq->bi.rx_buff[ntp]->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(cq->bi.rx_buff[ntp]->pa));
+
+ ntp++;
+ if (ntp == cq->ring_size)
+ ntp = 0;
+ }
+
+post_buffs_out:
+ /* Only update tail if buffers were actually posted */
+ if (cq->next_to_post != ntp) {
+ if (ntp)
+ /* Update next_to_post to ntp - 1 since current ntp
+ * will not have a buffer
+ */
+ cq->next_to_post = ntp - 1;
+ else
+			/* Wrap to end of ring since current ntp is 0 */
+ cq->next_to_post = cq->ring_size - 1;
+
+ wr32(hw, cq->reg.tail, cq->next_to_post);
+ }
+
+ iecm_release_lock(&cq->cq_lock);
+
+ /* return the number of buffers that were not posted */
+ *buff_count = *buff_count - i;
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_recv - receive control queue message call back
+ * @cq: pointer to control queue handle to receive on
+ * @num_q_msg: (input|output) input number of messages that should be received;
+ * output number of messages actually received
+ * @q_msg: (output) array of received control queue messages on this q;
+ * needs to be pre-allocated by caller for as many messages as requested
+ *
+ * Called by interrupt handler or polling mechanism. Caller is expected
+ * to free buffers
+ */
+int iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg,
+ struct iecm_ctlq_msg *q_msg)
+{
+ u16 num_to_clean, ntc, ret_val, flags;
+ struct iecm_ctlq_desc *desc;
+ int ret_code = IECM_SUCCESS;
+ u16 i = 0;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ if (*num_q_msg == 0)
+ return IECM_SUCCESS;
+ else if (*num_q_msg > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ /* take the lock before we start messing with the ring */
+ iecm_acquire_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *num_q_msg;
+
+ for (i = 0; i < num_to_clean; i++) {
+ u64 msg_cookie;
+
+ /* Fetch next descriptor and check if marked as done */
+ desc = IECM_CTLQ_DESC(cq, ntc);
+ flags = LE16_TO_CPU(desc->flags);
+
+ if (!(flags & IECM_CTLQ_FLAG_DD))
+ break;
+
+ ret_val = LE16_TO_CPU(desc->ret_val);
+
+ q_msg[i].vmvf_type = (flags &
+ (IECM_CTLQ_FLAG_FTYPE_VM |
+ IECM_CTLQ_FLAG_FTYPE_PF)) >>
+ IECM_CTLQ_FLAG_FTYPE_S;
+
+ if (flags & IECM_CTLQ_FLAG_ERR)
+ ret_code = IECM_ERR_CTLQ_ERROR;
+
+ msg_cookie = (u64)LE32_TO_CPU(desc->cookie_high) << 32;
+ msg_cookie |= (u64)LE32_TO_CPU(desc->cookie_low);
+ iecm_memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64),
+ IECM_NONDMA_TO_NONDMA);
+
+ q_msg[i].opcode = LE16_TO_CPU(desc->opcode);
+ q_msg[i].data_len = LE16_TO_CPU(desc->datalen);
+ q_msg[i].status = ret_val;
+
+ if (desc->datalen) {
+ iecm_memcpy(q_msg[i].ctx.indirect.context,
+ &desc->params.indirect,
+ IECM_INDIRECT_CTX_SIZE,
+ IECM_DMA_TO_NONDMA);
+
+ /* Assign pointer to dma buffer to ctlq_msg array
+ * to be given to upper layer
+ */
+ q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
+
+ /* Zero out pointer to DMA buffer info;
+ * will be repopulated by post buffers API
+ */
+ cq->bi.rx_buff[ntc] = NULL;
+ } else {
+ iecm_memcpy(q_msg[i].ctx.direct,
+ desc->params.raw,
+ IECM_DIRECT_CTX_SIZE,
+ IECM_DMA_TO_NONDMA);
+ }
+
+ /* Zero out stale data in descriptor */
+ iecm_memset(desc, 0, sizeof(struct iecm_ctlq_desc),
+ IECM_DMA_MEM);
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+	}
+
+ cq->next_to_clean = ntc;
+
+ iecm_release_lock(&cq->cq_lock);
+
+ *num_q_msg = i;
+ if (*num_q_msg == 0)
+ ret_code = IECM_ERR_CTLQ_NO_WORK;
+
+ return ret_code;
+}
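
A minimal sketch (not part of the patch) of the send-side life cycle the
comments above describe: the caller owns the DMA payload, hands it to
iecm_ctlq_send(), and gets the message back from iecm_ctlq_clean_sq() once
hardware has marked the descriptor done. The size and opcode are
illustrative only:

	struct iecm_ctlq_msg msg = { 0 }, *done[1];
	struct iecm_dma_mem buf = { 0 };
	u16 clean_count = 1;
	int err;

	buf.va = iecm_alloc_dma_mem(hw, &buf, 128);
	if (!buf.va)
		return IECM_ERR_NO_MEMORY;

	msg.opcode = iecm_mbq_opc_send_msg_to_pf;
	msg.func_id = 0;
	msg.data_len = 128;
	msg.ctx.indirect.payload = &buf;

	err = iecm_ctlq_send(hw, hw->asq, 1, &msg);

	/* later, e.g. from a service task: reclaim completed descriptors,
	 * then free or reuse the original message and its DMA buffer
	 */
	err = iecm_ctlq_clean_sq(hw->asq, &clean_count, done);
	if (!err && clean_count == 1)
		iecm_free_dma_mem(hw, done[0]->ctx.indirect.payload);
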
diff --git a/drivers/net/idpf/base/iecm_controlq.h b/drivers/net/idpf/base/iecm_controlq.h
new file mode 100644
index 0000000000..0964146b49
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq.h
@@ -0,0 +1,214 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_CONTROLQ_H_
+#define _IECM_CONTROLQ_H_
+
+#ifdef __KERNEL__
+#include <linux/slab.h>
+#endif
+
+#ifndef __KERNEL__
+#include "iecm_osdep.h"
+#include "iecm_alloc.h"
+/* This is used to explicitly annotate when a switch case falls through to the
+ * next case.
+ */
+#define fallthrough do {} while (0)
+#endif
+#include "iecm_controlq_api.h"
+
+/* Maximum buffer lengths for all control queue types */
+#define IECM_CTLQ_MAX_RING_SIZE 1024
+#define IECM_CTLQ_MAX_BUF_LEN 4096
+
+#define IECM_CTLQ_DESC(R, i) \
+ (&(((struct iecm_ctlq_desc *)((R)->desc_ring.va))[i]))
+
+#define IECM_CTLQ_DESC_UNUSED(R) \
+ (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
+ (R)->next_to_clean - (R)->next_to_use - 1)
+
+#ifndef __KERNEL__
+/* Data type manipulation macros. */
+#define IECM_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define IECM_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF))
+#define IECM_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF))
+#define IECM_LO_WORD(x) ((u16)((x) & 0xFFFF))
+
+#endif
+/* Control Queue default settings */
+#define IECM_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */
+
+struct iecm_ctlq_desc {
+ __le16 flags;
+ __le16 opcode;
+ __le16 datalen; /* 0 for direct commands */
+ union {
+ __le16 ret_val;
+ __le16 pfid_vfid;
+#define IECM_CTLQ_DESC_VF_ID_S 0
+#define IECM_CTLQ_DESC_VF_ID_M (0x7FF << IECM_CTLQ_DESC_VF_ID_S)
+#define IECM_CTLQ_DESC_PF_ID_S 11
+#define IECM_CTLQ_DESC_PF_ID_M (0x1F << IECM_CTLQ_DESC_PF_ID_S)
+ };
+ __le32 cookie_high;
+ __le32 cookie_low;
+ union {
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 param2;
+ __le32 param3;
+ } direct;
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 addr_high;
+ __le32 addr_low;
+ } indirect;
+ u8 raw[16];
+ } params;
+};
+
+/* Flags sub-structure
+ * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| HOST_ID |
+ */
+/* command flags and offsets */
+#define IECM_CTLQ_FLAG_DD_S 0
+#define IECM_CTLQ_FLAG_CMP_S 1
+#define IECM_CTLQ_FLAG_ERR_S 2
+#define IECM_CTLQ_FLAG_FTYPE_S 6
+#define IECM_CTLQ_FLAG_RD_S 10
+#define IECM_CTLQ_FLAG_VFC_S 11
+#define IECM_CTLQ_FLAG_BUF_S 12
+#define IECM_CTLQ_FLAG_HOST_ID_S 13
+
+#define IECM_CTLQ_FLAG_DD BIT(IECM_CTLQ_FLAG_DD_S) /* 0x1 */
+#define IECM_CTLQ_FLAG_CMP BIT(IECM_CTLQ_FLAG_CMP_S) /* 0x2 */
+#define IECM_CTLQ_FLAG_ERR BIT(IECM_CTLQ_FLAG_ERR_S) /* 0x4 */
+#define IECM_CTLQ_FLAG_FTYPE_VM BIT(IECM_CTLQ_FLAG_FTYPE_S) /* 0x40 */
+#define IECM_CTLQ_FLAG_FTYPE_PF BIT(IECM_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */
+#define IECM_CTLQ_FLAG_RD BIT(IECM_CTLQ_FLAG_RD_S) /* 0x400 */
+#define IECM_CTLQ_FLAG_VFC BIT(IECM_CTLQ_FLAG_VFC_S) /* 0x800 */
+#define IECM_CTLQ_FLAG_BUF BIT(IECM_CTLQ_FLAG_BUF_S) /* 0x1000 */
+
+/* Host ID is a special field that has 3b and not a 1b flag */
+#define IECM_CTLQ_FLAG_HOST_ID_M MAKEMASK(0x7000UL, IECM_CTLQ_FLAG_HOST_ID_S)
+
+struct iecm_mbxq_desc {
+ u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */
+ u32 chnl_opcode; /* avoid confusion with desc->opcode */
+ u32 chnl_retval; /* ditto for desc->retval */
+ u32 pf_vf_id; /* used by CP when sending to PF */
+};
+
+enum iecm_mac_type {
+ IECM_MAC_UNKNOWN = 0,
+ IECM_MAC_PF,
+ IECM_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct iecm_mac_info {
+ enum iecm_mac_type type;
+ u8 addr[ETH_ALEN];
+ u8 perm_addr[ETH_ALEN];
+};
+
+#define IECM_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum iecm_bus_type {
+ iecm_bus_type_unknown = 0,
+ iecm_bus_type_pci,
+ iecm_bus_type_pcix,
+ iecm_bus_type_pci_express,
+ iecm_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum iecm_bus_speed {
+ iecm_bus_speed_unknown = 0,
+ iecm_bus_speed_33 = 33,
+ iecm_bus_speed_66 = 66,
+ iecm_bus_speed_100 = 100,
+ iecm_bus_speed_120 = 120,
+ iecm_bus_speed_133 = 133,
+ iecm_bus_speed_2500 = 2500,
+ iecm_bus_speed_5000 = 5000,
+ iecm_bus_speed_8000 = 8000,
+ iecm_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum iecm_bus_width {
+ iecm_bus_width_unknown = 0,
+ iecm_bus_width_pcie_x1 = 1,
+ iecm_bus_width_pcie_x2 = 2,
+ iecm_bus_width_pcie_x4 = 4,
+ iecm_bus_width_pcie_x8 = 8,
+ iecm_bus_width_32 = 32,
+ iecm_bus_width_64 = 64,
+ iecm_bus_width_reserved
+};
+
+/* Bus parameters */
+struct iecm_bus_info {
+ enum iecm_bus_speed speed;
+ enum iecm_bus_width width;
+ enum iecm_bus_type type;
+
+ u16 func;
+ u16 device;
+ u16 lan_id;
+ u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct iecm_hw_func_caps {
+ u32 num_alloc_vfs;
+ u32 vf_base_id;
+};
+
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct iecm_hw {
+ u8 *hw_addr;
+ u64 hw_addr_len;
+ void *back;
+
+ /* control queue - send and receive */
+ struct iecm_ctlq_info *asq;
+ struct iecm_ctlq_info *arq;
+
+ /* subsystem structs */
+ struct iecm_mac_info mac;
+ struct iecm_bus_info bus;
+ struct iecm_hw_func_caps func_caps;
+
+ /* pci info */
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+ bool adapter_stopped;
+
+ LIST_HEAD_TYPE(list_head, iecm_ctlq_info) cq_list_head;
+};
+
+int iecm_ctlq_alloc_ring_res(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq);
+
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq);
+
+/* prototype for functions used for dynamic memory allocation */
+void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem,
+ u64 size);
+void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem);
+#endif /* _IECM_CONTROLQ_H_ */
diff --git a/drivers/net/idpf/base/iecm_controlq_api.h b/drivers/net/idpf/base/iecm_controlq_api.h
new file mode 100644
index 0000000000..27511ffd51
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq_api.h
@@ -0,0 +1,227 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_CONTROLQ_API_H_
+#define _IECM_CONTROLQ_API_H_
+
+#ifdef __KERNEL__
+#include "iecm_mem.h"
+#else /* !__KERNEL__ */
+/* Error Codes */
+/* Linux kernel driver can't directly use these. Instead, they are mapped to
+ * linux compatible error codes which get translated in the build script.
+ */
+#define IECM_SUCCESS 0
+#define IECM_ERR_PARAM -53 /* -EBADR */
+#define IECM_ERR_NOT_IMPL -95 /* -EOPNOTSUPP */
+#define IECM_ERR_NOT_READY -16 /* -EBUSY */
+#define IECM_ERR_BAD_PTR -14 /* -EFAULT */
+#define IECM_ERR_INVAL_SIZE -90 /* -EMSGSIZE */
+#define IECM_ERR_DEVICE_NOT_SUPPORTED -19 /* -ENODEV */
+#define IECM_ERR_FW_API_VER -13 /* -EACCESS */
+#define IECM_ERR_NO_MEMORY -12 /* -ENOMEM */
+#define IECM_ERR_CFG -22 /* -EINVAL */
+#define IECM_ERR_OUT_OF_RANGE -34 /* -ERANGE */
+#define IECM_ERR_ALREADY_EXISTS -17 /* -EEXIST */
+#define IECM_ERR_DOES_NOT_EXIST -6 /* -ENXIO */
+#define IECM_ERR_IN_USE -114 /* -EALREADY */
+#define IECM_ERR_MAX_LIMIT -109 /* -ETOOMANYREFS */
+#define IECM_ERR_RESET_ONGOING -104 /* -ECONNRESET */
+
+/* CRQ/CSQ specific error codes */
+#define IECM_ERR_CTLQ_ERROR -74 /* -EBADMSG */
+#define IECM_ERR_CTLQ_TIMEOUT -110 /* -ETIMEDOUT */
+#define IECM_ERR_CTLQ_FULL -28 /* -ENOSPC */
+#define IECM_ERR_CTLQ_NO_WORK -42 /* -ENOMSG */
+#define IECM_ERR_CTLQ_EMPTY -105 /* -ENOBUFS */
+#endif /* !__KERNEL__ */
+
+struct iecm_hw;
+
+/* Used for queue init, response and events */
+enum iecm_ctlq_type {
+ IECM_CTLQ_TYPE_MAILBOX_TX = 0,
+ IECM_CTLQ_TYPE_MAILBOX_RX = 1,
+ IECM_CTLQ_TYPE_CONFIG_TX = 2,
+ IECM_CTLQ_TYPE_CONFIG_RX = 3,
+ IECM_CTLQ_TYPE_EVENT_RX = 4,
+ IECM_CTLQ_TYPE_RDMA_TX = 5,
+ IECM_CTLQ_TYPE_RDMA_RX = 6,
+ IECM_CTLQ_TYPE_RDMA_COMPL = 7
+};
+
+/*
+ * Generic Control Queue Structures
+ */
+
+struct iecm_ctlq_reg {
+ /* used for queue tracking */
+ u32 head;
+ u32 tail;
+ /* Below applies only to default mb (if present) */
+ u32 len;
+ u32 bah;
+ u32 bal;
+ u32 len_mask;
+ u32 len_ena_mask;
+ u32 head_mask;
+};
+
+/* Generic queue msg structure */
+struct iecm_ctlq_msg {
+ u8 vmvf_type; /* represents the source of the message on recv */
+#define IECM_VMVF_TYPE_VF 0
+#define IECM_VMVF_TYPE_VM 1
+#define IECM_VMVF_TYPE_PF 2
+ u8 host_id;
+ /* 3b field used only when sending a message to peer - to be used in
+ * combination with target func_id to route the message
+ */
+#define IECM_HOST_ID_MASK 0x7
+
+ u16 opcode;
+ u16 data_len; /* data_len = 0 when no payload is attached */
+ union {
+ u16 func_id; /* when sending a message */
+ u16 status; /* when receiving a message */
+ };
+ union {
+ struct {
+ u32 chnl_retval;
+ u32 chnl_opcode;
+ } mbx;
+ } cookie;
+ union {
+#define IECM_DIRECT_CTX_SIZE 16
+#define IECM_INDIRECT_CTX_SIZE 8
+ /* 16 bytes of context can be provided or 8 bytes of context
+ * plus the address of a DMA buffer
+ */
+ u8 direct[IECM_DIRECT_CTX_SIZE];
+ struct {
+ u8 context[IECM_INDIRECT_CTX_SIZE];
+ struct iecm_dma_mem *payload;
+ } indirect;
+ } ctx;
+};
+
+/* Generic queue info structures */
+/* MB, CONFIG and EVENT q do not have extended info */
+struct iecm_ctlq_create_info {
+ enum iecm_ctlq_type type;
+ int id; /* absolute queue offset passed as input
+ * -1 for default mailbox if present
+ */
+ u16 len; /* Queue length passed as input */
+ u16 buf_size; /* buffer size passed as input */
+ u64 base_address; /* output, HPA of the Queue start */
+ struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+
+ int ext_info_size;
+ void *ext_info; /* Specific to q type */
+};
+
+/* Control Queue information */
+struct iecm_ctlq_info {
+ LIST_ENTRY_TYPE(iecm_ctlq_info) cq_list;
+
+ enum iecm_ctlq_type cq_type;
+ int q_id;
+ iecm_lock cq_lock; /* queue lock
+ * iecm_lock is defined in OSdep.h
+ */
+ /* used for interrupt processing */
+ u16 next_to_use;
+ u16 next_to_clean;
+	u16 next_to_post;	/* starting descriptor to post buffers
+				 * to after recv
+				 */
+
+ struct iecm_dma_mem desc_ring; /* descriptor ring memory
+ * iecm_dma_mem is defined in OSdep.h
+ */
+ union {
+ struct iecm_dma_mem **rx_buff;
+ struct iecm_ctlq_msg **tx_msg;
+ } bi;
+
+ u16 buf_size; /* queue buffer size */
+ u16 ring_size; /* Number of descriptors */
+ struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+};
+
+/* PF/VF mailbox commands */
+enum iecm_mbx_opc {
+ /* iecm_mbq_opc_send_msg_to_pf:
+ * usage: used by PF or VF to send a message to its CPF
+ * target: RX queue and function ID of parent PF taken from HW
+ */
+ iecm_mbq_opc_send_msg_to_pf = 0x0801,
+
+ /* iecm_mbq_opc_send_msg_to_vf:
+ * usage: used by PF to send message to a VF
+ * target: VF control queue ID must be specified in descriptor
+ */
+ iecm_mbq_opc_send_msg_to_vf = 0x0802,
+
+ /* iecm_mbq_opc_send_msg_to_peer_pf:
+ * usage: used by any function to send message to any peer PF
+ * target: RX queue and host of parent PF taken from HW
+ */
+ iecm_mbq_opc_send_msg_to_peer_pf = 0x0803,
+
+ /* iecm_mbq_opc_send_msg_to_peer_drv:
+ * usage: used by any function to send message to any peer driver
+	 * target: RX queue and target host must be specified in descriptor
+ */
+ iecm_mbq_opc_send_msg_to_peer_drv = 0x0804,
+};
+
+/*
+ * API supported for control queue management
+ */
+
+/* Will init all required q including default mb. "q_info" is an array of
+ * create_info structs, one per control queue to be created.
+ */
+int iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+ struct iecm_ctlq_create_info *q_info);
+
+/* Allocate and initialize a single control queue, which will be added to the
+ * control queue list; returns a handle to the created control queue
+ */
+int iecm_ctlq_add(struct iecm_hw *hw,
+ struct iecm_ctlq_create_info *qinfo,
+ struct iecm_ctlq_info **cq);
+
+/* Deinitialize and deallocate a single control queue */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq);
+
+/* Sends messages to HW; the caller later reclaims the sent messages and
+ * frees their buffers via iecm_ctlq_clean_sq()
+ */
+int iecm_ctlq_send(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq,
+ u16 num_q_msg,
+ struct iecm_ctlq_msg q_msg[]);
+
+/* Receives messages; called by the interrupt handler or by polling
+ * initiated by the app/process. The caller is expected to free the buffers
+ */
+int iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg,
+ struct iecm_ctlq_msg *q_msg);
+
+/* Reclaims send descriptors on HW write back */
+int iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count,
+ struct iecm_ctlq_msg *msg_status[]);
+
+/* Indicate RX buffers are done being processed */
+int iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq,
+ u16 *buff_count,
+ struct iecm_dma_mem **buffs);
+
+/* Will destroy all q including the default mb */
+int iecm_ctlq_deinit(struct iecm_hw *hw);
+
+#endif /* _IECM_CONTROLQ_API_H_ */
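
A minimal sketch (not part of the patch) of the receive-side flow this API
implies: drain the mailbox RX queue with iecm_ctlq_recv(), then always hand
buffers back with iecm_ctlq_post_rx_buffs(), even when nothing was consumed.
Only a single-message batch is shown for brevity:

	struct iecm_ctlq_msg msg = { 0 };
	struct iecm_dma_mem *bufs[1] = { NULL };
	u16 num_msgs = 1, num_bufs = 0;
	int err;

	err = iecm_ctlq_recv(hw->arq, &num_msgs, &msg);
	if (!err && num_msgs) {
		/* ... act on msg.cookie.mbx.chnl_opcode and, for indirect
		 * messages, on msg.ctx.indirect.payload->va ...
		 */
		if (msg.data_len) {
			/* return the DMA buffer so HW can reuse it */
			bufs[0] = msg.ctx.indirect.payload;
			num_bufs = 1;
		}
	}
	/* required after every receive, even with num_bufs == 0 */
	err = iecm_ctlq_post_rx_buffs(hw, hw->arq, &num_bufs, bufs);
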
diff --git a/drivers/net/idpf/base/iecm_controlq_setup.c b/drivers/net/idpf/base/iecm_controlq_setup.c
new file mode 100644
index 0000000000..eb6cf7651d
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq_setup.c
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+
+#include "iecm_controlq.h"
+
+
+/**
+ * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ */
+static int
+iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc);
+
+ cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size);
+ if (!cq->desc_ring.va)
+ return IECM_ERR_NO_MEMORY;
+
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Allocate the buffer head for all control queues, and if it's a receive
+ * queue, allocate DMA buffers
+ */
+static int iecm_ctlq_alloc_bufs(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ int i = 0;
+
+ /* Do not allocate DMA buffers for transmit queues */
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ return IECM_SUCCESS;
+
+ /* We'll be allocating the buffer info memory first, then we can
+ * allocate the mapped buffers for the event processing
+ */
+ cq->bi.rx_buff = (struct iecm_dma_mem **)
+ iecm_calloc(hw, cq->ring_size,
+ sizeof(struct iecm_dma_mem *));
+ if (!cq->bi.rx_buff)
+ return IECM_ERR_NO_MEMORY;
+
+ /* allocate the mapped buffers (except for the last one) */
+ for (i = 0; i < cq->ring_size - 1; i++) {
+ struct iecm_dma_mem *bi;
+ int num = 1; /* number of iecm_dma_mem to be allocated */
+
+ cq->bi.rx_buff[i] = (struct iecm_dma_mem *)iecm_calloc(hw, num,
+ sizeof(struct iecm_dma_mem));
+ if (!cq->bi.rx_buff[i])
+ goto unwind_alloc_cq_bufs;
+
+ bi = cq->bi.rx_buff[i];
+
+ bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size);
+ if (!bi->va) {
+ /* unwind will not free the failed entry */
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ goto unwind_alloc_cq_bufs;
+ }
+ }
+
+ return IECM_SUCCESS;
+
+unwind_alloc_cq_bufs:
+ /* don't try to free the one that failed... */
+ i--;
+ for (; i >= 0; i--) {
+ iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ }
+ iecm_free(hw, cq->bi.rx_buff);
+
+ return IECM_ERR_NO_MEMORY;
+}
+
+/**
+ * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ */
+static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ iecm_free_dma_mem(hw, &cq->desc_ring);
+}
+
+/**
+ * iecm_ctlq_free_bufs - Free CQ buffer info elements
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
+ * queues. The upper layers are expected to manage freeing of TX DMA buffers
+ */
+static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ void *bi;
+
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) {
+ int i;
+
+		/* free DMA buffers for rx queues */
+ for (i = 0; i < cq->ring_size; i++) {
+ if (cq->bi.rx_buff[i]) {
+ iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ }
+ }
+
+ bi = (void *)cq->bi.rx_buff;
+ } else {
+ bi = (void *)cq->bi.tx_msg;
+ }
+
+ /* free the buffer header */
+ iecm_free(hw, bi);
+}
+
+/**
+ * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the memory used by the ring, buffers and other related structures
+ */
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_free_bufs(hw, cq);
+ iecm_ctlq_free_desc_ring(hw, cq);
+}
+
+/**
+ * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue struct
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+int iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ int ret_code;
+
+ /* verify input for valid configuration */
+ if (!cq->ring_size || !cq->buf_size)
+ return IECM_ERR_CFG;
+
+ /* allocate the ring memory */
+ ret_code = iecm_ctlq_alloc_desc_ring(hw, cq);
+ if (ret_code)
+ return ret_code;
+
+ /* allocate buffers in the rings */
+ ret_code = iecm_ctlq_alloc_bufs(hw, cq);
+ if (ret_code)
+ goto iecm_init_cq_free_ring;
+
+ /* success! */
+ return IECM_SUCCESS;
+
+iecm_init_cq_free_ring:
+ iecm_free_dma_mem(hw, &cq->desc_ring);
+ return ret_code;
+}
diff --git a/drivers/net/idpf/base/iecm_devids.h b/drivers/net/idpf/base/iecm_devids.h
new file mode 100644
index 0000000000..839214cb40
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_DEVIDS_H_
+#define _IECM_DEVIDS_H_
+
+/* Vendor ID */
+#define IECM_INTEL_VENDOR_ID 0x8086
+
+/* Device IDs */
+#define IECM_DEV_ID_PF 0x1452
+
+
+
+
+#endif /* _IECM_DEVIDS_H_ */
diff --git a/drivers/net/idpf/base/iecm_lan_pf_regs.h b/drivers/net/idpf/base/iecm_lan_pf_regs.h
new file mode 100644
index 0000000000..c6c460dab0
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_pf_regs.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_PF_REGS_H_
+#define _IECM_LAN_PF_REGS_H_
+
+
+/* Receive queues */
+#define PF_QRX_BASE 0x00000000
+#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000)))
+#define PF_QRX_BUFFQ_BASE 0x03000000
+#define PF_QRX_BUFFQ_TAIL(_QRX) (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
+
+/* Transmit queues */
+#define PF_QTX_BASE 0x05000000
+#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000))
+
+
+/* Control(PF Mailbox) Queue */
+#define PF_FW_BASE 0x08400000
+
+#define PF_FW_ARQBAL (PF_FW_BASE)
+#define PF_FW_ARQBAH (PF_FW_BASE + 0x4)
+#define PF_FW_ARQLEN (PF_FW_BASE + 0x8)
+#define PF_FW_ARQLEN_ARQLEN_S 0
+#define PF_FW_ARQLEN_ARQLEN_M MAKEMASK(0x1FFF, PF_FW_ARQLEN_ARQLEN_S)
+#define PF_FW_ARQLEN_ARQVFE_S 28
+#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S)
+#define PF_FW_ARQLEN_ARQOVFL_S 29
+#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S)
+#define PF_FW_ARQLEN_ARQCRIT_S 30
+#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S)
+#define PF_FW_ARQLEN_ARQENABLE_S 31
+#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S)
+#define PF_FW_ARQH (PF_FW_BASE + 0xC)
+#define PF_FW_ARQH_ARQH_S 0
+#define PF_FW_ARQH_ARQH_M MAKEMASK(0x1FFF, PF_FW_ARQH_ARQH_S)
+#define PF_FW_ARQT (PF_FW_BASE + 0x10)
+
+#define PF_FW_ATQBAL (PF_FW_BASE + 0x14)
+#define PF_FW_ATQBAH (PF_FW_BASE + 0x18)
+#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C)
+#define PF_FW_ATQLEN_ATQLEN_S 0
+#define PF_FW_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, PF_FW_ATQLEN_ATQLEN_S)
+#define PF_FW_ATQLEN_ATQVFE_S 28
+#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S)
+#define PF_FW_ATQLEN_ATQOVFL_S 29
+#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S)
+#define PF_FW_ATQLEN_ATQCRIT_S 30
+#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S)
+#define PF_FW_ATQLEN_ATQENABLE_S 31
+#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S)
+#define PF_FW_ATQH (PF_FW_BASE + 0x20)
+#define PF_FW_ATQH_ATQH_S 0
+#define PF_FW_ATQH_ATQH_M MAKEMASK(0x3FF, PF_FW_ATQH_ATQH_S)
+#define PF_FW_ATQT (PF_FW_BASE + 0x24)
+
+/* Interrupts */
+#define PF_GLINT_BASE 0x08900000
+#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000))
+#define PF_GLINT_DYN_CTL_INTENA_S 0
+#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S)
+#define PF_GLINT_DYN_CTL_CLEARPBA_S 1
+#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
+#define PF_GLINT_DYN_CTL_ITR_INDX_S 3
+#define PF_GLINT_DYN_CTL_ITR_INDX_M MAKEMASK(0x3, PF_GLINT_DYN_CTL_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_INTERVAL_S 5
+#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_S 30
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_M BIT(PF_GLINT_DYN_CTL_WB_ON_ITR_S)
+#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31
+#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
+#define PF_GLINT_ITR_V2(_i, _reg_start) (((_i) * 4) + (_reg_start))
+#define PF_GLINT_ITR(_i, _INT) (PF_GLINT_BASE + (((_i) + 1) * 4) + ((_INT) * 0x1000))
+#define PF_GLINT_ITR_MAX_INDEX 2
+#define PF_GLINT_ITR_INTERVAL_S 0
+#define PF_GLINT_ITR_INTERVAL_M MAKEMASK(0xFFF, PF_GLINT_ITR_INTERVAL_S)
+
+/* Timesync registers */
+#define PF_TIMESYNC_BASE 0x08404000
+#define PF_GLTSYN_CMD_SYNC (PF_TIMESYNC_BASE)
+#define PF_GLTSYN_CMD_SYNC_EXEC_CMD_S 0
+#define PF_GLTSYN_CMD_SYNC_EXEC_CMD_M MAKEMASK(0x3, PF_GLTSYN_CMD_SYNC_EXEC_CMD_S)
+#define PF_GLTSYN_CMD_SYNC_SHTIME_EN_S 2
+#define PF_GLTSYN_CMD_SYNC_SHTIME_EN_M BIT(PF_GLTSYN_CMD_SYNC_SHTIME_EN_S)
+#define PF_GLTSYN_SHTIME_0 (PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L (PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H (PF_TIMESYNC_BASE + 0xC)
+#define PF_GLTSYN_ART_L (PF_TIMESYNC_BASE + 0x10)
+#define PF_GLTSYN_ART_H (PF_TIMESYNC_BASE + 0x14)
+
+/* Generic registers */
+#define PF_INT_DIR_OICR_ENA 0x08406000
+#define PF_INT_DIR_OICR_ENA_S 0
+#define PF_INT_DIR_OICR_ENA_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_ENA_S)
+#define PF_INT_DIR_OICR 0x08406004
+#define PF_INT_DIR_OICR_TSYN_EVNT 0
+#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1)
+#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2)
+#define PF_INT_DIR_OICR_CAUSE 0x08406008
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_CAUSE_CAUSE_S)
+#define PF_INT_PBA_CLEAR 0x0840600C
+
+#define PF_FUNC_RID 0x08406010
+#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M MAKEMASK(0x7, PF_FUNC_RID_FUNCTION_NUMBER_S)
+#define PF_FUNC_RID_DEVICE_NUMBER_S 3
+#define PF_FUNC_RID_DEVICE_NUMBER_M MAKEMASK(0x1F, PF_FUNC_RID_DEVICE_NUMBER_S)
+#define PF_FUNC_RID_BUS_NUMBER_S 8
+#define PF_FUNC_RID_BUS_NUMBER_M MAKEMASK(0xFF, PF_FUNC_RID_BUS_NUMBER_S)
+
+/* Reset registers */
+#define PFGEN_RTRIG 0x08407000
+#define PFGEN_RTRIG_CORER_S 0
+#define PFGEN_RTRIG_CORER_M BIT(0)
+#define PFGEN_RTRIG_LINKR_S 1
+#define PFGEN_RTRIG_LINKR_M BIT(1)
+#define PFGEN_RTRIG_IMCR_S 2
+#define PFGEN_RTRIG_IMCR_M BIT(2)
+#define PFGEN_RSTAT 0x08407008 /* PFR Status */
+#define PFGEN_RSTAT_PFR_STATE_S 0
+#define PFGEN_RSTAT_PFR_STATE_M MAKEMASK(0x3, PFGEN_RSTAT_PFR_STATE_S)
+#define PFGEN_CTRL 0x0840700C
+#define PFGEN_CTRL_PFSWR BIT(0)
+
+#endif
diff --git a/drivers/net/idpf/base/iecm_lan_txrx.h b/drivers/net/idpf/base/iecm_lan_txrx.h
new file mode 100644
index 0000000000..3e5320975d
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_txrx.h
@@ -0,0 +1,428 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_TXRX_H_
+#define _IECM_LAN_TXRX_H_
+#ifndef __KERNEL__
+#include "iecm_osdep.h"
+#endif
+
+enum iecm_rss_hash {
+ /* Values 1 - 28 are reserved for future use */
+ IECM_HASH_INVALID = 0,
+ IECM_HASH_NONF_UNICAST_IPV4_UDP = 29,
+ IECM_HASH_NONF_MULTICAST_IPV4_UDP,
+ IECM_HASH_NONF_IPV4_UDP,
+ IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
+ IECM_HASH_NONF_IPV4_TCP,
+ IECM_HASH_NONF_IPV4_SCTP,
+ IECM_HASH_NONF_IPV4_OTHER,
+ IECM_HASH_FRAG_IPV4,
+ /* Values 37-38 are reserved */
+ IECM_HASH_NONF_UNICAST_IPV6_UDP = 39,
+ IECM_HASH_NONF_MULTICAST_IPV6_UDP,
+ IECM_HASH_NONF_IPV6_UDP,
+ IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
+ IECM_HASH_NONF_IPV6_TCP,
+ IECM_HASH_NONF_IPV6_SCTP,
+ IECM_HASH_NONF_IPV6_OTHER,
+ IECM_HASH_FRAG_IPV6,
+ IECM_HASH_NONF_RSVD47,
+ IECM_HASH_NONF_FCOE_OX,
+ IECM_HASH_NONF_FCOE_RX,
+ IECM_HASH_NONF_FCOE_OTHER,
+ /* Values 51-62 are reserved */
+ IECM_HASH_L2_PAYLOAD = 63,
+ IECM_HASH_MAX
+};
+
+/* Supported RSS offloads */
+#define IECM_DEFAULT_RSS_HASH ( \
+ BIT_ULL(IECM_HASH_NONF_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_SCTP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_TCP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_OTHER) | \
+ BIT_ULL(IECM_HASH_FRAG_IPV4) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_TCP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_SCTP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_OTHER) | \
+ BIT_ULL(IECM_HASH_FRAG_IPV6) | \
+ BIT_ULL(IECM_HASH_L2_PAYLOAD))
+
+ /* TODO: Wrap below comment under internal flag
+ * The following 6 packet types are not supported by FVL or older products.
+ * They are supported by FPK and future products.
+ */
+#define IECM_DEFAULT_RSS_HASH_EXPANDED (IECM_DEFAULT_RSS_HASH | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IECM_HASH_NONF_UNICAST_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IECM_HASH_NONF_UNICAST_IPV6_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV6_UDP))
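
Since the hash enum values above index bits of a 64-bit bitmap, membership in
the default hash set is a single bit test. A minimal, illustrative sketch (not
part of this patch), assuming BIT_ULL() from iecm_osdep.h and the definitions
above:

#include <stdbool.h>
#include "iecm_lan_txrx.h"

/* Return true when the given hash type is enabled in the default bitmap. */
static bool iecm_hash_is_default(enum iecm_rss_hash hash)
{
	if (hash <= IECM_HASH_INVALID || hash >= IECM_HASH_MAX)
		return false;

	return (IECM_DEFAULT_RSS_HASH & BIT_ULL(hash)) != 0;
}
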
+
+/* For iecm_splitq_base_tx_compl_desc */
+#define IECM_TXD_COMPLQ_GEN_S 15
+#define IECM_TXD_COMPLQ_GEN_M BIT_ULL(IECM_TXD_COMPLQ_GEN_S)
+#define IECM_TXD_COMPLQ_COMPL_TYPE_S 11
+#define IECM_TXD_COMPLQ_COMPL_TYPE_M \
+ MAKEMASK(0x7UL, IECM_TXD_COMPLQ_COMPL_TYPE_S)
+#define IECM_TXD_COMPLQ_QID_S 0
+#define IECM_TXD_COMPLQ_QID_M MAKEMASK(0x3FFUL, IECM_TXD_COMPLQ_QID_S)
+
+/* For base mode TX descriptors */
+
+#define IECM_TXD_CTX_QW0_TUNN_L4T_CS_S 23
+#define IECM_TXD_CTX_QW0_TUNN_L4T_CS_M BIT_ULL(IECM_TXD_CTX_QW0_TUNN_L4T_CS_S)
+#define IECM_TXD_CTX_QW0_TUNN_DECTTL_S 19
+#define IECM_TXD_CTX_QW0_TUNN_DECTTL_M \
+ (0xFULL << IECM_TXD_CTX_QW0_TUNN_DECTTL_S)
+#define IECM_TXD_CTX_QW0_TUNN_NATLEN_S 12
+#define IECM_TXD_CTX_QW0_TUNN_NATLEN_M \
+ (0X7FULL << IECM_TXD_CTX_QW0_TUNN_NATLEN_S)
+#define IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_S 11
+#define IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_M \
+ BIT_ULL(IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_S)
+#define IECM_TXD_CTX_EIP_NOINC_IPID_CONST \
+ IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_M
+#define IECM_TXD_CTX_QW0_TUNN_NATT_S 9
+#define IECM_TXD_CTX_QW0_TUNN_NATT_M (0x3ULL << IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_UDP_TUNNELING BIT_ULL(IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_GRE_TUNNELING (0x2ULL << IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_S 2
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_M \
+ (0x3FULL << IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_S)
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IP_S 0
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IP_M \
+ (0x3ULL << IECM_TXD_CTX_QW0_TUNN_EXT_IP_S)
+
+#define IECM_TXD_CTX_QW1_MSS_S 50
+#define IECM_TXD_CTX_QW1_MSS_M \
+ MAKEMASK(0x3FFFULL, IECM_TXD_CTX_QW1_MSS_S)
+#define IECM_TXD_CTX_QW1_TSO_LEN_S 30
+#define IECM_TXD_CTX_QW1_TSO_LEN_M \
+ MAKEMASK(0x3FFFFULL, IECM_TXD_CTX_QW1_TSO_LEN_S)
+#define IECM_TXD_CTX_QW1_CMD_S 4
+#define IECM_TXD_CTX_QW1_CMD_M \
+ MAKEMASK(0xFFFUL, IECM_TXD_CTX_QW1_CMD_S)
+#define IECM_TXD_CTX_QW1_DTYPE_S 0
+#define IECM_TXD_CTX_QW1_DTYPE_M \
+ MAKEMASK(0xFUL, IECM_TXD_CTX_QW1_DTYPE_S)
+#define IECM_TXD_QW1_L2TAG1_S 48
+#define IECM_TXD_QW1_L2TAG1_M \
+ MAKEMASK(0xFFFFULL, IECM_TXD_QW1_L2TAG1_S)
+#define IECM_TXD_QW1_TX_BUF_SZ_S 34
+#define IECM_TXD_QW1_TX_BUF_SZ_M \
+ MAKEMASK(0x3FFFULL, IECM_TXD_QW1_TX_BUF_SZ_S)
+#define IECM_TXD_QW1_OFFSET_S 16
+#define IECM_TXD_QW1_OFFSET_M \
+ MAKEMASK(0x3FFFFULL, IECM_TXD_QW1_OFFSET_S)
+#define IECM_TXD_QW1_CMD_S 4
+#define IECM_TXD_QW1_CMD_M MAKEMASK(0xFFFUL, IECM_TXD_QW1_CMD_S)
+#define IECM_TXD_QW1_DTYPE_S 0
+#define IECM_TXD_QW1_DTYPE_M MAKEMASK(0xFUL, IECM_TXD_QW1_DTYPE_S)
+
+/* TX Completion Descriptor Completion Types */
+#define IECM_TXD_COMPLT_ITR_FLUSH 0
+#define IECM_TXD_COMPLT_RULE_MISS 1
+#define IECM_TXD_COMPLT_RS 2
+#define IECM_TXD_COMPLT_REINJECTED 3
+#define IECM_TXD_COMPLT_RE 4
+#define IECM_TXD_COMPLT_SW_MARKER 5
+
+enum iecm_tx_desc_dtype_value {
+ IECM_TX_DESC_DTYPE_DATA = 0,
+ IECM_TX_DESC_DTYPE_CTX = 1,
+ IECM_TX_DESC_DTYPE_REINJECT_CTX = 2,
+ IECM_TX_DESC_DTYPE_FLEX_DATA = 3,
+ IECM_TX_DESC_DTYPE_FLEX_CTX = 4,
+ IECM_TX_DESC_DTYPE_FLEX_TSO_CTX = 5,
+ IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 = 6,
+ IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7,
+ IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX = 8,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_TSO_CTX = 9,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_CTX = 10,
+ IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX = 11,
+ IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_TSO_CTX = 13,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_CTX = 14,
+ /* DESC_DONE - HW has completed write-back of descriptor */
+ IECM_TX_DESC_DTYPE_DESC_DONE = 15,
+};
+
+enum iecm_tx_ctx_desc_cmd_bits {
+ IECM_TX_CTX_DESC_TSO = 0x01,
+ IECM_TX_CTX_DESC_TSYN = 0x02,
+ IECM_TX_CTX_DESC_IL2TAG2 = 0x04,
+ IECM_TX_CTX_DESC_RSVD = 0x08,
+ IECM_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
+ IECM_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
+ IECM_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
+ IECM_TX_CTX_DESC_SWTCH_VSI = 0x30,
+ IECM_TX_CTX_DESC_FILT_AU_EN = 0x40,
+ IECM_TX_CTX_DESC_FILT_AU_EVICT = 0x80,
+ IECM_TX_CTX_DESC_RSVD1 = 0xF00
+};
+
+enum iecm_tx_desc_len_fields {
+ /* Note: These are predefined bit offsets */
+ IECM_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */
+ IECM_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */
+ IECM_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */
+};
+
+#define IECM_TXD_QW1_MACLEN_M MAKEMASK(0x7FUL, IECM_TX_DESC_LEN_MACLEN_S)
+#define IECM_TXD_QW1_IPLEN_M MAKEMASK(0x7FUL, IECM_TX_DESC_LEN_IPLEN_S)
+#define IECM_TXD_QW1_L4LEN_M MAKEMASK(0xFUL, IECM_TX_DESC_LEN_L4_LEN_S)
+#define IECM_TXD_QW1_FCLEN_M MAKEMASK(0xFUL, IECM_TX_DESC_LEN_L4_LEN_S)
+
+enum iecm_tx_base_desc_cmd_bits {
+ IECM_TX_DESC_CMD_EOP = 0x0001,
+ IECM_TX_DESC_CMD_RS = 0x0002,
+ /* only on VFs else RSVD */
+ IECM_TX_DESC_CMD_ICRC = 0x0004,
+ IECM_TX_DESC_CMD_IL2TAG1 = 0x0008,
+ IECM_TX_DESC_CMD_RSVD1 = 0x0010,
+ IECM_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */
+ IECM_TX_DESC_CMD_RSVD2 = 0x0080,
+ IECM_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */
+ IECM_TX_DESC_CMD_RSVD3 = 0x0400,
+ IECM_TX_DESC_CMD_RSVD4 = 0x0800,
+};
+
+/* Transmit descriptors */
+/* splitq tx buf, singleq tx buf and singleq compl desc */
+struct iecm_base_tx_desc {
+ __le64 buf_addr; /* Address of descriptor's data buf */
+ __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
+};/* read used with buffer queues*/
+
+struct iecm_splitq_tx_compl_desc {
+ /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
+ __le16 qid_comptype_gen;
+ union {
+ __le16 q_head; /* Queue head */
+ __le16 compl_tag; /* Completion tag */
+ } q_head_compl_tag;
+ u32 rsvd;
+
+};/* writeback used with completion queues*/
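
For clarity, the IECM_TXD_COMPLQ_* masks defined earlier slice the first
16-bit word of this write-back descriptor. A rough decode sketch (illustrative
only, not part of the patch; rte_le_to_cpu_16() is DPDK's byte-order helper):

#include <rte_byteorder.h>
#include "iecm_lan_txrx.h"

/* Split the qid/comptype/gen word of a Tx completion descriptor. */
static void iecm_parse_tx_compl(const struct iecm_splitq_tx_compl_desc *desc,
				u16 *qid, u16 *comptype, u16 *gen)
{
	u16 qcg = rte_le_to_cpu_16(desc->qid_comptype_gen);

	*qid = (qcg & IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
	*comptype = (qcg & IECM_TXD_COMPLQ_COMPL_TYPE_M) >>
		    IECM_TXD_COMPLQ_COMPL_TYPE_S;
	*gen = (qcg & IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
}
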
+
+/* Context descriptors */
+struct iecm_base_tx_ctx_desc {
+ struct {
+ __le32 tunneling_params;
+ __le16 l2tag2;
+ __le16 rsvd1;
+ } qw0;
+ __le64 qw1; /* type_cmd_tlen_mss/rt_hint */
+};
+
+/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
+enum iecm_tx_flex_desc_cmd_bits {
+ IECM_TX_FLEX_DESC_CMD_EOP = 0x01,
+ IECM_TX_FLEX_DESC_CMD_RS = 0x02,
+ IECM_TX_FLEX_DESC_CMD_RE = 0x04,
+ IECM_TX_FLEX_DESC_CMD_IL2TAG1 = 0x08,
+ IECM_TX_FLEX_DESC_CMD_DUMMY = 0x10,
+ IECM_TX_FLEX_DESC_CMD_CS_EN = 0x20,
+ IECM_TX_FLEX_DESC_CMD_FILT_AU_EN = 0x40,
+ IECM_TX_FLEX_DESC_CMD_FILT_AU_EVICT = 0x80,
+};
+
+struct iecm_flex_tx_desc {
+ __le64 buf_addr; /* Packet buffer address */
+ struct {
+ __le16 cmd_dtype;
+#define IECM_FLEX_TXD_QW1_DTYPE_S 0
+#define IECM_FLEX_TXD_QW1_DTYPE_M \
+ MAKEMASK(0x1FUL, IECM_FLEX_TXD_QW1_DTYPE_S)
+#define IECM_FLEX_TXD_QW1_CMD_S 5
+#define IECM_FLEX_TXD_QW1_CMD_M MAKEMASK(0x7FFUL, IECM_FLEX_TXD_QW1_CMD_S)
+ union {
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_DATA_(0x03) */
+ u8 raw[4];
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 (0x06) */
+ struct {
+ __le16 l2tag1;
+ u8 flex;
+ u8 tsync;
+ } tsync;
+
+ /* DTYPE=IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
+ struct {
+ __le16 l2tag1;
+ __le16 l2tag2;
+ } l2tags;
+ } flex;
+ __le16 buf_size;
+ } qw1;
+};
+
+struct iecm_flex_tx_sched_desc {
+ __le64 buf_addr; /* Packet buffer address */
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */
+ struct {
+ u8 cmd_dtype;
+#define IECM_TXD_FLEX_FLOW_DTYPE_M 0x1F
+#define IECM_TXD_FLEX_FLOW_CMD_EOP 0x20
+#define IECM_TXD_FLEX_FLOW_CMD_CS_EN 0x40
+#define IECM_TXD_FLEX_FLOW_CMD_RE 0x80
+
+ u8 rsvd[3];
+
+ __le16 compl_tag;
+ __le16 rxr_bufsize;
+#define IECM_TXD_FLEX_FLOW_RXR 0x4000
+#define IECM_TXD_FLEX_FLOW_BUFSIZE_M 0x3FFF
+ } qw1;
+};
+
+/* Common cmd fields for all flex context descriptors
+ * Note: these defines already account for the 5 bit dtype in the cmd_dtype
+ * field
+ */
+enum iecm_tx_flex_ctx_desc_cmd_bits {
+ IECM_TX_FLEX_CTX_DESC_CMD_TSO = 0x0020,
+ IECM_TX_FLEX_CTX_DESC_CMD_TSYN_EN = 0x0040,
+ IECM_TX_FLEX_CTX_DESC_CMD_L2TAG2 = 0x0080,
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = 0x0200, /* 2 bits */
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = 0x0400, /* 2 bits */
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = 0x0600, /* 2 bits */
+};
+
+/* Standard flex descriptor TSO context quad word */
+struct iecm_flex_tx_tso_ctx_qw {
+ __le32 flex_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF
+#define IECM_TXD_FLEX_TSO_CTX_FLEX_S 24
+ __le16 mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF
+ u8 hdr_len;
+ u8 flex;
+};
+
+union iecm_flex_tx_ctx_desc {
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_CTX (0x04) */
+ struct {
+ u8 qw0_flex[8];
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag1;
+ u8 qw1_flex[4];
+ } qw1;
+ } gen;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
+ struct {
+ struct iecm_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ u8 flex[6];
+ } qw1;
+ } tso;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX (0x08) */
+ struct {
+ struct iecm_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag2;
+ u8 flex0;
+ u8 ptag;
+ u8 flex1[2];
+ } qw1;
+ } tso_l2tag2_ptag;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX (0x0B) */
+ struct {
+ u8 qw0_flex[8];
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag2;
+ u8 flex[4];
+ } qw1;
+ } l2tag2;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_REINJECT_CTX (0x02) */
+ struct {
+ struct {
+ __le32 sa_domain;
+#define IECM_TXD_FLEX_CTX_SA_DOM_M 0xFFFF
+#define IECM_TXD_FLEX_CTX_SA_DOM_VAL 0x10000
+ __le32 sa_idx;
+#define IECM_TXD_FLEX_CTX_SAIDX_M 0x1FFFFF
+ } qw0;
+ struct {
+ __le16 cmd_dtype;
+ __le16 txr2comp;
+#define IECM_TXD_FLEX_CTX_TXR2COMP 0x1
+ __le16 miss_txq_comp_tag;
+ __le16 miss_txq_id;
+ } qw1;
+ } reinjection_pkt;
+};
+
+/* Host Split Context Descriptors */
+struct iecm_flex_tx_hs_ctx_desc {
+ union {
+ struct {
+ __le32 host_fnum_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_S 0
+#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF
+#define IECM_TXD_FLEX_CTX_FNUM_S 18
+#define IECM_TXD_FLEX_CTX_FNUM_M 0x7FF
+#define IECM_TXD_FLEX_CTX_HOST_S 29
+#define IECM_TXD_FLEX_CTX_HOST_M 0x7
+ __le16 ftype_mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_0 0
+#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF
+#define IECM_TXD_FLEX_CTX_FTYPE_S 14
+#define IECM_TXD_FLEX_CTX_FTYPE_VF MAKEMASK(0x0, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_VDEV MAKEMASK(0x1, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_PF MAKEMASK(0x2, IECM_TXD_FLEX_CTX_FTYPE_S)
+ u8 hdr_len;
+ u8 ptag;
+ } tso;
+ struct {
+ u8 flex0[2];
+ __le16 host_fnum_ftype;
+ u8 flex1[3];
+ u8 ptag;
+ } no_tso;
+ } qw0;
+
+ __le64 qw1_cmd_dtype;
+#define IECM_TXD_FLEX_CTX_QW1_PASID_S 16
+#define IECM_TXD_FLEX_CTX_QW1_PASID_M 0xFFFFF
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S 36
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S)
+#define IECM_TXD_FLEX_CTX_QW1_TPH_S 37
+#define IECM_TXD_FLEX_CTX_QW1_TPH \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_TPH_S)
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_S 38
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_M 0xF
+/* The following are only valid for DTYPE = 0x09 and DTYPE = 0x0A */
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_S 42
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_M 0x1FFFFF
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S 63
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VALID \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S)
+/* The following are only valid for DTYPE = 0x0D and DTYPE = 0x0E */
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_S 48
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_M 0xFF
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_S 56
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_M 0xFF
+};
+#endif /* _IECM_LAN_TXRX_H_ */
diff --git a/drivers/net/idpf/base/iecm_lan_vf_regs.h b/drivers/net/idpf/base/iecm_lan_vf_regs.h
new file mode 100644
index 0000000000..1ba1a8dea6
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_vf_regs.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_VF_REGS_H_
+#define _IECM_LAN_VF_REGS_H_
+
+
+/* Reset */
+#define VFGEN_RSTAT 0x00008800
+#define VFGEN_RSTAT_VFR_STATE_S 0
+#define VFGEN_RSTAT_VFR_STATE_M MAKEMASK(0x3, VFGEN_RSTAT_VFR_STATE_S)
+
+/* Control(VF Mailbox) Queue */
+#define VF_BASE 0x00006000
+
+#define VF_ATQBAL (VF_BASE + 0x1C00)
+#define VF_ATQBAH (VF_BASE + 0x1800)
+#define VF_ATQLEN (VF_BASE + 0x0800)
+#define VF_ATQLEN_ATQLEN_S 0
+#define VF_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, VF_ATQLEN_ATQLEN_S)
+#define VF_ATQLEN_ATQVFE_S 28
+#define VF_ATQLEN_ATQVFE_M BIT(VF_ATQLEN_ATQVFE_S)
+#define VF_ATQLEN_ATQOVFL_S 29
+#define VF_ATQLEN_ATQOVFL_M BIT(VF_ATQLEN_ATQOVFL_S)
+#define VF_ATQLEN_ATQCRIT_S 30
+#define VF_ATQLEN_ATQCRIT_M BIT(VF_ATQLEN_ATQCRIT_S)
+#define VF_ATQLEN_ATQENABLE_S 31
+#define VF_ATQLEN_ATQENABLE_M BIT(VF_ATQLEN_ATQENABLE_S)
+#define VF_ATQH (VF_BASE + 0x0400)
+#define VF_ATQH_ATQH_S 0
+#define VF_ATQH_ATQH_M MAKEMASK(0x3FF, VF_ATQH_ATQH_S)
+#define VF_ATQT (VF_BASE + 0x2400)
+
+#define VF_ARQBAL (VF_BASE + 0x0C00)
+#define VF_ARQBAH (VF_BASE)
+#define VF_ARQLEN (VF_BASE + 0x2000)
+#define VF_ARQLEN_ARQLEN_S 0
+#define VF_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, VF_ARQLEN_ARQLEN_S)
+#define VF_ARQLEN_ARQVFE_S 28
+#define VF_ARQLEN_ARQVFE_M BIT(VF_ARQLEN_ARQVFE_S)
+#define VF_ARQLEN_ARQOVFL_S 29
+#define VF_ARQLEN_ARQOVFL_M BIT(VF_ARQLEN_ARQOVFL_S)
+#define VF_ARQLEN_ARQCRIT_S 30
+#define VF_ARQLEN_ARQCRIT_M BIT(VF_ARQLEN_ARQCRIT_S)
+#define VF_ARQLEN_ARQENABLE_S 31
+#define VF_ARQLEN_ARQENABLE_M BIT(VF_ARQLEN_ARQENABLE_S)
+#define VF_ARQH (VF_BASE + 0x1400)
+#define VF_ARQH_ARQH_S 0
+#define VF_ARQH_ARQH_M MAKEMASK(0x1FFF, VF_ARQH_ARQH_S)
+#define VF_ARQT (VF_BASE + 0x1000)
+
+/* Transmit queues */
+#define VF_QTX_TAIL_BASE 0x00000000
+#define VF_QTX_TAIL(_QTX) (VF_QTX_TAIL_BASE + (_QTX) * 0x4)
+#define VF_QTX_TAIL_EXT_BASE 0x00040000
+#define VF_QTX_TAIL_EXT(_QTX) (VF_QTX_TAIL_EXT_BASE + ((_QTX) * 4))
+
+/* Receive queues */
+#define VF_QRX_TAIL_BASE 0x00002000
+#define VF_QRX_TAIL(_QRX) (VF_QRX_TAIL_BASE + ((_QRX) * 4))
+#define VF_QRX_TAIL_EXT_BASE 0x00050000
+#define VF_QRX_TAIL_EXT(_QRX) (VF_QRX_TAIL_EXT_BASE + ((_QRX) * 4))
+#define VF_QRXB_TAIL_BASE 0x00060000
+#define VF_QRXB_TAIL(_QRX) (VF_QRXB_TAIL_BASE + ((_QRX) * 4))
+
+/* Interrupts */
+#define VF_INT_DYN_CTL0 0x00005C00
+#define VF_INT_DYN_CTL0_INTENA_S 0
+#define VF_INT_DYN_CTL0_INTENA_M BIT(VF_INT_DYN_CTL0_INTENA_S)
+#define VF_INT_DYN_CTL0_ITR_INDX_S 3
+#define VF_INT_DYN_CTL0_ITR_INDX_M MAKEMASK(0x3, VF_INT_DYN_CTL0_ITR_INDX_S)
+#define VF_INT_DYN_CTLN(_INT) (0x00003800 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_EXT(_INT) (0x00070000 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_INTENA_S 0
+#define VF_INT_DYN_CTLN_INTENA_M BIT(VF_INT_DYN_CTLN_INTENA_S)
+#define VF_INT_DYN_CTLN_CLEARPBA_S 1
+#define VF_INT_DYN_CTLN_CLEARPBA_M BIT(VF_INT_DYN_CTLN_CLEARPBA_S)
+#define VF_INT_DYN_CTLN_SWINT_TRIG_S 2
+#define VF_INT_DYN_CTLN_SWINT_TRIG_M BIT(VF_INT_DYN_CTLN_SWINT_TRIG_S)
+#define VF_INT_DYN_CTLN_ITR_INDX_S 3
+#define VF_INT_DYN_CTLN_ITR_INDX_M MAKEMASK(0x3, VF_INT_DYN_CTLN_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_INTERVAL_S 5
+#define VF_INT_DYN_CTLN_INTERVAL_M BIT(VF_INT_DYN_CTLN_INTERVAL_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S 24
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_S 25
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_WB_ON_ITR_S 30
+#define VF_INT_DYN_CTLN_WB_ON_ITR_M BIT(VF_INT_DYN_CTLN_WB_ON_ITR_S)
+#define VF_INT_DYN_CTLN_INTENA_MSK_S 31
+#define VF_INT_DYN_CTLN_INTENA_MSK_M BIT(VF_INT_DYN_CTLN_INTENA_MSK_S)
+#define VF_INT_ITR0(_i) (0x00004C00 + ((_i) * 4))
+#define VF_INT_ITRN_V2(_i, _reg_start) ((_reg_start) + (((_i)) * 4))
+#define VF_INT_ITRN(_i, _INT) (0x00002800 + ((_i) * 4) + ((_INT) * 0x40))
+#define VF_INT_ITRN_64(_i, _INT) (0x00002C00 + ((_i) * 4) + ((_INT) * 0x100))
+#define VF_INT_ITRN_2K(_i, _INT) (0x00072000 + ((_i) * 4) + ((_INT) * 0x100))
+#define VF_INT_ITRN_MAX_INDEX 2
+#define VF_INT_ITRN_INTERVAL_S 0
+#define VF_INT_ITRN_INTERVAL_M MAKEMASK(0xFFF, VF_INT_ITRN_INTERVAL_S)
+#define VF_INT_PBA_CLEAR 0x00008900
+
+#define VF_INT_ICR0_ENA1 0x00005000
+#define VF_INT_ICR0_ENA1_ADMINQ_S 30
+#define VF_INT_ICR0_ENA1_ADMINQ_M BIT(VF_INT_ICR0_ENA1_ADMINQ_S)
+#define VF_INT_ICR0_ENA1_RSVD_S 31
+#define VF_INT_ICR01 0x00004800
+#define VF_QF_HENA(_i) (0x0000C400 + ((_i) * 4))
+#define VF_QF_HENA_MAX_INDX 1
+#define VF_QF_HKEY(_i) (0x0000CC00 + ((_i) * 4))
+#define VF_QF_HKEY_MAX_INDX 12
+#define VF_QF_HLUT(_i) (0x0000D000 + ((_i) * 4))
+#define VF_QF_HLUT_MAX_INDX 15
+#endif
diff --git a/drivers/net/idpf/base/iecm_prototype.h b/drivers/net/idpf/base/iecm_prototype.h
new file mode 100644
index 0000000000..cd3ee8dcbc
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_prototype.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_PROTOTYPE_H_
+#define _IECM_PROTOTYPE_H_
+
+/* Include generic macros and types first */
+#include "iecm_osdep.h"
+#include "iecm_controlq.h"
+#include "iecm_type.h"
+#include "iecm_alloc.h"
+#include "iecm_devids.h"
+#include "iecm_controlq_api.h"
+#include "iecm_lan_pf_regs.h"
+#include "iecm_lan_vf_regs.h"
+#include "iecm_lan_txrx.h"
+#include "virtchnl.h"
+
+#define APF
+
+int iecm_init_hw(struct iecm_hw *hw, struct iecm_ctlq_size ctlq_size);
+int iecm_deinit_hw(struct iecm_hw *hw);
+
+int iecm_clean_arq_element(struct iecm_hw *hw,
+ struct iecm_arq_event_info *e,
+ u16 *events_pending);
+bool iecm_asq_done(struct iecm_hw *hw);
+bool iecm_check_asq_alive(struct iecm_hw *hw);
+
+int iecm_get_rss_lut(struct iecm_hw *hw, u16 seid, bool pf_lut,
+ u8 *lut, u16 lut_size);
+int iecm_set_rss_lut(struct iecm_hw *hw, u16 seid, bool pf_lut,
+ u8 *lut, u16 lut_size);
+int iecm_get_rss_key(struct iecm_hw *hw, u16 seid,
+ struct iecm_get_set_rss_key_data *key);
+int iecm_set_rss_key(struct iecm_hw *hw, u16 seid,
+ struct iecm_get_set_rss_key_data *key);
+
+int iecm_set_mac_type(struct iecm_hw *hw);
+
+int iecm_reset(struct iecm_hw *hw);
+int iecm_send_msg_to_cp(struct iecm_hw *hw, enum virtchnl_ops v_opcode,
+ int v_retval, u8 *msg, u16 msglen);
+#endif /* _IECM_PROTOTYPE_H_ */
diff --git a/drivers/net/idpf/base/iecm_type.h b/drivers/net/idpf/base/iecm_type.h
new file mode 100644
index 0000000000..fdde9c6e61
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_type.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_TYPE_H_
+#define _IECM_TYPE_H_
+
+#include "iecm_controlq.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#define MAKEMASK(m, s) ((m) << (s))
+
+struct iecm_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+};
+
+/* Statistics collected by the MAC */
+struct iecm_hw_port_stats {
+ /* eth stats collected by the port */
+ struct iecm_eth_stats eth;
+
+ /* additional port specific stats */
+ u64 tx_dropped_link_down; /* tdold */
+ u64 crc_errors; /* crcerrs */
+ u64 illegal_bytes; /* illerrc */
+ u64 error_bytes; /* errbc */
+ u64 mac_local_faults; /* mlfc */
+ u64 mac_remote_faults; /* mrfc */
+ u64 rx_length_errors; /* rlec */
+ u64 link_xon_rx; /* lxonrxc */
+ u64 link_xoff_rx; /* lxoffrxc */
+ u64 priority_xon_rx[8]; /* pxonrxc[8] */
+ u64 priority_xoff_rx[8]; /* pxoffrxc[8] */
+ u64 link_xon_tx; /* lxontxc */
+ u64 link_xoff_tx; /* lxofftxc */
+ u64 priority_xon_tx[8]; /* pxontxc[8] */
+ u64 priority_xoff_tx[8]; /* pxofftxc[8] */
+ u64 priority_xon_2_xoff[8]; /* pxon2offc[8] */
+ u64 rx_size_64; /* prc64 */
+ u64 rx_size_127; /* prc127 */
+ u64 rx_size_255; /* prc255 */
+ u64 rx_size_511; /* prc511 */
+ u64 rx_size_1023; /* prc1023 */
+ u64 rx_size_1522; /* prc1522 */
+ u64 rx_size_big; /* prc9522 */
+ u64 rx_undersize; /* ruc */
+ u64 rx_fragments; /* rfc */
+ u64 rx_oversize; /* roc */
+ u64 rx_jabber; /* rjc */
+ u64 tx_size_64; /* ptc64 */
+ u64 tx_size_127; /* ptc127 */
+ u64 tx_size_255; /* ptc255 */
+ u64 tx_size_511; /* ptc511 */
+ u64 tx_size_1023; /* ptc1023 */
+ u64 tx_size_1522; /* ptc1522 */
+ u64 tx_size_big; /* ptc9522 */
+ u64 mac_short_packet_dropped; /* mspdc */
+ u64 checksum_error; /* xec */
+};
+/* Static buffer size to initialize control queue */
+struct iecm_ctlq_size {
+ u16 asq_buf_size;
+ u16 asq_ring_size;
+ u16 arq_buf_size;
+ u16 arq_ring_size;
+};
+
+/* Temporary definition to compile - TBD if needed */
+struct iecm_arq_event_info {
+ struct iecm_ctlq_desc desc;
+ u16 msg_len;
+ u16 buf_len;
+ u8 *msg_buf;
+};
+
+struct iecm_get_set_rss_key_data {
+ u8 standard_rss_key[0x28];
+ u8 extended_hash_key[0xc];
+};
+
+struct iecm_aq_get_phy_abilities_resp {
+ __le32 phy_type;
+};
+
+struct iecm_filter_program_desc {
+ __le32 qid;
+};
+
+#endif /* _IECM_TYPE_H_ */
diff --git a/drivers/net/idpf/base/meson.build b/drivers/net/idpf/base/meson.build
new file mode 100644
index 0000000000..1ad9a87d9d
--- /dev/null
+++ b/drivers/net/idpf/base/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+sources = [
+ 'iecm_common.c',
+ 'iecm_controlq.c',
+ 'iecm_controlq_setup.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+ '-Wno-unused-but-set-variable',
+ '-Wno-unused-variable',
+ '-Wno-unused-parameter',
+]
+
+c_args = cflags
+
+foreach flag: error_cflags
+ if cc.has_argument(flag)
+ c_args += flag
+ endif
+endforeach
+
+base_lib = static_library('idpf_base', sources,
+ dependencies: static_rte_eal,
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects()
\ No newline at end of file
diff --git a/drivers/net/idpf/base/siov_regs.h b/drivers/net/idpf/base/siov_regs.h
new file mode 100644
index 0000000000..bb7b2daac0
--- /dev/null
+++ b/drivers/net/idpf/base/siov_regs.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+#ifndef _SIOV_REGS_H_
+#define _SIOV_REGS_H_
+#define VDEV_MBX_START 0x20000 /* Begin at 128KB */
+#define VDEV_MBX_ATQBAL (VDEV_MBX_START + 0x0000)
+#define VDEV_MBX_ATQBAH (VDEV_MBX_START + 0x0004)
+#define VDEV_MBX_ATQLEN (VDEV_MBX_START + 0x0008)
+#define VDEV_MBX_ATQH (VDEV_MBX_START + 0x000C)
+#define VDEV_MBX_ATQT (VDEV_MBX_START + 0x0010)
+#define VDEV_MBX_ARQBAL (VDEV_MBX_START + 0x0014)
+#define VDEV_MBX_ARQBAH (VDEV_MBX_START + 0x0018)
+#define VDEV_MBX_ARQLEN (VDEV_MBX_START + 0x001C)
+#define VDEV_MBX_ARQH (VDEV_MBX_START + 0x0020)
+#define VDEV_MBX_ARQT (VDEV_MBX_START + 0x0024)
+#define VDEV_GET_RSTAT 0x21000 /* 132KB for RSTAT */
+
+/* Begin at offset after 1MB (after 256 4k pages) */
+#define VDEV_QRX_TAIL_START 0x100000
+#define VDEV_QRX_TAIL(_i) (VDEV_QRX_TAIL_START + ((_i) * 0x1000)) /* 2k Rx queues */
+
+#define VDEV_QRX_BUFQ_TAIL_START 0x900000 /* Begin at offset of 9MB for Rx buffer queue tail register pages */
+#define VDEV_QRX_BUFQ_TAIL(_i) (VDEV_QRX_BUFQ_TAIL_START + ((_i) * 0x1000)) /* 2k Rx buffer queues */
+
+#define VDEV_QTX_TAIL_START 0x1100000 /* Begin at offset of 17MB for 2k Tx queues */
+#define VDEV_QTX_TAIL(_i) (VDEV_QTX_TAIL_START + ((_i) * 0x1000)) /* 2k Tx queues */
+
+#define VDEV_QTX_COMPL_TAIL_START 0x1900000 /* Begin at offset of 25MB for 2k Tx completion queues */
+#define VDEV_QTX_COMPL_TAIL(_i) (VDEV_QTX_COMPL_TAIL_START + ((_i) * 0x1000)) /* 2k Tx completion queues */
+
+#define VDEV_INT_DYN_CTL01 0x2100000 /* Begin at offset 33MB */
+
+#define VDEV_INT_DYN_START (VDEV_INT_DYN_CTL01 + 0x1000) /* Begin at offset of 33MB + 4k to accommodate CTL01 register */
+#define VDEV_INT_DYN_CTL(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000))
+#define VDEV_INT_ITR_0(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x04)
+#define VDEV_INT_ITR_1(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x08)
+#define VDEV_INT_ITR_2(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x0C)
+
+/* Next offset to begin at 42MB (0x2A00000) */
+#endif /* _SIOV_REGS_H_ */
diff --git a/drivers/net/idpf/base/virtchnl.h b/drivers/net/idpf/base/virtchnl.h
new file mode 100644
index 0000000000..b5d0d5ffd3
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl.h
@@ -0,0 +1,2743 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the Virtual Function (VF) - Physical Function
+ * (PF) communication protocol used by the drivers for all devices starting
+ * from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have maximum of
+ * three VSIs. All the queue indexes are relative to the VSI. Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The returned value
+ * is of virtchnl_status_code type, defined here.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS 6
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+ { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL_CHECK_UNION_LEN(n, X) enum virtchnl_static_asset_enum_##X \
+ { virtchnl_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+
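
As a concrete, hypothetical illustration of the mechanism described above
(nothing here is part of virtchnl itself), the divide-by-zero only appears
when the stated size and sizeof() disagree:

/* Hypothetical 4-byte structure used only for this example. */
struct example_two_words {
	u16 a;
	u16 b;
};

VIRTCHNL_CHECK_STRUCT_LEN(4, example_two_words);	/* OK: 4 / 1 == 4 */
/*
 * VIRTCHNL_CHECK_STRUCT_LEN(8, example_two_words);
 * would fail to build: sizeof() != 8, the divisor becomes 0, and the enum
 * initializer is rejected at compile time.
 */
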
+
+/* Error Codes
+ * Note that many older versions of various iAVF drivers convert the reported
+ * status code directly into an iavf_status enumeration. For this reason, it
+ * is important that the values of these enumerations line up.
+ */
+enum virtchnl_status_code {
+ VIRTCHNL_STATUS_SUCCESS = 0,
+ VIRTCHNL_STATUS_ERR_PARAM = -5,
+ VIRTCHNL_STATUS_ERR_NO_MEMORY = -18,
+ VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH = -38,
+ VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR = -39,
+ VIRTCHNL_STATUS_ERR_INVALID_VF_ID = -40,
+ VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR = -53,
+ VIRTCHNL_STATUS_ERR_NOT_SUPPORTED = -64,
+};
+
+/* Backward compatibility */
+#define VIRTCHNL_ERR_PARAM VIRTCHNL_STATUS_ERR_PARAM
+#define VIRTCHNL_STATUS_NOT_SUPPORTED VIRTCHNL_STATUS_ERR_NOT_SUPPORTED
+
+#define VIRTCHNL_LINK_SPEED_2_5GB_SHIFT 0x0
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT 0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT 0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT 0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT 0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT 0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT 0x6
+#define VIRTCHNL_LINK_SPEED_5GB_SHIFT 0x7
+
+enum virtchnl_link_speed {
+ VIRTCHNL_LINK_SPEED_UNKNOWN = 0,
+ VIRTCHNL_LINK_SPEED_100MB = BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+ VIRTCHNL_LINK_SPEED_1GB = BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+ VIRTCHNL_LINK_SPEED_10GB = BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_40GB = BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_20GB = BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_25GB = BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_2_5GB = BIT(VIRTCHNL_LINK_SPEED_2_5GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_5GB = BIT(VIRTCHNL_LINK_SPEED_5GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+ VIRTCHNL_RX_HSPLIT_NO_SPLIT = 0,
+ VIRTCHNL_RX_HSPLIT_SPLIT_L2 = 1,
+ VIRTCHNL_RX_HSPLIT_SPLIT_IP = 2,
+ VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+ VIRTCHNL_RX_HSPLIT_SPLIT_SCTP = 8,
+};
+
+enum virtchnl_bw_limit_type {
+ VIRTCHNL_BW_SHAPER = 0,
+};
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and are not considered part of base mode feature set.
+ *
+ */
+ VIRTCHNL_OP_UNKNOWN = 0,
+ VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+ VIRTCHNL_OP_RESET_VF = 2,
+ VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+ VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+ VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+ VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+ VIRTCHNL_OP_ENABLE_QUEUES = 8,
+ VIRTCHNL_OP_DISABLE_QUEUES = 9,
+ VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+ VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+ VIRTCHNL_OP_ADD_VLAN = 12,
+ VIRTCHNL_OP_DEL_VLAN = 13,
+ VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+ VIRTCHNL_OP_GET_STATS = 15,
+ VIRTCHNL_OP_RSVD = 16,
+ VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+ /* opcode 19 is reserved */
+ /* opcodes 20, 21, and 22 are reserved */
+ VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+ VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+ VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+ VIRTCHNL_OP_SET_RSS_HENA = 26,
+ VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+ VIRTCHNL_OP_REQUEST_QUEUES = 29,
+ VIRTCHNL_OP_ENABLE_CHANNELS = 30,
+ VIRTCHNL_OP_DISABLE_CHANNELS = 31,
+ VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
+ VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+ /* opcode 34 is reserved */
+ /* opcodes 38, 39, 40, 41, 42 and 43 are reserved */
+ /* opcode 44 is reserved */
+ VIRTCHNL_OP_ADD_RSS_CFG = 45,
+ VIRTCHNL_OP_DEL_RSS_CFG = 46,
+ VIRTCHNL_OP_ADD_FDIR_FILTER = 47,
+ VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
+ VIRTCHNL_OP_GET_MAX_RSS_QREGION = 50,
+ VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51,
+ VIRTCHNL_OP_ADD_VLAN_V2 = 52,
+ VIRTCHNL_OP_DEL_VLAN_V2 = 53,
+ VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 = 54,
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55,
+ VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56,
+ VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57,
+ VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2 = 58,
+ VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 = 59,
+ VIRTCHNL_OP_1588_PTP_GET_CAPS = 60,
+ VIRTCHNL_OP_1588_PTP_GET_TIME = 61,
+ VIRTCHNL_OP_1588_PTP_SET_TIME = 62,
+ VIRTCHNL_OP_1588_PTP_ADJ_TIME = 63,
+ VIRTCHNL_OP_1588_PTP_ADJ_FREQ = 64,
+ VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP = 65,
+ VIRTCHNL_OP_GET_QOS_CAPS = 66,
+ VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP = 67,
+ VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS = 68,
+ VIRTCHNL_OP_1588_PTP_SET_PIN_CFG = 69,
+ VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP = 70,
+ VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
+ VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
+ VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+ VIRTCHNL_OP_MAX,
+};
+
+static inline const char *virtchnl_op_str(enum virtchnl_ops v_opcode)
+{
+ switch (v_opcode) {
+ case VIRTCHNL_OP_UNKNOWN:
+ return "VIRTCHNL_OP_UNKNOWN";
+ case VIRTCHNL_OP_VERSION:
+ return "VIRTCHNL_OP_VERSION";
+ case VIRTCHNL_OP_RESET_VF:
+ return "VIRTCHNL_OP_RESET_VF";
+ case VIRTCHNL_OP_GET_VF_RESOURCES:
+ return "VIRTCHNL_OP_GET_VF_RESOURCES";
+ case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+ return "VIRTCHNL_OP_CONFIG_TX_QUEUE";
+ case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+ return "VIRTCHNL_OP_CONFIG_RX_QUEUE";
+ case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+ return "VIRTCHNL_OP_CONFIG_VSI_QUEUES";
+ case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+ return "VIRTCHNL_OP_CONFIG_IRQ_MAP";
+ case VIRTCHNL_OP_ENABLE_QUEUES:
+ return "VIRTCHNL_OP_ENABLE_QUEUES";
+ case VIRTCHNL_OP_DISABLE_QUEUES:
+ return "VIRTCHNL_OP_DISABLE_QUEUES";
+ case VIRTCHNL_OP_ADD_ETH_ADDR:
+ return "VIRTCHNL_OP_ADD_ETH_ADDR";
+ case VIRTCHNL_OP_DEL_ETH_ADDR:
+ return "VIRTCHNL_OP_DEL_ETH_ADDR";
+ case VIRTCHNL_OP_ADD_VLAN:
+ return "VIRTCHNL_OP_ADD_VLAN";
+ case VIRTCHNL_OP_DEL_VLAN:
+ return "VIRTCHNL_OP_DEL_VLAN";
+ case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+ return "VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE";
+ case VIRTCHNL_OP_GET_STATS:
+ return "VIRTCHNL_OP_GET_STATS";
+ case VIRTCHNL_OP_RSVD:
+ return "VIRTCHNL_OP_RSVD";
+ case VIRTCHNL_OP_EVENT:
+ return "VIRTCHNL_OP_EVENT";
+ case VIRTCHNL_OP_CONFIG_RSS_KEY:
+ return "VIRTCHNL_OP_CONFIG_RSS_KEY";
+ case VIRTCHNL_OP_CONFIG_RSS_LUT:
+ return "VIRTCHNL_OP_CONFIG_RSS_LUT";
+ case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ return "VIRTCHNL_OP_GET_RSS_HENA_CAPS";
+ case VIRTCHNL_OP_SET_RSS_HENA:
+ return "VIRTCHNL_OP_SET_RSS_HENA";
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+ return "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING";
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ return "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING";
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ return "VIRTCHNL_OP_REQUEST_QUEUES";
+ case VIRTCHNL_OP_ENABLE_CHANNELS:
+ return "VIRTCHNL_OP_ENABLE_CHANNELS";
+ case VIRTCHNL_OP_DISABLE_CHANNELS:
+ return "VIRTCHNL_OP_DISABLE_CHANNELS";
+ case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+ return "VIRTCHNL_OP_ADD_CLOUD_FILTER";
+ case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+ return "VIRTCHNL_OP_DEL_CLOUD_FILTER";
+ case VIRTCHNL_OP_ADD_RSS_CFG:
+ return "VIRTCHNL_OP_ADD_RSS_CFG";
+ case VIRTCHNL_OP_DEL_RSS_CFG:
+ return "VIRTCHNL_OP_DEL_RSS_CFG";
+ case VIRTCHNL_OP_ADD_FDIR_FILTER:
+ return "VIRTCHNL_OP_ADD_FDIR_FILTER";
+ case VIRTCHNL_OP_DEL_FDIR_FILTER:
+ return "VIRTCHNL_OP_DEL_FDIR_FILTER";
+ case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
+ return "VIRTCHNL_OP_GET_MAX_RSS_QREGION";
+ case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+ return "VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS";
+ case VIRTCHNL_OP_ADD_VLAN_V2:
+ return "VIRTCHNL_OP_ADD_VLAN_V2";
+ case VIRTCHNL_OP_DEL_VLAN_V2:
+ return "VIRTCHNL_OP_DEL_VLAN_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2";
+ case VIRTCHNL_OP_1588_PTP_GET_CAPS:
+ return "VIRTCHNL_OP_1588_PTP_GET_CAPS";
+ case VIRTCHNL_OP_1588_PTP_GET_TIME:
+ return "VIRTCHNL_OP_1588_PTP_GET_TIME";
+ case VIRTCHNL_OP_1588_PTP_SET_TIME:
+ return "VIRTCHNL_OP_1588_PTP_SET_TIME";
+ case VIRTCHNL_OP_1588_PTP_ADJ_TIME:
+ return "VIRTCHNL_OP_1588_PTP_ADJ_TIME";
+ case VIRTCHNL_OP_1588_PTP_ADJ_FREQ:
+ return "VIRTCHNL_OP_1588_PTP_ADJ_FREQ";
+ case VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP:
+ return "VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP";
+ case VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS:
+ return "VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS";
+ case VIRTCHNL_OP_1588_PTP_SET_PIN_CFG:
+ return "VIRTCHNL_OP_1588_PTP_SET_PIN_CFG";
+ case VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP:
+ return "VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP";
+ case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+ return "VIRTCHNL_OP_ENABLE_QUEUES_V2";
+ case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+ return "VIRTCHNL_OP_DISABLE_QUEUES_V2";
+ case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+ return "VIRTCHNL_OP_MAP_QUEUE_VECTOR";
+ case VIRTCHNL_OP_MAX:
+ return "VIRTCHNL_OP_MAX";
+ default:
+ return "Unsupported (update virtchnl.h)";
+ }
+}
+
+static inline const char *virtchnl_stat_str(enum virtchnl_status_code v_status)
+{
+ switch (v_status) {
+ case VIRTCHNL_STATUS_SUCCESS:
+ return "VIRTCHNL_STATUS_SUCCESS";
+ case VIRTCHNL_STATUS_ERR_PARAM:
+ return "VIRTCHNL_STATUS_ERR_PARAM";
+ case VIRTCHNL_STATUS_ERR_NO_MEMORY:
+ return "VIRTCHNL_STATUS_ERR_NO_MEMORY";
+ case VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH:
+ return "VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH";
+ case VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR:
+ return "VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR";
+ case VIRTCHNL_STATUS_ERR_INVALID_VF_ID:
+ return "VIRTCHNL_STATUS_ERR_INVALID_VF_ID";
+ case VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR:
+ return "VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR";
+ case VIRTCHNL_STATUS_ERR_NOT_SUPPORTED:
+ return "VIRTCHNL_STATUS_ERR_NOT_SUPPORTED";
+ default:
+ return "Unknown status code (update virtchnl.h)";
+ }
+}
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+ u8 pad[8]; /* AQ flags/opcode/len/retval fields */
+
+ /* avoid confusion with desc->opcode */
+ enum virtchnl_ops v_opcode;
+
+ /* ditto for desc->retval */
+ enum virtchnl_status_code v_retval;
+ u32 vfid; /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures. */
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR 1
+#define VIRTCHNL_VERSION_MINOR 1
+#define VIRTCHNL_VERSION_MAJOR_2 2
+#define VIRTCHNL_VERSION_MINOR_0 0
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS 0
+
+struct virtchnl_version_info {
+ u32 major;
+ u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_ver) (((_ver)->major == 1) && ((_ver)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+#define VF_IS_V20(_ver) (((_ver)->major == 2) && ((_ver)->minor == 0))
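
A minimal sketch of the negotiation rule spelled out above (a major-version
mismatch is fatal, a minor mismatch only warrants a warning); this is not part
of the patch and the printf() is a placeholder, not a driver logging API:

#include <stdio.h>
#include "virtchnl.h"

/* Return 0 when the PF's reported version is usable by this VF. */
static int example_check_api_version(const struct virtchnl_version_info *pf_ver)
{
	if (pf_ver->major != VIRTCHNL_VERSION_MAJOR)
		return -1;	/* major mismatch: VF cannot operate */

	if (pf_ver->minor != VIRTCHNL_VERSION_MINOR)
		printf("virtchnl minor version mismatch: PF %u, VF %u\n",
		       pf_ver->minor, VIRTCHNL_VERSION_MINOR);

	return 0;
}
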
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+ VIRTCHNL_VSI_TYPE_INVALID = 0,
+ VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ */
+
+struct virtchnl_vsi_resource {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+
+ /* see enum virtchnl_vsi_type */
+ s32 vsi_type;
+ u16 qset_handle;
+ u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2 BIT(0)
+#define VIRTCHNL_VF_OFFLOAD_IWARP BIT(1)
+#define VIRTCHNL_VF_CAP_RDMA VIRTCHNL_VF_OFFLOAD_IWARP
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ BIT(3)
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG BIT(4)
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR BIT(5)
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6)
+/* used to negotiate communicating link speeds in Mbps */
+#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7)
+ /* BIT(8) is reserved */
+#define VIRTCHNL_VF_LARGE_NUM_QPAIRS BIT(9)
+#define VIRTCHNL_VF_OFFLOAD_CRC BIT(10)
+#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15)
+#define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16)
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17)
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 BIT(18)
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF BIT(19)
+#define VIRTCHNL_VF_OFFLOAD_ENCAP BIT(20)
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM BIT(21)
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM BIT(22)
+#define VIRTCHNL_VF_OFFLOAD_ADQ BIT(23)
+#define VIRTCHNL_VF_OFFLOAD_ADQ_V2 BIT(24)
+#define VIRTCHNL_VF_OFFLOAD_USO BIT(25)
+ /* BIT(26) is reserved */
+#define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF BIT(27)
+#define VIRTCHNL_VF_OFFLOAD_FDIR_PF BIT(28)
+#define VIRTCHNL_VF_OFFLOAD_QOS BIT(29)
+ /* BIT(30) is reserved */
+#define VIRTCHNL_VF_CAP_PTP BIT(31)
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+ VIRTCHNL_VF_OFFLOAD_VLAN | \
+ VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+ u16 num_vsis;
+ u16 num_queue_pairs;
+ u16 max_vectors;
+ u16 max_mtu;
+
+ u32 vf_cap_flags;
+ u32 rss_key_size;
+ u32 rss_lut_size;
+
+ struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u16 ring_len; /* number of descriptors, multiple of 8 */
+ u16 headwb_enabled; /* deprecated with AVF 1.0 */
+ u64 dma_ring_addr;
+ u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* RX descriptor IDs (range from 0 to 63) */
+enum virtchnl_rx_desc_ids {
+ VIRTCHNL_RXDID_0_16B_BASE = 0,
+ VIRTCHNL_RXDID_1_32B_BASE = 1,
+ VIRTCHNL_RXDID_2_FLEX_SQ_NIC = 2,
+ VIRTCHNL_RXDID_3_FLEX_SQ_SW = 3,
+ VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB = 4,
+ VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL = 5,
+ VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2 = 6,
+ VIRTCHNL_RXDID_7_HW_RSVD = 7,
+ /* 8 through 15 are reserved */
+ VIRTCHNL_RXDID_16_COMMS_GENERIC = 16,
+ VIRTCHNL_RXDID_17_COMMS_AUX_VLAN = 17,
+ VIRTCHNL_RXDID_18_COMMS_AUX_IPV4 = 18,
+ VIRTCHNL_RXDID_19_COMMS_AUX_IPV6 = 19,
+ VIRTCHNL_RXDID_20_COMMS_AUX_FLOW = 20,
+ VIRTCHNL_RXDID_21_COMMS_AUX_TCP = 21,
+ /* 22 through 63 are reserved */
+};
+
+/* RX descriptor ID bitmasks */
+enum virtchnl_rx_desc_id_bitmasks {
+ VIRTCHNL_RXDID_0_16B_BASE_M = BIT(VIRTCHNL_RXDID_0_16B_BASE),
+ VIRTCHNL_RXDID_1_32B_BASE_M = BIT(VIRTCHNL_RXDID_1_32B_BASE),
+ VIRTCHNL_RXDID_2_FLEX_SQ_NIC_M = BIT(VIRTCHNL_RXDID_2_FLEX_SQ_NIC),
+ VIRTCHNL_RXDID_3_FLEX_SQ_SW_M = BIT(VIRTCHNL_RXDID_3_FLEX_SQ_SW),
+ VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB_M = BIT(VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB),
+ VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL_M = BIT(VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL),
+ VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2_M = BIT(VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2),
+ VIRTCHNL_RXDID_7_HW_RSVD_M = BIT(VIRTCHNL_RXDID_7_HW_RSVD),
+ /* 8 through 15 are reserved */
+ VIRTCHNL_RXDID_16_COMMS_GENERIC_M = BIT(VIRTCHNL_RXDID_16_COMMS_GENERIC),
+ VIRTCHNL_RXDID_17_COMMS_AUX_VLAN_M = BIT(VIRTCHNL_RXDID_17_COMMS_AUX_VLAN),
+ VIRTCHNL_RXDID_18_COMMS_AUX_IPV4_M = BIT(VIRTCHNL_RXDID_18_COMMS_AUX_IPV4),
+ VIRTCHNL_RXDID_19_COMMS_AUX_IPV6_M = BIT(VIRTCHNL_RXDID_19_COMMS_AUX_IPV6),
+ VIRTCHNL_RXDID_20_COMMS_AUX_FLOW_M = BIT(VIRTCHNL_RXDID_20_COMMS_AUX_FLOW),
+ VIRTCHNL_RXDID_21_COMMS_AUX_TCP_M = BIT(VIRTCHNL_RXDID_21_COMMS_AUX_TCP),
+ /* 22 through 63 are reserved */
+};
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code. The
+ * crc_disable flag disables CRC stripping on the VF. Setting
+ * the crc_disable flag to 1 will disable CRC stripping for each
+ * queue in the VF where the flag is set. The VIRTCHNL_VF_OFFLOAD_CRC
+ * offload must have been set prior to sending this info or the PF
+ * will ignore the request. This flag should be set the same for
+ * all of the queues for a VF.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u32 ring_len; /* number of descriptors, multiple of 32 */
+ u16 hdr_size;
+ u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+ u32 databuffer_size;
+ u32 max_pkt_size;
+ u8 crc_disable;
+ u8 pad1[3];
+ u64 dma_ring_addr;
+
+ /* see enum virtchnl_rx_hsplit; deprecated with AVF 1.0 */
+ s32 rx_split_pos;
+ u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
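
An illustrative fill-in for this opcode (not part of the patch), following the
rule above that crc_disable only takes effect when VIRTCHNL_VF_OFFLOAD_CRC has
been negotiated; the ring and buffer sizes are arbitrary example values:

#include <string.h>
#include "virtchnl.h"

static void example_fill_rxq(struct virtchnl_rxq_info *rxq, u16 vsi_id,
			     u16 queue_id, u64 ring_dma, u32 vf_cap_flags)
{
	memset(rxq, 0, sizeof(*rxq));
	rxq->vsi_id = vsi_id;
	rxq->queue_id = queue_id;
	rxq->ring_len = 512;			/* multiple of 32 */
	rxq->databuffer_size = 2048;
	rxq->max_pkt_size = 1518;
	rxq->dma_ring_addr = ring_dma;
	if (vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
		rxq->crc_disable = 1;		/* PF ignores it otherwise */
}
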
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ * NOTE: The VF is not required to configure all queues in a single request.
+ * It may send multiple messages. PF drivers must correctly handle all VF
+ * requests.
+ */
+struct virtchnl_queue_pair_info {
+ /* NOTE: vsi_id and queue_id should be identical for both queues. */
+ struct virtchnl_txq_info txq;
+ struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+ u32 pad;
+ struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF. Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated. This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support. If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+ u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0. The VF may not request
+ * that vector 0 be used for traffic.
+ * PF configures interrupt mapping and returns status.
+ * NOTE: due to hardware requirements, all active queues (both TX and RX)
+ * should be mapped to interrupts, even if the driver intends to operate
+ * only in polling mode. In this case the interrupt may be disabled, but
+ * the ITR timer will still run to trigger writebacks.
+ */
+struct virtchnl_vector_map {
+ u16 vsi_id;
+ u16 vector_id;
+ u16 rxq_map;
+ u16 txq_map;
+ u16 rxitr_idx;
+ u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+ u16 num_vectors;
+ struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
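
A small sketch of one vector map entry, reflecting the notes above: rxq_map
and txq_map are per-queue bitmaps, and traffic must use vectors >= 1 because
vector 0 carries the "other" causes. BIT() is assumed to come from the
OS-dependent layer; this helper is illustrative only:

#include "virtchnl.h"

static void example_map_queue(struct virtchnl_vector_map *vm, u16 vsi_id,
			      u16 queue, u16 vector)
{
	vm->vsi_id = vsi_id;
	vm->vector_id = vector;		/* must be >= 1 for traffic */
	vm->rxq_map = (u16)BIT(queue);	/* Rx queue -> this vector */
	vm->txq_map = (u16)BIT(queue);	/* Tx queue -> this vector */
	vm->rxitr_idx = 0;		/* Rx uses ITR index 0 */
	vm->txitr_idx = 1;		/* Tx uses ITR index 1 */
}
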
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these message to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ * NOTE: The VF is not required to enable/disable all queues in a single
+ * request. It may send multiple messages.
+ * PF drivers must correctly handle all VF requests.
+ */
+struct virtchnl_queue_select {
+ u16 vsi_id;
+ u16 pad;
+ u32 rx_queues;
+ u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
+
+/* VIRTCHNL_OP_GET_MAX_RSS_QREGION
+ *
+ * if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated in VIRTCHNL_OP_GET_VF_RESOURCES
+ * then this op must be supported.
+ *
+ * VF sends this message in order to query the max RSS queue region
+ * size supported by PF, when VIRTCHNL_VF_LARGE_NUM_QPAIRS is enabled.
+ * This information should be used when configuring the RSS LUT and/or
+ * configuring queue region based filters.
+ *
+ * The maximum RSS queue region is 2^qregion_width. So, a qregion_width
+ * of 6 would inform the VF that the PF supports a maximum RSS queue region
+ * of 64.
+ *
+ * A queue region represents a range of queues that can be used to configure
+ * a RSS LUT. For example, if a VF is given 64 queues, but only a max queue
+ * region size of 16 (i.e. 2^qregion_width = 16) then it will only be able
+ * to configure the RSS LUT with queue indices from 0 to 15. However, other
+ * filters can be used to direct packets to queues >15 via specifying a queue
+ * base/offset and queue region width.
+ */
+struct virtchnl_max_rss_qregion {
+ u16 vport_id;
+ u16 qregion_width;
+ u8 pad[4];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_max_rss_qregion);
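
To make the 2^qregion_width rule above concrete, a hedged helper (not part of
the patch) that derives how many queues the RSS LUT may reference from the
PF's reply:

#include "virtchnl.h"

/* Largest number of queues that LUT entries may reference. */
static u16 example_rss_lut_limit(const struct virtchnl_max_rss_qregion *resp,
				 u16 num_vf_queues)
{
	u16 qregion_size = (u16)(1U << resp->qregion_width);

	return num_vf_queues < qregion_size ? num_vf_queues : qregion_size;
}
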
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_LEGACY
+ * Prior to adding the @type member to virtchnl_ether_addr, there were 2 pad
+ * bytes. Moving forward all VF drivers should not set type to
+ * VIRTCHNL_ETHER_ADDR_LEGACY. This is only here to not break previous/legacy
+ * behavior. The control plane function (i.e. PF) can use a best effort method
+ * of tracking the primary/device unicast in this case, but there is no
+ * guarantee and functionality depends on the implementation of the PF.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_PRIMARY
+ * All VF drivers should set @type to VIRTCHNL_ETHER_ADDR_PRIMARY for the
+ * primary/device unicast MAC address filter for VIRTCHNL_OP_ADD_ETH_ADDR and
+ * VIRTCHNL_OP_DEL_ETH_ADDR. This allows for the underlying control plane
+ * function (i.e. PF) to accurately track and use this MAC address for
+ * displaying on the host and for VM/function reset.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_EXTRA
+ * All VF drivers should set @type to VIRTCHNL_ETHER_ADDR_EXTRA for any extra
+ * unicast and/or multicast filters that are being added/deleted via
+ * VIRTCHNL_OP_DEL_ETH_ADDR/VIRTCHNL_OP_ADD_ETH_ADDR respectively.
+ */
+struct virtchnl_ether_addr {
+ u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ u8 type;
+#define VIRTCHNL_ETHER_ADDR_LEGACY 0
+#define VIRTCHNL_ETHER_ADDR_PRIMARY 1
+#define VIRTCHNL_ETHER_ADDR_EXTRA 2
+#define VIRTCHNL_ETHER_ADDR_TYPE_MASK 3 /* first two bits of type are valid */
+ u8 pad;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+ u16 vsi_id;
+ u16 num_elements;
+ struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+ u16 vsi_id;
+ u16 num_elements;
+ u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* This enum is used for all of the VIRTCHNL_VF_OFFLOAD_VLAN_V2_CAPS related
+ * structures and opcodes.
+ *
+ * VIRTCHNL_VLAN_UNSUPPORTED - This field is not supported and if a VF driver
+ * populates it the PF should return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 - This field supports 0x8100 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 - This field supports 0x88A8 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 - This field supports 0x9100 ethertype.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_AND - Used when multiple ethertypes can be supported
+ * by the PF concurrently. For example, if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 AND VIRTCHNL_VLAN_ETHERTYPE_88A8 filters it
+ * would OR the following bits:
+ *
+ *	VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * The VF would interpret this as VLAN filtering can be supported on both 0x8100
+ * and 0x88A8 VLAN ethertypes.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR - Used when only a single ethertype can be supported
+ * by the PF concurrently. For example if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 XOR VIRTCHNL_VLAN_ETHERTYPE_88A8 stripping
+ * offload it would OR the following bits:
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * The VF would interpret this as VLAN stripping can be supported on either
+ * 0x8100 or 0x88a8 VLAN ethertypes. So when requesting VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 the specified ethertype will override
+ * the previously set value.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 - Used to tell the VF to insert and/or
+ * strip the VLAN tag using the L2TAG1 field of the Tx/Rx descriptors.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to insert hardware
+ * offloaded VLAN tags using the L2TAG2 field of the Tx descriptor.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 - Used to tell the VF to strip hardware
+ * offloaded VLAN tags using the L2TAG2_2 field of the Rx descriptor.
+ *
+ * VIRTCHNL_VLAN_PRIO - This field supports VLAN priority bits. This is used for
+ * VLAN filtering if the underlying PF supports it.
+ *
+ * VIRTCHNL_VLAN_TOGGLE - This field is used to say whether a
+ * certain VLAN capability can be toggled. For example if the underlying PF/CP
+ * allows the VF to toggle VLAN filtering, stripping, and/or insertion it should
+ * set this bit along with the supported ethertypes.
+ */
+enum virtchnl_vlan_support {
+ VIRTCHNL_VLAN_UNSUPPORTED = 0,
+ VIRTCHNL_VLAN_ETHERTYPE_8100 = 0x00000001,
+ VIRTCHNL_VLAN_ETHERTYPE_88A8 = 0x00000002,
+ VIRTCHNL_VLAN_ETHERTYPE_9100 = 0x00000004,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 = 0x00000100,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 = 0x00000200,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 = 0x00000400,
+ VIRTCHNL_VLAN_PRIO = 0x01000000,
+ VIRTCHNL_VLAN_FILTER_MASK = 0x10000000,
+ VIRTCHNL_VLAN_ETHERTYPE_AND = 0x20000000,
+ VIRTCHNL_VLAN_ETHERTYPE_XOR = 0x40000000,
+ VIRTCHNL_VLAN_TOGGLE = 0x80000000
+};
+
+/* This structure is used as part of the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * for filtering, insertion, and stripping capabilities.
+ *
+ * If only outer capabilities are supported (for filtering, insertion, and/or
+ * stripping) then this refers to the outer most or single VLAN from the VF's
+ * perspective.
+ *
+ * If only inner capabilities are supported (for filtering, insertion, and/or
+ * stripping) then this refers to the outer most or single VLAN from the VF's
+ * perspective. Functionally this is the same as if only outer capabilities are
+ * supported. The VF driver is just forced to use the inner fields when
+ * adding/deleting filters and enabling/disabling offloads (if supported).
+ *
+ * If both outer and inner capabilities are supported (for filtering, insertion,
+ * and/or stripping) then outer refers to the outer most or single VLAN and
+ * inner refers to the second VLAN, if it exists, in the packet.
+ *
+ * There is no support for tunneled VLAN offloads, so outer or inner are never
+ * referring to a tunneled packet from the VF's perspective.
+ */
+struct virtchnl_vlan_supported_caps {
+ u32 outer;
+ u32 inner;
+};
+
+/* The PF populates these fields based on the supported VLAN filtering. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ADD_VLAN_V2 or VIRTCHNL_OP_DEL_VLAN_V2 messages using
+ * the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN filtering setting if the
+ * VIRTCHNL_VLAN_TOGGLE bit is set.
+ *
+ * The ethertype(s) specified in the ethertype_init field are the ethertypes
+ * enabled for VLAN filtering. VLAN filtering in this case refers to the outer
+ * most VLAN from the VF's perspective. If both inner and outer filtering are
+ * allowed then ethertype_init only refers to the outer most VLAN, as the only
+ * VLAN ethertype supported for inner VLAN filtering is
+ * VIRTCHNL_VLAN_ETHERTYPE_8100. By default, inner VLAN filtering is disabled
+ * when both inner and outer filtering are allowed.
+ *
+ * The max_filters field tells the VF how many VLAN filters it's allowed to have
+ * at any one time. If it exceeds this amount and tries to add another filter,
+ * then the request will be rejected by the PF. To prevent failures, the VF
+ * should keep track of how many VLAN filters it has added and not attempt to
+ * add more than max_filters.
+ */
+struct virtchnl_vlan_filtering_caps {
+ struct virtchnl_vlan_supported_caps filtering_support;
+ u32 ethertype_init;
+ u16 max_filters;
+ u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_filtering_caps);
+
+/* This enum is used for the virtchnl_vlan_offload_caps structure to specify
+ * if the PF supports a different ethertype for stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION - The ethertype(s) specified
+ * for stripping affect the ethertype(s) specified for insertion and vice versa
+ * as well. If the VF tries to configure VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 with VIRTCHNL_VLAN_ETHERTYPE_8100 then
+ * that will be the ethertype for both stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED - The ethertype(s) specified for
+ * stripping do not affect the ethertype(s) specified for insertion and vice
+ * versa.
+ */
+enum virtchnl_vlan_ethertype_match {
+ VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION = 0,
+ VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED = 1,
+};
+
+/* The PF populates these fields based on the supported VLAN offloads. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 or
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 messages using the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN offload setting if the
+ * VIRTCHNL_VLAN_TOGGLE bit is set.
+ *
+ * The VF driver needs to be aware of how the tags are stripped by hardware and
+ * inserted by the VF driver based on the level of offload support. The PF will
+ * populate these fields based on where the VLAN tags are expected to be
+ * offloaded via the VIRTHCNL_VLAN_TAG_LOCATION_* bits. The VF will need to
+ * interpret these fields. See the definition of the
+ * VIRTCHNL_VLAN_TAG_LOCATION_* bits above the virtchnl_vlan_support
+ * enumeration.
+ */
+struct virtchnl_vlan_offload_caps {
+ struct virtchnl_vlan_supported_caps stripping_support;
+ struct virtchnl_vlan_supported_caps insertion_support;
+ u32 ethertype_init;
+ u8 ethertype_match;
+ u8 pad[3];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_vlan_offload_caps);
+
+/* VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * VF sends this message to determine its VLAN capabilities.
+ *
+ * PF will mark which capabilities it supports based on hardware support and
+ * current configuration. For example, if a port VLAN is configured the PF will
+ * not allow outer VLAN filtering, stripping, or insertion to be configured so
+ * it will block these features from the VF.
+ *
+ * The VF will need to cross reference its capabilities with the PF's
+ * capabilities in the response message from the PF to determine the VLAN
+ * support.
+ */
+struct virtchnl_vlan_caps {
+ struct virtchnl_vlan_filtering_caps filtering;
+ struct virtchnl_vlan_offload_caps offloads;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_caps);
+
+struct virtchnl_vlan {
+ u16 tci; /* tci[15:13] = PCP and tci[11:0] = VID */
+ u16 tci_mask; /* only valid if VIRTCHNL_VLAN_FILTER_MASK set in
+ * filtering caps
+ */
+ u16 tpid; /* 0x8100, 0x88a8, etc. and only type(s) set in
+ * filtering caps. Note that tpid here does not refer to
+ * VIRTCHNL_VLAN_ETHERTYPE_*, but it refers to the
+ * actual 2-byte VLAN TPID
+ */
+ u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan);
+
+struct virtchnl_vlan_filter {
+ struct virtchnl_vlan inner;
+ struct virtchnl_vlan outer;
+ u8 pad[16];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(32, virtchnl_vlan_filter);
+
+/* VIRTCHNL_OP_ADD_VLAN_V2
+ * VIRTCHNL_OP_DEL_VLAN_V2
+ *
+ * VF sends these messages to add/del one or more VLAN tag filters for Rx
+ * traffic.
+ *
+ * The PF attempts to add the filters and returns status.
+ *
+ * The VF should only ever attempt to add/del virtchnl_vlan_filter(s) using the
+ * supported fields negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS.
+ */
+struct virtchnl_vlan_filter_list_v2 {
+ u16 vport_id;
+ u16 num_elements;
+ u8 pad[4];
+ struct virtchnl_vlan_filter filters[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_filter_list_v2);
+
+/* VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2
+ *
+ * VF sends this message to enable or disable VLAN stripping or insertion. It
+ * also needs to specify an ethertype. The VF knows which VLAN ethertypes are
+ * allowed and whether or not it's allowed to enable/disable the specific
+ * offload via the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to
+ * parse the virtchnl_vlan_caps.offloads fields to determine which offload
+ * messages are allowed.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.offloads in the
+ * following manner the VF will be allowed to enable and/or disable 0x8100 inner
+ * VLAN insertion and/or stripping via the opcodes listed above. Inner in this
+ * case means the outer most or single VLAN from the VF's perspective. This is
+ * because no outer offloads are supported. See the comments above the
+ * virtchnl_vlan_supported_caps structure for more details.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.inner =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.inner =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * In order to enable inner (again note that in this case inner is the outer
+ * most or single VLAN from the VF's perspective) VLAN stripping for 0x8100
+ * VLANs, the VF would populate the virtchnl_vlan_setting structure in the
+ * following manner and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message.
+ *
+ * virtchnl_vlan_setting.inner_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * The reason that VLAN TPID(s) are not being used for the
+ * outer_ethertype_setting and inner_ethertype_setting fields is because it's
+ * possible a device could support VLAN insertion and/or stripping offload on
+ * multiple ethertypes concurrently, so this method allows a VF to request
+ * multiple ethertypes in one message using the virtchnl_vlan_support
+ * enumeration.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.offloads in the
+ * following manner the VF will be allowed to enable 0x8100 and 0x88a8 outer
+ * VLAN insertion and stripping simultaneously. The
+ * virtchnl_vlan_caps.offloads.ethertype_match field will also have to be
+ * populated based on what the PF can support.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * In order to enable outer VLAN stripping for 0x8100 and 0x88a8 VLANs, the VF
+ * would populate the virtchnl_vlan_setting structure in the following manner
+ * and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting =
+ *	VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ *	VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * There is also the case where a PF and the underlying hardware can support
+ * VLAN offloads on multiple ethertypes, but not concurrently. For example, if
+ * the PF populates the virtchnl_vlan_caps.offloads in the following manner the
+ * VF will be allowed to enable and/or disable 0x8100 XOR 0x88a8 outer VLAN
+ * offloads. The ethertypes must match for stripping and insertion.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * virtchnl_vlan_caps.offloads.ethertype_match =
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION;
+ *
+ * In order to enable outer VLAN stripping for 0x88a8 VLANs, the VF would
+ * populate the virtchnl_vlan_setting structure in the following manner and send
+ * the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2. Also, this will change the
+ * ethertype for VLAN insertion if it's enabled. So, for completeness, a
+ * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 with the same ethertype should be sent.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting = VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2
+ *
+ * VF sends this message to enable or disable VLAN filtering. It also needs to
+ * specify an ethertype. The VF knows which VLAN ethertypes are allowed and
+ * whether or not it's allowed to enable/disable filtering via the
+ * VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to
+ * parse the virtchnl_vlan_caps.filtering fields to determine which, if any,
+ * filtering messages are allowed.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.filtering in the
+ * following manner the VF will be allowed to enable/disable 0x8100 and 0x88a8
+ * outer VLAN filtering together. Note that VIRTCHNL_VLAN_ETHERTYPE_AND
+ * means that all filtering ethertypes will be enabled and disabled together
+ * regardless of the request from the VF. This means that the underlying
+ * hardware only supports VLAN filtering for all of the specified ethertypes
+ * or none of them.
+ *
+ * virtchnl_vlan_caps.filtering.filtering_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ *	VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * In order to enable outer VLAN filtering for 0x88a8 and 0x8100 VLANs (0x9100
+ * VLANs aren't supported by the VF driver), the VF would populate the
+ * virtchnl_vlan_setting structure in the following manner and send the
+ * VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2. The same message format would be used
+ * to disable outer VLAN filtering for 0x88a8 and 0x8100 VLANs, but the
+ * VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 opcode is used.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ */
+struct virtchnl_vlan_setting {
+ u32 outer_ethertype_setting;
+ u32 inner_ethertype_setting;
+ u16 vport_id;
+ u8 pad[6];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_setting);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+ u16 vsi_id;
+ u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC 0x00000001
+#define FLAG_VF_MULTICAST_PROMISC 0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+ u64 rx_bytes; /* received bytes */
+ u64 rx_unicast; /* received unicast pkts */
+ u64 rx_multicast; /* received multicast pkts */
+ u64 rx_broadcast; /* received broadcast pkts */
+ u64 rx_discards;
+ u64 rx_unknown_protocol;
+ u64 tx_bytes; /* transmitted bytes */
+ u64 tx_unicast; /* transmitted unicast pkts */
+ u64 tx_multicast; /* transmitted multicast pkts */
+ u64 tx_broadcast; /* transmitted broadcast pkts */
+ u64 tx_discards;
+ u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+ u16 vsi_id;
+ u16 key_len;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+ u16 vsi_id;
+ u16 lut_entries;
+ u8 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
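+
+/* Example (illustrative only; "vsi_id", "lut_entries", "num_rss_queues" and
+ * the malloc-based buffer are hypothetical): the trailing one-element array
+ * is a variable-length member, so the message buffer must be sized for all
+ * entries:
+ *
+ *	u16 len = sizeof(struct virtchnl_rss_lut) + lut_entries - 1;
+ *	struct virtchnl_rss_lut *lut = malloc(len);
+ *
+ *	lut->vsi_id = vsi_id;
+ *	lut->lut_entries = lut_entries;
+ *	for (i = 0; i < lut_entries; i++)
+ *		lut->lut[i] = i % num_rss_queues;
+ */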
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+ u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* Type of RSS algorithm */
+enum virtchnl_rss_algorithm {
+ VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0,
+ VIRTCHNL_RSS_ALG_R_ASYMMETRIC = 1,
+ VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC = 2,
+ VIRTCHNL_RSS_ALG_XOR_SYMMETRIC = 3,
+};
+
+/* This is used by the PF driver to enforce how many channels can be supported.
+ * When the ADQ_V2 capability is negotiated, up to 16 channels are allowed;
+ * otherwise the PF driver allows a maximum of 4 channels.
+ */
+#define VIRTCHNL_MAX_ADQ_CHANNELS 4
+#define VIRTCHNL_MAX_ADQ_V2_CHANNELS 16
+
+/* VIRTCHNL_OP_ENABLE_CHANNELS
+ * VIRTCHNL_OP_DISABLE_CHANNELS
+ * VF sends these messages to enable or disable channels based on
+ * the user specified queue count and queue offset for each traffic class.
+ * This struct encompasses all the information that the PF needs from
+ * VF to create a channel.
+ */
+struct virtchnl_channel_info {
+ u16 count; /* number of queues in a channel */
+ u16 offset; /* queues in a channel start from 'offset' */
+ u32 pad;
+ u64 max_tx_rate;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_channel_info);
+
+struct virtchnl_tc_info {
+ u32 num_tc;
+ u32 pad;
+ struct virtchnl_channel_info list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_tc_info);
+
+/* VIRTCHNL_ADD_CLOUD_FILTER
+ * VIRTCHNL_DEL_CLOUD_FILTER
+ * VF sends these messages to add or delete a cloud filter based on the
+ * user specified match and action filters. These structures encompass
+ * all the information that the PF needs from the VF to add/delete a
+ * cloud filter.
+ */
+
+struct virtchnl_l4_spec {
+ u8 src_mac[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ u8 dst_mac[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	/* vlan_prio is part of this 16 bit field even from the OS perspective:
+	 * bits 11..0 hold the actual vlan_id and bits 14..12 hold vlan_prio.
+	 * If vlan_prio offload is added in the future, that information will be
+	 * passed as part of the "vlan_id" field in bits 14..12.
+ */
+ __be16 vlan_id;
+ __be16 pad; /* reserved for future use */
+ __be32 src_ip[4];
+ __be32 dst_ip[4];
+ __be16 src_port;
+ __be16 dst_port;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(52, virtchnl_l4_spec);
+
+union virtchnl_flow_spec {
+ struct virtchnl_l4_spec tcp_spec;
+ u8 buffer[128]; /* reserved for future use */
+};
+
+VIRTCHNL_CHECK_UNION_LEN(128, virtchnl_flow_spec);
+
+enum virtchnl_action {
+ /* action types */
+ VIRTCHNL_ACTION_DROP = 0,
+ VIRTCHNL_ACTION_TC_REDIRECT,
+ VIRTCHNL_ACTION_PASSTHRU,
+ VIRTCHNL_ACTION_QUEUE,
+ VIRTCHNL_ACTION_Q_REGION,
+ VIRTCHNL_ACTION_MARK,
+ VIRTCHNL_ACTION_COUNT,
+};
+
+enum virtchnl_flow_type {
+ /* flow types */
+ VIRTCHNL_TCP_V4_FLOW = 0,
+ VIRTCHNL_TCP_V6_FLOW,
+ VIRTCHNL_UDP_V4_FLOW,
+ VIRTCHNL_UDP_V6_FLOW,
+};
+
+struct virtchnl_filter {
+ union virtchnl_flow_spec data;
+ union virtchnl_flow_spec mask;
+
+ /* see enum virtchnl_flow_type */
+ s32 flow_type;
+
+ /* see enum virtchnl_action */
+ s32 action;
+ u32 action_meta;
+ u8 field_flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(272, virtchnl_filter);
+
+struct virtchnl_shaper_bw {
+ /* Unit is Kbps */
+ u32 committed;
+ u32 peak;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_shaper_bw);
+
+
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+ VIRTCHNL_EVENT_UNKNOWN = 0,
+ VIRTCHNL_EVENT_LINK_CHANGE,
+ VIRTCHNL_EVENT_RESET_IMPENDING,
+ VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO 0
+#define PF_EVENT_SEVERITY_ATTENTION 1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED 2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM 255
+
+struct virtchnl_pf_event {
+ /* see enum virtchnl_event_codes */
+ s32 event;
+ union {
+ /* If the PF driver does not support the new speed reporting
+ * capabilities then use link_event else use link_event_adv to
+ * get the speed and link information. The ability to understand
+ * new speeds is indicated by setting the capability flag
+ * VIRTCHNL_VF_CAP_ADV_LINK_SPEED in vf_cap_flags parameter
+ * in virtchnl_vf_resource struct and can be used to determine
+ * which link event struct to use below.
+ */
+ struct {
+ enum virtchnl_link_speed link_speed;
+ bool link_status;
+ u8 pad[3];
+ } link_event;
+ struct {
+ /* link_speed provided in Mbps */
+ u32 link_speed;
+ u8 link_status;
+ u8 pad[3];
+ } link_event_adv;
+ } event_data;
+
+ s32 severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
+
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return 0xDEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+ VIRTCHNL_VFR_INPROGRESS = 0,
+ VIRTCHNL_VFR_COMPLETED,
+ VIRTCHNL_VFR_VFACTIVE,
+};
+
+#define VIRTCHNL_MAX_NUM_PROTO_HDRS 32
+#define PROTO_HDR_SHIFT 5
+#define PROTO_HDR_FIELD_START(proto_hdr_type) \
+ (proto_hdr_type << PROTO_HDR_SHIFT)
+#define PROTO_HDR_FIELD_MASK ((1UL << PROTO_HDR_SHIFT) - 1)
+
+/* The VF uses these macros to configure each protocol header.
+ * Specify which protocol headers and protocol header fields to use based on
+ * virtchnl_proto_hdr_type and virtchnl_proto_hdr_field.
+ * @param hdr: a struct of virtchnl_proto_hdr
+ * @param hdr_type: ETH/IPV4/TCP, etc
+ * @param field: SRC/DST/TEID/SPI, etc
+ */
+#define VIRTCHNL_ADD_PROTO_HDR_FIELD(hdr, field) \
+ ((hdr)->field_selector |= BIT((field) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_DEL_PROTO_HDR_FIELD(hdr, field) \
+ ((hdr)->field_selector &= ~BIT((field) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_TEST_PROTO_HDR_FIELD(hdr, val) \
+ ((hdr)->field_selector & BIT((val) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_GET_PROTO_HDR_FIELD(hdr) ((hdr)->field_selector)
+
+#define VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, hdr_type, field) \
+ (VIRTCHNL_ADD_PROTO_HDR_FIELD(hdr, \
+ VIRTCHNL_PROTO_HDR_ ## hdr_type ## _ ## field))
+#define VIRTCHNL_DEL_PROTO_HDR_FIELD_BIT(hdr, hdr_type, field) \
+ (VIRTCHNL_DEL_PROTO_HDR_FIELD(hdr, \
+ VIRTCHNL_PROTO_HDR_ ## hdr_type ## _ ## field))
+
+#define VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, hdr_type) \
+ ((hdr)->type = VIRTCHNL_PROTO_HDR_ ## hdr_type)
+#define VIRTCHNL_GET_PROTO_HDR_TYPE(hdr) \
+ (((hdr)->type) >> PROTO_HDR_SHIFT)
+#define VIRTCHNL_TEST_PROTO_HDR_TYPE(hdr, val) \
+ ((hdr)->type == ((s32)((val) >> PROTO_HDR_SHIFT)))
+#define VIRTCHNL_TEST_PROTO_HDR(hdr, val) \
+ (VIRTCHNL_TEST_PROTO_HDR_TYPE(hdr, val) && \
+ VIRTCHNL_TEST_PROTO_HDR_FIELD(hdr, val))
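+
+/* Example (illustrative only; "hdr" is a hypothetical pointer into a
+ * caller-owned virtchnl_proto_hdrs array): selecting the UDP source and
+ * destination port fields of one protocol header:
+ *
+ *	VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, UDP);
+ *	VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, UDP, SRC_PORT);
+ *	VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, UDP, DST_PORT);
+ */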
+
+/* Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers encapsulates or is encapsulated using/by
+ * tunneling or encapsulation protocols for network virtualization.
+ */
+enum virtchnl_proto_hdr_type {
+ VIRTCHNL_PROTO_HDR_NONE,
+ VIRTCHNL_PROTO_HDR_ETH,
+ VIRTCHNL_PROTO_HDR_S_VLAN,
+ VIRTCHNL_PROTO_HDR_C_VLAN,
+ VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_TCP,
+ VIRTCHNL_PROTO_HDR_UDP,
+ VIRTCHNL_PROTO_HDR_SCTP,
+ VIRTCHNL_PROTO_HDR_GTPU_IP,
+ VIRTCHNL_PROTO_HDR_GTPU_EH,
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN,
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP,
+ VIRTCHNL_PROTO_HDR_PPPOE,
+ VIRTCHNL_PROTO_HDR_L2TPV3,
+ VIRTCHNL_PROTO_HDR_ESP,
+ VIRTCHNL_PROTO_HDR_AH,
+ VIRTCHNL_PROTO_HDR_PFCP,
+ VIRTCHNL_PROTO_HDR_GTPC,
+ VIRTCHNL_PROTO_HDR_ECPRI,
+ VIRTCHNL_PROTO_HDR_L2TPV2,
+ VIRTCHNL_PROTO_HDR_PPP,
+ /* IPv4 and IPv6 Fragment header types are only associated to
+ * VIRTCHNL_PROTO_HDR_IPV4 and VIRTCHNL_PROTO_HDR_IPV6 respectively,
+ * cannot be used independently.
+ */
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
+ VIRTCHNL_PROTO_HDR_GRE,
+};
+
+/* Protocol header field within a protocol header. */
+enum virtchnl_proto_hdr_field {
+ /* ETHER */
+ VIRTCHNL_PROTO_HDR_ETH_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ETH),
+ VIRTCHNL_PROTO_HDR_ETH_DST,
+ VIRTCHNL_PROTO_HDR_ETH_ETHERTYPE,
+ /* S-VLAN */
+ VIRTCHNL_PROTO_HDR_S_VLAN_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_S_VLAN),
+ /* C-VLAN */
+ VIRTCHNL_PROTO_HDR_C_VLAN_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_C_VLAN),
+ /* IPV4 */
+ VIRTCHNL_PROTO_HDR_IPV4_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4),
+ VIRTCHNL_PROTO_HDR_IPV4_DST,
+ VIRTCHNL_PROTO_HDR_IPV4_DSCP,
+ VIRTCHNL_PROTO_HDR_IPV4_TTL,
+ VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_CHKSUM,
+ /* IPV6 */
+ VIRTCHNL_PROTO_HDR_IPV6_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
+ VIRTCHNL_PROTO_HDR_IPV6_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_TC,
+ VIRTCHNL_PROTO_HDR_IPV6_HOP_LIMIT,
+ VIRTCHNL_PROTO_HDR_IPV6_PROT,
+ /* IPV6 Prefix */
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX32_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX32_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX40_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX40_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX48_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX48_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX56_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX56_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* TCP */
+ VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
+ VIRTCHNL_PROTO_HDR_TCP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_TCP_CHKSUM,
+ /* UDP */
+ VIRTCHNL_PROTO_HDR_UDP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_UDP),
+ VIRTCHNL_PROTO_HDR_UDP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_UDP_CHKSUM,
+ /* SCTP */
+ VIRTCHNL_PROTO_HDR_SCTP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_SCTP),
+ VIRTCHNL_PROTO_HDR_SCTP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_SCTP_CHKSUM,
+ /* GTPU_IP */
+ VIRTCHNL_PROTO_HDR_GTPU_IP_TEID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_IP),
+ /* GTPU_EH */
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH),
+ VIRTCHNL_PROTO_HDR_GTPU_EH_QFI,
+ /* PPPOE */
+ VIRTCHNL_PROTO_HDR_PPPOE_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_PPPOE),
+ /* L2TPV3 */
+ VIRTCHNL_PROTO_HDR_L2TPV3_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_L2TPV3),
+ /* ESP */
+ VIRTCHNL_PROTO_HDR_ESP_SPI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ESP),
+ /* AH */
+ VIRTCHNL_PROTO_HDR_AH_SPI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_AH),
+ /* PFCP */
+ VIRTCHNL_PROTO_HDR_PFCP_S_FIELD =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_PFCP),
+ VIRTCHNL_PROTO_HDR_PFCP_SEID,
+ /* GTPC */
+ VIRTCHNL_PROTO_HDR_GTPC_TEID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPC),
+ /* ECPRI */
+ VIRTCHNL_PROTO_HDR_ECPRI_MSG_TYPE =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ECPRI),
+ VIRTCHNL_PROTO_HDR_ECPRI_PC_RTC_ID,
+ /* IPv4 Dummy Fragment */
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
+ /* IPv6 Extension Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
+ /* GTPU_DWN/UP */
+ VIRTCHNL_PROTO_HDR_GTPU_DWN_QFI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN),
+ VIRTCHNL_PROTO_HDR_GTPU_UP_QFI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP),
+ /* L2TPv2 */
+ VIRTCHNL_PROTO_HDR_L2TPV2_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_L2TPV2),
+ VIRTCHNL_PROTO_HDR_L2TPV2_LEN_SESS_ID,
+};
+
+struct virtchnl_proto_hdr {
+ /* see enum virtchnl_proto_hdr_type */
+ s32 type;
+ u32 field_selector; /* a bit mask to select field for header type */
+ u8 buffer[64];
+ /**
+ * binary buffer in network order for specific header type.
+	 * For example, if type = VIRTCHNL_PROTO_HDR_IPV4, an IPv4
+ * header is expected to be copied into the buffer.
+ */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
+
+struct virtchnl_proto_hdrs {
+ u8 tunnel_level;
+ /**
+	 * specifies where the protocol headers start from.
+ * 0 - from the outer layer
+ * 1 - from the first inner layer
+ * 2 - from the second inner layer
+ * ....
+ **/
+	int count; /* the proto layers must be < VIRTCHNL_MAX_NUM_PROTO_HDRS */
+ struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
+
+struct virtchnl_rss_cfg {
+ struct virtchnl_proto_hdrs proto_hdrs; /* protocol headers */
+
+ /* see enum virtchnl_rss_algorithm; rss algorithm type */
+ s32 rss_algorithm;
+ u8 reserved[128]; /* reserve for future */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2444, virtchnl_rss_cfg);
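+
+/* Example (illustrative only; "cfg" is a hypothetical local): hashing on the
+ * outer IPv4 source/destination addresses with symmetric Toeplitz:
+ *
+ *	struct virtchnl_rss_cfg cfg = {0};
+ *
+ *	cfg.proto_hdrs.tunnel_level = 0;
+ *	cfg.proto_hdrs.count = 1;
+ *	VIRTCHNL_SET_PROTO_HDR_TYPE(&cfg.proto_hdrs.proto_hdr[0], IPV4);
+ *	VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(&cfg.proto_hdrs.proto_hdr[0], IPV4, SRC);
+ *	VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(&cfg.proto_hdrs.proto_hdr[0], IPV4, DST);
+ *	cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC;
+ */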
+
+/* action configuration for FDIR */
+struct virtchnl_filter_action {
+ /* see enum virtchnl_action type */
+ s32 type;
+ union {
+ /* used for queue and qgroup action */
+ struct {
+ u16 index;
+ u8 region;
+ } queue;
+ /* used for count action */
+ struct {
+ /* share counter ID with other flow rules */
+ u8 shared;
+ u32 id; /* counter ID */
+ } count;
+ /* used for mark action */
+ u32 mark_id;
+ u8 reserve[32];
+ } act_conf;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_filter_action);
+
+#define VIRTCHNL_MAX_NUM_ACTIONS 8
+
+struct virtchnl_filter_action_set {
+	/* action number must be less than VIRTCHNL_MAX_NUM_ACTIONS */
+ int count;
+ struct virtchnl_filter_action actions[VIRTCHNL_MAX_NUM_ACTIONS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(292, virtchnl_filter_action_set);
+
+/* pattern and action for FDIR rule */
+struct virtchnl_fdir_rule {
+ struct virtchnl_proto_hdrs proto_hdrs;
+ struct virtchnl_filter_action_set action_set;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2604, virtchnl_fdir_rule);
+
+/* Status returned to the VF after the VF requests FDIR commands
+ * VIRTCHNL_FDIR_SUCCESS
+ * The VF FDIR-related request was successfully completed by the PF.
+ * The request can be OP_ADD/DEL/QUERY_FDIR_FILTER.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE
+ * The OP_ADD_FDIR_FILTER request failed due to lack of hardware resources.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_EXIST
+ * The OP_ADD_FDIR_FILTER request failed because the rule already exists.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT
+ * The OP_ADD_FDIR_FILTER request failed due to a conflict with an existing
+ * rule.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST
+ * The OP_DEL_FDIR_FILTER request failed because the rule doesn't exist.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_INVALID
+ * The OP_ADD_FDIR_FILTER request failed due to a parameter validation error
+ * or because the HW doesn't support the rule.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT
+ * The OP_ADD/DEL_FDIR_FILTER request failed because programming timed out.
+ *
+ * VIRTCHNL_FDIR_FAILURE_QUERY_INVALID
+ * The OP_QUERY_FDIR_FILTER request failed due to a parameter validation
+ * error, for example, the VF queried the counter of a rule that has no
+ * counter action.
+ */
+enum virtchnl_fdir_prgm_status {
+ VIRTCHNL_FDIR_SUCCESS = 0,
+ VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE,
+ VIRTCHNL_FDIR_FAILURE_RULE_EXIST,
+ VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT,
+ VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST,
+ VIRTCHNL_FDIR_FAILURE_RULE_INVALID,
+ VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT,
+ VIRTCHNL_FDIR_FAILURE_QUERY_INVALID,
+};
+
+/* VIRTCHNL_OP_ADD_FDIR_FILTER
+ * VF sends this request to PF by filling out vsi_id,
+ * validate_only and rule_cfg. PF will return flow_id
+ * if the request is successfully done and return add_status to VF.
+ */
+struct virtchnl_fdir_add {
+ u16 vsi_id; /* INPUT */
+ /*
+	 * 1 for validating an FDIR rule, 0 for creating an FDIR rule.
+	 * Validate and create share one op: VIRTCHNL_OP_ADD_FDIR_FILTER.
+ */
+ u16 validate_only; /* INPUT */
+ u32 flow_id; /* OUTPUT */
+ struct virtchnl_fdir_rule rule_cfg; /* INPUT */
+
+ /* see enum virtchnl_fdir_prgm_status; OUTPUT */
+ s32 status;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2616, virtchnl_fdir_add);
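+
+/* Example (illustrative only; "add" and "rule" are hypothetical locals and
+ * the send/receive steps are omitted): a VF can first validate a rule and
+ * then create it with the same opcode:
+ *
+ *	struct virtchnl_fdir_add add = {0};
+ *
+ *	add.vsi_id = vsi_id;
+ *	add.rule_cfg = rule;
+ *	add.validate_only = 1;
+ *	send VIRTCHNL_OP_ADD_FDIR_FILTER and check add.status
+ *	add.validate_only = 0;
+ *	send VIRTCHNL_OP_ADD_FDIR_FILTER again; on success the PF returns
+ *	the rule's flow_id in add.flow_id
+ */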
+
+/* VIRTCHNL_OP_DEL_FDIR_FILTER
+ * VF sends this request to PF by filling out vsi_id
+ * and flow_id. PF will return del_status to VF.
+ */
+struct virtchnl_fdir_del {
+ u16 vsi_id; /* INPUT */
+ u16 pad;
+ u32 flow_id; /* INPUT */
+
+ /* see enum virtchnl_fdir_prgm_status; OUTPUT */
+ s32 status;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del);
+
+/* VIRTCHNL_OP_GET_QOS_CAPS
+ * VF sends this message to get its QoS Caps, such as
+ * TC number, Arbiter and Bandwidth.
+ */
+struct virtchnl_qos_cap_elem {
+ u8 tc_num;
+ u8 tc_prio;
+#define VIRTCHNL_ABITER_STRICT 0
+#define VIRTCHNL_ABITER_ETS 2
+ u8 arbiter;
+#define VIRTCHNL_STRICT_WEIGHT 1
+ u8 weight;
+ enum virtchnl_bw_limit_type type;
+ union {
+ struct virtchnl_shaper_bw shaper;
+ u8 pad2[32];
+ };
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_qos_cap_elem);
+
+struct virtchnl_qos_cap_list {
+ u16 vsi_id;
+ u16 num_elem;
+ struct virtchnl_qos_cap_elem cap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(44, virtchnl_qos_cap_list);
+
+/* VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP
+ * VF sends the virtchnl_queue_tc_mapping message to set the queue-to-TC
+ * mapping for all the Tx and Rx queues of a specified VSI, and receives a
+ * response containing the bitmap of valid user priorities associated with
+ * the queues.
+ */
+struct virtchnl_queue_tc_mapping {
+ u16 vsi_id;
+ u16 num_tc;
+ u16 num_queue_pairs;
+ u8 pad[2];
+ union {
+ struct {
+ u16 start_queue_id;
+ u16 queue_count;
+ } req;
+ struct {
+#define VIRTCHNL_USER_PRIO_TYPE_UP 0
+#define VIRTCHNL_USER_PRIO_TYPE_DSCP 1
+ u16 prio_type;
+ u16 valid_prio_bitmap;
+ } resp;
+ } tc[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_tc_mapping);
+
+/* queue types */
+enum virtchnl_queue_type {
+ VIRTCHNL_QUEUE_TYPE_TX = 0,
+ VIRTCHNL_QUEUE_TYPE_RX = 1,
+};
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl_queue_chunk {
+ /* see enum virtchnl_queue_type */
+ s32 type;
+ u16 start_queue_id;
+ u16 num_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl_queue_chunks {
+ u16 num_chunks;
+ u16 rsvd;
+ struct virtchnl_queue_chunk chunks[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_chunks);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES_V2
+ * VIRTCHNL_OP_DISABLE_QUEUES_V2
+ *
+ * These opcodes can be used if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated in
+ * VIRTCHNL_OP_GET_VF_RESOURCES
+ *
+ * VF sends the virtchnl_del_ena_dis_queues struct to specify the queues to be
+ * enabled/disabled in chunks. Also applicable to single queue RX or
+ * TX. PF performs requested action and returns status.
+ */
+struct virtchnl_del_ena_dis_queues {
+ u16 vport_id;
+ u16 pad;
+ struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_del_ena_dis_queues);
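+
+/* Example (illustrative only; "eq" is a hypothetical, suitably sized buffer
+ * cast to struct virtchnl_del_ena_dis_queues): enabling Tx queues 0-3 and
+ * Rx queues 0-3 of a vport in one message takes two chunks:
+ *
+ *	eq->vport_id = vport_id;
+ *	eq->chunks.num_chunks = 2;
+ *	eq->chunks.chunks[0].type = VIRTCHNL_QUEUE_TYPE_TX;
+ *	eq->chunks.chunks[0].start_queue_id = 0;
+ *	eq->chunks.chunks[0].num_queues = 4;
+ *	eq->chunks.chunks[1].type = VIRTCHNL_QUEUE_TYPE_RX;
+ *	eq->chunks.chunks[1].start_queue_id = 0;
+ *	eq->chunks.chunks[1].num_queues = 4;
+ *
+ * As with the other variable-length messages, the buffer must be sized for
+ * num_chunks entries even though the structure declares chunks[1].
+ */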
+
+/* Virtchannel interrupt throttling rate index */
+enum virtchnl_itr_idx {
+ VIRTCHNL_ITR_IDX_0 = 0,
+ VIRTCHNL_ITR_IDX_1 = 1,
+ VIRTCHNL_ITR_IDX_NO_ITR = 3,
+};
+
+/* Queue to vector mapping */
+struct virtchnl_queue_vector {
+ u16 queue_id;
+ u16 vector_id;
+ u8 pad[4];
+
+ /* see enum virtchnl_itr_idx */
+ s32 itr_idx;
+
+ /* see enum virtchnl_queue_type */
+ s32 queue_type;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_queue_vector);
+
+/* VIRTCHNL_OP_MAP_QUEUE_VECTOR
+ *
+ * This opcode can be used only if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated
+ * in VIRTCHNL_OP_GET_VF_RESOURCES
+ *
+ * VF sends this message to map queues to vectors and ITR index registers.
+ * External data buffer contains virtchnl_queue_vector_maps structure
+ * that contains num_qv_maps of virtchnl_queue_vector structures.
+ * PF maps the requested queue vector maps after validating the queue and vector
+ * ids and returns a status code.
+ */
+struct virtchnl_queue_vector_maps {
+ u16 vport_id;
+ u16 num_qv_maps;
+ u8 pad[4];
+ struct virtchnl_queue_vector qv_maps[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_queue_vector_maps);
+
+/* VIRTCHNL_VF_CAP_PTP
+ * VIRTCHNL_OP_1588_PTP_GET_CAPS
+ * VIRTCHNL_OP_1588_PTP_GET_TIME
+ * VIRTCHNL_OP_1588_PTP_SET_TIME
+ * VIRTCHNL_OP_1588_PTP_ADJ_TIME
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ
+ * VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP
+ * VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS
+ * VIRTCHNL_OP_1588_PTP_SET_PIN_CFG
+ * VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP
+ *
+ * Support for offloading control of the device PTP hardware clock (PHC) is enabled
+ * by VIRTCHNL_VF_CAP_PTP. This capability allows a VF to request that PF
+ * enable Tx and Rx timestamps, and request access to read and/or write the
+ * PHC on the device, as well as query if the VF has direct access to the PHC
+ * time registers.
+ *
+ * The VF must set VIRTCHNL_VF_CAP_PTP in its capabilities when requesting
+ * resources. If the capability is set in reply, the VF must then send
+ * a VIRTCHNL_OP_1588_PTP_GET_CAPS request during initialization. The VF indicates
+ * what extended capabilities it wants by setting the appropriate flags in the
+ * caps field. The PF reply will indicate what features are enabled for
+ * that VF.
+ */
+#define VIRTCHNL_1588_PTP_CAP_TX_TSTAMP BIT(0)
+#define VIRTCHNL_1588_PTP_CAP_RX_TSTAMP BIT(1)
+#define VIRTCHNL_1588_PTP_CAP_READ_PHC BIT(2)
+#define VIRTCHNL_1588_PTP_CAP_WRITE_PHC BIT(3)
+#define VIRTCHNL_1588_PTP_CAP_PHC_REGS BIT(4)
+#define VIRTCHNL_1588_PTP_CAP_PIN_CFG BIT(5)
+
+/**
+ * virtchnl_phc_regs
+ *
+ * Structure defines how the VF should access PHC related registers. The VF
+ * must request VIRTCHNL_1588_PTP_CAP_PHC_REGS. If the VF has access to PHC
+ * registers, the PF will reply with the capability flag set, and with this
+ * structure detailing what PCIe region and what offsets to use. If direct
+ * access is not available, this entire structure is reserved and the fields
+ * will be zero.
+ *
+ * If necessary in a future extension, a separate capability mutually
+ * exclusive with VIRTCHNL_1588_PTP_CAP_PHC_REGS might be used to change the
+ * entire format of this structure within virtchnl_ptp_caps.
+ *
+ * @clock_hi: Register offset of the high 32 bits of clock time
+ * @clock_lo: Register offset of the low 32 bits of clock time
+ * @pcie_region: The PCIe region the registers are located in.
+ * @rsvd: Reserved bits for future extension
+ */
+struct virtchnl_phc_regs {
+ u32 clock_hi;
+ u32 clock_lo;
+ u8 pcie_region;
+ u8 rsvd[15];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_phc_regs);
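+
+/* Example (illustrative only; "read32" is a hypothetical helper that reads a
+ * 32-bit register at the given offset of the given PCIe region, and whether
+ * reading clock_lo latches clock_hi is device specific): a common way to
+ * compose a 64-bit time from split registers is to re-read the high half to
+ * detect rollover:
+ *
+ *	do {
+ *		hi = read32(regs.pcie_region, regs.clock_hi);
+ *		lo = read32(regs.pcie_region, regs.clock_lo);
+ *	} while (hi != read32(regs.pcie_region, regs.clock_hi));
+ *	time_ns = ((u64)hi << 32) | lo;
+ */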
+
+/* timestamp format enumeration
+ *
+ * VIRTCHNL_1588_PTP_TSTAMP_40BIT
+ *
+ * This format indicates a timestamp that uses the 40bit format from the
+ * flexible Rx descriptors. It is also the default Tx timestamp format used
+ * today.
+ *
+ * Such a timestamp has the following 40bit format:
+ *
+ * *--------------------------------*-------------------------------*-----------*
+ * | 32 bits of time in nanoseconds | 7 bits of sub-nanosecond time | valid bit |
+ * *--------------------------------*-------------------------------*-----------*
+ *
+ * The timestamp is passed in a u64, with the upper 24bits of the field
+ * reserved as zero.
+ *
+ * With this format, in order to report a full 64bit timestamp to userspace
+ * applications, the VF is responsible for performing timestamp extension by
+ * carefully comparing the timestamp with the PHC time. This can correctly
+ * be achieved with a recent cached copy of the PHC time by doing delta
+ * comparison between the 32bits of nanoseconds in the timestamp with the
+ * lower 32 bits of the clock time. For this to work, the cached PHC time
+ * must be from within 2^31 nanoseconds (~2.1 seconds) of when the timestamp
+ * was captured.
+ *
+ * VIRTCHNL_1588_PTP_TSTAMP_64BIT_NS
+ *
+ * This format indicates a timestamp that is 64 bits of nanoseconds.
+ */
+enum virtchnl_ptp_tstamp_format {
+ VIRTCHNL_1588_PTP_TSTAMP_40BIT = 0,
+ VIRTCHNL_1588_PTP_TSTAMP_64BIT_NS = 1,
+};
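+
+/* Example (illustrative only; "cached_phc_ns" is a recently read PHC time in
+ * nanoseconds and "tstamp" is the raw 40-bit value with the valid bit already
+ * checked): one way to extend the 32-bit nanosecond field to 64 bits is:
+ *
+ *	u32 ts_ns = (u32)(tstamp >> 8);
+ *	s32 delta = (s32)(ts_ns - (u32)cached_phc_ns);
+ *	u64 ext_ns = cached_phc_ns + delta;
+ *
+ * This is only correct if the cached PHC time is within 2^31 ns of the moment
+ * the timestamp was captured, as described above.
+ */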
+
+/**
+ * virtchnl_ptp_caps
+ *
+ * Structure that defines the PTP capabilities available to the VF. The VF
+ * sends VIRTCHNL_OP_1588_PTP_GET_CAPS, and must fill in the ptp_caps field
+ * indicating what capabilities it is requesting. The PF will respond with the
+ * same message with the virtchnl_ptp_caps structure indicating what is
+ * enabled for the VF.
+ *
+ * @phc_regs: If VIRTCHNL_1588_PTP_CAP_PHC_REGS is set, contains information
+ * on the PHC related registers available to the VF.
+ * @caps: On send, VF sets what capabilities it requests. On reply, PF
+ * indicates what has been enabled for this VF. The PF shall not set
+ * bits which were not requested by the VF.
+ * @max_adj: The maximum adjustment capable of being requested by
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ, in parts per billion. Note that 1 ppb
+ * is approximately 65.5 scaled_ppm. The PF shall clamp any
+ *	frequency adjustment in VIRTCHNL_OP_1588_PTP_ADJ_FREQ to +/- max_adj.
+ * Use of ppb in this field allows fitting the value into 4 bytes
+ * instead of potentially requiring 8 if scaled_ppm units were used.
+ * @tx_tstamp_idx: The Tx timestamp index to set in the transmit descriptor
+ * when requesting a timestamp for an outgoing packet.
+ * Reserved if VIRTCHNL_1588_PTP_CAP_TX_TSTAMP is not enabled.
+ * @n_ext_ts: Number of external timestamp functions available. Reserved
+ * if VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @n_per_out: Number of periodic output functions available. Reserved if
+ * VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @n_pins: Number of physical programmable pins able to be controlled.
+ * Reserved if VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @tx_tstamp_format: Format of the Tx timestamps. Valid formats are defined
+ *	by the virtchnl_ptp_tstamp_format enumeration. Note that Rx
+ * timestamps are tied to the descriptor format, and do not
+ * have a separate format field.
+ * @rsvd: Reserved bits for future extension.
+ *
+ * PTP capabilities
+ *
+ * VIRTCHNL_1588_PTP_CAP_TX_TSTAMP indicates that the VF can request transmit
+ * timestamps for packets in its transmit descriptors. If this is unset,
+ * transmit timestamp requests are ignored. Note that only one outstanding Tx
+ * timestamp request will be honored at a time. The PF shall handle receipt of
+ * the timestamp from the hardware, and will forward this to the VF by sending
+ * a VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP message.
+ *
+ * VIRTCHNL_1588_PTP_CAP_RX_TSTAMP indicates that the VF receive queues have
+ * receive timestamps enabled in the flexible descriptors. Note that this
+ * requires a VF to also negotiate to enable advanced flexible descriptors in
+ * the receive path instead of the default legacy descriptor format.
+ *
+ * For a detailed description of the current Tx and Rx timestamp format, see
+ * the section on virtchnl_phc_tx_tstamp. Future extensions may indicate
+ * timestamp format in the capability structure.
+ *
+ * VIRTCHNL_1588_PTP_CAP_READ_PHC indicates that the VF may read the PHC time
+ * via the VIRTCHNL_OP_1588_PTP_GET_TIME command, or by directly reading PHC
+ * registers if VIRTCHNL_1588_PTP_CAP_PHC_REGS is also set.
+ *
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC indicates that the VF may request updates
+ * to the PHC time via VIRTCHNL_OP_1588_PTP_SET_TIME,
+ * VIRTCHNL_OP_1588_PTP_ADJ_TIME, and VIRTCHNL_OP_1588_PTP_ADJ_FREQ.
+ *
+ * VIRTCHNL_1588_PTP_CAP_PHC_REGS indicates that the VF has direct access to
+ * certain PHC related registers, primarily for lower latency access to the
+ * PHC time. If this is set, the VF shall read the virtchnl_phc_regs section
+ * of the capabilities to determine the location of the clock registers. If
+ * this capability is not set, the entire 24 bytes of virtchnl_phc_regs is
+ * reserved as zero. Future extensions define alternative formats for this
+ * data, in which case they will be mutually exclusive with this capability.
+ *
+ * VIRTCHNL_1588_PTP_CAP_PIN_CFG indicates that the VF has the capability to
+ * control software defined pins. These pins can be assigned either as an
+ * input to timestamp external events, or as an output to cause a periodic
+ * signal output.
+ *
+ * Note that in the future, additional capability flags may be added which
+ * indicate additional extended support. All fields marked as reserved by this
+ * header will be set to zero. VF implementations should verify this to ensure
+ * that future extensions do not break compatibility.
+ */
+struct virtchnl_ptp_caps {
+ struct virtchnl_phc_regs phc_regs;
+ u32 caps;
+ s32 max_adj;
+ u8 tx_tstamp_idx;
+ u8 n_ext_ts;
+ u8 n_per_out;
+ u8 n_pins;
+ /* see enum virtchnl_ptp_tstamp_format */
+ u8 tx_tstamp_format;
+ u8 rsvd[11];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_ptp_caps);
+
+/**
+ * virtchnl_phc_time
+ * @time: PHC time in nanoseconds
+ * @rsvd: Reserved for future extension
+ *
+ * Structure sent with VIRTCHNL_OP_1588_PTP_SET_TIME and received with
+ * VIRTCHNL_OP_1588_PTP_GET_TIME. Contains the 64bits of PHC clock time in
+ * nanoseconds.
+ *
+ * VIRTCHNL_OP_1588_PTP_SET_TIME may be sent by the VF if
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC is set. This will request that the PHC time
+ * be set to the requested value. This operation is non-atomic and thus does
+ * not adjust for the delay between request and completion. It is recommended
+ * that the VF use VIRTCHNL_OP_1588_PTP_ADJ_TIME and
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ when possible to steer the PHC clock.
+ *
+ * VIRTCHNL_OP_1588_PTP_GET_TIME may be sent to request the current time of
+ * the PHC. This op is available in case direct access via the PHC registers
+ * is not available.
+ */
+struct virtchnl_phc_time {
+ u64 time;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_time);
+
+/**
+ * virtchnl_phc_adj_time
+ * @delta: offset requested to adjust clock by
+ * @rsvd: reserved for future extension
+ *
+ * Sent with VIRTCHNL_OP_1588_PTP_ADJ_TIME. Used to request an adjustment of
+ * the clock time by the provided delta, with negative values representing
+ * subtraction. VIRTCHNL_OP_1588_PTP_ADJ_TIME may not be sent unless
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC is set.
+ *
+ * The atomicity of this operation is not guaranteed. The PF should perform an
+ * atomic update using appropriate mechanisms if possible. However, this is
+ * not guaranteed.
+ */
+struct virtchnl_phc_adj_time {
+ s64 delta;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_adj_time);
+
+/**
+ * virtchnl_phc_adj_freq
+ * @scaled_ppm: frequency adjustment represented in scaled parts per million
+ * @rsvd: Reserved for future extension
+ *
+ * Sent with the VIRTCHNL_OP_1588_PTP_ADJ_FREQ to request an adjustment to the
+ * clock frequency. The adjustment is in scaled_ppm, which is parts per
+ * million with a 16bit binary fractional portion. 1 part per billion is
+ * approximately 65.5 scaled_ppm.
+ *
+ * ppm = scaled_ppm / 2^16
+ *
+ * ppb = scaled_ppm * 1000 / 2^16 or
+ *
+ * ppb = scaled_ppm * 125 / 2^13
+ *
+ * The PF shall clamp any adjustment request to plus or minus the specified
+ * max_adj in the PTP capabilities.
+ *
+ * Requests for adjustment are always based off of nominal clock frequency and
+ * not compounding. To reset clock frequency, send a request with a scaled_ppm
+ * of 0.
+ */
+struct virtchnl_phc_adj_freq {
+ s64 scaled_ppm;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_adj_freq);
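+
+/* Example (illustrative only): a request to run the clock 10 ppm faster uses
+ *
+ *	scaled_ppm = 10 * 65536 = 655360
+ *
+ * which, using the formulas above, corresponds to
+ * 655360 * 1000 / 65536 = 10000 ppb.
+ */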
+
+/**
+ * virtchnl_phc_tx_tstamp
+ * @tstamp: timestamp value
+ * @rsvd: Reserved for future extension
+ *
+ * Sent along with VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP from the PF when a Tx
+ * timestamp for the index associated with this VF in the tx_tstamp_idx field
+ * is captured by hardware.
+ *
+ * If VIRTCHNL_1588_PTP_CAP_TX_TSTAMP is set, the VF may request a timestamp
+ * for a packet in its transmit context descriptor by setting the appropriate
+ * flag and setting the timestamp index provided by the PF. On transmission,
+ * the timestamp will be captured and sent to the PF. The PF will forward this
+ * timestamp to the VF via the VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP op.
+ *
+ * The timestamp format is defined by the tx_tstamp_format field of the
+ * virtchnl_ptp_caps structure.
+ */
+struct virtchnl_phc_tx_tstamp {
+ u64 tstamp;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_tx_tstamp);
+
+enum virtchnl_phc_pin_func {
+ VIRTCHNL_PHC_PIN_FUNC_NONE = 0, /* Not assigned to any function */
+ VIRTCHNL_PHC_PIN_FUNC_EXT_TS = 1, /* Assigned to external timestamp */
+ VIRTCHNL_PHC_PIN_FUNC_PER_OUT = 2, /* Assigned to periodic output */
+};
+
+/* Length of the pin configuration data. All pin configurations belong within
+ * the same union and *must* have this length in bytes.
+ */
+#define VIRTCHNL_PIN_CFG_LEN 64
+
+/* virtchnl_phc_ext_ts_mode
+ *
+ * Mode of the external timestamp, indicating which edges of the input signal
+ * to timestamp.
+ */
+enum virtchnl_phc_ext_ts_mode {
+ VIRTCHNL_PHC_EXT_TS_NONE = 0,
+ VIRTCHNL_PHC_EXT_TS_RISING_EDGE = 1,
+ VIRTCHNL_PHC_EXT_TS_FALLING_EDGE = 2,
+ VIRTCHNL_PHC_EXT_TS_BOTH_EDGES = 3,
+};
+
+/**
+ * virtchnl_phc_ext_ts
+ * @mode: mode of external timestamp request
+ * @rsvd: reserved for future extension
+ *
+ * External timestamp configuration. Defines the configuration for this
+ * external timestamp function.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_NONE, the function is essentially disabled,
+ * timestamping nothing.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_RISING_EDGE, the function shall timestamp
+ * the rising edge of the input when it transitions from low to high signal.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_FALLING_EDGE, the function shall timestamp
+ * the falling edge of the input when it transitions from high to low signal.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_BOTH_EDGES, the function shall timestamp
+ * both the rising and falling edge of the signal whenever it changes.
+ *
+ * The PF shall return an error if the requested mode cannot be implemented on
+ * the function.
+ */
+struct virtchnl_phc_ext_ts {
+ u8 mode; /* see virtchnl_phc_ext_ts_mode */
+ u8 rsvd[63];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(VIRTCHNL_PIN_CFG_LEN, virtchnl_phc_ext_ts);
+
+/* virtchnl_phc_per_out_flags
+ *
+ * Flags defining periodic output functionality.
+ */
+enum virtchnl_phc_per_out_flags {
+ VIRTCHNL_PHC_PER_OUT_PHASE_START = BIT(0),
+};
+
+/**
+ * virtchnl_phc_per_out
+ * @start: absolute start time (if VIRTCHNL_PHC_PER_OUT_PHASE_START unset)
+ * @phase: phase offset to start (if VIRTCHNL_PHC_PER_OUT_PHASE_START set)
+ * @period: time to complete a full clock cycle (low -> high -> low)
+ * @on: length of time the signal should stay high
+ * @flags: flags defining the periodic output operation.
+ * @rsvd: reserved for future extension
+ *
+ * Configuration for a periodic output signal. Used to define the signal that
+ * should be generated on a given function.
+ *
+ * The period field determines the full length of the clock cycle, including
+ * both the duration to hold the signal high and the duration to hold it low,
+ * in nanoseconds.
+ *
+ * The on field determines how long the signal should remain high. For
+ * a traditional square wave clock that is on for some duration and off for
+ * the same duration, use an on length of precisely half the period. The duty
+ * cycle of the clock is on/period.
+ *
+ * If VIRTCHNL_PHC_PER_OUT_PHASE_START is unset, then the request is to start
+ * a clock at an absolute time. This means that the clock should start precisely
+ * at the specified time in the start field. If the start time is in the past,
+ * then the periodic output should start at the next valid multiple of the
+ * period plus the start time:
+ *
+ * new_start = (n * period) + start
+ * (choose n such that new start is in the future)
+ *
+ * Note that the PF should not reject a start time in the past because it is
+ * possible that such a start time was valid when the request was made, but
+ * became invalid due to delay in programming the pin.
+ *
+ * If VIRTCHNL_PHC_PER_OUT_PHASE_START is set, then the request is to start at
+ * the next multiple of the period plus the phase offset. The phase must be
+ * less than the period. In this case, the clock should start as soon as possible
+ * at the next available multiple of the period. To calculate a start time
+ * when programming this mode, use:
+ *
+ * start = (n * period) + phase
+ * (choose n such that start is in the future)
+ *
+ * A period of zero should be treated as a request to disable the clock
+ * output.
+ */
+struct virtchnl_phc_per_out {
+ union {
+ u64 start;
+ u64 phase;
+ };
+ u64 period;
+ u64 on;
+ u32 flags;
+ u8 rsvd[36];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(VIRTCHNL_PIN_CFG_LEN, virtchnl_phc_per_out);
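+
+/* Example (illustrative only; "now" is the current PHC time in nanoseconds):
+ * when an absolute start time is already in the past, the next start on the
+ * same phase can be computed as:
+ *
+ *	u64 n = (now - start + period - 1) / period;
+ *	u64 new_start = start + n * period;
+ */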
+
+/* virtchnl_phc_pin_cfg_flags
+ *
+ * Definition of bits in the flags field of the virtchnl_phc_pin_cfg
+ * structure.
+ */
+enum virtchnl_phc_pin_cfg_flags {
+ /* Valid for VIRTCHNL_OP_1588_PTP_SET_PIN_CFG. If set, indicates this
+ * is a request to verify if the function can be assigned to the
+ * provided pin. In this case, the ext_ts and per_out fields are
+ * ignored, and the PF response must be an error if the pin cannot be
+ * assigned to that function index.
+ */
+ VIRTCHNL_PHC_PIN_CFG_VERIFY = BIT(0),
+};
+
+/**
+ * virtchnl_phc_set_pin
+ * @pin_index: The pin to get or set
+ * @func: the function type the pin is assigned to
+ * @func_index: the index of the function the pin is assigned to
+ * @ext_ts: external timestamp configuration
+ * @per_out: periodic output configuration
+ * @rsvd1: Reserved for future extension
+ * @rsvd2: Reserved for future extension
+ *
+ * Sent along with the VIRTCHNL_OP_1588_PTP_SET_PIN_CFG op.
+ *
+ * The VF issues a VIRTCHNL_OP_1588_PTP_SET_PIN_CFG to assign the pin to one
+ * of the functions. It must set the pin_index field, the func field, and
+ * the func_index field. The pin_index must be less than n_pins, and the
+ * func_index must be less than the n_ext_ts or n_per_out depending on which
+ * function type is selected. If func is for an external timestamp, the
+ * ext_ts field must be filled in with the desired configuration. Similarly,
+ * if the function is for a periodic output, the per_out field must be
+ * configured.
+ *
+ * If the VIRTCHNL_PHC_PIN_CFG_VERIFY bit of the flag field is set, this is
+ * a request only to verify the configuration, not to set it. In this case,
+ * the PF should simply report an error if the requested pin cannot be
+ * assigned to the requested function. This allows VF to determine whether or
+ * not a given function can be assigned to a specific pin. Other flag bits are
+ * currently reserved and must be verified as zero on both sides. They may be
+ * extended in the future.
+ */
+struct virtchnl_phc_set_pin {
+ u32 flags; /* see virtchnl_phc_pin_cfg_flags */
+ u8 pin_index;
+ u8 func; /* see virtchnl_phc_pin_func */
+ u8 func_index;
+ u8 rsvd1;
+ union {
+ struct virtchnl_phc_ext_ts ext_ts;
+ struct virtchnl_phc_per_out per_out;
+ };
+ u8 rsvd2[8];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_phc_set_pin);
+
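+/* A minimal sketch (illustration only, not used by the driver): building a
+ * verify-only VIRTCHNL_OP_1588_PTP_SET_PIN_CFG request to check whether a
+ * pin can be assigned to a function before committing the configuration, as
+ * described above. 'func' is one of the virtchnl_phc_pin_func values defined
+ * earlier in this header.
+ */
+static inline void
+virtchnl_phc_fill_verify_pin(struct virtchnl_phc_set_pin *req, u8 pin_index,
+			     u8 func, u8 func_index)
+{
+	*req = (struct virtchnl_phc_set_pin){ 0 };
+	req->flags = VIRTCHNL_PHC_PIN_CFG_VERIFY;
+	req->pin_index = pin_index;
+	req->func = func;
+	req->func_index = func_index;
+	/* ext_ts/per_out are ignored by the PF for verify-only requests */
+}
+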
+/**
+ * virtchnl_phc_pin
+ * @pin_index: The pin to get or set
+ * @func: the function type the pin is assigned to
+ * @func_index: the index of the function the pin is assigned to
+ * @rsvd: Reserved for future extension
+ * @name: human readable pin name, supplied by PF on GET_PIN_CFGS
+ *
+ * Sent by the PF as part of the VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS response.
+ *
+ * The VF issues a VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS request to the PF in
+ * order to obtain the current pin configuration for all of the pins that were
+ * assigned to this VF.
+ *
+ * This structure details the pin configuration state, including a pin name
+ * and which function is assigned to the pin currently.
+ */
+struct virtchnl_phc_pin {
+ u8 pin_index;
+ u8 func; /* see virtchnl_phc_pin_func */
+ u8 func_index;
+ u8 rsvd[5];
+ char name[64];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_phc_pin);
+
+/**
+ * virtchnl_phc_get_pins
+ * @len: length of the variable pin config array
+ * @pins: variable length pin configuration array
+ *
+ * Variable structure sent by the PF in reply to
+ * VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS. The VF does not send this structure with
+ * its request of the operation.
+ *
+ * It is possible that the PF may need to send more pin configuration data
+ * than can be sent in one virtchnl message. To handle this, the PF should
+ * issue multiple VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS responses. Each response
+ * will indicate the number of pins it covers. The VF should be ready to wait
+ * for multiple responses until it has received a total number of pins equal
+ * to the n_pins negotiated during the extended PTP capabilities exchange.
+ */
+struct virtchnl_phc_get_pins {
+ u8 len;
+ u8 rsvd[7];
+ struct virtchnl_phc_pin pins[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_phc_get_pins);
+
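+/* A minimal sketch (illustration only, not used by the driver): accumulating
+ * pins from possibly multiple GET_PIN_CFGS responses into a caller-provided
+ * array. 'n_pins' is the pin count negotiated during the extended PTP
+ * capability exchange; the caller invokes this for each response until
+ * *copied == n_pins.
+ */
+static inline void
+virtchnl_phc_collect_pins(const struct virtchnl_phc_get_pins *resp,
+			  struct virtchnl_phc_pin *out, u8 n_pins, u8 *copied)
+{
+	u8 i;
+
+	for (i = 0; i < resp->len && *copied < n_pins; i++)
+		out[(*copied)++] = resp->pins[i];
+}
+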
+/**
+ * virtchnl_phc_ext_tstamp
+ * @tstamp: timestamp value
+ * @tstamp_rsvd: Reserved for future extension of the timestamp value.
+ * @tstamp_format: format of the timestamp
+ * @func_index: external timestamp function this timestamp is for
+ * @rsvd2: Reserved for future extension
+ *
+ * Sent along with the VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP from the PF when an
+ * external timestamp function is triggered.
+ *
+ * This will be sent only if one of the external timestamp functions is
+ * configured by the VF, and is only valid if VIRTCHNL_1588_PTP_CAP_PIN_CFG is
+ * negotiated with the PF.
+ *
+ * The timestamp format is defined by the tstamp_format field using the
+ * virtchnl_ptp_tstamp_format enumeration. The tstamp_rsvd field is
+ * exclusively reserved for possible future variants of the timestamp format,
+ * and its access will be controlled by the tstamp_format field.
+ */
+struct virtchnl_phc_ext_tstamp {
+ u64 tstamp;
+ u8 tstamp_rsvd[8];
+ u8 tstamp_format;
+ u8 func_index;
+ u8 rsvd2[6];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_phc_ext_tstamp);
+
+/* Since VF messages are limited by u16 size, precalculate the maximum possible
+ * values of nested elements in virtchnl structures that virtual channel can
+ * possibly handle in a single message.
+ */
+enum virtchnl_vector_limits {
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vsi_queue_config_info)) /
+ sizeof(struct virtchnl_queue_pair_info),
+
+ VIRTCHNL_OP_CONFIG_IRQ_MAP_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_irq_map_info)) /
+ sizeof(struct virtchnl_vector_map),
+
+ VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_ether_addr_list)) /
+ sizeof(struct virtchnl_ether_addr),
+
+ VIRTCHNL_OP_ADD_DEL_VLAN_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list)) /
+ sizeof(u16),
+
+ VIRTCHNL_OP_ENABLE_CHANNELS_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_tc_info)) /
+ sizeof(struct virtchnl_channel_info),
+
+ VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_del_ena_dis_queues)) /
+ sizeof(struct virtchnl_queue_chunk),
+
+ VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_queue_vector_maps)) /
+ sizeof(struct virtchnl_queue_vector),
+
+ VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list_v2)) /
+ sizeof(struct virtchnl_vlan_filter),
+};
+
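+/* A minimal sketch (illustration only, not used by the driver): a VF building
+ * a large MAC filter list can use these limits to cap how many addresses go
+ * into each VIRTCHNL_OP_ADD_ETH_ADDR message so that the payload stays within
+ * the u16 message size.
+ */
+static inline u16
+virtchnl_eth_addr_batch_size(u32 remaining)
+{
+	if (remaining > VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX)
+		return (u16)VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX;
+	return (u16)remaining;
+}
+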
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+ u8 *msg, u16 msglen)
+{
+ bool err_msg_format = false;
+ u32 valid_len = 0;
+
+ /* Validate message length. */
+ switch (v_opcode) {
+ case VIRTCHNL_OP_VERSION:
+ valid_len = sizeof(struct virtchnl_version_info);
+ break;
+ case VIRTCHNL_OP_RESET_VF:
+ break;
+ case VIRTCHNL_OP_GET_VF_RESOURCES:
+ if (VF_IS_V11(ver))
+ valid_len = sizeof(u32);
+ break;
+ case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+ valid_len = sizeof(struct virtchnl_txq_info);
+ break;
+ case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+ valid_len = sizeof(struct virtchnl_rxq_info);
+ break;
+ case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+ valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_vsi_queue_config_info *vqc =
+ (struct virtchnl_vsi_queue_config_info *)msg;
+
+ if (vqc->num_queue_pairs == 0 || vqc->num_queue_pairs >
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vqc->num_queue_pairs *
+ sizeof(struct
+ virtchnl_queue_pair_info));
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+ valid_len = sizeof(struct virtchnl_irq_map_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_irq_map_info *vimi =
+ (struct virtchnl_irq_map_info *)msg;
+
+ if (vimi->num_vectors == 0 || vimi->num_vectors >
+ VIRTCHNL_OP_CONFIG_IRQ_MAP_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vimi->num_vectors *
+ sizeof(struct virtchnl_vector_map));
+ }
+ break;
+ case VIRTCHNL_OP_ENABLE_QUEUES:
+ case VIRTCHNL_OP_DISABLE_QUEUES:
+ valid_len = sizeof(struct virtchnl_queue_select);
+ break;
+ case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
+ break;
+ case VIRTCHNL_OP_ADD_ETH_ADDR:
+ case VIRTCHNL_OP_DEL_ETH_ADDR:
+ valid_len = sizeof(struct virtchnl_ether_addr_list);
+ if (msglen >= valid_len) {
+ struct virtchnl_ether_addr_list *veal =
+ (struct virtchnl_ether_addr_list *)msg;
+
+ if (veal->num_elements == 0 || veal->num_elements >
+ VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += veal->num_elements *
+ sizeof(struct virtchnl_ether_addr);
+ }
+ break;
+ case VIRTCHNL_OP_ADD_VLAN:
+ case VIRTCHNL_OP_DEL_VLAN:
+ valid_len = sizeof(struct virtchnl_vlan_filter_list);
+ if (msglen >= valid_len) {
+ struct virtchnl_vlan_filter_list *vfl =
+ (struct virtchnl_vlan_filter_list *)msg;
+
+ if (vfl->num_elements == 0 || vfl->num_elements >
+ VIRTCHNL_OP_ADD_DEL_VLAN_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += vfl->num_elements * sizeof(u16);
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+ valid_len = sizeof(struct virtchnl_promisc_info);
+ break;
+ case VIRTCHNL_OP_GET_STATS:
+ valid_len = sizeof(struct virtchnl_queue_select);
+ break;
+ case VIRTCHNL_OP_CONFIG_RSS_KEY:
+ valid_len = sizeof(struct virtchnl_rss_key);
+ if (msglen >= valid_len) {
+ struct virtchnl_rss_key *vrk =
+ (struct virtchnl_rss_key *)msg;
+
+ if (vrk->key_len == 0) {
+ /* zero length is allowed as input */
+ break;
+ }
+
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_RSS_LUT:
+ valid_len = sizeof(struct virtchnl_rss_lut);
+ if (msglen >= valid_len) {
+ struct virtchnl_rss_lut *vrl =
+ (struct virtchnl_rss_lut *)msg;
+
+ if (vrl->lut_entries == 0) {
+ /* zero entries is allowed as input */
+ break;
+ }
+
+ valid_len += vrl->lut_entries - 1;
+ }
+ break;
+ case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ break;
+ case VIRTCHNL_OP_SET_RSS_HENA:
+ valid_len = sizeof(struct virtchnl_rss_hena);
+ break;
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ break;
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ valid_len = sizeof(struct virtchnl_vf_res_request);
+ break;
+ case VIRTCHNL_OP_ENABLE_CHANNELS:
+ valid_len = sizeof(struct virtchnl_tc_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_tc_info *vti =
+ (struct virtchnl_tc_info *)msg;
+
+ if (vti->num_tc == 0 || vti->num_tc >
+ VIRTCHNL_OP_ENABLE_CHANNELS_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vti->num_tc - 1) *
+ sizeof(struct virtchnl_channel_info);
+ }
+ break;
+ case VIRTCHNL_OP_DISABLE_CHANNELS:
+ break;
+ case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+ case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+ valid_len = sizeof(struct virtchnl_filter);
+ break;
+ case VIRTCHNL_OP_ADD_RSS_CFG:
+ case VIRTCHNL_OP_DEL_RSS_CFG:
+ valid_len = sizeof(struct virtchnl_rss_cfg);
+ break;
+ case VIRTCHNL_OP_ADD_FDIR_FILTER:
+ valid_len = sizeof(struct virtchnl_fdir_add);
+ break;
+ case VIRTCHNL_OP_DEL_FDIR_FILTER:
+ valid_len = sizeof(struct virtchnl_fdir_del);
+ break;
+ case VIRTCHNL_OP_GET_QOS_CAPS:
+ break;
+ case VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP:
+ valid_len = sizeof(struct virtchnl_queue_tc_mapping);
+ if (msglen >= valid_len) {
+ struct virtchnl_queue_tc_mapping *q_tc =
+ (struct virtchnl_queue_tc_mapping *)msg;
+ if (q_tc->num_tc == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (q_tc->num_tc - 1) *
+ sizeof(q_tc->tc[0]);
+ }
+ break;
+ case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+ break;
+ case VIRTCHNL_OP_ADD_VLAN_V2:
+ case VIRTCHNL_OP_DEL_VLAN_V2:
+ valid_len = sizeof(struct virtchnl_vlan_filter_list_v2);
+ if (msglen >= valid_len) {
+ struct virtchnl_vlan_filter_list_v2 *vfl =
+ (struct virtchnl_vlan_filter_list_v2 *)msg;
+
+ if (vfl->num_elements == 0 || vfl->num_elements >
+ VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vfl->num_elements - 1) *
+ sizeof(struct virtchnl_vlan_filter);
+ }
+ break;
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+ case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
+ case VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2:
+ valid_len = sizeof(struct virtchnl_vlan_setting);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_CAPS:
+ valid_len = sizeof(struct virtchnl_ptp_caps);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_TIME:
+ case VIRTCHNL_OP_1588_PTP_SET_TIME:
+ valid_len = sizeof(struct virtchnl_phc_time);
+ break;
+ case VIRTCHNL_OP_1588_PTP_ADJ_TIME:
+ valid_len = sizeof(struct virtchnl_phc_adj_time);
+ break;
+ case VIRTCHNL_OP_1588_PTP_ADJ_FREQ:
+ valid_len = sizeof(struct virtchnl_phc_adj_freq);
+ break;
+ case VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP:
+ valid_len = sizeof(struct virtchnl_phc_tx_tstamp);
+ break;
+ case VIRTCHNL_OP_1588_PTP_SET_PIN_CFG:
+ valid_len = sizeof(struct virtchnl_phc_set_pin);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS:
+ break;
+ case VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP:
+ valid_len = sizeof(struct virtchnl_phc_ext_tstamp);
+ break;
+ case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+ case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+ valid_len = sizeof(struct virtchnl_del_ena_dis_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl_del_ena_dis_queues *qs =
+ (struct virtchnl_del_ena_dis_queues *)msg;
+ if (qs->chunks.num_chunks == 0 ||
+ qs->chunks.num_chunks > VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (qs->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl_queue_chunk);
+ }
+ break;
+ case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+ valid_len = sizeof(struct virtchnl_queue_vector_maps);
+ if (msglen >= valid_len) {
+ struct virtchnl_queue_vector_maps *v_qp =
+ (struct virtchnl_queue_vector_maps *)msg;
+ if (v_qp->num_qv_maps == 0 ||
+ v_qp->num_qv_maps > VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_qp->num_qv_maps - 1) *
+ sizeof(struct virtchnl_queue_vector);
+ }
+ break;
+ /* These are always errors coming from the VF. */
+ case VIRTCHNL_OP_EVENT:
+ case VIRTCHNL_OP_UNKNOWN:
+ default:
+ return VIRTCHNL_STATUS_ERR_PARAM;
+ }
+ /* few more checks */
+ if (err_msg_format || valid_len != msglen)
+ return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+ return 0;
+}
+#endif /* _VIRTCHNL_H_ */
diff --git a/drivers/net/idpf/base/virtchnl2.h b/drivers/net/idpf/base/virtchnl2.h
new file mode 100644
index 0000000000..d0af6ef7c7
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl2.h
@@ -0,0 +1,1411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL2_H_
+#define _VIRTCHNL2_H_
+
+/* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
+ * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
+ * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ */
+
+#include "virtchnl2_lan_desc.h"
+
+/* Error Codes
+ * Note that many older versions of various iAVF drivers convert the reported
+ * status code directly into an iavf_status enumeration. For this reason, it
+ * is important that the values of these enumerations line up.
+ */
+#define VIRTCHNL2_STATUS_SUCCESS 0
+#define VIRTCHNL2_STATUS_ERR_PARAM -5
+#define VIRTCHNL2_STATUS_ERR_OPCODE_MISMATCH -38
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
+ { virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
+ { virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+
+/* A new major set of opcodes is introduced, leaving room for old misc
+ * opcodes to be added in the future. These opcodes may only be used if both
+ * the PF and VF have successfully negotiated the VIRTCHNL version as 2.0
+ * during the VIRTCHNL2_OP_VERSION exchange.
+ */
+#define VIRTCHNL2_OP_UNKNOWN 0
+#define VIRTCHNL2_OP_VERSION 1
+#define VIRTCHNL2_OP_GET_CAPS 500
+#define VIRTCHNL2_OP_CREATE_VPORT 501
+#define VIRTCHNL2_OP_DESTROY_VPORT 502
+#define VIRTCHNL2_OP_ENABLE_VPORT 503
+#define VIRTCHNL2_OP_DISABLE_VPORT 504
+#define VIRTCHNL2_OP_CONFIG_TX_QUEUES 505
+#define VIRTCHNL2_OP_CONFIG_RX_QUEUES 506
+#define VIRTCHNL2_OP_ENABLE_QUEUES 507
+#define VIRTCHNL2_OP_DISABLE_QUEUES 508
+#define VIRTCHNL2_OP_ADD_QUEUES 509
+#define VIRTCHNL2_OP_DEL_QUEUES 510
+#define VIRTCHNL2_OP_MAP_QUEUE_VECTOR 511
+#define VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR 512
+#define VIRTCHNL2_OP_GET_RSS_KEY 513
+#define VIRTCHNL2_OP_SET_RSS_KEY 514
+#define VIRTCHNL2_OP_GET_RSS_LUT 515
+#define VIRTCHNL2_OP_SET_RSS_LUT 516
+#define VIRTCHNL2_OP_GET_RSS_HASH 517
+#define VIRTCHNL2_OP_SET_RSS_HASH 518
+#define VIRTCHNL2_OP_SET_SRIOV_VFS 519
+#define VIRTCHNL2_OP_ALLOC_VECTORS 520
+#define VIRTCHNL2_OP_DEALLOC_VECTORS 521
+#define VIRTCHNL2_OP_EVENT 522
+#define VIRTCHNL2_OP_GET_STATS 523
+#define VIRTCHNL2_OP_RESET_VF 524
+ /* opcode 525 is reserved */
+#define VIRTCHNL2_OP_GET_PTYPE_INFO 526
+ /* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+ * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ */
+ /* opcodes 529, 530, and 531 are reserved */
+#define VIRTCHNL2_OP_CREATE_ADI 532
+#define VIRTCHNL2_OP_DESTROY_ADI 533
+
+#define VIRTCHNL2_MAX_NUM_PROTO_HDRS 32
+
+#define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX 0xFFFF
+
+/* VIRTCHNL2_VPORT_TYPE
+ * Type of virtual port
+ */
+#define VIRTCHNL2_VPORT_TYPE_DEFAULT 0
+#define VIRTCHNL2_VPORT_TYPE_SRIOV 1
+#define VIRTCHNL2_VPORT_TYPE_SIOV 2
+#define VIRTCHNL2_VPORT_TYPE_SUBDEV 3
+#define VIRTCHNL2_VPORT_TYPE_MNG 4
+
+/* VIRTCHNL2_QUEUE_MODEL
+ * Type of queue model
+ *
+ * In the single queue model, the same transmit descriptor queue is used by
+ * software to post descriptors to hardware and by hardware to post completed
+ * descriptors to software.
+ * Likewise, the same receive descriptor queue is used by hardware to post
+ * completions to software and by software to post buffers to hardware.
+ */
+#define VIRTCHNL2_QUEUE_MODEL_SINGLE 0
+/* In the split queue model, hardware uses transmit completion queues to post
+ * descriptor/buffer completions to software, while software uses transmit
+ * descriptor queues to post descriptors to hardware.
+ * Likewise, hardware posts descriptor completions to the receive descriptor
+ * queue, while software uses receive buffer queues to post buffers to hardware.
+ */
+#define VIRTCHNL2_QUEUE_MODEL_SPLIT 1
+
+/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
+ * Checksum offload capability flags
+ */
+#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 BIT(0)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP BIT(1)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP BIT(2)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP BIT(3)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP BIT(4)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP BIT(5)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP BIT(6)
+#define VIRTCHNL2_CAP_TX_CSUM_GENERIC BIT(7)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 BIT(8)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP BIT(9)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP BIT(10)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP BIT(11)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP BIT(12)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP BIT(13)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP BIT(14)
+#define VIRTCHNL2_CAP_RX_CSUM_GENERIC BIT(15)
+#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL BIT(16)
+#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL BIT(17)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL BIT(18)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL BIT(19)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL BIT(20)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL BIT(21)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL BIT(22)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL BIT(23)
+
+/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
+ * Segmentation offload capability flags
+ */
+#define VIRTCHNL2_CAP_SEG_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_SEG_IPV4_UDP BIT(1)
+#define VIRTCHNL2_CAP_SEG_IPV4_SCTP BIT(2)
+#define VIRTCHNL2_CAP_SEG_IPV6_TCP BIT(3)
+#define VIRTCHNL2_CAP_SEG_IPV6_UDP BIT(4)
+#define VIRTCHNL2_CAP_SEG_IPV6_SCTP BIT(5)
+#define VIRTCHNL2_CAP_SEG_GENERIC BIT(6)
+#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL BIT(7)
+#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL BIT(8)
+
+/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
+ * Receive Side Scaling Flow type capability flags
+ */
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP BIT(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP BIT(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER BIT(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP BIT(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP BIT(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP BIT(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER BIT(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH BIT(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP BIT(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP BIT(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH BIT(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP BIT(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP BIT(13)
+
+/* VIRTCHNL2_HEADER_SPLIT_CAPS
+ * Header split capability flags
+ */
+/* for prepended metadata */
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 BIT(0)
+/* all VLANs go into header buffer */
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 BIT(1)
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 BIT(2)
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6 BIT(3)
+
+/* VIRTCHNL2_RSC_OFFLOAD_CAPS
+ * Receive Side Coalescing offload capability flags
+ */
+#define VIRTCHNL2_CAP_RSC_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_RSC_IPV4_SCTP BIT(1)
+#define VIRTCHNL2_CAP_RSC_IPV6_TCP BIT(2)
+#define VIRTCHNL2_CAP_RSC_IPV6_SCTP BIT(3)
+
+/* VIRTCHNL2_OTHER_CAPS
+ * Other capability flags
+ * SPLITQ_QSCHED: Queue based scheduling using split queue model
+ * TX_VLAN: VLAN tag insertion
+ * RX_VLAN: VLAN tag stripping
+ */
+#define VIRTCHNL2_CAP_RDMA BIT(0)
+#define VIRTCHNL2_CAP_SRIOV BIT(1)
+#define VIRTCHNL2_CAP_MACFILTER BIT(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR BIT(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED BIT(4)
+#define VIRTCHNL2_CAP_CRC BIT(5)
+#define VIRTCHNL2_CAP_ADQ BIT(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR BIT(7)
+#define VIRTCHNL2_CAP_PROMISC BIT(8)
+#define VIRTCHNL2_CAP_LINK_SPEED BIT(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC BIT(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES BIT(11)
+/* require additional info */
+#define VIRTCHNL2_CAP_VLAN BIT(12)
+#define VIRTCHNL2_CAP_PTP BIT(13)
+#define VIRTCHNL2_CAP_ADV_RSS BIT(15)
+#define VIRTCHNL2_CAP_FDIR BIT(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC BIT(17)
+#define VIRTCHNL2_CAP_PTYPE BIT(18)
+
+/* VIRTCHNL2_DEVICE_TYPE */
+/* underlying device type */
+#define VIRTCHNL2_MEV_DEVICE 0
+
+/* VIRTCHNL2_TXQ_SCHED_MODE
+ * Transmit Queue Scheduling Modes - Queue mode is the legacy mode, i.e. in-order
+ * completions where descriptors and buffers are completed at the same time.
+ * Flow scheduling mode allows for out of order packet processing where
+ * descriptors are cleaned in order, but buffers can be completed out of order.
+ */
+#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE 0
+#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW 1
+
+/* VIRTCHNL2_TXQ_FLAGS
+ * Transmit Queue feature flags
+ *
+ * Enable rule miss completion type; packet completion for a packet
+ * sent on exception path; only relevant in flow scheduling mode
+ */
+#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL BIT(0)
+
+/* VIRTCHNL2_PEER_TYPE
+ * Transmit mailbox peer type
+ */
+#define VIRTCHNL2_RDMA_CPF 0
+#define VIRTCHNL2_NVME_CPF 1
+#define VIRTCHNL2_ATE_CPF 2
+#define VIRTCHNL2_LCE_CPF 3
+
+/* VIRTCHNL2_RXQ_FLAGS
+ * Receive Queue Feature flags
+ */
+#define VIRTCHNL2_RXQ_RSC BIT(0)
+#define VIRTCHNL2_RXQ_HDR_SPLIT BIT(1)
+/* When set, packet descriptors are flushed by hardware immediately after
+ * processing each packet.
+ */
+#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK BIT(2)
+#define VIRTCHNL2_RX_DESC_SIZE_16BYTE BIT(3)
+#define VIRTCHNL2_RX_DESC_SIZE_32BYTE BIT(4)
+
+/* VIRTCHNL2_RSS_ALGORITHM
+ * Type of RSS algorithm
+ */
+#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC 0
+#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC 1
+#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC 2
+#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC 3
+
+/* VIRTCHNL2_EVENT_CODES
+ * Type of event
+ */
+#define VIRTCHNL2_EVENT_UNKNOWN 0
+#define VIRTCHNL2_EVENT_LINK_CHANGE 1
+
+/* VIRTCHNL2_QUEUE_TYPE
+ * Transmit and Receive queue types are valid in legacy as well as split queue
+ * models. With the split queue model, 2 additional types are introduced:
+ * TX_COMPLETION and RX_BUFFER. In the split queue model, RX corresponds to
+ * the queue where hardware posts completions.
+ */
+#define VIRTCHNL2_QUEUE_TYPE_TX 0
+#define VIRTCHNL2_QUEUE_TYPE_RX 1
+#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION 2
+#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER 3
+#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX 4
+#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX 5
+
+/* VIRTCHNL2_ITR_IDX
+ * Virtchannel interrupt throttling rate index
+ */
+#define VIRTCHNL2_ITR_IDX_0 0
+#define VIRTCHNL2_ITR_IDX_1 1
+#define VIRTCHNL2_ITR_IDX_2 2
+#define VIRTCHNL2_ITR_IDX_NO_ITR 3
+
+/* VIRTCHNL2_VECTOR_LIMITS
+ * Since PF/VF messages are limited by __le16 size, precalculate the maximum
+ * possible values of nested elements in virtchnl structures that virtual
+ * channel can possibly handle in a single message.
+ */
+
+#define VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX (\
+ ((__le16)(~0) - sizeof(struct virtchnl2_del_ena_dis_queues)) / \
+ sizeof(struct virtchnl2_queue_chunk))
+
+#define VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX (\
+ ((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
+ sizeof(struct virtchnl2_queue_vector))
+
+/* VIRTCHNL2_PROTO_HDR_TYPE
+ * Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers encapsulates or is encapsulated by
+ * tunneling or encapsulation protocols used for network virtualization.
+ */
+/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ANY 0
+#define VIRTCHNL2_PROTO_HDR_PRE_MAC 1
+/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_MAC 2
+#define VIRTCHNL2_PROTO_HDR_POST_MAC 3
+#define VIRTCHNL2_PROTO_HDR_ETHERTYPE 4
+#define VIRTCHNL2_PROTO_HDR_VLAN 5
+#define VIRTCHNL2_PROTO_HDR_SVLAN 6
+#define VIRTCHNL2_PROTO_HDR_CVLAN 7
+#define VIRTCHNL2_PROTO_HDR_MPLS 8
+#define VIRTCHNL2_PROTO_HDR_UMPLS 9
+#define VIRTCHNL2_PROTO_HDR_MMPLS 10
+#define VIRTCHNL2_PROTO_HDR_PTP 11
+#define VIRTCHNL2_PROTO_HDR_CTRL 12
+#define VIRTCHNL2_PROTO_HDR_LLDP 13
+#define VIRTCHNL2_PROTO_HDR_ARP 14
+#define VIRTCHNL2_PROTO_HDR_ECP 15
+#define VIRTCHNL2_PROTO_HDR_EAPOL 16
+#define VIRTCHNL2_PROTO_HDR_PPPOD 17
+#define VIRTCHNL2_PROTO_HDR_PPPOE 18
+/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV4 19
+/* IPv4 and IPv6 Fragment header types are only associated with
+ * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+ * and cannot be used independently.
+ */
+/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG 20
+/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV6 21
+/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG 22
+#define VIRTCHNL2_PROTO_HDR_IPV6_EH 23
+/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_UDP 24
+/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_TCP 25
+/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_SCTP 26
+/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ICMP 27
+/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ICMPV6 28
+#define VIRTCHNL2_PROTO_HDR_IGMP 29
+#define VIRTCHNL2_PROTO_HDR_AH 30
+#define VIRTCHNL2_PROTO_HDR_ESP 31
+#define VIRTCHNL2_PROTO_HDR_IKE 32
+#define VIRTCHNL2_PROTO_HDR_NATT_KEEP 33
+/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_PAY 34
+#define VIRTCHNL2_PROTO_HDR_L2TPV2 35
+#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL 36
+#define VIRTCHNL2_PROTO_HDR_L2TPV3 37
+#define VIRTCHNL2_PROTO_HDR_GTP 38
+#define VIRTCHNL2_PROTO_HDR_GTP_EH 39
+#define VIRTCHNL2_PROTO_HDR_GTPCV2 40
+#define VIRTCHNL2_PROTO_HDR_GTPC_TEID 41
+#define VIRTCHNL2_PROTO_HDR_GTPU 42
+#define VIRTCHNL2_PROTO_HDR_GTPU_UL 43
+#define VIRTCHNL2_PROTO_HDR_GTPU_DL 44
+#define VIRTCHNL2_PROTO_HDR_ECPRI 45
+#define VIRTCHNL2_PROTO_HDR_VRRP 46
+#define VIRTCHNL2_PROTO_HDR_OSPF 47
+/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_TUN 48
+#define VIRTCHNL2_PROTO_HDR_GRE 49
+#define VIRTCHNL2_PROTO_HDR_NVGRE 50
+#define VIRTCHNL2_PROTO_HDR_VXLAN 51
+#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE 52
+#define VIRTCHNL2_PROTO_HDR_GENEVE 53
+#define VIRTCHNL2_PROTO_HDR_NSH 54
+#define VIRTCHNL2_PROTO_HDR_QUIC 55
+#define VIRTCHNL2_PROTO_HDR_PFCP 56
+#define VIRTCHNL2_PROTO_HDR_PFCP_NODE 57
+#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION 58
+#define VIRTCHNL2_PROTO_HDR_RTP 59
+#define VIRTCHNL2_PROTO_HDR_ROCE 60
+#define VIRTCHNL2_PROTO_HDR_ROCEV1 61
+#define VIRTCHNL2_PROTO_HDR_ROCEV2 62
+/* protocol ids up to 32767 are reserved for AVF use */
+/* 32768 - 65534 are used for user defined protocol ids */
+/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_NO_PROTO 65535
+
+#define VIRTCHNL2_VERSION_MAJOR_2 2
+#define VIRTCHNL2_VERSION_MINOR_0 0
+
+/* VIRTCHNL2_OP_VERSION
+ * VF posts its version number to the CP. CP responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This version opcode MUST always be specified as == 1, regardless of other
+ * changes in the API. The CP must always respond to this message without
+ * error regardless of version mismatch.
+ */
+struct virtchnl2_version_info {
+ u32 major;
+ u32 minor;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
+
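+/* A minimal sketch (illustration only, not used by the driver): interpreting
+ * the CP's version reply as described above. A major mismatch means the
+ * driver cannot operate; a minor mismatch is tolerated but should be logged
+ * as a warning by the caller.
+ */
+static inline bool
+virtchnl2_version_compatible(const struct virtchnl2_version_info *cp_ver,
+			     bool *minor_mismatch)
+{
+	*minor_mismatch = cp_ver->minor != VIRTCHNL2_VERSION_MINOR_0;
+	return cp_ver->major == VIRTCHNL2_VERSION_MAJOR_2;
+}
+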
+/* VIRTCHNL2_OP_GET_CAPS
+ * Dataplane driver sends this message to CP to negotiate capabilities and
+ * provides a virtchnl2_get_capabilities structure with its desired
+ * capabilities, max_sriov_vfs and num_allocated_vectors.
+ * CP responds with a virtchnl2_get_capabilities structure updated
+ * with allowed capabilities and the other fields as below.
+ * If PF sets max_sriov_vfs as 0, CP will respond with max number of VFs
+ * that can be created by this PF. For any other value 'n', CP responds
+ * with max_sriov_vfs set to min(n, x) where x is the max number of VFs
+ * allowed by CP's policy. max_sriov_vfs is not applicable for VFs.
+ * If the dataplane driver sets num_allocated_vectors as 0, CP will respond
+ * with 1, which is the default vector associated with the default mailbox.
+ * For any other value 'n', CP responds with a value <= n based on the CP's
+ * policy of max number of vectors for a PF.
+ * CP will respond with the vector ID of the mailbox allocated to the PF in
+ * mailbox_vector_id and the number of itr index registers in itr_idx_map.
+ * It also responds with the default number of vports that the dataplane
+ * driver should come up with in default_num_vports and the maximum number of
+ * vports that can be supported in max_vports.
+ */
+struct virtchnl2_get_capabilities {
+ /* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
+ __le32 csum_caps;
+
+ /* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
+ __le32 seg_caps;
+
+ /* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
+ __le32 hsplit_caps;
+
+ /* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
+ __le32 rsc_caps;
+
+ /* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions */
+ __le64 rss_caps;
+
+ /* see VIRTCHNL2_OTHER_CAPS definitions */
+ __le64 other_caps;
+
+ /* DYN_CTL register offset and vector id for mailbox provided by CP */
+ __le32 mailbox_dyn_ctl;
+ __le16 mailbox_vector_id;
+ /* Maximum number of allocated vectors for the device */
+ __le16 num_allocated_vectors;
+
+ /* Maximum number of queues that can be supported */
+ __le16 max_rx_q;
+ __le16 max_tx_q;
+ __le16 max_rx_bufq;
+ __le16 max_tx_complq;
+
+ /* The PF sends the maximum VFs it is requesting. The CP responds with
+ * the maximum VFs granted.
+ */
+ __le16 max_sriov_vfs;
+
+ /* maximum number of vports that can be supported */
+ __le16 max_vports;
+ /* default number of vports driver should allocate on load */
+ __le16 default_num_vports;
+
+ /* Max header length hardware can parse/checksum, in bytes */
+ __le16 max_tx_hdr_size;
+
+ /* Max number of scatter gather buffers that can be sent per transmit
+ * packet without needing to be linearized
+ */
+ u8 max_sg_bufs_per_tx_pkt;
+
+ /* see VIRTCHNL2_ITR_IDX definition */
+ u8 itr_idx_map;
+
+ __le16 pad1;
+
+ /* version of Control Plane that is running */
+ __le16 oem_cp_ver_major;
+ __le16 oem_cp_ver_minor;
+ /* see VIRTCHNL2_DEVICE_TYPE definitions */
+ __le32 device_type;
+
+ u8 reserved[12];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
+
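+/* A minimal sketch (illustration only, not used by the driver): a GET_CAPS
+ * request that advertises a few desired capabilities and leaves max_sriov_vfs
+ * and num_allocated_vectors at zero so the CP replies with its defaults, as
+ * described above. CPU_TO_LE32/CPU_TO_LE64 are assumed to be provided by the
+ * OS dependent layer that includes this header.
+ */
+static inline void
+virtchnl2_fill_get_caps_req(struct virtchnl2_get_capabilities *caps)
+{
+	*caps = (struct virtchnl2_get_capabilities){ 0 };
+	caps->csum_caps = CPU_TO_LE32(VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+				      VIRTCHNL2_CAP_RX_CSUM_L3_IPV4);
+	caps->rss_caps = CPU_TO_LE64(VIRTCHNL2_CAP_RSS_IPV4_TCP |
+				     VIRTCHNL2_CAP_RSS_IPV6_TCP);
+	caps->other_caps = CPU_TO_LE64(VIRTCHNL2_CAP_SPLITQ_QSCHED);
+	caps->max_sriov_vfs = 0;		/* ask for the CP default */
+	caps->num_allocated_vectors = 0;	/* ask for the CP default */
+}
+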
+struct virtchnl2_queue_reg_chunk {
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ __le32 pad;
+
+ /* Queue tail register offset and spacing provided by CP */
+ __le64 qtail_reg_start;
+ __le32 qtail_reg_spacing;
+
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl2_queue_reg_chunks {
+ __le16 num_chunks;
+ u8 reserved[6];
+ struct virtchnl2_queue_reg_chunk chunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+
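+/* A minimal sketch (illustration only, not used by the driver): the
+ * variable-length structures in this file end in a one-element array, so the
+ * wire size for 'num_chunks' chunks is the base structure size plus
+ * (num_chunks - 1) extra entries. The same pattern applies to the other
+ * [1]-terminated structures below.
+ */
+static inline u32
+virtchnl2_queue_reg_chunks_size(u16 num_chunks)
+{
+	u32 len = sizeof(struct virtchnl2_queue_reg_chunks);
+
+	if (num_chunks > 1)
+		len += (num_chunks - 1) *
+		       sizeof(struct virtchnl2_queue_reg_chunk);
+	return len;
+}
+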
+#define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS 6
+
+/* VIRTCHNL2_OP_CREATE_VPORT
+ * PF sends this message to CP to create a vport by filling in required
+ * fields of virtchnl2_create_vport structure.
+ * CP responds with the updated virtchnl2_create_vport structure containing the
+ * necessary fields followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_create_vport {
+ /* PF/VF populates the following fields on request */
+ /* see VIRTCHNL2_VPORT_TYPE definitions */
+ __le16 vport_type;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 txq_model;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 rxq_model;
+ __le16 num_tx_q;
+ /* valid only if txq_model is split queue */
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ /* valid only if rxq_model is split queue */
+ __le16 num_rx_bufq;
+ /* relative receive queue index to be used as default */
+ __le16 default_rx_q;
+	/* used to align PF and CP in case of multiple default vports; it is
+	 * filled by the PF and CP returns the same value, to enable the
+	 * driver to support multiple asynchronous parallel CREATE_VPORT
+	 * requests and associate a response with a specific request
+	 */
+ __le16 vport_index;
+
+ /* CP populates the following fields on response */
+ __le16 max_mtu;
+ __le32 vport_id;
+ u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
+ __le16 pad;
+ /* see VIRTCHNL2_RX_DESC_IDS definitions */
+ __le64 rx_desc_ids;
+ /* see VIRTCHNL2_TX_DESC_IDS definitions */
+ __le64 tx_desc_ids;
+
+#define MAX_Q_REGIONS 16
+ __le32 max_qs_per_qregion[MAX_Q_REGIONS];
+ __le32 qregion_total_qs;
+ __le16 qregion_type;
+ __le16 pad2;
+
+ /* see VIRTCHNL2_RSS_ALGORITHM definitions */
+ __le32 rss_algorithm;
+ __le16 rss_key_size;
+ __le16 rss_lut_size;
+
+ /* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
+ __le32 rx_split_pos;
+
+ u8 reserved[20];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+
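+/* A minimal sketch (illustration only, not used by the driver): request-side
+ * setup of a single split-queue-model vport. The CP fills in the
+ * response-only fields (vport_id, MAC address, chunks, ...). CPU_TO_LE16 is
+ * assumed to be provided by the OS dependent layer that includes this header.
+ */
+static inline void
+virtchnl2_fill_create_vport_req(struct virtchnl2_create_vport *req,
+				u16 num_txq, u16 num_rxq)
+{
+	*req = (struct virtchnl2_create_vport){ 0 };
+	req->vport_type = CPU_TO_LE16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+	req->txq_model = CPU_TO_LE16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+	req->rxq_model = CPU_TO_LE16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+	req->num_tx_q = CPU_TO_LE16(num_txq);
+	/* completion and buffer queue counts are set 1:1 with the data
+	 * queues here purely for illustration; the actual ratio is a
+	 * driver choice
+	 */
+	req->num_tx_complq = CPU_TO_LE16(num_txq);
+	req->num_rx_q = CPU_TO_LE16(num_rxq);
+	req->num_rx_bufq = CPU_TO_LE16(num_rxq);
+}
+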
+/* VIRTCHNL2_OP_DESTROY_VPORT
+ * VIRTCHNL2_OP_ENABLE_VPORT
+ * VIRTCHNL2_OP_DISABLE_VPORT
+ * PF sends this message to CP to destroy, enable or disable a vport by filling
+ * in the vport_id in virtchnl2_vport structure.
+ * CP responds with the status of the requested operation.
+ */
+struct virtchnl2_vport {
+ __le32 vport_id;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
+
+/* Transmit queue config info */
+struct virtchnl2_txq_info {
+ __le64 dma_ring_addr;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+
+ __le32 queue_id;
+	/* valid only if queue model is split and type is transmit queue. Used
+	 * in many-to-one mapping of transmit queues to completion queue
+ */
+ __le16 relative_queue_id;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 model;
+
+ /* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
+ __le16 sched_mode;
+
+ /* see VIRTCHNL2_TXQ_FLAGS definitions */
+ __le16 qflags;
+ __le16 ring_len;
+
+ /* valid only if queue model is split and type is transmit queue */
+ __le16 tx_compl_queue_id;
+ /* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
+ /* see VIRTCHNL2_PEER_TYPE definitions */
+ __le16 peer_type;
+ /* valid only if queue type is CONFIG_TX and used to deliver messages
+ * for the respective CONFIG_TX queue
+ */
+ __le16 peer_rx_queue_id;
+
+ /* value ranges from 0 to 15 */
+ __le16 qregion_id;
+ u8 pad[2];
+
+ /* Egress pasid is used for SIOV use case */
+ __le32 egress_pasid;
+ __le32 egress_hdr_pasid;
+ __le32 egress_buf_pasid;
+
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
+
+/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
+ * PF sends this message to set up parameters for one or more transmit queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_txq_info
+ * structures. CP configures requested queues and returns a status code. If
+ * num_qinfo specified is greater than the number of queues associated with the
+ * vport, an error is returned and no queues are configured.
+ */
+struct virtchnl2_config_tx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+
+ u8 reserved[10];
+ struct virtchnl2_txq_info qinfo[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+
+/* Receive queue config info */
+struct virtchnl2_rxq_info {
+ /* see VIRTCHNL2_RX_DESC_IDS definitions */
+ __le64 desc_ids;
+ __le64 dma_ring_addr;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 queue_id;
+
+ /* see QUEUE_MODEL definitions */
+ __le16 model;
+
+ __le16 hdr_buffer_size;
+ __le32 data_buffer_size;
+ __le32 max_pkt_size;
+
+ __le16 ring_len;
+ u8 buffer_notif_stride;
+ u8 pad[1];
+
+ /* Applicable only for receive buffer queues */
+ __le64 dma_head_wb_addr;
+
+ /* Applicable only for receive completion queues */
+ /* see VIRTCHNL2_RXQ_FLAGS definitions */
+ __le16 qflags;
+
+ __le16 rx_buffer_low_watermark;
+
+ /* valid only in split queue model */
+ __le16 rx_bufq1_id;
+ /* valid only in split queue model */
+ __le16 rx_bufq2_id;
+ /* it indicates if there is a second buffer, rx_bufq2_id is valid only
+ * if this field is set
+ */
+ u8 bufq2_ena;
+ u8 pad2;
+
+ /* value ranges from 0 to 15 */
+ __le16 qregion_id;
+
+ /* Ingress pasid is used for SIOV use case */
+ __le32 ingress_pasid;
+ __le32 ingress_hdr_pasid;
+ __le32 ingress_buf_pasid;
+
+ u8 reserved[16];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
+
+/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
+ * PF sends this message to set up parameters for one or more receive queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
+ * structures. CP configures requested queues and returns a status code.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ */
+struct virtchnl2_config_rx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+
+ u8 reserved[18];
+ struct virtchnl2_rxq_info qinfo[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+
+/* VIRTCHNL2_OP_ADD_QUEUES
+ * PF sends this message to request additional transmit/receive queues beyond
+ * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
+ * structure is used to specify the number of each type of queues.
+ * CP responds with the same structure with the actual number of queues assigned
+ * followed by num_chunks of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_add_queues {
+ __le32 vport_id;
+ __le16 num_tx_q;
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ __le16 num_rx_bufq;
+ u8 reserved[4];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+
+/* Structure to specify a chunk of contiguous interrupt vectors */
+struct virtchnl2_vector_chunk {
+ __le16 start_vector_id;
+ __le16 start_evv_id;
+ __le16 num_vectors;
+ __le16 pad1;
+
+ /* Register offsets and spacing provided by CP.
+ * dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
+ __le32 dynctl_reg_start;
+ __le32 dynctl_reg_spacing;
+
+ __le32 itrn_reg_start;
+ __le32 itrn_reg_spacing;
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
+
+/* Structure to specify several chunks of contiguous interrupt vectors */
+struct virtchnl2_vector_chunks {
+ __le16 num_vchunks;
+ u8 reserved[14];
+ struct virtchnl2_vector_chunk vchunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+
+/* VIRTCHNL2_OP_ALLOC_VECTORS
+ * PF sends this message to request additional interrupt vectors beyond the
+ * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
+ * structure is used to specify the number of vectors requested. CP responds
+ * with the same structure with the actual number of vectors assigned followed
+ * by virtchnl2_vector_chunks structure identifying the vector ids.
+ */
+struct virtchnl2_alloc_vectors {
+ __le16 num_vectors;
+ u8 reserved[14];
+ struct virtchnl2_vector_chunks vchunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+
+/* VIRTCHNL2_OP_DEALLOC_VECTORS
+ * PF sends this message to release the vectors.
+ * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
+ * away. CP performs requested action and returns status.
+ */
+
+/* VIRTCHNL2_OP_GET_RSS_LUT
+ * VIRTCHNL2_OP_SET_RSS_LUT
+ * PF sends this message to get or set RSS lookup table. Only supported if
+ * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl2_rss_lut structure
+ */
+struct virtchnl2_rss_lut {
+ __le32 vport_id;
+ __le16 lut_entries_start;
+ __le16 lut_entries;
+ u8 reserved[4];
+ __le32 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+
+struct virtchnl2_proto_hdr {
+ /* see VIRTCHNL2_PROTO_HDR_TYPE definitions */
+ __le32 type;
+ __le32 field_selector; /* a bit mask to select field for header type */
+ u8 buffer[64];
+ /*
+ * binary buffer in network order for specific header type.
+	 * For example, if type = VIRTCHNL2_PROTO_HDR_IPV4, an IPv4
+ * header is expected to be copied into the buffer.
+ */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_proto_hdr);
+
+struct virtchnl2_proto_hdrs {
+ u8 tunnel_level;
+ /*
+	 * specifies where the protocol headers start from.
+ * 0 - from the outer layer
+ * 1 - from the first inner layer
+ * 2 - from the second inner layer
+ * ....
+ */
+	__le32 count; /* must be less than VIRTCHNL2_MAX_NUM_PROTO_HDRS */
+ struct virtchnl2_proto_hdr proto_hdr[VIRTCHNL2_MAX_NUM_PROTO_HDRS];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(2312, virtchnl2_proto_hdrs);
+
+struct virtchnl2_rss_cfg {
+ struct virtchnl2_proto_hdrs proto_hdrs;
+
+ /* see VIRTCHNL2_RSS_ALGORITHM definitions */
+ __le32 rss_algorithm;
+ u8 reserved[128];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(2444, virtchnl2_rss_cfg);
+
+/* VIRTCHNL2_OP_GET_RSS_KEY
+ * PF sends this message to get RSS key. Only supported if both PF and CP
+ * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
+ * the virtchnl2_rss_key structure
+ */
+
+/* VIRTCHNL2_OP_GET_RSS_HASH
+ * VIRTCHNL2_OP_SET_RSS_HASH
+ * PF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the CP sets these to all possible traffic types that the
+ * hardware supports. The PF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
+ * during configuration negotiation.
+ */
+struct virtchnl2_rss_hash {
+ /* Packet Type Groups bitmap */
+ __le64 ptype_groups;
+ __le32 vport_id;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
+
+/* VIRTCHNL2_OP_SET_SRIOV_VFS
+ * This message is used to set the number of SRIOV VFs to be created. The actual
+ * allocation of resources for the VFs in terms of vport, queues and interrupts
+ * is done by CP. When this call completes, the APF driver calls
+ * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
+ * Setting the number of VFs to 0 will destroy all the VFs of this function.
+ */
+
+struct virtchnl2_sriov_vfs_info {
+ __le16 num_vfs;
+ __le16 pad;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
+
+/* VIRTCHNL2_OP_CREATE_ADI
+ * PF sends this message to HMA to create ADI by filling in required
+ * fields of virtchnl2_create_adi structure.
+ * HMA responds with the updated virtchnl2_create_adi structure containing the
+ * necessary fields followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_create_adi {
+ /* PF sends PASID to HMA */
+ __le32 pasid;
+ /*
+ * mbx_id is set to 1 by PF when requesting HMA to provide HW mailbox
+ * id else it is set to 0 by PF
+ */
+ __le16 mbx_id;
+ /* PF sends mailbox vector id to HMA */
+ __le16 mbx_vec_id;
+ /* HMA populates ADI id */
+ __le16 adi_id;
+ u8 reserved[64];
+ u8 pad[6];
+ /* HMA populates queue chunks */
+ struct virtchnl2_queue_reg_chunks chunks;
+ /* PF sends vector chunks to HMA */
+ struct virtchnl2_vector_chunks vchunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_create_adi);
+
+/* VIRTCHNL2_OP_DESTROY_ADI
+ * PF sends this message to HMA to destroy ADI by filling
+ * in the adi_id in virtchnl2_destroy_adi structure.
+ * HMA responds with the status of the requested operation.
+ */
+struct virtchnl2_destroy_adi {
+ __le16 adi_id;
+ u8 reserved[2];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_destroy_adi);
+
+/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+ * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
+ * is set to 0xFFFF, PF should consider this ptype as a dummy one and it is the
+ * last ptype.
+ */
+struct virtchnl2_ptype {
+ __le16 ptype_id_10;
+ u8 ptype_id_8;
+ /* number of protocol ids the packet supports, maximum of 32
+ * protocol ids are supported
+ */
+ u8 proto_id_count;
+ __le16 pad;
+ /* proto_id_count decides the allocation of protocol id array */
+ /* see VIRTCHNL2_PROTO_HDR_TYPE */
+ __le16 proto_id[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+
+/* VIRTCHNL2_OP_GET_PTYPE_INFO
+ * PF sends this message to CP to get all supported packet types. It does so by
+ * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
+ * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
+ * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
+ * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
+ * don't fit into one mailbox buffer, CP splits the ptype info into multiple
+ * messages, where each message will have the start ptype id, number of ptypes
+ * sent in that message and the ptype array itself. When CP is done updating
+ * all ptype information it extracted from the package (number of ptypes
+ * extracted might be less than what PF expects), it will append a dummy ptype
+ * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
+ * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
+ * messages.
+ */
+struct virtchnl2_get_ptype_info {
+ __le16 start_ptype_id;
+ __le16 num_ptypes;
+ __le32 pad;
+ struct virtchnl2_ptype ptype[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
+
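+/* A minimal sketch (illustration only, not used by the driver): walking the
+ * variable-length ptype array in one VIRTCHNL2_OP_GET_PTYPE_INFO response.
+ * Each entry's size depends on proto_id_count; a ptype_id_10 of 0xFFFF marks
+ * the dummy final ptype. Endianness handling is elided for brevity.
+ */
+static inline bool
+virtchnl2_walk_ptype_info(const struct virtchnl2_get_ptype_info *info)
+{
+	const u8 *pos = (const u8 *)info->ptype;
+	u16 i;
+
+	for (i = 0; i < info->num_ptypes; i++) {
+		const struct virtchnl2_ptype *ptype =
+			(const struct virtchnl2_ptype *)pos;
+
+		if (ptype->ptype_id_10 == 0xFFFF)
+			return true;	/* dummy ptype: last message */
+
+		/* entry size: base struct plus any extra protocol ids */
+		pos += sizeof(*ptype) +
+		       (ptype->proto_id_count ? ptype->proto_id_count - 1 : 0) *
+		       sizeof(ptype->proto_id[0]);
+	}
+	return false;		/* more GET_PTYPE_INFO messages to come */
+}
+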
+/* VIRTCHNL2_OP_GET_STATS
+ * PF/VF sends this message to CP to get the updated stats by specifying the
+ * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ */
+struct virtchnl2_vport_stats {
+ __le32 vport_id;
+ u8 pad[4];
+
+ __le64 rx_bytes; /* received bytes */
+ __le64 rx_unicast; /* received unicast pkts */
+ __le64 rx_multicast; /* received multicast pkts */
+ __le64 rx_broadcast; /* received broadcast pkts */
+ __le64 rx_discards;
+ __le64 rx_errors;
+ __le64 rx_unknown_protocol;
+ __le64 tx_bytes; /* transmitted bytes */
+ __le64 tx_unicast; /* transmitted unicast pkts */
+ __le64 tx_multicast; /* transmitted multicast pkts */
+ __le64 tx_broadcast; /* transmitted broadcast pkts */
+ __le64 tx_discards;
+ __le64 tx_errors;
+ u8 reserved[16];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
+
+/* VIRTCHNL2_OP_EVENT
+ * CP sends this message to inform the PF/VF driver of events that may affect
+ * it. No direct response is expected from the driver, though it may generate
+ * other messages in response to this one.
+ */
+struct virtchnl2_event {
+ /* see VIRTCHNL2_EVENT_CODES definitions */
+ __le32 event;
+ /* link_speed provided in Mbps */
+ __le32 link_speed;
+ __le32 vport_id;
+ u8 link_status;
+ u8 pad[3];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
+
+/* VIRTCHNL2_OP_GET_RSS_KEY
+ * VIRTCHNL2_OP_SET_RSS_KEY
+ * PF/VF sends this message to get or set RSS key. Only supported if both
+ * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl2_rss_key structure
+ */
+struct virtchnl2_rss_key {
+ __le32 vport_id;
+ __le16 key_len;
+ u8 pad;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl2_queue_chunk {
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl2_queue_chunks {
+ __le16 num_chunks;
+ u8 reserved[6];
+ struct virtchnl2_queue_chunk chunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+
+/* VIRTCHNL2_OP_ENABLE_QUEUES
+ * VIRTCHNL2_OP_DISABLE_QUEUES
+ * VIRTCHNL2_OP_DEL_QUEUES
+ *
+ * PF sends these messages to enable, disable or delete queues specified in
+ * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * to be enabled/disabled/deleted. Also applicable to single queue receive or
+ * transmit. CP performs requested action and returns status.
+ */
+struct virtchnl2_del_ena_dis_queues {
+ __le32 vport_id;
+ u8 reserved[4];
+ struct virtchnl2_queue_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+
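+/* A minimal sketch (illustration only, not used by the driver): filling a
+ * single-chunk enable/disable/delete request covering a contiguous range of
+ * queues of one type. CPU_TO_LE16/CPU_TO_LE32 are assumed to be provided by
+ * the OS dependent layer that includes this header.
+ */
+static inline void
+virtchnl2_fill_queue_range(struct virtchnl2_del_ena_dis_queues *req,
+			   u32 vport_id, u32 q_type, u32 start_qid,
+			   u32 num_queues)
+{
+	req->vport_id = CPU_TO_LE32(vport_id);
+	req->chunks.num_chunks = CPU_TO_LE16(1);
+	req->chunks.chunks[0].type = CPU_TO_LE32(q_type);
+	req->chunks.chunks[0].start_queue_id = CPU_TO_LE32(start_qid);
+	req->chunks.chunks[0].num_queues = CPU_TO_LE32(num_queues);
+}
+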
+/* Queue to vector mapping */
+struct virtchnl2_queue_vector {
+ __le32 queue_id;
+ __le16 vector_id;
+ u8 pad[2];
+
+ /* see VIRTCHNL2_ITR_IDX definitions */
+ __le32 itr_idx;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 queue_type;
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
+
+/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+ *
+ * PF sends this message to map or unmap queues to vectors and interrupt
+ * throttling rate index registers. External data buffer contains
+ * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
+ * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
+ * after validating the queue and vector ids and returns a status code.
+ */
+struct virtchnl2_queue_vector_maps {
+ __le32 vport_id;
+ __le16 num_qv_maps;
+ u8 pad[10];
+ struct virtchnl2_queue_vector qv_maps[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+
+static inline const char *virtchnl2_op_str(__le32 v_opcode)
+{
+ switch (v_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ return "VIRTCHNL2_OP_VERSION";
+ case VIRTCHNL2_OP_GET_CAPS:
+ return "VIRTCHNL2_OP_GET_CAPS";
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ return "VIRTCHNL2_OP_CREATE_VPORT";
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ return "VIRTCHNL2_OP_DESTROY_VPORT";
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ return "VIRTCHNL2_OP_ENABLE_VPORT";
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ return "VIRTCHNL2_OP_DISABLE_VPORT";
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ return "VIRTCHNL2_OP_CONFIG_TX_QUEUES";
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ return "VIRTCHNL2_OP_CONFIG_RX_QUEUES";
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ return "VIRTCHNL2_OP_ENABLE_QUEUES";
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ return "VIRTCHNL2_OP_DISABLE_QUEUES";
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ return "VIRTCHNL2_OP_ADD_QUEUES";
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ return "VIRTCHNL2_OP_DEL_QUEUES";
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ return "VIRTCHNL2_OP_MAP_QUEUE_VECTOR";
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ return "VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR";
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ return "VIRTCHNL2_OP_GET_RSS_KEY";
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ return "VIRTCHNL2_OP_SET_RSS_KEY";
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ return "VIRTCHNL2_OP_GET_RSS_LUT";
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ return "VIRTCHNL2_OP_SET_RSS_LUT";
+ case VIRTCHNL2_OP_GET_RSS_HASH:
+ return "VIRTCHNL2_OP_GET_RSS_HASH";
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ return "VIRTCHNL2_OP_SET_RSS_HASH";
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ return "VIRTCHNL2_OP_SET_SRIOV_VFS";
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ return "VIRTCHNL2_OP_ALLOC_VECTORS";
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ return "VIRTCHNL2_OP_DEALLOC_VECTORS";
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ return "VIRTCHNL2_OP_GET_PTYPE_INFO";
+ case VIRTCHNL2_OP_GET_STATS:
+ return "VIRTCHNL2_OP_GET_STATS";
+ case VIRTCHNL2_OP_EVENT:
+ return "VIRTCHNL2_OP_EVENT";
+ case VIRTCHNL2_OP_RESET_VF:
+ return "VIRTCHNL2_OP_RESET_VF";
+ case VIRTCHNL2_OP_CREATE_ADI:
+ return "VIRTCHNL2_OP_CREATE_ADI";
+ case VIRTCHNL2_OP_DESTROY_ADI:
+ return "VIRTCHNL2_OP_DESTROY_ADI";
+ default:
+ return "Unsupported (update virtchnl2.h)";
+ }
+}
+
+/**
+ * virtchnl2_vc_validate_vf_msg
+ * @ver: Virtchnl2 version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl2_vc_validate_vf_msg(struct virtchnl2_version_info *ver, u32 v_opcode,
+ u8 *msg, __le16 msglen)
+{
+ bool err_msg_format = false;
+ __le32 valid_len = 0;
+
+ /* Validate message length. */
+ switch (v_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ valid_len = sizeof(struct virtchnl2_version_info);
+ break;
+ case VIRTCHNL2_OP_GET_CAPS:
+ valid_len = sizeof(struct virtchnl2_get_capabilities);
+ break;
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ valid_len = sizeof(struct virtchnl2_create_vport);
+ if (msglen >= valid_len) {
+ struct virtchnl2_create_vport *cvport =
+ (struct virtchnl2_create_vport *)msg;
+
+ if (cvport->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (cvport->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_CREATE_ADI:
+ valid_len = sizeof(struct virtchnl2_create_adi);
+ if (msglen >= valid_len) {
+ struct virtchnl2_create_adi *cadi =
+ (struct virtchnl2_create_adi *)msg;
+
+ if (cadi->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ if (cadi->vchunks.num_vchunks == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (cadi->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ valid_len += (cadi->vchunks.num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_DESTROY_ADI:
+ valid_len = sizeof(struct virtchnl2_destroy_adi);
+ break;
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ valid_len = sizeof(struct virtchnl2_vport);
+ break;
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ valid_len = sizeof(struct virtchnl2_config_tx_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_config_tx_queues *ctq =
+ (struct virtchnl2_config_tx_queues *)msg;
+ if (ctq->num_qinfo == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (ctq->num_qinfo - 1) *
+ sizeof(struct virtchnl2_txq_info);
+ }
+ break;
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ valid_len = sizeof(struct virtchnl2_config_rx_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_config_rx_queues *crq =
+ (struct virtchnl2_config_rx_queues *)msg;
+ if (crq->num_qinfo == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (crq->num_qinfo - 1) *
+ sizeof(struct virtchnl2_rxq_info);
+ }
+ break;
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ valid_len = sizeof(struct virtchnl2_add_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)msg;
+
+ if (add_q->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (add_q->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_del_ena_dis_queues *qs =
+ (struct virtchnl2_del_ena_dis_queues *)msg;
+ if (qs->chunks.num_chunks == 0 ||
+ qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (qs->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ valid_len = sizeof(struct virtchnl2_queue_vector_maps);
+ if (msglen >= valid_len) {
+ struct virtchnl2_queue_vector_maps *v_qp =
+ (struct virtchnl2_queue_vector_maps *)msg;
+ if (v_qp->num_qv_maps == 0 ||
+ v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_qp->num_qv_maps - 1) *
+ sizeof(struct virtchnl2_queue_vector);
+ }
+ break;
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ valid_len = sizeof(struct virtchnl2_alloc_vectors);
+ if (msglen >= valid_len) {
+ struct virtchnl2_alloc_vectors *v_av =
+ (struct virtchnl2_alloc_vectors *)msg;
+
+ if (v_av->vchunks.num_vchunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (v_av->vchunks.num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ valid_len = sizeof(struct virtchnl2_vector_chunks);
+ if (msglen >= valid_len) {
+ struct virtchnl2_vector_chunks *v_chunks =
+ (struct virtchnl2_vector_chunks *)msg;
+ if (v_chunks->num_vchunks == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_chunks->num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ valid_len = sizeof(struct virtchnl2_rss_key);
+ if (msglen >= valid_len) {
+ struct virtchnl2_rss_key *vrk =
+ (struct virtchnl2_rss_key *)msg;
+
+ if (vrk->key_len == 0) {
+ /* zero length is allowed as input */
+ break;
+ }
+
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ valid_len = sizeof(struct virtchnl2_rss_lut);
+ if (msglen >= valid_len) {
+ struct virtchnl2_rss_lut *vrl =
+ (struct virtchnl2_rss_lut *)msg;
+
+ if (vrl->lut_entries == 0) {
+ /* zero entries is allowed as input */
+ break;
+ }
+
+ valid_len += (vrl->lut_entries - 1) * sizeof(__le16);
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_HASH:
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ valid_len = sizeof(struct virtchnl2_rss_hash);
+ break;
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
+ break;
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ valid_len = sizeof(struct virtchnl2_get_ptype_info);
+ break;
+ case VIRTCHNL2_OP_GET_STATS:
+ valid_len = sizeof(struct virtchnl2_vport_stats);
+ break;
+ case VIRTCHNL2_OP_RESET_VF:
+ break;
+ /* These are always errors coming from the VF. */
+ case VIRTCHNL2_OP_EVENT:
+ case VIRTCHNL2_OP_UNKNOWN:
+ default:
+ return VIRTCHNL2_STATUS_ERR_PARAM;
+ }
+ /* few more checks */
+ if (err_msg_format || valid_len != msglen)
+ return VIRTCHNL2_STATUS_ERR_OPCODE_MISMATCH;
+
+ return 0;
+}
+
+#endif /* _VIRTCHNL_2_H_ */
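A minimal caller sketch may help reviewers see the intended use of the validator above: it is meant to run on every inbound virtchnl2 message before the opcode-specific handler touches the payload. The example_dispatch() wrapper below is hypothetical; only the virtchnl2 symbols come from this header.

static int
example_dispatch(struct virtchnl2_version_info *ver, u32 opcode,
		 u8 *buf, __le16 buflen)
{
	/* Reject truncated or malformed buffers up front. */
	int err = virtchnl2_vc_validate_vf_msg(ver, opcode, buf, buflen);

	if (err)
		return err; /* VIRTCHNL2_STATUS_ERR_PARAM or _OPCODE_MISMATCH */

	/* Only now is it safe to cast buf to the opcode-specific structure. */
	return 0;
}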
diff --git a/drivers/net/idpf/base/virtchnl2_lan_desc.h b/drivers/net/idpf/base/virtchnl2_lan_desc.h
new file mode 100644
index 0000000000..2243b17673
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl2_lan_desc.h
@@ -0,0 +1,603 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+/*
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * For licensing information, see the file 'LICENSE' in the root folder
+ */
+#ifndef _VIRTCHNL2_LAN_DESC_H_
+#define _VIRTCHNL2_LAN_DESC_H_
+
+/* VIRTCHNL2_TX_DESC_IDS
+ * Transmit descriptor ID flags
+ */
+#define VIRTCHNL2_TXDID_DATA BIT(0)
+#define VIRTCHNL2_TXDID_CTX BIT(1)
+#define VIRTCHNL2_TXDID_REINJECT_CTX BIT(2)
+#define VIRTCHNL2_TXDID_FLEX_DATA BIT(3)
+#define VIRTCHNL2_TXDID_FLEX_CTX BIT(4)
+#define VIRTCHNL2_TXDID_FLEX_TSO_CTX BIT(5)
+#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1 BIT(6)
+#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2 BIT(7)
+#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX BIT(8)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX BIT(9)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX BIT(10)
+#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX BIT(11)
+#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED BIT(12)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX BIT(13)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX BIT(14)
+#define VIRTCHNL2_TXDID_DESC_DONE BIT(15)
+
+/* VIRTCHNL2_RX_DESC_IDS
+ * Receive descriptor IDs (range from 0 to 63)
+ */
+#define VIRTCHNL2_RXDID_0_16B_BASE 0
+#define VIRTCHNL2_RXDID_1_32B_BASE 1
+/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+ * differentiated based on queue model; e.g. single queue model can
+ * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+ * for DID 2.
+ */
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ 2
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC 2
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW 3
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB 4
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL 5
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2 6
+#define VIRTCHNL2_RXDID_7_HW_RSVD 7
+/* 9 through 15 are reserved */
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC 16
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN 17
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4 18
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6 19
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW 20
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP 21
+/* 22 through 63 are reserved */
+
+/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+ * Receive descriptor ID bitmasks
+ */
+#define VIRTCHNL2_RXDID_0_16B_BASE_M BIT(VIRTCHNL2_RXDID_0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M BIT(VIRTCHNL2_RXDID_1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+/* 9 through 15 are reserved */
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+/* 22 through 63 are reserved */
+
+/* Rx */
+/* For splitq virtchnl2_rx_flex_desc_adv desc members */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M \
+ MAKEMASK(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M \
+ MAKEMASK(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M \
+ MAKEMASK(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M \
+ MAKEMASK(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M \
+	MAKEMASK(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S 7
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST 8 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S 0 /* 2 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST 8 /* this entry must be last!!! */
+
+/* for singleq (flex) virtchnl2_rx_flex_desc fields */
+/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+
+/* for virtchnl2_rx_flex_desc.pkt_length member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M \
+ MAKEMASK(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+
+/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * for singleq (flex) virtchnl2_rx_flex_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S 8
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S 9
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST 16 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * for singleq (flex) virtchnl2_rx_flex_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S 0 /* 4 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S 5
+/* [10:6] reserved */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST 16 /* this entry must be last!!! */
+
+/* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S 63
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S 52
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M \
+ MAKEMASK(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S 38
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M \
+ MAKEMASK(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S 30
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M \
+ MAKEMASK(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S 19
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M \
+ MAKEMASK(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S 0
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M \
+ MAKEMASK(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+
+/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S 0
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S 1
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S 2
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S 3
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S 4
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S 5 /* 3 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S 8
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S 9 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S 11
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S 12 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S 14
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S 15
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S 16 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S 18
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST 19 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S 0
+
+/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S 0
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S 1
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S 2
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S 3 /* 3 bits */
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S 3
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S 4
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S 5
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S 6
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S 7
+
+/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA 0
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID 1
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV 2
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH 3
+
+/* Receive Descriptors */
+/* splitq buf
+ | 16| 0|
+ ----------------------------------------------------------------
+ | RSV | Buffer ID |
+ ----------------------------------------------------------------
+ | Rx packet buffer address |
+ ----------------------------------------------------------------
+ | Rx header buffer address |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | 0|
+ */
+struct virtchnl2_splitq_rx_buf_desc {
+ struct {
+ __le16 buf_id; /* Buffer Identifier */
+ __le16 rsvd0;
+ __le32 rsvd1;
+ } qword0;
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 rsvd2;
+}; /* read format used with buffer queues */
+
+/* singleq buf
+ | 0|
+ ----------------------------------------------------------------
+ | Rx packet buffer address |
+ ----------------------------------------------------------------
+ | Rx header buffer address |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | 0|
+ */
+struct virtchnl2_singleq_rx_buf_desc {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 rsvd1;
+ __le64 rsvd2;
+}; /* read format used with buffer queues */
+
+union virtchnl2_rx_buf_desc {
+ struct virtchnl2_singleq_rx_buf_desc read;
+ struct virtchnl2_splitq_rx_buf_desc split_rd;
+};
+
+/* (0x00) singleq wb(compl) */
+struct virtchnl2_singleq_base_rx_desc {
+ struct {
+ struct {
+ __le16 mirroring_status;
+ __le16 l2tag1;
+ } lo_dword;
+ union {
+ __le32 rss; /* RSS Hash */
+ __le32 fd_id; /* Flow Director filter id */
+ } hi_dword;
+ } qword0;
+ struct {
+ /* status/error/PTYPE/length */
+ __le64 status_error_ptype_len;
+ } qword1;
+ struct {
+ __le16 ext_status; /* extended status */
+ __le16 rsvd;
+ __le16 l2tag2_1;
+ __le16 l2tag2_2;
+ } qword2;
+ struct {
+ __le32 reserved;
+ __le32 fd_id;
+ } qword3;
+}; /* writeback */
+
+/* (0x01) singleq flex compl */
+struct virtchnl2_rx_flex_desc {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile id */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* (0x02) */
+struct virtchnl2_rx_flex_desc_nic {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le32 flow_id;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct virtchnl2_rx_flex_desc_sw {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 src_vsi; /* [10:15] are reserved */
+ __le16 flex_md1_rsvd;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le32 rsvd; /* flex words 2-3 are reserved */
+ __le32 ts_high;
+};
+
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct virtchnl2_rx_flex_desc_nic_2 {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flow_id;
+ __le16 src_vsi;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* Rx Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile Id 7
+ */
+struct virtchnl2_rx_flex_desc_adv {
+ /* Qword 0 */
+ u8 rxdid_ucast; /* profile_id=[3:0] */
+ /* rsvd=[5:4] */
+ /* ucast=[7:6] */
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0; /* ptype=[9:0] */
+ /* ip_hdr_err=[10:10] */
+ /* udp_len_err=[11:11] */
+ /* ff0=[15:12] */
+ __le16 pktlen_gen_bufq_id; /* plen=[13:0] */
+ /* gen=[14:14] only in splitq */
+ /* bufq_id=[15:15] only in splitq */
+ __le16 hdrlen_flags; /* header=[9:0] */
+ /* rsc=[10:10] only in splitq */
+ /* sph=[11:11] only in splitq */
+ /* ext_udp_0=[12:12] */
+ /* int_udp_0=[13:13] */
+ /* trunc_mirr=[14:14] */
+ /* miss_prepend=[15:15] */
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 fmd0;
+ __le16 fmd1;
+ /* Qword 2 */
+ __le16 fmd2;
+ u8 fflags2;
+ u8 hash3;
+ __le16 fmd3;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 fmd5;
+ __le16 fmd6;
+ __le16 fmd7_0;
+ __le16 fmd7_1;
+}; /* writeback */
+
+/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
+ * RxDID Profile Id 8
+ * Flex-field 0: BufferID
+ * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
+ * Flex-field 2: Hash[15:0]
+ * Flex-flags 2: Hash[23:16]
+ * Flex-field 3: L2TAG2
+ * Flex-field 5: L2TAG1
+ * Flex-field 7: Timestamp (upper 32 bits)
+ */
+struct virtchnl2_rx_flex_desc_adv_nic_3 {
+ /* Qword 0 */
+ u8 rxdid_ucast; /* profile_id=[3:0] */
+ /* rsvd=[5:4] */
+ /* ucast=[7:6] */
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0; /* ptype=[9:0] */
+ /* ip_hdr_err=[10:10] */
+ /* udp_len_err=[11:11] */
+ /* ff0=[15:12] */
+ __le16 pktlen_gen_bufq_id; /* plen=[13:0] */
+ /* gen=[14:14] only in splitq */
+ /* bufq_id=[15:15] only in splitq */
+ __le16 hdrlen_flags; /* header=[9:0] */
+ /* rsc=[10:10] only in splitq */
+ /* sph=[11:11] only in splitq */
+ /* ext_udp_0=[12:12] */
+ /* int_udp_0=[13:13] */
+ /* trunc_mirr=[14:14] */
+ /* miss_prepend=[15:15] */
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 buf_id; /* only in splitq */
+ union {
+ __le16 raw_cs;
+ __le16 l2tag1;
+ __le16 rscseglen;
+ } misc;
+ /* Qword 2 */
+ __le16 hash1;
+ union {
+ u8 fflags2;
+ u8 mirrorid;
+ u8 hash2;
+ } ff2_mirrid_hash2;
+ u8 hash3;
+ __le16 l2tag2;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 l2tag1;
+ __le16 fmd6;
+ __le32 ts_high;
+}; /* writeback */
+
+union virtchnl2_rx_desc {
+ struct virtchnl2_singleq_rx_buf_desc read;
+ struct virtchnl2_singleq_base_rx_desc base_wb;
+ struct virtchnl2_rx_flex_desc flex_wb;
+ struct virtchnl2_rx_flex_desc_nic flex_nic_wb;
+ struct virtchnl2_rx_flex_desc_sw flex_sw_wb;
+ struct virtchnl2_rx_flex_desc_nic_2 flex_nic_2_wb;
+ struct virtchnl2_rx_flex_desc_adv flex_adv_wb;
+ struct virtchnl2_rx_flex_desc_adv_nic_3 flex_adv_nic_3_wb;
+};
+
+#endif /* _VIRTCHNL2_LAN_DESC_H_ */
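To show how the split-queue definitions above fit together, here is a hedged sketch of refilling one buffer-queue entry and pulling the packet length out of the advanced writeback descriptor. The example_* helper names are illustrative only; the rte_* byte-order helpers are standard DPDK, and everything else comes from this header.

#include <rte_byteorder.h>

/* Post one buffer to a split-queue buffer ring entry. */
static inline void
example_post_splitq_buf(volatile union virtchnl2_rx_buf_desc *desc,
			uint16_t buf_id, uint64_t pkt_iova, uint64_t hdr_iova)
{
	desc->split_rd.qword0.buf_id = rte_cpu_to_le_16(buf_id);
	desc->split_rd.pkt_addr = rte_cpu_to_le_64(pkt_iova);
	desc->split_rd.hdr_addr = rte_cpu_to_le_64(hdr_iova);
}

/* Extract the 14-bit packet buffer length from an advanced writeback desc. */
static inline uint16_t
example_splitq_pkt_len(const struct virtchnl2_rx_flex_desc_adv_nic_3 *wb)
{
	uint16_t qw = rte_le_to_cpu_16(wb->pktlen_gen_bufq_id);

	return qw & VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M;
}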
diff --git a/drivers/net/idpf/base/virtchnl_inline_ipsec.h b/drivers/net/idpf/base/virtchnl_inline_ipsec.h
new file mode 100644
index 0000000000..902f63bd51
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl_inline_ipsec.h
@@ -0,0 +1,567 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL_INLINE_IPSEC_H_
+#define _VIRTCHNL_INLINE_IPSEC_H_
+
+#define VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM 3
+#define VIRTCHNL_IPSEC_MAX_ALGO_CAP_NUM 16
+#define VIRTCHNL_IPSEC_MAX_TX_DESC_NUM 128
+#define VIRTCHNL_IPSEC_MAX_CRYPTO_ITEM_NUMBER 2
+#define VIRTCHNL_IPSEC_MAX_KEY_LEN 128
+#define VIRTCHNL_IPSEC_MAX_SA_DESTROY_NUM 8
+#define VIRTCHNL_IPSEC_SA_DESTROY 0
+#define VIRTCHNL_IPSEC_BROADCAST_VFID 0xFFFFFFFF
+#define VIRTCHNL_IPSEC_INVALID_REQ_ID 0xFFFF
+#define VIRTCHNL_IPSEC_INVALID_SA_CFG_RESP 0xFFFFFFFF
+#define VIRTCHNL_IPSEC_INVALID_SP_CFG_RESP 0xFFFFFFFF
+
+/* crypto type */
+#define VIRTCHNL_AUTH 1
+#define VIRTCHNL_CIPHER 2
+#define VIRTCHNL_AEAD 3
+
+/* caps enabled */
+#define VIRTCHNL_IPSEC_ESN_ENA BIT(0)
+#define VIRTCHNL_IPSEC_UDP_ENCAP_ENA BIT(1)
+#define VIRTCHNL_IPSEC_SA_INDEX_SW_ENA BIT(2)
+#define VIRTCHNL_IPSEC_AUDIT_ENA BIT(3)
+#define VIRTCHNL_IPSEC_BYTE_LIMIT_ENA BIT(4)
+#define VIRTCHNL_IPSEC_DROP_ON_AUTH_FAIL_ENA BIT(5)
+#define VIRTCHNL_IPSEC_ARW_CHECK_ENA BIT(6)
+#define VIRTCHNL_IPSEC_24BIT_SPI_ENA BIT(7)
+
+/* algorithm type */
+/* Hash Algorithm */
+#define VIRTCHNL_HASH_NO_ALG 0 /* NULL algorithm */
+#define VIRTCHNL_AES_CBC_MAC 1 /* AES-CBC-MAC algorithm */
+#define VIRTCHNL_AES_CMAC 2 /* AES CMAC algorithm */
+#define VIRTCHNL_AES_GMAC 3 /* AES GMAC algorithm */
+#define VIRTCHNL_AES_XCBC_MAC 4 /* AES XCBC algorithm */
+#define VIRTCHNL_MD5_HMAC 5 /* HMAC using MD5 algorithm */
+#define VIRTCHNL_SHA1_HMAC 6 /* HMAC using 128 bit SHA algorithm */
+#define VIRTCHNL_SHA224_HMAC 7 /* HMAC using 224 bit SHA algorithm */
+#define VIRTCHNL_SHA256_HMAC 8 /* HMAC using 256 bit SHA algorithm */
+#define VIRTCHNL_SHA384_HMAC 9 /* HMAC using 384 bit SHA algorithm */
+#define VIRTCHNL_SHA512_HMAC 10 /* HMAC using 512 bit SHA algorithm */
+#define VIRTCHNL_SHA3_224_HMAC 11 /* HMAC using 224 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_256_HMAC 12 /* HMAC using 256 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_384_HMAC 13 /* HMAC using 384 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_512_HMAC 14 /* HMAC using 512 bit SHA3 algorithm */
+/* Cipher Algorithm */
+#define VIRTCHNL_CIPHER_NO_ALG 15 /* NULL algorithm */
+#define VIRTCHNL_3DES_CBC 16 /* Triple DES algorithm in CBC mode */
+#define VIRTCHNL_AES_CBC 17 /* AES algorithm in CBC mode */
+#define VIRTCHNL_AES_CTR 18 /* AES algorithm in Counter mode */
+/* AEAD Algorithm */
+#define VIRTCHNL_AES_CCM 19 /* AES algorithm in CCM mode */
+#define VIRTCHNL_AES_GCM 20 /* AES algorithm in GCM mode */
+#define VIRTCHNL_CHACHA20_POLY1305 21 /* algorithm of ChaCha20-Poly1305 */
+
+/* protocol type */
+#define VIRTCHNL_PROTO_ESP 1
+#define VIRTCHNL_PROTO_AH 2
+#define VIRTCHNL_PROTO_RSVD1 3
+
+/* sa mode */
+#define VIRTCHNL_SA_MODE_TRANSPORT 1
+#define VIRTCHNL_SA_MODE_TUNNEL 2
+#define VIRTCHNL_SA_MODE_TRAN_TUN 3
+#define VIRTCHNL_SA_MODE_UNKNOWN 4
+
+/* sa direction */
+#define VIRTCHNL_DIR_INGRESS 1
+#define VIRTCHNL_DIR_EGRESS 2
+#define VIRTCHNL_DIR_INGRESS_EGRESS 3
+
+/* sa termination */
+#define VIRTCHNL_TERM_SOFTWARE 1
+#define VIRTCHNL_TERM_HARDWARE 2
+
+/* sa ip type */
+#define VIRTCHNL_IPV4 1
+#define VIRTCHNL_IPV6 2
+
+/* for virtchnl_ipsec_resp */
+enum inline_ipsec_resp {
+ INLINE_IPSEC_SUCCESS = 0,
+ INLINE_IPSEC_FAIL = -1,
+ INLINE_IPSEC_ERR_FIFO_FULL = -2,
+ INLINE_IPSEC_ERR_NOT_READY = -3,
+ INLINE_IPSEC_ERR_VF_DOWN = -4,
+ INLINE_IPSEC_ERR_INVALID_PARAMS = -5,
+ INLINE_IPSEC_ERR_NO_MEM = -6,
+};
+
+/* Detailed opcodes for DPDK and IPsec use */
+enum inline_ipsec_ops {
+ INLINE_IPSEC_OP_GET_CAP = 0,
+ INLINE_IPSEC_OP_GET_STATUS = 1,
+ INLINE_IPSEC_OP_SA_CREATE = 2,
+ INLINE_IPSEC_OP_SA_UPDATE = 3,
+ INLINE_IPSEC_OP_SA_DESTROY = 4,
+ INLINE_IPSEC_OP_SP_CREATE = 5,
+ INLINE_IPSEC_OP_SP_DESTROY = 6,
+ INLINE_IPSEC_OP_SA_READ = 7,
+ INLINE_IPSEC_OP_EVENT = 8,
+ INLINE_IPSEC_OP_RESP = 9,
+};
+
+#pragma pack(1)
+/* Not all fields are valid; if a field is invalid, set all of its bits to 1 */
+struct virtchnl_algo_cap {
+ u32 algo_type;
+
+ u16 block_size;
+
+ u16 min_key_size;
+ u16 max_key_size;
+ u16 inc_key_size;
+
+ u16 min_iv_size;
+ u16 max_iv_size;
+ u16 inc_iv_size;
+
+ u16 min_digest_size;
+ u16 max_digest_size;
+ u16 inc_digest_size;
+
+ u16 min_aad_size;
+ u16 max_aad_size;
+ u16 inc_aad_size;
+};
+#pragma pack()
+
+/* VF record of the crypto capabilities received over virtchnl */
+struct virtchnl_sym_crypto_cap {
+ u8 crypto_type;
+ u8 algo_cap_num;
+ struct virtchnl_algo_cap algo_cap_list[VIRTCHNL_IPSEC_MAX_ALGO_CAP_NUM];
+};
+
+/* VIRTCHNL_OP_GET_IPSEC_CAP
+ * VF passes virtchnl_ipsec_cap to PF,
+ * and PF returns the IPsec capabilities over virtchnl.
+ */
+#pragma pack(1)
+struct virtchnl_ipsec_cap {
+ /* max number of SA per VF */
+ u16 max_sa_num;
+
+ /* IPsec SA Protocol - value ref VIRTCHNL_PROTO_XXX */
+ u8 virtchnl_protocol_type;
+
+ /* IPsec SA Mode - value ref VIRTCHNL_SA_MODE_XXX */
+ u8 virtchnl_sa_mode;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 termination_mode;
+
+ /* number of supported crypto capability */
+ u8 crypto_cap_num;
+
+ /* descriptor ID */
+ u16 desc_id;
+
+ /* capabilities enabled - value ref VIRTCHNL_IPSEC_XXX_ENA */
+ u32 caps_enabled;
+
+ /* crypto capabilities */
+ struct virtchnl_sym_crypto_cap cap[VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM];
+};
+
+/* configuration of crypto function */
+struct virtchnl_ipsec_crypto_cfg_item {
+ u8 crypto_type;
+
+ u32 algo_type;
+
+ /* Length of valid IV data. */
+ u16 iv_len;
+
+ /* Length of digest */
+ u16 digest_len;
+
+ /* SA salt */
+ u32 salt;
+
+ /* The length of the symmetric key */
+ u16 key_len;
+
+ /* key data buffer */
+ u8 key_data[VIRTCHNL_IPSEC_MAX_KEY_LEN];
+};
+#pragma pack()
+
+struct virtchnl_ipsec_sym_crypto_cfg {
+ struct virtchnl_ipsec_crypto_cfg_item
+ items[VIRTCHNL_IPSEC_MAX_CRYPTO_ITEM_NUMBER];
+};
+
+#pragma pack(1)
+/* VIRTCHNL_OP_IPSEC_SA_CREATE
+ * VF sends this SA configuration to PF using virtchnl;
+ * PF creates the SA according to the configuration and the PF
+ * driver returns a unique index (sa_idx) for the created SA.
+ */
+struct virtchnl_ipsec_sa_cfg {
+ /* IPsec SA Protocol - AH/ESP */
+ u8 virtchnl_protocol_type;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 virtchnl_termination;
+
+ /* type of outer IP - IPv4/IPv6 */
+ u8 virtchnl_ip_type;
+
+ /* type of esn - !0:enable/0:disable */
+ u8 esn_enabled;
+
+ /* udp encap - !0:enable/0:disable */
+ u8 udp_encap_enabled;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* reserved */
+ u8 reserved1;
+
+ /* SA security parameter index */
+ u32 spi;
+
+ /* outer src ip address */
+ u8 src_addr[16];
+
+ /* outer dst ip address */
+ u8 dst_addr[16];
+
+ /* SPD reference. Used to link an SA with its policy.
+ * PF drivers may ignore this field.
+ */
+ u16 spd_ref;
+
+ /* high 32 bits of esn */
+ u32 esn_hi;
+
+ /* low 32 bits of esn */
+ u32 esn_low;
+
+ /* When enabled, sa_index must be valid */
+ u8 sa_index_en;
+
+ /* SA index when sa_index_en is true */
+ u32 sa_index;
+
+ /* auditing mode - enable/disable */
+ u8 audit_en;
+
+ /* lifetime byte limit - enable/disable
+ * When enabled, byte_limit_hard and byte_limit_soft
+ * must be valid.
+ */
+ u8 byte_limit_en;
+
+ /* hard byte limit count */
+ u64 byte_limit_hard;
+
+ /* soft byte limit count */
+ u64 byte_limit_soft;
+
+ /* drop on authentication failure - enable/disable */
+ u8 drop_on_auth_fail_en;
+
+	/* anti-replay window check - enable/disable
+ * When enabled, arw_size must be valid.
+ */
+ u8 arw_check_en;
+
+ /* size of arw window, offset by 1. Setting to 0
+ * represents ARW window size of 1. Setting to 127
+ * represents ARW window size of 128
+ */
+ u8 arw_size;
+
+ /* no ip offload mode - enable/disable
+ * When enabled, ip type and address must not be valid.
+ */
+ u8 no_ip_offload_en;
+
+	/* SA Domain. Used to logically separate an SADB into groups.
+ * PF drivers supporting a single group ignore this field.
+ */
+ u16 sa_domain;
+
+ /* crypto configuration */
+ struct virtchnl_ipsec_sym_crypto_cfg crypto_cfg;
+};
+#pragma pack()
+
+/* VIRTCHNL_OP_IPSEC_SA_UPDATE
+ * VF sends the SA index and updated configuration to PF;
+ * PF updates the SA according to the configuration.
+ */
+struct virtchnl_ipsec_sa_update {
+ u32 sa_index; /* SA to update */
+ u32 esn_hi; /* high 32 bits of esn */
+ u32 esn_low; /* low 32 bits of esn */
+};
+
+#pragma pack(1)
+/* VIRTCHNL_OP_IPSEC_SA_DESTROY
+ * VF sends the SA index configuration to PF;
+ * PF destroys the SA(s) according to the configuration.
+ * The flag bitmap indicates whether all SAs or only the
+ * selected SAs will be destroyed.
+ */
+struct virtchnl_ipsec_sa_destroy {
+ /* All zero bitmap indicates all SA will be destroyed.
+ * Non-zero bitmap indicates the selected SA in
+ * array sa_index will be destroyed.
+ */
+ u8 flag;
+
+ /* selected SA index */
+ u32 sa_index[VIRTCHNL_IPSEC_MAX_SA_DESTROY_NUM];
+};
+
+/* VIRTCHNL_OP_IPSEC_SA_READ
+ * VF sends this SA configuration to PF using virtchnl;
+ * PF reads the SA and returns the configuration of the created SA.
+ */
+struct virtchnl_ipsec_sa_read {
+ /* SA valid - invalid/valid */
+ u8 valid;
+
+ /* SA active - inactive/active */
+ u8 active;
+
+ /* SA SN rollover - not_rollover/rollover */
+ u8 sn_rollover;
+
+ /* IPsec SA Protocol - AH/ESP */
+ u8 virtchnl_protocol_type;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 virtchnl_termination;
+
+ /* auditing mode - enable/disable */
+ u8 audit_en;
+
+ /* lifetime byte limit - enable/disable
+ * When set to limit, byte_limit_hard and byte_limit_soft
+ * must be valid.
+ */
+ u8 byte_limit_en;
+
+ /* hard byte limit count */
+ u64 byte_limit_hard;
+
+ /* soft byte limit count */
+ u64 byte_limit_soft;
+
+ /* drop on authentication failure - enable/disable */
+ u8 drop_on_auth_fail_en;
+
+ /* anti-replay window check - enable/disable
+ * When set to check, arw_size, arw_top, and arw must be valid
+ */
+ u8 arw_check_en;
+
+ /* size of arw window, offset by 1. Setting to 0
+ * represents ARW window size of 1. Setting to 127
+ * represents ARW window size of 128
+ */
+ u8 arw_size;
+
+ /* reserved */
+ u8 reserved1;
+
+ /* top of anti-replay-window */
+ u64 arw_top;
+
+ /* anti-replay-window */
+ u8 arw[16];
+
+ /* packets processed */
+ u64 packets_processed;
+
+ /* bytes processed */
+ u64 bytes_processed;
+
+ /* packets dropped */
+ u32 packets_dropped;
+
+ /* authentication failures */
+ u32 auth_fails;
+
+ /* ARW check failures */
+ u32 arw_fails;
+
+ /* type of esn - enable/disable */
+ u8 esn;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* SA security parameter index */
+ u32 spi;
+
+ /* SA salt */
+ u32 salt;
+
+ /* high 32 bits of esn */
+ u32 esn_hi;
+
+ /* low 32 bits of esn */
+ u32 esn_low;
+
+	/* SA Domain. Used to logically separate an SADB into groups.
+ * PF drivers supporting a single group ignore this field.
+ */
+ u16 sa_domain;
+
+ /* SPD reference. Used to link an SA with its policy.
+ * PF drivers may ignore this field.
+ */
+ u16 spd_ref;
+
+ /* crypto configuration. Salt and keys are set to 0 */
+ struct virtchnl_ipsec_sym_crypto_cfg crypto_cfg;
+};
+#pragma pack()
+
+/* Add allowlist entry in IES */
+struct virtchnl_ipsec_sp_cfg {
+ u32 spi;
+ u32 dip[4];
+
+ /* Drop frame if true or redirect to QAT if false. */
+ u8 drop;
+
+ /* Congestion domain. For future use. */
+ u8 cgd;
+
+ /* 0 for IPv4 table, 1 for IPv6 table. */
+ u8 table_id;
+
+ /* Set TC (congestion domain) if true. For future use. */
+ u8 set_tc;
+
+ /* 0 for NAT-T unsupported, 1 for NAT-T supported */
+ u8 is_udp;
+
+ /* reserved */
+ u8 reserved;
+
+ /* NAT-T UDP port number. Only valid in case NAT-T supported */
+ u16 udp_port;
+};
+
+#pragma pack(1)
+/* Delete allowlist entry in IES */
+struct virtchnl_ipsec_sp_destroy {
+ /* 0 for IPv4 table, 1 for IPv6 table. */
+ u8 table_id;
+ u32 rule_id;
+};
+#pragma pack()
+
+/* Response from IES to allowlist operations */
+struct virtchnl_ipsec_sp_cfg_resp {
+ u32 rule_id;
+};
+
+struct virtchnl_ipsec_sa_cfg_resp {
+ u32 sa_handle;
+};
+
+#define INLINE_IPSEC_EVENT_RESET 0x1
+#define INLINE_IPSEC_EVENT_CRYPTO_ON 0x2
+#define INLINE_IPSEC_EVENT_CRYPTO_OFF 0x4
+
+struct virtchnl_ipsec_event {
+ u32 ipsec_event_data;
+};
+
+#define INLINE_IPSEC_STATUS_AVAILABLE 0x1
+#define INLINE_IPSEC_STATUS_UNAVAILABLE 0x2
+
+struct virtchnl_ipsec_status {
+ u32 status;
+};
+
+struct virtchnl_ipsec_resp {
+ u32 resp;
+};
+
+/* Internal message descriptor for VF <-> IPsec communication */
+struct inline_ipsec_msg {
+ u16 ipsec_opcode;
+ u16 req_id;
+
+ union {
+ /* IPsec request */
+ struct virtchnl_ipsec_sa_cfg sa_cfg[0];
+ struct virtchnl_ipsec_sp_cfg sp_cfg[0];
+ struct virtchnl_ipsec_sa_update sa_update[0];
+ struct virtchnl_ipsec_sa_destroy sa_destroy[0];
+ struct virtchnl_ipsec_sp_destroy sp_destroy[0];
+
+ /* IPsec response */
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp[0];
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp[0];
+ struct virtchnl_ipsec_cap ipsec_cap[0];
+ struct virtchnl_ipsec_status ipsec_status[0];
+ /* response to del_sa, del_sp, update_sa */
+ struct virtchnl_ipsec_resp ipsec_resp[0];
+
+ /* IPsec event (no req_id is required) */
+ struct virtchnl_ipsec_event event[0];
+
+ /* Reserved */
+ struct virtchnl_ipsec_sa_read sa_read[0];
+ } ipsec_data;
+};
+
+static inline u16 virtchnl_inline_ipsec_val_msg_len(u16 opcode)
+{
+ u16 valid_len = sizeof(struct inline_ipsec_msg);
+
+ switch (opcode) {
+ case INLINE_IPSEC_OP_GET_CAP:
+ case INLINE_IPSEC_OP_GET_STATUS:
+ break;
+ case INLINE_IPSEC_OP_SA_CREATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_cfg);
+ break;
+ case INLINE_IPSEC_OP_SP_CREATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sp_cfg);
+ break;
+ case INLINE_IPSEC_OP_SA_UPDATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_update);
+ break;
+ case INLINE_IPSEC_OP_SA_DESTROY:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_destroy);
+ break;
+ case INLINE_IPSEC_OP_SP_DESTROY:
+ valid_len += sizeof(struct virtchnl_ipsec_sp_destroy);
+ break;
+	/* Only for msg length calculation of the response to VF in case of
+	 * inline ipsec failure.
+	 */
+ case INLINE_IPSEC_OP_RESP:
+ valid_len += sizeof(struct virtchnl_ipsec_resp);
+ break;
+ default:
+ valid_len = 0;
+ break;
+ }
+
+ return valid_len;
+}
+
+#endif /* _VIRTCHNL_INLINE_IPSEC_H_ */
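A hedged sketch of the sender side: the length helper above sizes the request buffer, and the flexible-array members in struct inline_ipsec_msg carry the opcode-specific payload. The example_build_sa_create() name and the rte_zmalloc()/rte_memcpy() calls are assumptions; the rest comes from this header.

#include <rte_malloc.h>
#include <rte_memcpy.h>

/* Allocate and fill an SA-create request; *msg_len is the size to send. */
static struct inline_ipsec_msg *
example_build_sa_create(const struct virtchnl_ipsec_sa_cfg *cfg, u16 req_id,
			u16 *msg_len)
{
	u16 len = virtchnl_inline_ipsec_val_msg_len(INLINE_IPSEC_OP_SA_CREATE);
	struct inline_ipsec_msg *msg = rte_zmalloc(NULL, len, 0);

	if (msg == NULL)
		return NULL;

	msg->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
	msg->req_id = req_id;
	rte_memcpy(msg->ipsec_data.sa_cfg, cfg, sizeof(*cfg));

	*msg_len = len;
	return msg;
}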
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC 2/9] net/idpf/base: add OS specific implementation
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-07 7:07 ` [RFC 1/9] net/idpf/base: introduce base code Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 3/9] net/idpf: support device initialization Junfeng Guo
` (6 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add some macro definitions and small helper functions which are
specific to DPDK.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/base/iecm_osdep.h | 365 +++++++++++++++++++++++++++++
1 file changed, 365 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_osdep.h
diff --git a/drivers/net/idpf/base/iecm_osdep.h b/drivers/net/idpf/base/iecm_osdep.h
new file mode 100644
index 0000000000..60e21fbc1b
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_osdep.h
@@ -0,0 +1,365 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_OSDEP_H_
+#define _IECM_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../idpf_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef int16_t s16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+typedef int64_t s64;
+
+typedef enum iecm_status iecm_status;
+typedef struct iecm_lock iecm_lock;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x) ((x) & 0xFFFF)
+#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN 6
+#endif
+
+#ifndef __le16
+#define __le16 uint16_t
+#endif
+#ifndef __le32
+#define __le32 uint32_t
+#endif
+#ifndef __le64
+#define __le64 uint64_t
+#endif
+#ifndef __be16
+#define __be16 uint16_t
+#endif
+#ifndef __be32
+#define __be32 uint32_t
+#endif
+#ifndef __be64
+#define __be64 uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((__unused__))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((__unused__))
+#endif
+#ifndef __packed
+#define __packed __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1ULL << (a))
+#endif
+
+#define FALSE 0
+#define TRUE 1
+#define false 0
+#define true 1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define iecm_debug(h, m, s, ...) \
+ do { \
+ if (((m) & (h)->debug_mask)) \
+ PMD_DRV_LOG_RAW(DEBUG, "iecm %02x.%x " s, \
+ (h)->bus.device, (h)->bus.func, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#define iecm_info(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_warn(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_debug_array(hw, type, rowsize, groupsize, buf, len) \
+ do { \
+ struct iecm_hw *hw_l = hw; \
+ u16 len_l = len; \
+ u8 *buf_l = buf; \
+ int i; \
+ for (i = 0; i < len_l; i += 8) \
+ iecm_debug(hw_l, type, \
+ "0x%04X 0x%016"PRIx64"\n", \
+ i, *((u64 *)((buf_l) + i))); \
+ } while (0)
+#define iecm_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF iecm_snprintf
+#endif
+
+#define IECM_PCI_REG(reg) rte_read32(reg)
+#define IECM_PCI_REG_ADDR(a, reg) \
+ ((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define IECM_PCI_REG64(reg) rte_read64(reg)
+#define IECM_PCI_REG_ADDR64(a, reg) \
+ ((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
+
+#define iecm_wmb() rte_io_wmb()
+#define iecm_rmb() rte_io_rmb()
+#define iecm_mb() rte_io_mb()
+
+static inline uint32_t iecm_read_addr(volatile void *addr)
+{
+ return rte_le_to_cpu_32(IECM_PCI_REG(addr));
+}
+
+static inline uint64_t iecm_read_addr64(volatile void *addr)
+{
+ return rte_le_to_cpu_64(IECM_PCI_REG64(addr));
+}
+
+#define IECM_PCI_REG_WRITE(reg, value) \
+ rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define IECM_PCI_REG_WRITE64(reg, value) \
+ rte_write64((rte_cpu_to_le_64(value)), reg)
+
+#define IECM_READ_REG(hw, reg) iecm_read_addr(IECM_PCI_REG_ADDR((hw), (reg)))
+#define IECM_WRITE_REG(hw, reg, value) \
+ IECM_PCI_REG_WRITE(IECM_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) iecm_read_addr(IECM_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+ IECM_PCI_REG_WRITE(IECM_PCI_REG_ADDR((a), (reg)), (value))
+#define div64_long(n, d) ((n) / (d))
+#define rd64(a, reg) iecm_read_addr64(IECM_PCI_REG_ADDR64((a), (reg)))
+
+#define BITS_PER_BYTE 8
+
+/* memory allocation tracking */
+struct iecm_dma_mem {
+ void *va;
+ u64 pa;
+ u32 size;
+ const void *zone;
+} __attribute__((packed));
+
+struct iecm_virt_mem {
+ void *va;
+ u32 size;
+} __attribute__((packed));
+
+#define iecm_malloc(h, s) rte_zmalloc(NULL, s, 0)
+#define iecm_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define iecm_free(h, m) rte_free(m)
+
+#define iecm_memset(a, b, c, d) memset((a), (b), (c))
+#define iecm_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define iecm_memdup(a, b, c, d) rte_memcpy(iecm_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+/* SW spinlock */
+struct iecm_lock {
+ rte_spinlock_t spinlock;
+};
+
+static inline void
+iecm_init_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+iecm_acquire_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+iecm_release_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+iecm_destroy_lock(__attribute__((unused)) struct iecm_lock *sp)
+{
+}
+
+struct iecm_hw;
+
+static inline void *
+iecm_alloc_dma_mem(__attribute__((unused)) struct iecm_hw *hw,
+ struct iecm_dma_mem *mem, u64 size)
+{
+ const struct rte_memzone *mz = NULL;
+ char z_name[RTE_MEMZONE_NAMESIZE];
+
+ if (!mem)
+ return NULL;
+
+ snprintf(z_name, sizeof(z_name), "iecm_dma_%"PRIu64, rte_rand());
+ mz = rte_memzone_reserve_aligned(z_name, size, SOCKET_ID_ANY,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_PGSIZE_4K);
+ if (!mz)
+ return NULL;
+
+ mem->size = size;
+ mem->va = mz->addr;
+ mem->pa = mz->iova;
+ mem->zone = (const void *)mz;
+ memset(mem->va, 0, size);
+
+ return mem->va;
+}
+
+static inline void
+iecm_free_dma_mem(__attribute__((unused)) struct iecm_hw *hw,
+ struct iecm_dma_mem *mem)
+{
+ rte_memzone_free((const struct rte_memzone *)mem->zone);
+ mem->size = 0;
+ mem->va = NULL;
+ mem->pa = 0;
+}
+
+static inline u8
+iecm_hweight8(u32 num)
+{
+ u8 bits = 0;
+ u32 i;
+
+ for (i = 0; i < 8; i++) {
+ bits += (u8)(num & 0x1);
+ num >>= 1;
+ }
+
+ return bits;
+}
+
+static inline u8
+iecm_hweight32(u32 num)
+{
+ u8 bits = 0;
+ u32 i;
+
+ for (i = 0; i < 32; i++) {
+ bits += (u8)(num & 0x1);
+ num >>= 1;
+ }
+
+ return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define iecm_usec_delay(x) rte_delay_us(x)
+#define iecm_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#ifndef IECM_DBG_TRACE
+#define IECM_DBG_TRACE BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef IECM_INTEL_VENDOR_ID
+#define IECM_INTEL_VENDOR_ID 0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+ ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+ ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr) \
+ ((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+ (((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+ ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+ ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#ifndef LIST_HEAD_TYPE
+#define LIST_HEAD_TYPE(list_name, type) LIST_HEAD(list_name, type)
+#endif
+
+#ifndef LIST_ENTRY_TYPE
+#define LIST_ENTRY_TYPE(type) LIST_ENTRY(type)
+#endif
+
+#ifndef LIST_FOR_EACH_ENTRY_SAFE
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, temp, head, entry_type, list) \
+ LIST_FOREACH(pos, head, list)
+
+#endif
+
+#ifndef LIST_FOR_EACH_ENTRY
+#define LIST_FOR_EACH_ENTRY(pos, head, entry_type, list) \
+ LIST_FOREACH(pos, head, list)
+
+#endif
+
+#endif /* _IECM_OSDEP_H_ */
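A brief usage sketch of the DMA helpers above; the 4 KB ring size and the example_ring_alloc() name are arbitrary illustrations, and only the iecm_* symbols come from this file.

#include <errno.h>

/* Reserve an IOVA-contiguous, zeroed region for a ring and release it. */
static int
example_ring_alloc(struct iecm_hw *hw, struct iecm_dma_mem *ring)
{
	if (iecm_alloc_dma_mem(hw, ring, 4096) == NULL)
		return -ENOMEM;

	/* ring->va is the host mapping; ring->pa is the bus address that
	 * would be programmed into the device.
	 */

	iecm_free_dma_mem(hw, ring);
	return 0;
}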
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC 3/9] net/idpf: support device initialization
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-07 7:07 ` [RFC 1/9] net/idpf/base: introduce base code Junfeng Guo
2022-05-07 7:07 ` [RFC 2/9] net/idpf/base: add OS specific implementation Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 4/9] net/idpf: support queue ops Junfeng Guo
` (5 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing
Cc: dev, junfeng.guo, Xiaoyun Li, Xiao Wang
Support dev init and add dev ops for IDPF PMD:
dev_configure
dev_start
dev_stop
dev_close
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 652 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 200 ++++++++++
drivers/net/idpf/idpf_logs.h | 38 ++
drivers/net/idpf/idpf_vchnl.c | 465 +++++++++++++++++++++++
drivers/net/idpf/meson.build | 18 +
drivers/net/idpf/version.map | 3 +
drivers/net/meson.build | 1 +
7 files changed, 1377 insertions(+)
create mode 100644 drivers/net/idpf/idpf_ethdev.c
create mode 100644 drivers/net/idpf/idpf_ethdev.h
create mode 100644 drivers/net/idpf/idpf_logs.h
create mode 100644 drivers/net/idpf/idpf_vchnl.c
create mode 100644 drivers/net/idpf/meson.build
create mode 100644 drivers/net/idpf/version.map
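For orientation, the four ops added by this patch are reached through the standard ethdev API. A minimal application-side sketch follows; it is not part of the patch and assumes the queue-setup support added later in this series so that start can succeed.

#include <rte_ethdev.h>

/* Configure, start, then shut down a port backed by this PMD. */
static int
example_port_cycle(uint16_t port_id, const struct rte_eth_conf *conf)
{
	int ret = rte_eth_dev_configure(port_id, 1, 1, conf); /* idpf_dev_configure */

	if (ret != 0)
		return ret;

	ret = rte_eth_dev_start(port_id);	/* idpf_dev_start */
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_stop(port_id);	/* idpf_dev_stop */
	if (ret != 0)
		return ret;

	return rte_eth_dev_close(port_id);	/* idpf_dev_close */
}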
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
new file mode 100644
index 0000000000..e34165a87d
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -0,0 +1,652 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "idpf_ethdev.h"
+
+#define VPORT_NUM "vport_num"
+
+struct idpf_adapter *adapter;
+uint16_t vport_num = 1;
+
+static const char * const idpf_valid_args[] = {
+ VPORT_NUM,
+ NULL
+};
+
+static int idpf_dev_configure(struct rte_eth_dev *dev);
+static int idpf_dev_start(struct rte_eth_dev *dev);
+static int idpf_dev_stop(struct rte_eth_dev *dev);
+static int idpf_dev_close(struct rte_eth_dev *dev);
+
+static const struct eth_dev_ops idpf_eth_dev_ops = {
+ .dev_configure = idpf_dev_configure,
+ .dev_start = idpf_dev_start,
+ .dev_stop = idpf_dev_stop,
+ .dev_close = idpf_dev_close,
+};
+
+
+static int
+idpf_init_vport_req_info(struct rte_eth_dev *dev)
+{
+ struct virtchnl2_create_vport *vport_info;
+ uint16_t idx = adapter->next_vport_idx;
+
+ if (!adapter->vport_req_info[idx]) {
+ adapter->vport_req_info[idx] = rte_zmalloc(NULL,
+ sizeof(struct virtchnl2_create_vport), 0);
+ if (!adapter->vport_req_info[idx]) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info");
+ return -1;
+ }
+ }
+
+ vport_info =
+ (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+
+ vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+
+ return 0;
+}
+
+static uint16_t
+idpf_get_next_vport_idx(struct idpf_vport **vports, uint16_t max_vport_nb,
+ uint16_t cur_vport_idx)
+{
+ uint16_t vport_idx;
+ uint16_t i;
+
+	if (cur_vport_idx + 1 < max_vport_nb && !vports[cur_vport_idx + 1]) {
+ vport_idx = cur_vport_idx + 1;
+ return vport_idx;
+ }
+
+	for (i = 0; i < max_vport_nb; i++) {
+		if (!vports[i])
+			break;
+	}
+
+ if (i == max_vport_nb)
+ vport_idx = IDPF_INVALID_VPORT_IDX;
+ else
+ vport_idx = i;
+
+ return vport_idx;
+}
+
+#ifndef IDPF_RSS_KEY_LEN
+#define IDPF_RSS_KEY_LEN 52
+#endif
+
+static int
+idpf_init_vport(struct rte_eth_dev *dev)
+{
+ uint16_t idx = adapter->next_vport_idx;
+ struct virtchnl2_create_vport *vport_info =
+ (struct virtchnl2_create_vport *)adapter->vport_recv_info[idx];
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int i;
+
+ vport->adapter = adapter;
+ vport->vport_id = vport_info->vport_id;
+ vport->txq_model = vport_info->txq_model;
+ vport->rxq_model = vport_info->rxq_model;
+ vport->num_tx_q = vport_info->num_tx_q;
+ vport->num_tx_complq = vport_info->num_tx_complq;
+ vport->num_rx_q = vport_info->num_rx_q;
+ vport->num_rx_bufq = vport_info->num_rx_bufq;
+ vport->max_mtu = vport_info->max_mtu;
+ rte_memcpy(vport->default_mac_addr,
+ vport_info->default_mac_addr, ETH_ALEN);
+ vport->rss_algorithm = vport_info->rss_algorithm;
+ vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+ vport_info->rss_key_size);
+ vport->rss_lut_size = vport_info->rss_lut_size;
+ vport->sw_idx = idx;
+
+ for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+ if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport->chunks_info.tx_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.tx_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.tx_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport->chunks_info.rx_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.rx_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.rx_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) {
+ vport->chunks_info.tx_compl_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.tx_compl_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.tx_compl_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) {
+ vport->chunks_info.rx_buf_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.rx_buf_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.rx_buf_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ }
+ }
+
+ adapter->vports[idx] = vport;
+ adapter->cur_vport_nb++;
+ adapter->next_vport_idx = idpf_get_next_vport_idx(adapter->vports,
+ adapter->max_vport_nb, idx);
+ if (adapter->next_vport_idx == IDPF_INVALID_VPORT_IDX) {
+ PMD_INIT_LOG(ERR, "Failed to get next vport id");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+idpf_dev_configure(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |=
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ ret = idpf_init_vport_req_info(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+ return ret;
+ }
+
+ ret = idpf_create_vport(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create vport.");
+ return ret;
+ }
+
+ ret = idpf_init_vport(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init vports.");
+ return ret;
+ }
+
+ return ret;
+}
+
+static int
+idpf_dev_start(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ vport->stopped = 0;
+
+ if (idpf_ena_dis_vport(vport, true)) {
+ PMD_DRV_LOG(ERR, "Failed to enable vport");
+ goto err_vport;
+ }
+
+ return 0;
+
+err_vport:
+ return -1;
+}
+
+static int
+idpf_dev_stop(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (vport->stopped == 1)
+ return 0;
+
+ if (idpf_ena_dis_vport(vport, false))
+ PMD_DRV_LOG(ERR, "disable vport failed");
+
+ vport->stopped = 1;
+ dev->data->dev_started = 0;
+
+ return 0;
+}
+
+static int
+idpf_dev_close(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ idpf_dev_stop(dev);
+ idpf_destroy_vport(vport);
+
+ return 0;
+}
+
+static void
+idpf_reset_pf(struct iecm_hw *hw)
+{
+ uint32_t reg;
+
+ reg = IECM_READ_REG(hw, PFGEN_CTRL);
+ IECM_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct iecm_hw *hw)
+{
+ uint32_t reg;
+ int i;
+
+ for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+ reg = IECM_READ_REG(hw, PFGEN_RSTAT);
+ if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+ return 0;
+ rte_delay_ms(1000);
+ }
+
+ PMD_INIT_LOG(ERR, "IDPF reset timeout");
+ return -EBUSY;
+}
+
+#define CTLQ_NUM 2
+static int
+idpf_init_mbx(struct iecm_hw *hw)
+{
+ struct iecm_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+ {
+ .type = IECM_CTLQ_TYPE_MAILBOX_TX,
+ .id = IDPF_CTLQ_ID,
+ .len = IDPF_CTLQ_LEN,
+ .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+ .reg = {
+ .head = PF_FW_ATQH,
+ .tail = PF_FW_ATQT,
+ .len = PF_FW_ATQLEN,
+ .bah = PF_FW_ATQBAH,
+ .bal = PF_FW_ATQBAL,
+ .len_mask = PF_FW_ATQLEN_ATQLEN_M,
+ .len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+ .head_mask = PF_FW_ATQH_ATQH_M,
+ }
+ },
+ {
+ .type = IECM_CTLQ_TYPE_MAILBOX_RX,
+ .id = IDPF_CTLQ_ID,
+ .len = IDPF_CTLQ_LEN,
+ .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+ .reg = {
+ .head = PF_FW_ARQH,
+ .tail = PF_FW_ARQT,
+ .len = PF_FW_ARQLEN,
+ .bah = PF_FW_ARQBAH,
+ .bal = PF_FW_ARQBAL,
+ .len_mask = PF_FW_ARQLEN_ARQLEN_M,
+ .len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+ .head_mask = PF_FW_ARQH_ARQH_M,
+ }
+ }
+ };
+ struct iecm_ctlq_info *ctlq;
+ int ret = 0;
+
+ ret = iecm_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+ if (ret)
+ return ret;
+
+ LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+ struct iecm_ctlq_info, cq_list) {
+ if (ctlq->q_id == IDPF_CTLQ_ID && ctlq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ hw->asq = ctlq;
+ if (ctlq->q_id == IDPF_CTLQ_ID && ctlq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX)
+ hw->arq = ctlq;
+ }
+
+ if (!hw->asq || !hw->arq) {
+ iecm_ctlq_deinit(hw);
+ ret = -ENOENT;
+ }
+
+ return ret;
+}
+
+static int
+idpf_adapter_init(struct rte_eth_dev *dev)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ struct rte_pci_device *pci_dev = IDPF_DEV_TO_PCI(dev);
+ int ret = 0;
+
+ if (adapter->initialized)
+ return 0;
+
+ hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+ hw->hw_addr_len = pci_dev->mem_resource[0].len;
+ hw->back = adapter;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->device_id = pci_dev->id.device_id;
+ hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+ idpf_reset_pf(hw);
+ ret = idpf_check_pf_reset_done(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "IDPF is still resetting");
+ goto err;
+ }
+
+ ret = idpf_init_mbx(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init mailbox");
+ goto err;
+ }
+
+ adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp", IDPF_DFLT_MBX_BUF_SIZE, 0);
+ if (!adapter->mbx_resp) {
+ PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+ goto err_mbx;
+ }
+
+ if (idpf_check_api_version(adapter)) {
+ PMD_INIT_LOG(ERR, "Failed to check api version");
+ goto err_api;
+ }
+
+ adapter->caps = rte_zmalloc("idpf_caps",
+ sizeof(struct virtchnl2_get_capabilities), 0);
+ if (!adapter->caps) {
+ PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
+ goto err_api;
+ }
+
+ if (idpf_get_caps(adapter)) {
+ PMD_INIT_LOG(ERR, "Failed to get capabilities");
+ goto err_caps;
+ }
+
+ adapter->max_vport_nb = adapter->caps->max_vports;
+
+ adapter->vport_req_info = rte_zmalloc("vport_req_info",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vport_req_info),
+ 0);
+ if (!adapter->vport_req_info) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info memory");
+ goto err_caps;
+ }
+
+ adapter->vport_recv_info = rte_zmalloc("vport_recv_info",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vport_recv_info),
+ 0);
+ if (!adapter->vport_recv_info) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_recv_info memory");
+ goto err_vport_recv_info;
+ }
+
+ adapter->vports = rte_zmalloc("vports",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vports),
+ 0);
+ if (!adapter->vports) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+ goto err_vports;
+ }
+
+ adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
+ sizeof(struct virtchnl2_config_rx_queues)) /
+ sizeof(struct virtchnl2_rxq_info);
+ adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
+ sizeof(struct virtchnl2_config_tx_queues)) /
+ sizeof(struct virtchnl2_txq_info);
+
+ adapter->cur_vport_nb = 0;
+ adapter->next_vport_idx = 0;
+ adapter->initialized = true;
+
+ return ret;
+
+err_vports:
+ rte_free(adapter->vport_recv_info);
+ adapter->vport_recv_info = NULL;
+err_vport_recv_info:
+ rte_free(adapter->vport_req_info);
+ adapter->vport_req_info = NULL;
+err_caps:
+ rte_free(adapter->caps);
+ adapter->caps = NULL;
+err_api:
+ rte_free(adapter->mbx_resp);
+ adapter->mbx_resp = NULL;
+err_mbx:
+ iecm_ctlq_deinit(hw);
+err:
+ return -1;
+}
+
+
+static int
+idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void *init_params)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->dev_ops = &idpf_eth_dev_ops;
+
+ ret = idpf_adapter_init(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init adapter.");
+ return ret;
+ }
+
+ dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+
+ vport->dev_data = dev->data;
+
+ dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+ ret = -ENOMEM;
+ goto err;
+ }
+
+err:
+ return ret;
+}
+
+static int
+idpf_dev_uninit(struct rte_eth_dev *dev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return -EPERM;
+
+ idpf_dev_close(dev);
+
+ return 0;
+}
+
+static const struct rte_pci_id pci_id_idpf_map[] = {
+ { RTE_PCI_DEVICE(IECM_INTEL_VENDOR_ID, IECM_DEV_ID_PF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idpf_handle_vport_num(const char *key, const char *value, void *args)
+{
+ int *i = (int *)args;
+ char *end;
+ int num;
+
+ num = strtoul(value, &end, 10);
+
+ if (num <= 0) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", value must be greater than 0",
+ value, key);
+ return -1;
+ }
+
+ *i = num;
+ return 0;
+}
+
+static int
+idpf_parse_vport_num(struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+ const char *key = "vport_num";
+ int ret = 0;
+
+ if (devargs == NULL)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+ if (kvlist == NULL)
+ return 0;
+
+ ret = rte_kvargs_process(kvlist, key, &idpf_handle_vport_num,
+ &vport_num);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
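+
+/*
+ * Devargs usage sketch (the PCI address below is a placeholder): the number
+ * of ethdev ports created at probe time comes from the "vport_num" devarg,
+ * e.g. on the EAL command line:
+ *
+ *     -a 0000:af:00.0,vport_num=2
+ *
+ * Each port is then probed below as "idpf_vport_<i>".
+ */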
+
+static int
+idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ char name[RTE_ETH_NAME_MAX_LEN];
+ int i, retval;
+
+ retval = idpf_parse_vport_num(pci_dev->device.devargs);
+ if (retval)
+ return retval;
+
+ if (!adapter) {
+ adapter = (struct idpf_adapter *)rte_zmalloc("idpf_adapter",
+ sizeof(struct idpf_adapter), 0);
+ if (!adapter) {
+ PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+ return -1;
+ }
+ }
+
+ for (i = 0; i < vport_num; i++) {
+ snprintf(name, sizeof(name), "idpf_vport_%d", i);
+ retval = rte_eth_dev_create(&pci_dev->device, name,
+ sizeof(struct idpf_vport),
+ NULL, NULL, idpf_dev_init,
+ NULL);
+ if (retval)
+ PMD_DRV_LOG(ERR, "failed to creat vport %d", i);
+ }
+
+ return 0;
+}
+
+static void
+idpf_adapter_rel(struct idpf_adapter *adapter)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ int i;
+
+ iecm_ctlq_deinit(hw);
+
+ if (adapter->caps) {
+ rte_free(adapter->caps);
+ adapter->caps = NULL;
+ }
+
+ if (adapter->mbx_resp) {
+ rte_free(adapter->mbx_resp);
+ adapter->mbx_resp = NULL;
+ }
+
+ if (adapter->vport_req_info) {
+ for (i = 0; i < adapter->max_vport_nb; i++) {
+ if (adapter->vport_req_info[i]) {
+ rte_free(adapter->vport_req_info[i]);
+ adapter->vport_req_info[i] = NULL;
+ }
+ }
+ rte_free(adapter->vport_req_info);
+ adapter->vport_req_info = NULL;
+ }
+
+ if (adapter->vport_recv_info) {
+ for (i = 0; i < adapter->max_vport_nb; i++) {
+ if (adapter->vport_recv_info[i]) {
+ rte_free(adapter->vport_recv_info[i]);
+ adapter->vport_recv_info[i] = NULL;
+ }
+ }
+ }
+
+ if (adapter->vports) {
+ /* Needn't free adapter->vports[i] since it's private data */
+ rte_free(adapter->vports);
+ adapter->vports = NULL;
+ }
+}
+
+static int
+idpf_pci_remove(struct rte_pci_device *pci_dev)
+{
+ if (adapter) {
+ idpf_adapter_rel(adapter);
+ rte_free(adapter);
+ adapter = NULL;
+ }
+
+ return rte_eth_dev_pci_generic_remove(pci_dev, idpf_dev_uninit);
+}
+
+static struct rte_pci_driver rte_idpf_pmd = {
+ .id_table = pci_id_idpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = idpf_pci_probe,
+ .remove = idpf_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
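+
+/*
+ * Runtime log level sketch: assuming the usual "pmd.net.<driver>" default
+ * log type assigned by the drivers build system, the two log types above
+ * can be raised at startup with EAL options such as:
+ *
+ *     --log-level=pmd.net.idpf.init:debug
+ *     --log-level=pmd.net.idpf.driver:debug
+ */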
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
new file mode 100644
index 0000000000..762d5ff66a
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_ETHDEV_H_
+#define _IDPF_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_bus_pci.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+
+#include "base/iecm_osdep.h"
+#include "base/iecm_type.h"
+#include "base/iecm_devids.h"
+#include "base/iecm_lan_txrx.h"
+#include "base/iecm_lan_pf_regs.h"
+#include "base/virtchnl.h"
+#include "base/virtchnl2.h"
+
+#define IDPF_INVALID_VPORT_IDX 0xffff
+#define IDPF_TX_COMPLQ_PER_GRP 1
+#define IDPF_RX_BUFQ_PER_GRP 2
+
+#define IDPF_CTLQ_ID -1
+#define IDPF_CTLQ_LEN 64
+#define IDPF_DFLT_MBX_BUF_SIZE 4096
+
+#define IDPF_MAX_NUM_QUEUES 256
+#define IDPF_MIN_BUF_SIZE 1024
+#define IDPF_MAX_FRAME_SIZE 9728
+
+#define IDPF_NUM_MACADDR_MAX 64
+
+#define IDPF_MAX_PKT_TYPE 1024
+
+#define IDPF_VLAN_TAG_SIZE 4
+#define IDPF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN 6
+#endif
+
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+ IDPF_MSG_ERR = -1, /* Error when accessing admin queue */
+ IDPF_MSG_NON, /* Read nothing from admin queue */
+ IDPF_MSG_SYS, /* Read system msg from admin queue */
+ IDPF_MSG_CMD, /* Read async command result */
+};
+
+struct idpf_chunks_info {
+ uint32_t tx_start_qid;
+ uint32_t rx_start_qid;
+ /* Valid only if split queue model */
+ uint32_t tx_compl_start_qid;
+ uint32_t rx_buf_start_qid;
+
+ uint64_t tx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint64_t rx_qtail_start;
+ uint32_t rx_qtail_spacing;
+ uint64_t tx_compl_qtail_start;
+ uint32_t tx_compl_qtail_spacing;
+ uint64_t rx_buf_qtail_start;
+ uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+ struct idpf_adapter *adapter; /* Backreference to associated adapter */
+ uint16_t vport_id;
+ uint32_t txq_model;
+ uint32_t rxq_model;
+ uint16_t num_tx_q;
+ /* valid only if txq_model is split Q */
+ uint16_t num_tx_complq;
+ uint16_t num_rx_q;
+ /* valid only if rxq_model is split Q */
+ uint16_t num_rx_bufq;
+
+ uint16_t max_mtu;
+ uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+ enum virtchnl_rss_algorithm rss_algorithm;
+ uint16_t rss_key_size;
+ uint16_t rss_lut_size;
+
+ uint16_t sw_idx; /* SW idx */
+
+ struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+
+ /* RSS info */
+ uint32_t *rss_lut;
+ uint8_t *rss_key;
+ uint64_t rss_hf;
+
+ /* Chunk info */
+ struct idpf_chunks_info chunks_info;
+
+ /* Event from ipf */
+ bool link_up;
+ uint32_t link_speed;
+
+ bool stopped;
+};
+
+struct idpf_adapter {
+ struct iecm_hw hw;
+
+ struct virtchnl_version_info virtchnl_version;
+ struct virtchnl2_get_capabilities *caps;
+
+ volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+ uint32_t cmd_retval; /* return value of the cmd response from ipf */
+ uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+
+ uint32_t txq_model;
+ uint32_t rxq_model;
+
+ /* Vport info */
+ uint8_t **vport_req_info;
+ uint8_t **vport_recv_info;
+ struct idpf_vport **vports;
+ uint16_t max_vport_nb;
+ uint16_t cur_vport_nb;
+ uint16_t next_vport_idx;
+
+ /* Max config queue number per VC message */
+ uint32_t max_rxq_per_msg;
+ uint32_t max_txq_per_msg;
+
+ uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+ bool initialized;
+ bool stopped;
+};
+
+extern struct idpf_adapter *adapter;
+
+#define IDPF_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+ uint32_t ops;
+ uint8_t *in_args; /* buffer for sending */
+ uint32_t in_args_size; /* buffer size for sending */
+ uint8_t *out_buffer; /* buffer for response */
+ uint32_t out_size; /* buffer size for response */
+};
+
+/* Notify that the current command is done. Call only after
+ * _atomic_set_cmd() has completed successfully.
+ */
+static inline void
+_notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+ adapter->cmd_retval = msg_ret;
+ rte_wmb();
+ adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+}
+
+/* Clear the current command. Call only after _atomic_set_cmd()
+ * has completed successfully.
+ */
+static inline void
+_clear_cmd(struct idpf_adapter *adapter)
+{
+ rte_wmb();
+ adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+ adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether there is a pending cmd in execution. If none, set the new command. */
+static inline int
+_atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
+{
+ int ret = rte_atomic32_cmpset(&adapter->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+ if (!ret)
+ PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+ return !ret;
+}
+
+void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int idpf_check_api_version(struct idpf_adapter *adapter);
+int idpf_get_caps(struct idpf_adapter *adapter);
+int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
+int idpf_destroy_vport(struct idpf_vport *vport);
+
+int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
+
+#endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
new file mode 100644
index 0000000000..906aae8463
--- /dev/null
+++ b/drivers/net/idpf/idpf_logs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_LOGS_H_
+#define _IDPF_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_logtype_init;
+extern int idpf_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, idpf_logtype_init, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, idpf_logtype_driver, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
+
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _IDPF_LOGS_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
new file mode 100644
index 0000000000..77d77b82d8
--- /dev/null
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_dev.h>
+
+#include "idpf_ethdev.h"
+
+#include "base/iecm_prototype.h"
+
+#define IDPF_CTLQ_LEN 64
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+ struct iecm_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+ uint16_t num_q_msg = IDPF_CTLQ_LEN;
+ struct iecm_dma_mem *dma_mem;
+ int err = 0;
+ uint32_t i;
+
+ for (i = 0; i < 10; i++) {
+ err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+ msleep(20);
+ if (num_q_msg)
+ break;
+ }
+ if (err)
+ goto error;
+
+ /* Empty queue is not an error */
+ for (i = 0; i < num_q_msg; i++) {
+ dma_mem = q_msg[i]->ctx.indirect.payload;
+ if (dma_mem) {
+ iecm_free_dma_mem(&adapter->hw, dma_mem);
+ rte_free(dma_mem);
+ }
+ rte_free(q_msg[i]);
+ }
+
+error:
+ return err;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, enum virtchnl_ops op,
+ uint16_t msg_size, uint8_t *msg)
+{
+ struct iecm_ctlq_msg *ctlq_msg;
+ struct iecm_dma_mem *dma_mem;
+ int err = 0;
+
+ err = idpf_vc_clean(adapter);
+ if (err)
+ goto err;
+
+ ctlq_msg = (struct iecm_ctlq_msg *)rte_zmalloc(NULL,
+ sizeof(struct iecm_ctlq_msg), 0);
+ if (!ctlq_msg) {
+ err = -ENOMEM;
+ goto err;
+ }
+
+ dma_mem = (struct iecm_dma_mem *)rte_zmalloc(NULL,
+ sizeof(struct iecm_dma_mem), 0);
+ if (!dma_mem) {
+ err = -ENOMEM;
+ goto dma_mem_error;
+ }
+
+ dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+ iecm_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+ if (!dma_mem->va) {
+ err = -ENOMEM;
+ goto dma_alloc_error;
+ }
+
+ memcpy(dma_mem->va, msg, msg_size);
+
+ ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_pf;
+ ctlq_msg->func_id = 0;
+ ctlq_msg->data_len = msg_size;
+ ctlq_msg->cookie.mbx.chnl_opcode = op;
+ ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+ ctlq_msg->ctx.indirect.payload = dma_mem;
+
+ err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+ if (err)
+ goto send_error;
+
+ return err;
+
+send_error:
+ iecm_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+ rte_free(dma_mem);
+dma_mem_error:
+ rte_free(ctlq_msg);
+err:
+ return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_ipf(struct idpf_adapter *adapter, uint16_t buf_len,
+ uint8_t *buf)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ struct iecm_arq_event_info event;
+ enum idpf_vc_result result = IDPF_MSG_NON;
+ enum virtchnl_ops opcode;
+ uint16_t pending = 1;
+ int ret;
+
+ event.buf_len = buf_len;
+ event.msg_buf = buf;
+ ret = iecm_clean_arq_element(hw, &event, &pending);
+ if (ret) {
+ PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+ if (ret != IECM_ERR_CTLQ_NO_WORK)
+ result = IDPF_MSG_ERR;
+ return result;
+ }
+
+ opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+ adapter->cmd_retval =
+ (enum virtchnl_status_code)rte_le_to_cpu_32(event.desc.cookie_low);
+
+ PMD_DRV_LOG(DEBUG, "CQ from ipf carries opcode %u, retval %d",
+ opcode, adapter->cmd_retval);
+
+ if (opcode == VIRTCHNL2_OP_EVENT) {
+ struct virtchnl2_event *ve =
+ (struct virtchnl2_event *)event.msg_buf;
+
+ result = IDPF_MSG_SYS;
+ switch (ve->event) {
+ case VIRTCHNL2_EVENT_LINK_CHANGE:
+ /* TBD */
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "%s: Unknown event %d from ipf",
+ __func__, ve->event);
+ break;
+ }
+ } else {
+ /* async reply msg for a previously issued command */
+ result = IDPF_MSG_CMD;
+ if (opcode != adapter->pend_cmd) {
+ PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+ adapter->pend_cmd, opcode);
+ result = IDPF_MSG_ERR;
+ }
+ }
+
+ return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS 10
+
+static int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+ enum idpf_vc_result result;
+ int err = 0;
+ int i = 0;
+ int ret;
+
+ if (_atomic_set_cmd(adapter, args->ops))
+ return -1;
+
+ ret = idpf_send_vc_msg(adapter, args->ops,
+ args->in_args_size,
+ args->in_args);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+ _clear_cmd(adapter);
+ return ret;
+ }
+
+ switch (args->ops) {
+ case VIRTCHNL_OP_VERSION:
+ case VIRTCHNL2_OP_GET_CAPS:
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ /* for init virtchnl ops, need to poll the response */
+ do {
+ result = idpf_read_msg_from_ipf(adapter,
+ args->out_size,
+ args->out_buffer);
+ if (result == IDPF_MSG_CMD)
+ break;
+ rte_delay_ms(ASQ_DELAY_MS);
+ } while (i++ < MAX_TRY_TIMES);
+ if (i >= MAX_TRY_TIMES ||
+ adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+ adapter->cmd_retval, args->ops);
+ }
+ _clear_cmd(adapter);
+ break;
+ default:
+ /* For other virtchnl ops in running time,
+ * wait for the cmd done flag.
+ */
+ do {
+ if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+ break;
+ rte_delay_ms(ASQ_DELAY_MS);
+ /* If no msg is read, or a sys event is read, continue */
+ } while (i++ < MAX_TRY_TIMES);
+ /* If no response is received, clear the command */
+ if (i >= MAX_TRY_TIMES ||
+ adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+ adapter->cmd_retval, args->ops);
+ _clear_cmd(adapter);
+ }
+ break;
+ }
+
+ return err;
+}
+
+int
+idpf_check_api_version(struct idpf_adapter *adapter)
+{
+ struct virtchnl_version_info version;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&version, 0, sizeof(struct virtchnl_version_info));
+ version.major = VIRTCHNL_VERSION_MAJOR_2;
+ version.minor = VIRTCHNL_VERSION_MINOR_0;
+
+ args.ops = VIRTCHNL_OP_VERSION;
+ args.in_args = (uint8_t *)&version;
+ args.in_args_size = sizeof(version);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_VERSION");
+ return err;
+ }
+
+ return err;
+}
+
+int
+idpf_get_caps(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_get_capabilities caps_msg;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+ caps_msg.csum_caps =
+ VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_GENERIC |
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+ caps_msg.seg_caps =
+ VIRTCHNL2_CAP_SEG_IPV4_TCP |
+ VIRTCHNL2_CAP_SEG_IPV4_UDP |
+ VIRTCHNL2_CAP_SEG_IPV4_SCTP |
+ VIRTCHNL2_CAP_SEG_IPV6_TCP |
+ VIRTCHNL2_CAP_SEG_IPV6_UDP |
+ VIRTCHNL2_CAP_SEG_IPV6_SCTP |
+ VIRTCHNL2_CAP_SEG_GENERIC;
+
+ caps_msg.rss_caps =
+ VIRTCHNL2_CAP_RSS_IPV4_TCP |
+ VIRTCHNL2_CAP_RSS_IPV4_UDP |
+ VIRTCHNL2_CAP_RSS_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV4_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV6_TCP |
+ VIRTCHNL2_CAP_RSS_IPV6_UDP |
+ VIRTCHNL2_CAP_RSS_IPV6_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV6_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV4_AH |
+ VIRTCHNL2_CAP_RSS_IPV4_ESP |
+ VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
+ VIRTCHNL2_CAP_RSS_IPV6_AH |
+ VIRTCHNL2_CAP_RSS_IPV6_ESP |
+ VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+ caps_msg.hsplit_caps =
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6;
+
+ caps_msg.rsc_caps =
+ VIRTCHNL2_CAP_RSC_IPV4_TCP |
+ VIRTCHNL2_CAP_RSC_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSC_IPV6_TCP |
+ VIRTCHNL2_CAP_RSC_IPV6_SCTP;
+
+ caps_msg.other_caps =
+ VIRTCHNL2_CAP_RDMA |
+ VIRTCHNL2_CAP_SRIOV |
+ VIRTCHNL2_CAP_MACFILTER |
+ VIRTCHNL2_CAP_FLOW_DIRECTOR |
+ VIRTCHNL2_CAP_SPLITQ_QSCHED |
+ VIRTCHNL2_CAP_CRC |
+ VIRTCHNL2_CAP_WB_ON_ITR |
+ VIRTCHNL2_CAP_PROMISC |
+ VIRTCHNL2_CAP_LINK_SPEED |
+ VIRTCHNL2_CAP_VLAN;
+
+ args.ops = VIRTCHNL2_OP_GET_CAPS;
+ args.in_args = (uint8_t *)&caps_msg;
+ args.in_args_size = sizeof(caps_msg);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+ return err;
+ }
+
+ rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+ return err;
+}
+
+int
+idpf_create_vport(__rte_unused struct rte_eth_dev *dev)
+{
+ uint16_t idx = adapter->next_vport_idx;
+ struct virtchnl2_create_vport *vport_req_info =
+ (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+ struct virtchnl2_create_vport vport_msg;
+ struct idpf_cmd_info args;
+ int err = -1;
+
+ memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+ vport_msg.vport_type = vport_req_info->vport_type;
+ vport_msg.txq_model = vport_req_info->txq_model;
+ vport_msg.rxq_model = vport_req_info->rxq_model;
+ vport_msg.num_tx_q = vport_req_info->num_tx_q;
+ vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+ vport_msg.num_rx_q = vport_req_info->num_rx_q;
+ vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+ args.in_args = (uint8_t *)&vport_msg;
+ args.in_args_size = sizeof(vport_msg);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+ return err;
+ }
+
+ if (!adapter->vport_recv_info[idx]) {
+ adapter->vport_recv_info[idx] = rte_zmalloc(NULL,
+ IDPF_DFLT_MBX_BUF_SIZE, 0);
+ if (!adapter->vport_recv_info[idx]) {
+ PMD_INIT_LOG(ERR, "Failed to alloc vport_recv_info.");
+ return err;
+ }
+ }
+ rte_memcpy(adapter->vport_recv_info[idx], args.out_buffer,
+ IDPF_DFLT_MBX_BUF_SIZE);
+ return err;
+}
+
+int
+idpf_destroy_vport(struct idpf_vport *vport)
+{
+ struct virtchnl2_vport vc_vport;
+ struct idpf_cmd_info args;
+ int err;
+
+ vc_vport.vport_id = vport->vport_id;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+ args.in_args = (uint8_t *)&vc_vport;
+ args.in_args_size = sizeof(vc_vport);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+ return err;
+ }
+
+ return err;
+}
+
+int
+idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+ struct virtchnl2_vport vc_vport;
+ struct idpf_cmd_info args;
+ int err;
+
+ vc_vport.vport_id = vport->vport_id;
+ args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+ VIRTCHNL2_OP_DISABLE_VPORT;
+ args.in_args = (u8 *)&vc_vport;
+ args.in_args_size = sizeof(vc_vport);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+ enable ? "ENABLE" : "DISABLE");
+ }
+
+ return err;
+}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
new file mode 100644
index 0000000000..262a7aa8c7
--- /dev/null
+++ b/drivers/net/idpf/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+ build = false
+ reason = 'not supported on Windows'
+ subdir_done()
+endif
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+ 'idpf_ethdev.c',
+ 'idpf_vchnl.c',
+)
+
+includes += include_directories('base')
diff --git a/drivers/net/idpf/version.map b/drivers/net/idpf/version.map
new file mode 100644
index 0000000000..b7da224860
--- /dev/null
+++ b/drivers/net/idpf/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index e35652fe63..8910154544 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -28,6 +28,7 @@ drivers = [
'i40e',
'iavf',
'ice',
+ 'idpf',
'igc',
'ionic',
'ipn3ke',
--
2.25.1
* [RFC 4/9] net/idpf: support queue ops
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (2 preceding siblings ...)
2022-05-07 7:07 ` [RFC 3/9] net/idpf: support device initialization Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 5/9] net/idpf: support getting device information Junfeng Guo
` (4 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Xiaoyun Li
Add queue ops for IDPF PMD (a brief usage sketch follows the list):
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
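These ops are reached through the generic ethdev API. A minimal,
illustrative application-side sketch (the port id, descriptor counts and
the mb_pool mempool, created earlier with rte_pktmbuf_pool_create(), are
placeholders, not part of this patch):

    struct rte_eth_conf conf = { 0 };
    uint16_t port = 0;

    rte_eth_dev_configure(port, 1, 1, &conf);
    rte_eth_rx_queue_setup(port, 0, 1024, rte_socket_id(), NULL, mb_pool);
    rte_eth_tx_queue_setup(port, 0, 1024, rte_socket_id(), NULL);
    rte_eth_dev_start(port);              /* starts all non-deferred queues */
    rte_eth_dev_rx_queue_stop(port, 0);   /* maps to .rx_queue_stop */
    rte_eth_dev_rx_queue_start(port, 0);  /* maps to .rx_queue_start */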
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 85 +++
drivers/net/idpf/idpf_ethdev.h | 5 +
drivers/net/idpf/idpf_rxtx.c | 1252 ++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 167 +++++
drivers/net/idpf/idpf_vchnl.c | 342 +++++++++
drivers/net/idpf/meson.build | 1 +
6 files changed, 1852 insertions(+)
create mode 100644 drivers/net/idpf/idpf_rxtx.c
create mode 100644 drivers/net/idpf/idpf_rxtx.h
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e34165a87d..511770ed4f 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -12,6 +12,7 @@
#include <rte_dev.h>
#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
#define VPORT_NUM "vport_num"
@@ -33,6 +34,14 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_start = idpf_dev_start,
.dev_stop = idpf_dev_stop,
.dev_close = idpf_dev_close,
+ .rx_queue_start = idpf_rx_queue_start,
+ .rx_queue_stop = idpf_rx_queue_stop,
+ .tx_queue_start = idpf_tx_queue_start,
+ .tx_queue_stop = idpf_tx_queue_stop,
+ .rx_queue_setup = idpf_rx_queue_setup,
+ .rx_queue_release = idpf_dev_rx_queue_release,
+ .tx_queue_setup = idpf_tx_queue_setup,
+ .tx_queue_release = idpf_dev_tx_queue_release,
};
@@ -193,6 +202,65 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+static int
+idpf_config_queues(struct idpf_vport *vport)
+{
+ int err;
+
+ err = idpf_config_rxqs(vport);
+ if (err)
+ return err;
+
+ err = idpf_config_txqs(vport);
+
+ return err;
+}
+
+static int
+idpf_start_queues(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_tx_queue *txq;
+ int i, err = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (txq->tx_deferred_start)
+ continue;
+ if (idpf_tx_queue_init(dev, i) != 0) {
+ PMD_DRV_LOG(ERR, "Fail to init tx queue %u", i);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (rxq->rx_deferred_start)
+ continue;
+ if (idpf_rx_queue_init(dev, i) != 0) {
+ PMD_DRV_LOG(ERR, "Fail to init rx queue %u", i);
+ return -1;
+ }
+ }
+
+ err = idpf_ena_dis_queues(vport, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Fail to start queues");
+ return err;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ dev->data->tx_queue_state[i] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ dev->data->rx_queue_state[i] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
static int
idpf_dev_start(struct rte_eth_dev *dev)
{
@@ -203,6 +271,19 @@ idpf_dev_start(struct rte_eth_dev *dev)
vport->stopped = 0;
+ if (idpf_config_queues(vport)) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues");
+ goto err_queue;
+ }
+
+ idpf_set_rx_function(dev);
+ idpf_set_tx_function(dev);
+
+ if (idpf_start_queues(dev)) {
+ PMD_DRV_LOG(ERR, "Failed to start queues");
+ goto err_queue;
+ }
+
if (idpf_ena_dis_vport(vport, true)) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
goto err_vport;
@@ -211,6 +292,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
return 0;
err_vport:
+ idpf_stop_queues(dev);
+err_queue:
return -1;
}
@@ -228,6 +311,8 @@ idpf_dev_stop(struct rte_eth_dev *dev)
if (idpf_ena_dis_vport(vport, false))
PMD_DRV_LOG(ERR, "disable vport failed");
+ idpf_stop_queues(dev);
+
vport->stopped = 1;
dev->data->dev_started = 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 762d5ff66a..c5aa168d95 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -195,6 +195,11 @@ int idpf_get_caps(struct idpf_adapter *adapter);
int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
int idpf_destroy_vport(struct idpf_vport *vport);
+int idpf_config_rxqs(struct idpf_vport *vport);
+int idpf_config_txqs(struct idpf_vport *vport);
+int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on);
+int idpf_ena_dis_queues(struct idpf_vport *vport, bool enable);
int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
#endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
new file mode 100644
index 0000000000..770ed52281
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -0,0 +1,1252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+
+#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+ /* The following constraints must be satisfied:
+ * thresh < rxq->nb_rx_desc
+ */
+ if (thresh >= nb_desc) {
+ PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+ thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+ uint16_t tx_free_thresh)
+{
+ /* TX descriptors will have their RS bit set after tx_rs_thresh
+ * descriptors have been used. The TX descriptor ring will be cleaned
+ * after tx_free_thresh descriptors are used or if the number of
+ * descriptors required to transmit a packet is greater than the
+ * number of free TX descriptors.
+ *
+ * The following constraints must be satisfied:
+ * - tx_rs_thresh must be less than the size of the ring minus 2.
+ * - tx_free_thresh must be less than the size of the ring minus 3.
+ * - tx_rs_thresh must be less than or equal to tx_free_thresh.
+ * - tx_rs_thresh must be a divisor of the ring size.
+ *
+ * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+ * race condition, hence the maximum threshold constraints. When set
+ * to zero use default values.
+ */
+ if (tx_rs_thresh >= (nb_desc - 2)) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 2",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_free_thresh >= (nb_desc - 3)) {
+ PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 3.",
+ tx_free_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_rs_thresh > tx_free_thresh) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+ "equal to tx_free_thresh (%u).",
+ tx_rs_thresh, tx_free_thresh);
+ return -EINVAL;
+ }
+ if ((nb_desc % tx_rs_thresh) != 0) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+ "number of TX descriptors (%u).",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (!rxq->sw_ring)
+ return;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ if (rxq->sw_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+ rxq->sw_ring[i] = NULL;
+ }
+ }
+}
+
+static inline void
+release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+ uint16_t nb_desc, i;
+
+ if (!txq || !txq->sw_ring) {
+ PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+ return;
+ }
+
+ if (txq->sw_nb_desc) {
+ /* For split queue model, descriptor ring */
+ nb_desc = txq->sw_nb_desc;
+ } else {
+ /* For single queue model */
+ nb_desc = txq->nb_tx_desc;
+ }
+ for (i = 0; i < nb_desc; i++) {
+ if (txq->sw_ring[i].mbuf) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+}
+
+static const struct idpf_rxq_ops def_rxq_ops = {
+ .release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+ .release_mbufs = release_txq_mbufs,
+};
+
+static void
+idpf_rx_queue_release(void *rxq)
+{
+ struct idpf_rx_queue *q = (struct idpf_rx_queue *)rxq;
+
+ if (!q)
+ return;
+
+ /* Split queue */
+ if (q->bufq1 && q->bufq2) {
+ q->bufq1->ops->release_mbufs(q->bufq1);
+ rte_free(q->bufq1->sw_ring);
+ rte_memzone_free(q->bufq1->mz);
+ rte_free(q->bufq1);
+ q->bufq2->ops->release_mbufs(q->bufq2);
+ rte_free(q->bufq2->sw_ring);
+ rte_memzone_free(q->bufq2->mz);
+ rte_free(q->bufq2);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+ return;
+ }
+
+ /* Single queue */
+ q->ops->release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+}
+
+static void
+idpf_tx_queue_release(void *txq)
+{
+ struct idpf_tx_queue *q = (struct idpf_tx_queue *)txq;
+
+ if (!q)
+ return;
+
+ if (q->complq)
+ rte_free(q->complq);
+ q->ops->release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+}
+
+static inline void
+reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ rxq->rx_tail = 0;
+ rxq->expected_gen_id = 1;
+}
+
+static inline void
+reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+ for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+ rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+ /* The next descriptor id which can be received. */
+ rxq->rx_next_avail = 0;
+
+ /* The next descriptor id which can be refilled. */
+ rxq->rx_tail = 0;
+ /* The number of descriptors which can be refilled. */
+ rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+ rxq->bufq1 = NULL;
+ rxq->bufq2 = NULL;
+}
+
+static inline void
+reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+ reset_split_rx_descq(rxq);
+ reset_split_rx_bufq(rxq->bufq1);
+ reset_split_rx_bufq(rxq->bufq2);
+}
+
+static inline void
+reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+ for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+ rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+ rxq->rx_tail = 0;
+ rxq->nb_rx_hold = 0;
+
+ if (rxq->pkt_first_seg != NULL)
+ rte_pktmbuf_free(rxq->pkt_first_seg);
+
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+ struct idpf_tx_entry *txe;
+ uint32_t i, size;
+ uint16_t prev;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ size = sizeof(struct iecm_flex_tx_sched_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->desc_ring)[i] = 0;
+
+ txe = txq->sw_ring;
+ prev = (uint16_t)(txq->sw_nb_desc - 1);
+ for (i = 0; i < txq->sw_nb_desc; i++) {
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ /* Use this as next to clean for split desc queue */
+ txq->last_desc_cleaned = 0;
+ txq->sw_tail = 0;
+ txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+static inline void
+reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+ uint32_t i, size;
+
+ if (!cq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
+ return;
+ }
+
+ size = sizeof(struct iecm_splitq_tx_compl_desc) * cq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)cq->compl_ring)[i] = 0;
+
+ cq->tx_tail = 0;
+ cq->expected_gen_id = 1;
+}
+
+static inline void
+reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+ struct idpf_tx_entry *txe;
+ uint32_t i, size;
+ uint16_t prev;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ txe = txq->sw_ring;
+ size = sizeof(struct iecm_base_tx_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->tx_ring)[i] = 0;
+
+ prev = (uint16_t)(txq->nb_tx_desc - 1);
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ txq->tx_ring[i].qw1 =
+ rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DESC_DONE);
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+ txq->nb_free = txq->nb_tx_desc - 1;
+
+ txq->next_dd = txq->rs_thresh - 1;
+ txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+ uint16_t queue_idx, uint16_t rx_free_thresh,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t len;
+
+ bufq->mp = mp;
+ bufq->nb_rx_desc = nb_desc;
+ bufq->rx_free_thresh = rx_free_thresh;
+ bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+ bufq->port_id = dev->data->port_id;
+ bufq->rx_deferred_start = rx_conf->rx_deferred_start;
+ bufq->rx_hdr_len = 0;
+ bufq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ bufq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ bufq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+ bufq->rx_buf_len = len;
+
+ /* Allocate the software ring. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ bufq->sw_ring =
+ rte_zmalloc_socket("idpf rx bufq sw ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ return -ENOMEM;
+ }
+
+ /* Allocate a little more to support bulk allocation. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_splitq_rx_buf_desc),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
+ rte_free(bufq->sw_ring);
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ bufq->rx_ring_phys_addr = mz->iova;
+ bufq->rx_ring = mz->addr;
+
+ bufq->mz = mz;
+ reset_split_rx_bufq(bufq);
+ bufq->q_set = true;
+ bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+ queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+ bufq->ops = &def_rxq_ops;
+
+ /* TODO: allow bulk or vec */
+
+ return 0;
+}
+
+static int
+idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *bufq1, *bufq2;
+ const struct rte_memzone *mz;
+ uint16_t rx_free_thresh;
+ uint32_t ring_size;
+ uint16_t qid;
+ uint16_t len;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc);
+ return -EINVAL;
+ }
+
+ /* Check free threshold */
+ rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+ IDPF_DEFAULT_RX_FREE_THRESH :
+ rx_conf->rx_free_thresh;
+ if (check_rx_thresh(nb_desc, rx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Set up the Rx descriptor queue */
+ rxq = rte_zmalloc_socket("idpf rxq",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+ return -ENOMEM;
+ }
+
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->rx_hdr_len = 0;
+ rxq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = len;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx_cpmpl_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+ ret = -ENOMEM;
+ goto free_rxq;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ rxq->rx_ring_phys_addr = mz->iova;
+ rxq->rx_ring = mz->addr;
+
+ rxq->mz = mz;
+ reset_split_rx_descq(rxq);
+ rxq->q_set = true;
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ /* TODO: allow bulk or vec */
+
+ /* setup Rx buffer queue */
+ bufq1 = rte_zmalloc_socket("idpf bufq1",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq1) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
+ ret = -ENOMEM;
+ goto free_mz;
+ }
+ qid = 2 * queue_idx;
+ ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
+ nb_desc, socket_id, rx_conf, mp);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+ ret = -EINVAL;
+ goto free_bufq1;
+ }
+ rxq->bufq1 = bufq1;
+
+ bufq2 = rte_zmalloc_socket("idpf bufq2",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq2) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
+ rte_free(bufq1->sw_ring);
+ rte_memzone_free(bufq1->mz);
+ ret = -ENOMEM;
+ goto free_bufq1;
+ }
+ qid = 2 * queue_idx + 1;
+ ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
+ nb_desc, socket_id, rx_conf, mp);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+ rte_free(bufq1->sw_ring);
+ rte_memzone_free(bufq1->mz);
+ ret = -EINVAL;
+ goto free_bufq2;
+ }
+ rxq->bufq2 = bufq2;
+
+ return 0;
+
+free_bufq2:
+ rte_free(bufq2);
+free_bufq1:
+ rte_free(bufq1);
+free_mz:
+ rte_memzone_free(mz);
+free_rxq:
+ rte_free(rxq);
+
+ return ret;
+}
+
+static int
+idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_rx_queue *rxq;
+ const struct rte_memzone *mz;
+ uint16_t rx_free_thresh;
+ uint32_t ring_size;
+ uint16_t len;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ /* Check free threshold */
+ rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+ IDPF_DEFAULT_RX_FREE_THRESH :
+ rx_conf->rx_free_thresh;
+ if (check_rx_thresh(nb_desc, rx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Set up the Rx descriptor queue */
+ rxq = rte_zmalloc_socket("idpf rxq",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+ return -ENOMEM;
+ }
+
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->rx_hdr_len = 0;
+ rxq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = len;
+
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ rxq->sw_ring =
+ rte_zmalloc_socket("idpf rxq sw ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ rte_free(rxq);
+ return -ENOMEM;
+ }
+
+ /* Allocate a little more to support bulk allocation. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_singleq_rx_buf_desc),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
+ rte_free(rxq->sw_ring);
+ rte_free(rxq);
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ rxq->rx_ring_phys_addr = mz->iova;
+ rxq->rx_ring = mz->addr;
+
+ rxq->mz = mz;
+ reset_single_rx_queue(rxq);
+ rxq->q_set = true;
+ dev->data->rx_queues[queue_idx] = rxq;
+ rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+ queue_idx * vport->chunks_info.rx_qtail_spacing);
+ rxq->ops = &def_rxq_ops;
+
+ return 0;
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+ return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, rx_conf, mp);
+ else
+ return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, rx_conf, mp);
+}
+
+static int
+idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_tx_queue *txq, *cq;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ PMD_INIT_FUNC_TRACE();
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH;
+ tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH;
+ if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("idpf split txq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ txq->nb_tx_desc = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ /* Allocate software ring */
+ txq->sw_nb_desc = 2 * nb_desc;
+ txq->sw_ring =
+ rte_zmalloc_socket("idpf split tx sw ring",
+ sizeof(struct idpf_tx_entry) *
+ txq->sw_nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct iecm_flex_tx_sched_desc) * txq->nb_tx_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->desc_ring = (struct iecm_flex_tx_sched_desc *)mz->addr;
+
+ txq->mz = mz;
+ reset_split_tx_descq(txq);
+ txq->q_set = true;
+ dev->data->tx_queues[queue_idx] = txq;
+ txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+ queue_idx * vport->chunks_info.tx_qtail_spacing);
+ txq->ops = &def_txq_ops;
+
+ /* Allocate the TX completion queue data structure. */
+ txq->complq = rte_zmalloc_socket("idpf splitq cq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ cq = txq->complq;
+ if (!cq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+ cq->nb_tx_desc = 2 * nb_desc;
+ cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+ cq->port_id = dev->data->port_id;
+ cq->txqs = dev->data->tx_queues;
+ cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+ ring_size = sizeof(struct iecm_splitq_tx_compl_desc) * cq->nb_tx_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+ cq->tx_ring_phys_addr = mz->iova;
+ cq->compl_ring = (struct iecm_splitq_tx_compl_desc *)mz->addr;
+ cq->mz = mz;
+ reset_split_tx_complq(cq);
+
+ return 0;
+}
+
+static int
+idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_tx_queue *txq;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ PMD_INIT_FUNC_TRACE();
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+ tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
+ tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
+ if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("idpf txq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ /* TODO: vlan offload */
+
+ txq->nb_tx_desc = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ /* Allocate software ring */
+ txq->sw_ring =
+ rte_zmalloc_socket("idpf tx sw ring",
+ sizeof(struct idpf_tx_entry) * nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct iecm_base_tx_desc) * nb_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring = (struct iecm_base_tx_desc *)mz->addr;
+
+ txq->mz = mz;
+ reset_single_tx_queue(txq);
+ txq->q_set = true;
+ dev->data->tx_queues[queue_idx] = txq;
+ txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+ queue_idx * vport->chunks_info.tx_qtail_spacing);
+ txq->ops = &def_txq_ops;
+
+ return 0;
+}
+
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+ return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, tx_conf);
+ else
+ return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, tx_conf);
+}
+
+static int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+ rxd->pkt_addr = dma_addr;
+ rxd->hdr_addr = 0;
+#ifndef RTE_LIBRTE_IDPF_16BYTE_RX_DESC
+ rxd->rsvd1 = 0;
+ rxd->rsvd2 = 0;
+#endif
+
+ rxq->sw_ring[i] = mbuf;
+ }
+
+ return 0;
+}
+
+static int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+ rxd->qword0.buf_id = i;
+ rxd->qword0.rsvd0 = 0;
+ rxd->qword0.rsvd1 = 0;
+ rxd->pkt_addr = dma_addr;
+ rxd->hdr_addr = 0;
+ rxd->rsvd2 = 0;
+
+ rxq->sw_ring[i] = mbuf;
+ }
+
+ rxq->nb_rx_hold = 0;
+ rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+ return 0;
+}
+
+int
+idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_rx_queue *rxq;
+ int err;
+
+ if (rx_queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ if (!rxq->bufq1) {
+ /* Single queue */
+ err = idpf_alloc_single_rxq_mbufs(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ IECM_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+ } else {
+ /* Split queue */
+ err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+ return err;
+ }
+ err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ IECM_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->nb_rx_desc - 1);
+ IECM_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->nb_rx_desc - 1);
+ }
+
+ return err;
+}
+
+int
+idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int err = 0;
+
+ PMD_DRV_FUNC_TRACE();
+
+ err = idpf_rx_queue_init(dev, rx_queue_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+ rx_queue_id);
+ return err;
+ }
+
+ /* Ready to switch the queue on */
+ err = idpf_switch_queue(vport, rx_queue_id, true, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+ rx_queue_id);
+ else
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
+int
+idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_tx_queue *txq;
+
+ if (tx_queue_id >= dev->data->nb_tx_queues)
+ return -EINVAL;
+
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /* Init the TX tail register. */
+ IECM_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+ return 0;
+}
+
+int
+idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int err = 0;
+
+ PMD_DRV_FUNC_TRACE();
+
+ err = idpf_tx_queue_init(dev, tx_queue_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+ tx_queue_id);
+ return err;
+ }
+
+ /* Ready to switch the queue on */
+ err = idpf_switch_queue(vport, tx_queue_id, false, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+ tx_queue_id);
+ else
+ dev->data->tx_queue_state[tx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
+int
+idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ int err;
+
+ PMD_DRV_FUNC_TRACE();
+
+ if (rx_queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ err = idpf_switch_queue(vport, rx_queue_id, true, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+ rx_queue_id);
+ return err;
+ }
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ rxq->ops->release_mbufs(rxq);
+ reset_single_rx_queue(rxq);
+ } else {
+ rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+ rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+ reset_split_rx_queue(rxq);
+ }
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+int
+idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_tx_queue *txq;
+ int err;
+
+ PMD_DRV_FUNC_TRACE();
+
+ if (tx_queue_id >= dev->data->nb_tx_queues)
+ return -EINVAL;
+
+ err = idpf_switch_queue(vport, tx_queue_id, false, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+ tx_queue_id);
+ return err;
+ }
+
+ txq = dev->data->tx_queues[tx_queue_id];
+ txq->ops->release_mbufs(txq);
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ reset_single_tx_queue(txq);
+ } else {
+ reset_split_tx_descq(txq);
+ reset_split_tx_complq(txq->complq);
+ }
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+void
+idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
+void
+idpf_stop_queues(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_tx_queue *txq;
+ int ret, i;
+
+ /* Stop All queues */
+ ret = idpf_ena_dis_queues(vport, false);
+ if (ret)
+ PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (!rxq)
+ continue;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ rxq->ops->release_mbufs(rxq);
+ reset_single_rx_queue(rxq);
+ } else {
+ rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+ rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+ reset_split_rx_queue(rxq);
+ }
+ dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (!txq)
+ continue;
+ txq->ops->release_mbufs(txq);
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ reset_split_tx_descq(txq);
+ reset_split_tx_complq(txq->complq);
+ } else {
+ reset_single_tx_queue(txq);
+ }
+ dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+}
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
new file mode 100644
index 0000000000..705f706890
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_RXTX_H_
+#define _IDPF_RXTX_H_
+
+#include "base/iecm_osdep.h"
+#include "base/iecm_type.h"
+#include "base/iecm_devids.h"
+#include "base/iecm_lan_txrx.h"
+#include "base/iecm_lan_pf_regs.h"
+#include "base/virtchnl.h"
+#include "base/virtchnl2.h"
+
+/* The queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define IDPF_ALIGN_RING_DESC 32
+#define IDPF_MIN_RING_DESC 32
+#define IDPF_MAX_RING_DESC 4096
+#define IDPF_DMA_MEM_ALIGN 4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define IDPF_RING_BASE_ALIGN 128
+
+/* used for Rx Bulk Allocate */
+#define IDPF_RX_MAX_BURST 32
+
+#define IDPF_DEFAULT_RX_FREE_THRESH 32
+
+
+#define IDPF_DEFAULT_TX_RS_THRESH 128
+#define IDPF_DEFAULT_TX_FREE_THRESH 128
+
+#define IDPF_MIN_TSO_MSS 256
+#define IDPF_MAX_TSO_MSS 9668
+#define IDPF_TSO_MAX_SEG UINT8_MAX
+#define IDPF_TX_MAX_MTU_SEG 8
+
+struct idpf_rx_queue {
+ struct idpf_adapter *adapter; /* the adapter this queue belongs to */
+ struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
+ const struct rte_memzone *mz; /* memzone for Rx ring */
+ volatile void *rx_ring;
+ struct rte_mbuf **sw_ring; /* address of SW ring */
+ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
+
+ uint16_t nb_rx_desc; /* ring length */
+ uint16_t rx_tail; /* current value of tail */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
+
+ /* for rx bulk */
+ uint16_t rx_nb_avail; /* number of staged packets ready */
+ uint16_t rx_next_avail; /* index of next staged packets */
+ uint16_t rx_free_trigger; /* triggers rx buffer allocation */
+ struct rte_mbuf *rx_stage[IDPF_RX_MAX_BURST * 2]; /* store mbuf */
+
+ uint16_t port_id; /* device port ID */
+ uint16_t queue_id; /* Rx queue index */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint8_t rxdid;
+
+ bool q_set; /* if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+ const struct idpf_rxq_ops *ops;
+
+ /* only valid for split queue mode */
+ uint8_t expected_gen_id;
+ struct idpf_rx_queue *bufq1;
+ struct idpf_rx_queue *bufq2;
+};
+
+struct idpf_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+ const struct rte_memzone *mz; /* memzone for Tx ring */
+ volatile struct iecm_base_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile union {
+ struct iecm_flex_tx_sched_desc *desc_ring;
+ struct iecm_splitq_tx_compl_desc *compl_ring;
+ };
+ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ struct idpf_tx_entry *sw_ring; /* address array of SW ring */
+
+ uint16_t nb_tx_desc; /* ring length */
+ uint16_t tx_tail; /* current value of tail */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ /* number of used desc since RS bit set */
+ uint16_t nb_used;
+ uint16_t nb_free;
+ uint16_t last_desc_cleaned; /* last desc that has been cleaned */
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint64_t offloads;
+ uint16_t next_dd; /* next to check DD, for VPMD */
+ uint16_t next_rs; /* next to set RS, for VPMD */
+
+ bool q_set; /* if tx queue has been configured */
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ const struct idpf_txq_ops *ops;
+#define IDPF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IDPF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+
+ /* only valid for split queue mode */
+ uint16_t sw_nb_desc;
+ uint16_t sw_tail;
+ void **txqs;
+ uint32_t tx_start_qid;
+ uint8_t expected_gen_id;
+ struct idpf_tx_queue *complq;
+};
+
+/* Offload features */
+union idpf_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ /* uint64_t unused : 24; */
+ };
+};
+
+struct idpf_rxq_ops {
+ void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+ void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp);
+int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+
+int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf);
+int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+
+void idpf_stop_queues(struct rte_eth_dev *dev);
+
+#endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 77d77b82d8..74ed555449 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -21,6 +21,7 @@
#include <rte_dev.h>
#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
#include "base/iecm_prototype.h"
@@ -440,6 +441,347 @@ idpf_destroy_vport(struct idpf_vport *vport)
return err;
}
+#define IDPF_RX_BUF_STRIDE 64
+int
+idpf_config_rxqs(struct idpf_vport *vport)
+{
+ struct idpf_rx_queue **rxq =
+ (struct idpf_rx_queue **)vport->dev_data->rx_queues;
+ struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+ struct virtchnl2_rxq_info *rxq_info;
+ struct idpf_cmd_info args;
+ uint16_t total_qs, num_qs;
+ int size, err, i, j;
+ int k = 0;
+
+ total_qs = vport->num_rx_q + vport->num_rx_bufq;
+ while (total_qs) {
+ if (total_qs > adapter->max_rxq_per_msg) {
+ num_qs = adapter->max_rxq_per_msg;
+ total_qs -= adapter->max_rxq_per_msg;
+ } else {
+ num_qs = total_qs;
+ total_qs = 0;
+ }
+
+ size = sizeof(*vc_rxqs) + (num_qs - 1) *
+ sizeof(struct virtchnl2_rxq_info);
+ vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+ if (vc_rxqs == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+ err = -ENOMEM;
+ break;
+ }
+ vc_rxqs->vport_id = vport->vport_id;
+ vc_rxqs->num_qinfo = num_qs;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ for (i = 0; i < num_qs; i++, k++) {
+ rxq_info = &vc_rxqs->qinfo[i];
+ rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
+ rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+ rxq_info->queue_id = rxq[k]->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+ rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
+ rxq_info->max_pkt_size =
+ vport->dev_data->dev_conf.rxmode.mtu;
+
+ rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+ rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+ rxq_info->ring_len = rxq[k]->nb_rx_desc;
+ }
+ } else {
+ for (i = 0; i < num_qs / 3; i++, k++) {
+ /* Rx queue */
+ rxq_info = &vc_rxqs->qinfo[i * 3];
+ rxq_info->dma_ring_addr =
+ rxq[k]->rx_ring_phys_addr;
+ rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+ rxq_info->queue_id = rxq[k]->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
+ rxq_info->max_pkt_size =
+ vport->dev_data->dev_conf.rxmode.mtu;
+
+ rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+ rxq_info->ring_len = rxq[k]->nb_rx_desc;
+ rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
+ rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
+ rxq_info->rx_buffer_low_watermark = 64;
+
+ /* Buffer queue */
+ for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
+ struct idpf_rx_queue *bufq = j == 1 ?
+ rxq[k]->bufq1 : rxq[k]->bufq2;
+ rxq_info = &vc_rxqs->qinfo[i * 3 + j];
+ rxq_info->dma_ring_addr =
+ bufq->rx_ring_phys_addr;
+ rxq_info->type =
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ rxq_info->queue_id = bufq->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ rxq_info->data_buffer_size = bufq->rx_buf_len;
+ rxq_info->desc_ids =
+ VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ rxq_info->ring_len = bufq->nb_rx_desc;
+
+ rxq_info->buffer_notif_stride =
+ IDPF_RX_BUF_STRIDE;
+ rxq_info->rx_buffer_low_watermark = 64;
+ }
+ }
+ }
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+ args.in_args = (uint8_t *)vc_rxqs;
+ args.in_args_size = size;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ rte_free(vc_rxqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+ break;
+ }
+ }
+
+ return err;
+}
+
+int
+idpf_config_txqs(struct idpf_vport *vport)
+{
+ struct idpf_tx_queue **txq =
+ (struct idpf_tx_queue **)vport->dev_data->tx_queues;
+ struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+ struct virtchnl2_txq_info *txq_info;
+ struct idpf_cmd_info args;
+ uint16_t total_qs, num_qs;
+ int size, err, i;
+ int k = 0;
+
+ total_qs = vport->num_tx_q + vport->num_tx_complq;
+ while (total_qs) {
+ if (total_qs > adapter->max_txq_per_msg) {
+ num_qs = adapter->max_txq_per_msg;
+ total_qs -= adapter->max_txq_per_msg;
+ } else {
+ num_qs = total_qs;
+ total_qs = 0;
+ }
+ size = sizeof(*vc_txqs) + (num_qs - 1) *
+ sizeof(struct virtchnl2_txq_info);
+ vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+ if (vc_txqs == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+ err = -ENOMEM;
+ break;
+ }
+ vc_txqs->vport_id = vport->vport_id;
+ vc_txqs->num_qinfo = num_qs;
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ for (i = 0; i < num_qs; i++, k++) {
+ txq_info = &vc_txqs->qinfo[i];
+ txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+ txq_info->queue_id = txq[k]->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+ txq_info->ring_len = txq[k]->nb_tx_desc;
+ }
+ } else {
+ for (i = 0; i < num_qs / 2; i++, k++) {
+ /* txq info */
+ txq_info = &vc_txqs->qinfo[2 * i];
+ txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+ txq_info->queue_id = txq[k]->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ txq_info->ring_len = txq[k]->nb_tx_desc;
+ txq_info->tx_compl_queue_id =
+ txq[k]->complq->queue_id;
+
+ /* tx completion queue info */
+ txq_info = &vc_txqs->qinfo[2 * i + 1];
+ txq_info->dma_ring_addr =
+ txq[k]->complq->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ txq_info->queue_id = txq[k]->complq->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ txq_info->ring_len = txq[k]->complq->nb_tx_desc;
+ }
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+ args.in_args = (uint8_t *)vc_txqs;
+ args.in_args_size = size;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ rte_free(vc_txqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int
+idpf_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = vport->vport_id;
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (u8 *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
+int
+idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on)
+{
+ uint32_t type;
+ int err, queue_id;
+
+ /* switch txq/rxq */
+ type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+ if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+ queue_id = vport->chunks_info.rx_start_qid + qid;
+ else
+ queue_id = vport->chunks_info.tx_start_qid + qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+
+ /* switch tx completion queue */
+ if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ }
+
+ /* switch rx buffer queue */
+ if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ queue_id++;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ }
+
+ return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM 2
+int idpf_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ uint32_t type;
+ struct idpf_cmd_info args;
+ uint16_t num_chunks;
+ int err, len;
+
+ num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ num_chunks++;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ num_chunks++;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+ sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = num_chunks;
+ queue_select->vport_id = vport->vport_id;
+
+ type = VIRTCHNL2_QUEUE_TYPE_RX;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+ queue_chunk[type].num_queues = vport->num_rx_q;
+
+ type = VIRTCHNL2_QUEUE_TYPE_TX;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+ queue_chunk[type].num_queues = vport->num_tx_q;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id =
+ vport->chunks_info.rx_buf_start_qid;
+ queue_chunk[type].num_queues = vport->num_rx_bufq;
+ }
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id =
+ vport->chunks_info.tx_compl_start_qid;
+ queue_chunk[type].num_queues = vport->num_tx_complq;
+ }
+
+ args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (u8 *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ enable ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
int
idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
{
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 262a7aa8c7..9bda251ead 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
sources = files(
'idpf_ethdev.c',
+ 'idpf_rxtx.c',
'idpf_vchnl.c',
)
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC 5/9] net/idpf: support getting device information
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (3 preceding siblings ...)
2022-05-07 7:07 ` [RFC 4/9] net/idpf: support queue ops Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 6/9] net/idpf: support packet type getting Junfeng Guo
` (3 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops dev_infos_get.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 69 ++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
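---
For reference, a minimal application-side sketch (not part of this patch) that reads
the limits reported by the new dev_infos_get op through the generic ethdev API could
look like the following; the helper name is hypothetical:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print a few of the limits that idpf_dev_info_get() fills in. */
static int
show_idpf_limits(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	printf("port %u: max_rx_queues=%u max_tx_queues=%u max_mtu=%u\n",
	       port_id, dev_info.max_rx_queues, dev_info.max_tx_queues,
	       dev_info.max_mtu);
	return 0;
}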
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 511770ed4f..c58a40e7ab 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -28,6 +28,8 @@ static int idpf_dev_configure(struct rte_eth_dev *dev);
static int idpf_dev_start(struct rte_eth_dev *dev);
static int idpf_dev_stop(struct rte_eth_dev *dev);
static int idpf_dev_close(struct rte_eth_dev *dev);
+static int idpf_dev_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);
static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure = idpf_dev_configure,
@@ -42,8 +44,75 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.rx_queue_release = idpf_dev_rx_queue_release,
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release = idpf_dev_tx_queue_release,
+ .dev_infos_get = idpf_dev_info_get,
};
+static int
+idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = adapter->caps->max_rx_q;
+ dev_info->max_tx_queues = adapter->caps->max_tx_q;
+ dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
+ dev_info->max_rx_pktlen = IDPF_MAX_FRAME_SIZE;
+
+ dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+ dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+ dev_info->rx_offload_capa =
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa =
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = IDPF_MAX_RING_DESC,
+ .nb_min = IDPF_MIN_RING_DESC,
+ .nb_align = IDPF_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = IDPF_MAX_RING_DESC,
+ .nb_min = IDPF_MIN_RING_DESC,
+ .nb_align = IDPF_ALIGN_RING_DESC,
+ };
+
+ return 0;
+}
static int
idpf_init_vport_req_info(struct rte_eth_dev *dev)
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC 6/9] net/idpf: support packet type getting
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (4 preceding siblings ...)
2022-05-07 7:07 ` [RFC 5/9] net/idpf: support getting device information Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 7/9] net/idpf: support link update Junfeng Guo
` (2 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops dev_supported_ptypes_get.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 3 ++
drivers/net/idpf/idpf_rxtx.c | 51 ++++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 3 ++
3 files changed, 57 insertions(+)
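---
As a usage sketch (not part of this patch), the packet types advertised by the new
dev_supported_ptypes_get op can be listed through the generic ethdev API; the helper
name and array size are arbitrary:

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Dump the ptypes returned by dev_supported_ptypes_get. */
static void
show_idpf_ptypes(uint16_t port_id)
{
	uint32_t ptypes[16];
	int i, num;

	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
					       ptypes, RTE_DIM(ptypes));
	for (i = 0; i < num && i < (int)RTE_DIM(ptypes); i++)
		printf("supported ptype 0x%08x\n", ptypes[i]);
}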
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c58a40e7ab..01fd023bfc 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -32,6 +32,7 @@ static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
static const struct eth_dev_ops idpf_eth_dev_ops = {
+ .dev_supported_ptypes_get = idpf_dev_supported_ptypes_get,
.dev_configure = idpf_dev_configure,
.dev_start = idpf_dev_start,
.dev_stop = idpf_dev_stop,
@@ -501,6 +502,8 @@ idpf_adapter_init(struct rte_eth_dev *dev)
if (adapter->initialized)
return 0;
+ idpf_set_default_ptype_table(dev);
+
hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
hw->hw_addr_len = pci_dev->mem_resource[0].len;
hw->back = adapter;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 770ed52281..6b436141c8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -8,6 +8,57 @@
#include "idpf_ethdev.h"
#include "idpf_rxtx.h"
+const uint32_t *
+idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ return ptypes;
+}
+
+static inline uint32_t
+idpf_get_default_pkt_type(uint16_t ptype)
+{
+ static const uint32_t type_table[IDPF_MAX_PKT_TYPE]
+ __rte_cache_aligned = {
+ [1] = RTE_PTYPE_L2_ETHER,
+ [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
+ [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_ICMP,
+ [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6,
+ [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_ICMP,
+ };
+
+ return type_table[ptype];
+}
+
+void __rte_cold
+idpf_set_default_ptype_table(struct rte_eth_dev *dev __rte_unused)
+{
+ int i;
+
+ for (i = 0; i < IDPF_MAX_PKT_TYPE; i++)
+ adapter->ptype_tbl[i] = idpf_get_default_pkt_type(i);
+}
+
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
{
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 705f706890..21b6d8cb84 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -164,4 +164,7 @@ void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
void idpf_stop_queues(struct rte_eth_dev *dev);
+void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+
#endif /* _IDPF_RXTX_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC 7/9] net/idpf: support link update
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (5 preceding siblings ...)
2022-05-07 7:07 ` [RFC 6/9] net/idpf: support packet type getting Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 8/9] net/idpf: support basic Rx/Tx Junfeng Guo
2022-05-07 7:07 ` [RFC 9/9] net/idpf: support RSS Junfeng Guo
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops link_update.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 22 ++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 2 ++
2 files changed, 24 insertions(+)
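---
A usage sketch (not part of this patch) that reads back the link state filled in by
idpf_dev_link_update() via the generic ethdev API; the helper name is hypothetical:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the current link status of a port. */
static void
show_idpf_link(uint16_t port_id)
{
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	printf("port %u link is %s\n", port_id,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
}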
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 01fd023bfc..39efb387cf 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -31,6 +31,27 @@ static int idpf_dev_close(struct rte_eth_dev *dev);
static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct rte_eth_link new_link;
+
+ memset(&new_link, 0, sizeof(new_link));
+
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
+ new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+
+ return rte_eth_linkstatus_set(dev, &new_link);
+}
+
static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_supported_ptypes_get = idpf_dev_supported_ptypes_get,
.dev_configure = idpf_dev_configure,
@@ -46,6 +67,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release = idpf_dev_tx_queue_release,
.dev_infos_get = idpf_dev_info_get,
+ .link_update = idpf_dev_link_update,
};
static int
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c5aa168d95..5520b2d6ce 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -189,6 +189,8 @@ _atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
return !ret;
}
+int idpf_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete);
void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
int idpf_check_api_version(struct idpf_adapter *adapter);
int idpf_get_caps(struct idpf_adapter *adapter);
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC 8/9] net/idpf: support basic Rx/Tx
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (6 preceding siblings ...)
2022-05-07 7:07 ` [RFC 7/9] net/idpf: support link update Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
2022-05-07 7:07 ` [RFC 9/9] net/idpf: support RSS Junfeng Guo
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Xiaoyun Li
Add basic RX & TX support in split queue mode and single queue mode.
Split queue mode is used by default.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 93 ++++
drivers/net/idpf/idpf_rxtx.c | 877 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 33 ++
3 files changed, 1003 insertions(+)
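---
Single queue mode can be requested with the new devargs keys tx_single=1 and
rx_single=1 on the device probe arguments; otherwise the split model is used. A
minimal forwarding-style sketch (not part of this patch) that exercises the new
Rx/Tx paths through the generic burst API could look like this; the helper name and
burst size are arbitrary:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define ECHO_BURST_SZ 32

/* Receive a burst on queue 0 and transmit it back on the same port. */
static void
echo_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[ECHO_BURST_SZ];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, ECHO_BURST_SZ);
	if (nb_rx == 0)
		return;

	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
	/* Free any mbufs the TX path did not accept. */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}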
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 39efb387cf..1a985caf46 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -14,12 +14,16 @@
#include "idpf_ethdev.h"
#include "idpf_rxtx.h"
+#define IDPF_TX_SINGLE_Q "tx_single"
+#define IDPF_RX_SINGLE_Q "rx_single"
#define VPORT_NUM "vport_num"
struct idpf_adapter *adapter;
uint16_t vport_num = 1;
static const char * const idpf_valid_args[] = {
+ IDPF_TX_SINGLE_Q,
+ IDPF_RX_SINGLE_Q,
VPORT_NUM,
NULL
};
@@ -156,6 +160,30 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev)
(struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+ if (!adapter->txq_model) {
+ vport_info->txq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ vport_info->num_tx_q = dev->data->nb_tx_queues;
+ vport_info->num_tx_complq =
+ dev->data->nb_tx_queues * IDPF_TX_COMPLQ_PER_GRP;
+ } else {
+ vport_info->txq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ vport_info->num_tx_q = dev->data->nb_tx_queues;
+ vport_info->num_tx_complq = 0;
+ }
+ if (!adapter->rxq_model) {
+ vport_info->rxq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ vport_info->num_rx_q = dev->data->nb_rx_queues;
+ vport_info->num_rx_bufq =
+ dev->data->nb_rx_queues * IDPF_RX_BUFQ_PER_GRP;
+ } else {
+ vport_info->rxq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ vport_info->num_rx_q = dev->data->nb_rx_queues;
+ vport_info->num_rx_bufq = 0;
+ }
return 0;
}
@@ -426,6 +454,56 @@ idpf_dev_close(struct rte_eth_dev *dev)
return 0;
}
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+ int *i = (int *)args;
+ char *end;
+ int num;
+
+ num = strtoul(value, &end, 10);
+
+ if (num != 0 && num != 1) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+ "value must be 0 or 1",
+ value, key);
+ return -1;
+ }
+
+ *i = num;
+ return 0;
+}
+
+static int idpf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
+ &adapter->txq_model);
+ if (ret)
+ goto bail;
+
+ ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
+ &adapter->rxq_model);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
static void
idpf_reset_pf(struct iecm_hw *hw)
{
@@ -533,6 +611,12 @@ idpf_adapter_init(struct rte_eth_dev *dev)
hw->device_id = pci_dev->id.device_id;
hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+ ret = idpf_parse_devargs(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
idpf_reset_pf(hw);
ret = idpf_check_pf_reset_done(hw);
if (ret) {
@@ -641,6 +725,15 @@ idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void *init_params)
dev->dev_ops = &idpf_eth_dev_ops;
+ /* for secondary processes, we don't initialise any further as primary
+ * has already done this work.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ idpf_set_rx_function(dev);
+ idpf_set_tx_function(dev);
+ return ret;
+ }
+
ret = idpf_adapter_init(dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to init adapter.");
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 6b436141c8..3026517b5d 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1301,3 +1301,880 @@ idpf_stop_queues(struct rte_eth_dev *dev)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
}
+
+#define IDPF_RX_ERR0_QW1 \
+ (BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+ uint64_t flags = 0;
+
+ if (unlikely(!(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S))))
+ return flags;
+
+ if (likely((err & IDPF_RX_ERR0_QW1) == 0)) {
+ flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)))
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)))
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)))
+ flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)))
+ flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+ return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_HASH1_S 0
+#define IDPF_RX_FLEX_DESC_HASH2_S 16
+#define IDPF_RX_FLEX_DESC_HASH3_S 24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+ uint8_t status_err0_qw0;
+ uint64_t flags = 0;
+
+ status_err0_qw0 = rx_desc->status_err0_qw0;
+
+ if (status_err0_qw0 & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) {
+ flags |= RTE_MBUF_F_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_16(rx_desc->hash1) |
+ ((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+ IDPF_RX_FLEX_DESC_HASH2_S) |
+ ((uint32_t)(rx_desc->hash3) <<
+ IDPF_RX_FLEX_DESC_HASH3_S);
+ }
+
+ return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+ volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+ volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+ uint16_t nb_refill = rx_bufq->nb_rx_hold;
+ uint16_t nb_desc = rx_bufq->nb_rx_desc;
+ uint16_t next_avail = rx_bufq->rx_tail;
+ struct rte_mbuf *nmb[nb_refill];
+ struct rte_eth_dev *dev;
+ uint64_t dma_addr;
+ uint16_t delta;
+
+ if (nb_refill <= rx_bufq->rx_free_thresh)
+ return;
+
+ if (nb_refill >= nb_desc)
+ nb_refill = nb_desc - 1;
+
+ rx_buf_ring =
+ (volatile struct virtchnl2_splitq_rx_buf_desc *)rx_bufq->rx_ring;
+ delta = nb_desc - next_avail;
+ if (delta < nb_refill) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta))) {
+ for (int i = 0; i < delta; i++) {
+ rx_buf_desc = &rx_buf_ring[next_avail + i];
+ rx_bufq->sw_ring[next_avail + i] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->hdr_addr = 0;
+ rx_buf_desc->pkt_addr = dma_addr;
+ }
+ nb_refill -= delta;
+ next_avail = 0;
+ rx_bufq->nb_rx_hold -= delta;
+ } else {
+ dev = &rte_eth_devices[rx_bufq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rx_bufq->port_id, rx_bufq->queue_id);
+ return;
+ }
+ }
+
+ if (nb_desc - next_avail >= nb_refill) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill))) {
+ for (int i = 0; i < nb_refill; i++) {
+ rx_buf_desc = &rx_buf_ring[next_avail + i];
+ rx_bufq->sw_ring[next_avail + i] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->hdr_addr = 0;
+ rx_buf_desc->pkt_addr = dma_addr;
+ }
+ next_avail += nb_refill;
+ rx_bufq->nb_rx_hold -= nb_refill;
+ } else {
+ dev = &rte_eth_devices[rx_bufq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rx_bufq->port_id, rx_bufq->queue_id);
+ }
+ }
+
+ IECM_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+ rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+ uint16_t pktlen_gen_bufq_id;
+ struct idpf_rx_queue *rxq;
+ const uint32_t *ptype_tbl;
+ uint8_t status_err0_qw1;
+ struct rte_mbuf *rxm;
+ uint16_t rx_id_bufq1;
+ uint16_t rx_id_bufq2;
+ uint64_t pkt_flags;
+ uint16_t pkt_len;
+ uint16_t bufq_id;
+ uint16_t gen_id;
+ uint16_t rx_id;
+ uint16_t nb_rx;
+
+ nb_rx = 0;
+ rxq = (struct idpf_rx_queue *)rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+ rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+ rx_desc_ring =
+ (volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *)rxq->rx_ring;
+ ptype_tbl = rxq->adapter->ptype_tbl;
+
+ while (nb_rx < nb_pkts) {
+ rx_desc = &rx_desc_ring[rx_id];
+
+ pktlen_gen_bufq_id =
+ rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ gen_id = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+ if (gen_id != rxq->expected_gen_id)
+ break;
+
+ pkt_len = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+ if (!pkt_len)
+ PMD_RX_LOG(ERR, "Packet length is 0");
+
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc)) {
+ rx_id = 0;
+ rxq->expected_gen_id ^= 1;
+ }
+
+ bufq_id = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+ if (!bufq_id) {
+ rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+ rx_id_bufq1++;
+ if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+ rx_id_bufq1 = 0;
+ rxq->bufq1->nb_rx_hold++;
+ } else {
+ rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+ rx_id_bufq2++;
+ if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+ rx_id_bufq2 = 0;
+ rxq->bufq2->nb_rx_hold++;
+ }
+
+ pkt_len -= rxq->crc_len;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->next = NULL;
+ rxm->nb_segs = 1;
+ rxm->port = rxq->port_id;
+ rxm->ol_flags = 0;
+ rxm->packet_type =
+ ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+ status_err0_qw1 = rx_desc->status_err0_qw1;
+ pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+ pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+ rxm->ol_flags |= pkt_flags;
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+
+ if (nb_rx) {
+ rxq->rx_tail = rx_id;
+ if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+ rxq->bufq1->rx_next_avail = rx_id_bufq1;
+ if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+ rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+ idpf_split_rx_bufq_refill(rxq->bufq1);
+ idpf_split_rx_bufq_refill(rxq->bufq2);
+ }
+
+ return nb_rx;
+}
+
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+ volatile struct iecm_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+ volatile struct iecm_splitq_tx_compl_desc *txd;
+ uint16_t next = cq->tx_tail;
+ struct idpf_tx_entry *txe;
+ struct idpf_tx_queue *txq;
+ uint16_t gen, qid, q_head;
+ uint8_t ctype;
+
+ txd = &compl_ring[next];
+ gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
+ if (gen != cq->expected_gen_id)
+ return;
+
+ ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_COMPL_TYPE_M) >> IECM_TXD_COMPLQ_COMPL_TYPE_S;
+ qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
+ q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+ txq = cq->txqs[qid - cq->tx_start_qid];
+
+ switch (ctype) {
+ case IECM_TXD_COMPLT_RE:
+ if (q_head == 0)
+ txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+ else
+ txq->last_desc_cleaned = q_head - 1;
+ if (unlikely(!(txq->last_desc_cleaned % 32))) {
+ PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
+ q_head);
+ return;
+ }
+
+ break;
+ case IECM_TXD_COMPLT_RS:
+ txq->nb_free++;
+ txq->nb_used--;
+ txe = &txq->sw_ring[q_head];
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "unknown completion type.");
+ return;
+ }
+
+ if (++next == cq->nb_tx_desc) {
+ next = 0;
+ cq->expected_gen_id ^= 1;
+ }
+
+ cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+ if (flags & RTE_MBUF_F_TX_TCP_SEG)
+ return 1;
+
+ return 0;
+}
+
+/* set TSO context descriptor
+ */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+ union idpf_tx_offload tx_offload,
+ volatile union iecm_flex_tx_ctx_desc *ctx_desc)
+{
+ uint16_t cmd_dtype;
+ uint32_t tso_len;
+ uint8_t hdr_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return;
+ }
+
+ hdr_len = tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+ cmd_dtype = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX |
+ IECM_TX_FLEX_CTX_DESC_CMD_TSO;
+ tso_len = mbuf->pkt_len - hdr_len;
+
+ ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+ ctx_desc->tso.qw0.hdr_len = hdr_len;
+ ctx_desc->tso.qw0.mss_rt =
+ rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+ IECM_TXD_FLEX_CTX_MSS_RT_M);
+ ctx_desc->tso.qw0.flex_tlen =
+ rte_cpu_to_le_32(tso_len &
+ IECM_TXD_FLEX_CTX_MSS_RT_M);
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+ volatile struct iecm_flex_tx_sched_desc *txr = txq->desc_ring;
+ volatile struct iecm_flex_tx_sched_desc *txd;
+ struct idpf_tx_entry *sw_ring = txq->sw_ring;
+ union idpf_tx_offload tx_offload = {0};
+ struct idpf_tx_entry *txe, *txn;
+ uint16_t nb_used, tx_id, sw_id;
+ struct rte_mbuf *tx_pkt;
+ uint16_t nb_to_clean;
+ uint16_t nb_tx = 0;
+ uint64_t ol_flags;
+ uint16_t nb_ctx;
+
+ tx_id = txq->tx_tail;
+ sw_id = txq->sw_tail;
+ txe = &sw_ring[sw_id];
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ tx_pkt = tx_pkts[nb_tx];
+
+ if (txq->nb_free <= txq->free_thresh) {
+ /* TODO: Needs refinement
+ * 1. free and clean: better to decide a clean destination instead of
+ * a loop count, and don't free the mbuf as soon as RS is received;
+ * free it on transmit or according to the clean destination.
+ * For now, just ignore the RE write-back and free the mbuf on RS.
+ * 2. out-of-order write-back is not supported yet; the SW head and
+ * HW head need to be separated.
+ */
+ nb_to_clean = 2 * txq->rs_thresh;
+ while (nb_to_clean--)
+ idpf_split_tx_free(txq->complq);
+ }
+
+ if (txq->nb_free < tx_pkt->nb_segs)
+ break;
+
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = idpf_calc_context_desc(ol_flags);
+ nb_used = tx_pkt->nb_segs + nb_ctx;
+
+ /* context descriptor */
+ if (nb_ctx) {
+ volatile union iecm_flex_tx_ctx_desc *ctx_desc =
+ (volatile union iecm_flex_tx_ctx_desc *)&txr[tx_id];
+
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+ idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+ ctx_desc);
+
+ tx_id++;
+ if (tx_id == txq->nb_tx_desc)
+ tx_id = 0;
+ }
+
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+ txe->mbuf = tx_pkt;
+
+ /* Setup TX descriptor */
+ txd->buf_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+ txd->qw1.cmd_dtype =
+ rte_cpu_to_le_16(IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+ txd->qw1.rxr_bufsize = tx_pkt->data_len;
+ txd->qw1.compl_tag = sw_id;
+ tx_id++;
+ if (tx_id == txq->nb_tx_desc)
+ tx_id = 0;
+ sw_id = txe->next_id;
+ txe = txn;
+ tx_pkt = tx_pkt->next;
+ } while (tx_pkt);
+
+ /* fill the last descriptor with End of Packet (EOP) bit */
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_EOP;
+
+ if (unlikely(!(tx_id % 32)))
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_RE;
+ if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_CS_EN;
+ txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+ txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+ }
+
+ /* update the tail pointer if any packets were processed */
+ if (likely(nb_tx)) {
+ IECM_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+ txq->sw_tail = sw_id;
+ }
+
+ return nb_tx;
+}
+
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+ uint16_t rx_id)
+{
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+ if (nb_hold > rxq->rx_free_thresh) {
+ PMD_RX_LOG(DEBUG,
+ "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+ rxq->port_id, rxq->queue_id, rx_id, nb_hold);
+ rx_id = (uint16_t)((rx_id == 0) ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ IECM_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+}
+
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile union virtchnl2_rx_desc *rx_ring;
+ volatile union virtchnl2_rx_desc *rxdp;
+ struct idpf_rx_queue *rxq;
+ const uint32_t *ptype_tbl;
+ uint16_t rx_id, nb_hold;
+ struct rte_eth_dev *dev;
+ uint16_t rx_packet_len;
+ struct rte_mbuf *rxe;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *nmb;
+ uint16_t rx_status0;
+ uint64_t dma_addr;
+ uint16_t nb_rx;
+
+ nb_rx = 0;
+ nb_hold = 0;
+ rxq = rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_ring = rxq->rx_ring;
+ ptype_tbl = rxq->adapter->ptype_tbl;
+
+ while (nb_rx < nb_pkts) {
+ rxdp = &rx_ring[rx_id];
+ rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+ /* Check the DD bit first */
+ if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+ break;
+
+ rx_packet_len = rte_le_to_cpu_16(rxdp->flex_nic_wb.pkt_len) -
+ rxq->crc_len;
+
+ nmb = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!nmb)) {
+ dev = &rte_eth_devices[rxq->port_id];
+ dev->data->rx_mbuf_alloc_failed++;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+ "queue_id=%u", rxq->port_id, rxq->queue_id);
+ break;
+ }
+
+ nb_hold++;
+ rxe = rxq->sw_ring[rx_id];
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc))
+ rx_id = 0;
+
+ /* Prefetch next mbuf */
+ rte_prefetch0(rxq->sw_ring[rx_id]);
+
+ /* When next RX descriptor is on a cache line boundary,
+ * prefetch the next 4 RX descriptors and next 8 pointers
+ * to mbufs.
+ */
+ if ((rx_id & 0x3) == 0) {
+ rte_prefetch0(&rx_ring[rx_id]);
+ rte_prefetch0(rxq->sw_ring[rx_id]);
+ }
+ rxm = rxe;
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+ rxdp->read.hdr_addr = 0;
+ rxdp->read.pkt_addr = dma_addr;
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = rx_packet_len;
+ rxm->data_len = rx_packet_len;
+ rxm->port = rxq->port_id;
+ rxm->ol_flags = 0;
+ rxm->packet_type =
+ ptype_tbl[(uint8_t)(rte_le_to_cpu_16(rxdp->flex_nic_wb.ptype_flex_flags0) &
+ VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+ rxq->rx_tail = rx_id;
+
+ idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+ return nb_rx;
+}
+
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+ uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+ struct idpf_tx_entry *sw_ring = txq->sw_ring;
+ uint16_t nb_tx_desc = txq->nb_tx_desc;
+ uint16_t desc_to_clean_to;
+ uint16_t nb_tx_to_clean;
+
+ volatile struct iecm_base_tx_desc *txd = txq->tx_ring;
+
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ if (desc_to_clean_to >= nb_tx_desc)
+ desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+ desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+ if ((txd[desc_to_clean_to].qw1 &
+ rte_cpu_to_le_64(IECM_TXD_QW1_DTYPE_M)) !=
+ rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DESC_DONE)) {
+ PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
+ "(port=%d queue=%d)", desc_to_clean_to,
+ txq->port_id, txq->queue_id);
+ return -1;
+ }
+
+ if (last_desc_cleaned > desc_to_clean_to)
+ nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+ desc_to_clean_to);
+ else
+ nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+ last_desc_cleaned);
+
+ txd[desc_to_clean_to].qw1 = 0;
+
+ txq->last_desc_cleaned = desc_to_clean_to;
+ txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+ return 0;
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+idpf_set_tso_ctx(struct rte_mbuf *mbuf, union idpf_tx_offload tx_offload)
+{
+ uint64_t ctx_desc = 0;
+ uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return ctx_desc;
+ }
+
+ hdr_len = tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+
+ cd_cmd = IECM_TX_CTX_DESC_TSO;
+ cd_tso_len = mbuf->pkt_len - hdr_len;
+ ctx_desc |= ((uint64_t)cd_cmd << IECM_TXD_CTX_QW1_CMD_S) |
+ ((uint64_t)cd_tso_len << IECM_TXD_CTX_QW1_TSO_LEN_S) |
+ ((uint64_t)mbuf->tso_segsz << IECM_TXD_CTX_QW1_MSS_S);
+
+ return ctx_desc;
+}
+
+/* Construct the tx flags */
+static inline uint64_t
+idpf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size)
+{
+ return rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DATA |
+ ((uint64_t)td_cmd << IECM_TXD_QW1_CMD_S) |
+ ((uint64_t)td_offset <<
+ IECM_TXD_QW1_OFFSET_S) |
+ ((uint64_t)size <<
+ IECM_TXD_QW1_TX_BUF_SZ_S));
+}
+
+/* TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct iecm_base_tx_desc *txd;
+ volatile struct iecm_base_tx_desc *txr;
+ union idpf_tx_offload tx_offload = {0};
+ struct idpf_tx_entry *txe, *txn;
+ struct idpf_tx_entry *sw_ring;
+ struct idpf_tx_queue *txq;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ uint64_t buf_dma_addr;
+ uint32_t td_offset;
+ uint64_t ol_flags;
+ uint16_t tx_last;
+ uint16_t nb_used;
+ uint16_t nb_ctx;
+ uint32_t td_cmd;
+ uint16_t tx_id;
+ uint16_t nb_tx;
+ uint16_t slen;
+
+ txq = tx_queue;
+ sw_ring = txq->sw_ring;
+ txr = txq->tx_ring;
+ tx_id = txq->tx_tail;
+ txe = &sw_ring[tx_id];
+
+ /* Check if the descriptor ring needs to be cleaned. */
+ if (txq->nb_free < txq->free_thresh)
+ (void)idpf_xmit_cleanup(txq);
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ td_cmd = 0;
+ td_offset = 0;
+
+ tx_pkt = *tx_pkts++;
+ RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = idpf_calc_context_desc(ol_flags);
+
+ /* The number of descriptors that must be allocated for
+ * a packet equals the number of segments of that packet,
+ * plus one context descriptor if needed.
+ */
+ nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+ tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+ /* Circular ring */
+ if (tx_last >= txq->nb_tx_desc)
+ tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+ " tx_first=%u tx_last=%u",
+ txq->port_id, txq->queue_id, tx_id, tx_last);
+
+ if (nb_used > txq->nb_free) {
+ if (idpf_xmit_cleanup(txq)) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ if (unlikely(nb_used > txq->rs_thresh)) {
+ while (nb_used > txq->nb_free) {
+ if (idpf_xmit_cleanup(txq)) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ }
+ }
+ }
+
+ /* According to the datasheet, bit 2 is reserved and must be
+ * set to 1.
+ */
+ td_cmd |= 0x04;
+
+ if (nb_ctx) {
+ /* Setup TX context descriptor if required */
+ volatile union iecm_flex_tx_ctx_desc *ctx_txd =
+ (volatile union iecm_flex_tx_ctx_desc *)
+ &txr[tx_id];
+
+ txn = &sw_ring[txe->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+
+ /* TSO enabled */
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+ idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+ ctx_txd);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ }
+
+ m_seg = tx_pkt;
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+
+ if (txe->mbuf)
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = m_seg;
+
+ /* Setup TX Descriptor */
+ slen = m_seg->data_len;
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ txd->qw1 = idpf_build_ctob(td_cmd, td_offset, slen);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ m_seg = m_seg->next;
+ } while (m_seg);
+
+ /* The last packet data descriptor needs End Of Packet (EOP) */
+ td_cmd |= IECM_TX_DESC_CMD_EOP;
+ txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+ txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+ if (txq->nb_used >= txq->rs_thresh) {
+ PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+ "%4u (port=%d queue=%d)",
+ tx_last, txq->port_id, txq->queue_id);
+
+ td_cmd |= IECM_TX_DESC_CMD_RS;
+
+ /* Update txq RS bit counters */
+ txq->nb_used = 0;
+ }
+
+ txd->qw1 |=
+ rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+ IECM_TXD_QW1_CMD_S);
+ }
+
+end_of_tx:
+ rte_wmb();
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+ txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+ IECM_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+
+ return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ int i, ret;
+ uint64_t ol_flags;
+ struct rte_mbuf *m;
+
+ for (i = 0; i < nb_pkts; i++) {
+ m = tx_pkts[i];
+ ol_flags = m->ol_flags;
+
+ /* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
+ if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+ rte_errno = EINVAL;
+ return i;
+ }
+ } else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+ (m->tso_segsz > IDPF_MAX_TSO_MSS)) {
+ /* MSS outside the range are considered malicious */
+ rte_errno = EINVAL;
+ return i;
+ }
+
+ if (ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) {
+ rte_errno = ENOTSUP;
+ return i;
+ }
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+ }
+
+ return i;
+}
+
+void
+idpf_set_rx_function(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+ return;
+ }
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+ return;
+ }
+}
+
+void
+idpf_set_tx_function(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_prep_pkts;
+ return;
+ }
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_prep_pkts;
+ return;
+ }
+}
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 21b6d8cb84..d9451d2e2d 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -35,6 +35,25 @@
#define IDPF_TSO_MAX_SEG UINT8_MAX
#define IDPF_TX_MAX_MTU_SEG 8
+#define IDPF_TX_CKSUM_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_OUTER_IPV6 | \
+ RTE_MBUF_F_TX_OUTER_IPV4 | \
+ RTE_MBUF_F_TX_IPV6 | \
+ RTE_MBUF_F_TX_IPV4 | \
+ RTE_MBUF_F_TX_VLAN | \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_MBUF_F_TX_SEC_OFFLOAD)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+ (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
struct idpf_rx_queue {
struct idpf_adapter *adapter; /* the adapter this queue belongs to */
struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
@@ -162,8 +181,22 @@ int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
void idpf_stop_queues(struct rte_eth_dev *dev);
+void idpf_set_rx_function(struct rte_eth_dev *dev);
+void idpf_set_tx_function(struct rte_eth_dev *dev);
+
void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
--
2.25.1
* [RFC 9/9] net/idpf: support RSS
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (7 preceding siblings ...)
2022-05-07 7:07 ` [RFC 8/9] net/idpf: support basic Rx/Tx Junfeng Guo
@ 2022-05-07 7:07 ` Junfeng Guo
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-07 7:07 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add RSS support.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 106 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 18 +++++-
drivers/net/idpf/idpf_vchnl.c | 93 +++++++++++++++++++++++++++++
3 files changed, 216 insertions(+), 1 deletion(-)
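For context, a minimal usage sketch (not part of the patch): how an application could request RSS from this PMD through the standard ethdev API. Queue counts and the rss_hf value below are illustrative only; in this revision idpf_init_rss() copies rss_key when one is supplied (generating a random key otherwise), while the hash set is programmed from IECM_DEFAULT_RSS_HASH_EXPANDED.

#include <rte_ethdev.h>

static int
enable_rss(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL,	/* let the PMD generate a random key */
				.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6,
			},
		},
	};

	/* With this patch, idpf_dev_configure() calls idpf_init_rss() when
	 * the device reports rss_caps, reading rx_adv_conf.rss_conf above.
	 */
	return rte_eth_dev_configure(port_id, 4 /* Rx queues */,
				     4 /* Tx queues */, &conf);
}

The RSS LUT is initialized round-robin across the configured Rx queues (lut[i] = i % nb_q), so a configuration like the one above simply spreads flows across all four queues.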
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1a985caf46..2a0304c18e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -85,6 +85,7 @@ idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa =
@@ -292,9 +293,96 @@ idpf_init_vport(struct rte_eth_dev *dev)
return 0;
}
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+ int ret;
+
+ ret = idpf_set_rss_key(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+ return ret;
+ }
+
+ ret = idpf_set_rss_lut(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+ return ret;
+ }
+
+ ret = idpf_set_rss_hash(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+ return ret;
+ }
+
+ return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+ struct rte_eth_rss_conf *rss_conf;
+ uint16_t i, nb_q, lut_size;
+ int ret = 0;
+
+ rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+ nb_q = vport->num_rx_q;
+
+ vport->rss_key = (uint8_t *)rte_zmalloc("rss_key",
+ vport->rss_key_size, 0);
+ if (!vport->rss_key) {
+ PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+ ret = -ENOMEM;
+ goto err_key;
+ }
+
+ lut_size = vport->rss_lut_size;
+ vport->rss_lut = (uint32_t *)rte_zmalloc("rss_lut",
+ sizeof(uint32_t) * lut_size, 0);
+ if (!vport->rss_lut) {
+ PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+ ret = -ENOMEM;
+ goto err_lut;
+ }
+
+ if (!rss_conf->rss_key) {
+ for (i = 0; i < vport->rss_key_size; i++)
+ vport->rss_key[i] = (uint8_t)rte_rand();
+ } else {
+ rte_memcpy(vport->rss_key, rss_conf->rss_key,
+ RTE_MIN(rss_conf->rss_key_len,
+ vport->rss_key_size));
+ }
+
+ for (i = 0; i < lut_size; i++)
+ vport->rss_lut[i] = i % nb_q;
+
+ vport->rss_hf = IECM_DEFAULT_RSS_HASH_EXPANDED;
+
+ ret = idpf_config_rss(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS");
+ goto err_cfg;
+ }
+
+ return ret;
+
+err_cfg:
+ rte_free(vport->rss_lut);
+ vport->rss_lut = NULL;
+err_lut:
+ rte_free(vport->rss_key);
+ vport->rss_key = NULL;
+err_key:
+ return ret;
+}
+
static int
idpf_dev_configure(struct rte_eth_dev *dev)
{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
int ret = 0;
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
@@ -319,6 +407,14 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+ if (adapter->caps->rss_caps) {
+ ret = idpf_init_rss(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init rss");
+ return ret;
+ }
+ }
+
return ret;
}
@@ -451,6 +547,16 @@ idpf_dev_close(struct rte_eth_dev *dev)
idpf_dev_stop(dev);
idpf_destroy_vport(vport);
+ if (vport->rss_lut) {
+ rte_free(vport->rss_lut);
+ vport->rss_lut = NULL;
+ }
+
+ if (vport->rss_key) {
+ rte_free(vport->rss_key);
+ vport->rss_key = NULL;
+ }
+
return 0;
}
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 5520b2d6ce..0b8e163bbb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -43,6 +43,20 @@
#define IDPF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
+#define IDPF_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+
#ifndef ETH_ADDR_LEN
#define ETH_ADDR_LEN 6
#endif
@@ -196,7 +210,9 @@ int idpf_check_api_version(struct idpf_adapter *adapter);
int idpf_get_caps(struct idpf_adapter *adapter);
int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
int idpf_destroy_vport(struct idpf_vport *vport);
-
+int idpf_set_rss_key(struct idpf_vport *vport);
+int idpf_set_rss_lut(struct idpf_vport *vport);
+int idpf_set_rss_hash(struct idpf_vport *vport);
int idpf_config_rxqs(struct idpf_vport *vport);
int idpf_config_txqs(struct idpf_vport *vport);
int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 74ed555449..fb7cee6915 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -441,6 +441,99 @@ idpf_destroy_vport(struct idpf_vport *vport)
return err;
}
+int
+idpf_set_rss_key(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_key *rss_key;
+ struct idpf_cmd_info args;
+ int len, err;
+
+ len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+ (vport->rss_key_size - 1);
+ rss_key = rte_zmalloc("rss_key", len, 0);
+ if (!rss_key)
+ return -ENOMEM;
+
+ rss_key->vport_id = vport->vport_id;
+ rss_key->key_len = vport->rss_key_size;
+ rte_memcpy(rss_key->key, vport->rss_key,
+ sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+ args.in_args = (uint8_t *)rss_key;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+ rte_free(rss_key);
+ return err;
+}
+
+int
+idpf_set_rss_lut(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_lut *rss_lut;
+ struct idpf_cmd_info args;
+ int len, err;
+
+ len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+ (vport->rss_lut_size - 1);
+ rss_lut = rte_zmalloc("rss_lut", len, 0);
+ if (!rss_lut)
+ return -ENOMEM;
+
+ rss_lut->vport_id = vport->vport_id;
+ rss_lut->lut_entries = vport->rss_lut_size;
+ rte_memcpy(rss_lut->lut, vport->rss_lut,
+ sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+ args.in_args = (uint8_t *)rss_lut;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+ rte_free(rss_lut);
+ return err;
+}
+
+int
+idpf_set_rss_hash(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_hash rss_hash;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&rss_hash, 0, sizeof(rss_hash));
+ rss_hash.ptype_groups = vport->rss_hf;
+ rss_hash.vport_id = vport->vport_id;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+ args.in_args = (uint8_t *)&rss_hash;
+ args.in_args_size = sizeof(rss_hash);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+ return err;
+}
+
#define IDPF_RX_BUF_STRIDE 64
int
idpf_config_rxqs(struct idpf_vport *vport)
--
2.25.1
* [RFC v2 0/9] add support for idpf PMD in DPDK
2022-05-07 7:07 ` [RFC 1/9] net/idpf/base: introduce base code Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 1/9] net/idpf/base: introduce base code Junfeng Guo
` (8 more replies)
0 siblings, 9 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
This is a draft of idpf (Infrastructure Data Path Function) PMD
in DPDK for Intel Device ID of 0x1452.
v2:
fix code typo in func idpf_set_tx_function.
Junfeng Guo (9):
net/idpf/base: introduce base code
net/idpf/base: add OS specific implementation
net/idpf: support device initialization
net/idpf: support queue ops
net/idpf: support getting device information
net/idpf: support packet type getting
net/idpf: support link update
net/idpf: support basic Rx/Tx
net/idpf: support RSS
drivers/net/idpf/base/iecm_alloc.h | 22 +
drivers/net/idpf/base/iecm_common.c | 359 +++
drivers/net/idpf/base/iecm_controlq.c | 662 ++++
drivers/net/idpf/base/iecm_controlq.h | 214 ++
drivers/net/idpf/base/iecm_controlq_api.h | 227 ++
drivers/net/idpf/base/iecm_controlq_setup.c | 179 ++
drivers/net/idpf/base/iecm_devids.h | 17 +
drivers/net/idpf/base/iecm_lan_pf_regs.h | 134 +
drivers/net/idpf/base/iecm_lan_txrx.h | 428 +++
drivers/net/idpf/base/iecm_lan_vf_regs.h | 114 +
drivers/net/idpf/base/iecm_osdep.h | 365 +++
drivers/net/idpf/base/iecm_prototype.h | 45 +
drivers/net/idpf/base/iecm_type.h | 106 +
drivers/net/idpf/base/meson.build | 27 +
drivers/net/idpf/base/siov_regs.h | 41 +
drivers/net/idpf/base/virtchnl.h | 2743 +++++++++++++++++
drivers/net/idpf/base/virtchnl2.h | 1411 +++++++++
drivers/net/idpf/base/virtchnl2_lan_desc.h | 603 ++++
drivers/net/idpf/base/virtchnl_inline_ipsec.h | 567 ++++
drivers/net/idpf/idpf_ethdev.c | 1030 +++++++
drivers/net/idpf/idpf_ethdev.h | 223 ++
drivers/net/idpf/idpf_logs.h | 38 +
drivers/net/idpf/idpf_rxtx.c | 2180 +++++++++++++
drivers/net/idpf/idpf_rxtx.h | 203 ++
drivers/net/idpf/idpf_vchnl.c | 900 ++++++
drivers/net/idpf/meson.build | 19 +
drivers/net/idpf/version.map | 3 +
drivers/net/meson.build | 1 +
28 files changed, 12861 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_alloc.h
create mode 100644 drivers/net/idpf/base/iecm_common.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
create mode 100644 drivers/net/idpf/base/iecm_devids.h
create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_osdep.h
create mode 100644 drivers/net/idpf/base/iecm_prototype.h
create mode 100644 drivers/net/idpf/base/iecm_type.h
create mode 100644 drivers/net/idpf/base/meson.build
create mode 100644 drivers/net/idpf/base/siov_regs.h
create mode 100644 drivers/net/idpf/base/virtchnl.h
create mode 100644 drivers/net/idpf/base/virtchnl2.h
create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
create mode 100644 drivers/net/idpf/idpf_ethdev.c
create mode 100644 drivers/net/idpf/idpf_ethdev.h
create mode 100644 drivers/net/idpf/idpf_logs.h
create mode 100644 drivers/net/idpf/idpf_rxtx.c
create mode 100644 drivers/net/idpf/idpf_rxtx.h
create mode 100644 drivers/net/idpf/idpf_vchnl.c
create mode 100644 drivers/net/idpf/meson.build
create mode 100644 drivers/net/idpf/version.map
--
2.25.1
* [RFC v2 1/9] net/idpf/base: introduce base code
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 2/9] net/idpf/base: add OS specific implementation Junfeng Guo
` (7 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Introduce base code for IDPF (Infrastructure Data Path Function) PMD.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/base/iecm_alloc.h | 22 +
drivers/net/idpf/base/iecm_common.c | 359 +++
drivers/net/idpf/base/iecm_controlq.c | 662 ++++
drivers/net/idpf/base/iecm_controlq.h | 214 ++
drivers/net/idpf/base/iecm_controlq_api.h | 227 ++
drivers/net/idpf/base/iecm_controlq_setup.c | 179 ++
drivers/net/idpf/base/iecm_devids.h | 17 +
drivers/net/idpf/base/iecm_lan_pf_regs.h | 134 +
drivers/net/idpf/base/iecm_lan_txrx.h | 428 +++
drivers/net/idpf/base/iecm_lan_vf_regs.h | 114 +
drivers/net/idpf/base/iecm_prototype.h | 45 +
drivers/net/idpf/base/iecm_type.h | 106 +
drivers/net/idpf/base/meson.build | 27 +
drivers/net/idpf/base/siov_regs.h | 41 +
drivers/net/idpf/base/virtchnl.h | 2743 +++++++++++++++++
drivers/net/idpf/base/virtchnl2.h | 1411 +++++++++
drivers/net/idpf/base/virtchnl2_lan_desc.h | 603 ++++
drivers/net/idpf/base/virtchnl_inline_ipsec.h | 567 ++++
18 files changed, 7899 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_alloc.h
create mode 100644 drivers/net/idpf/base/iecm_common.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
create mode 100644 drivers/net/idpf/base/iecm_devids.h
create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_prototype.h
create mode 100644 drivers/net/idpf/base/iecm_type.h
create mode 100644 drivers/net/idpf/base/meson.build
create mode 100644 drivers/net/idpf/base/siov_regs.h
create mode 100644 drivers/net/idpf/base/virtchnl.h
create mode 100644 drivers/net/idpf/base/virtchnl2.h
create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
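As a reading aid, a rough sketch (not part of the patch) of how the control-queue API introduced below is driven; it mirrors what iecm_init_hw() and iecm_send_msg_to_cp() in iecm_common.c do. Ring and buffer sizes are illustrative, error handling is trimmed, and the include is assumed to pull in the prototypes and types used here.

#include "iecm_prototype.h"

static int
mailbox_sketch(struct iecm_hw *hw, u8 *msg, u16 msglen)
{
	/* Illustrative mailbox sizing; iecm_init_hw() creates the default
	 * TX/RX mailbox pair and assigns hw->asq / hw->arq from it.
	 */
	struct iecm_ctlq_size ctlq_size = {
		.asq_ring_size = 64,
		.asq_buf_size = IECM_CTLQ_MAX_BUF_LEN,
		.arq_ring_size = 64,
		.arq_buf_size = IECM_CTLQ_MAX_BUF_LEN,
	};
	struct iecm_ctlq_msg ctlq_msg = { 0 };
	struct iecm_dma_mem dma_mem = { 0 };
	int status;

	status = iecm_init_hw(hw, ctlq_size);
	if (status)
		return status;

	/* Indirect (buffered) message: the payload lives in DMA memory. */
	if (!iecm_alloc_dma_mem(hw, &dma_mem, msglen))
		return IECM_ERR_NO_MEMORY;
	iecm_memcpy(dma_mem.va, msg, msglen, IECM_NONDMA_TO_DMA);

	ctlq_msg.opcode = iecm_mbq_opc_send_msg_to_pf;	/* same opcode iecm_send_msg_to_cp() uses */
	ctlq_msg.data_len = msglen;
	ctlq_msg.ctx.indirect.payload = &dma_mem;

	status = iecm_ctlq_send(hw, hw->asq, 1, &ctlq_msg);

	iecm_free_dma_mem(hw, &dma_mem);
	return status;
}

Responses would then be polled with iecm_ctlq_recv() on hw->arq, followed by iecm_ctlq_post_rx_buffs() to hand the DMA buffers back to the receive ring.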
diff --git a/drivers/net/idpf/base/iecm_alloc.h b/drivers/net/idpf/base/iecm_alloc.h
new file mode 100644
index 0000000000..7ea219c784
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_ALLOC_H_
+#define _IECM_ALLOC_H_
+
+/* Memory types */
+enum iecm_memset_type {
+ IECM_NONDMA_MEM = 0,
+ IECM_DMA_MEM
+};
+
+/* Memcpy types */
+enum iecm_memcpy_type {
+ IECM_NONDMA_TO_NONDMA = 0,
+ IECM_NONDMA_TO_DMA,
+ IECM_DMA_TO_DMA,
+ IECM_DMA_TO_NONDMA
+};
+
+#endif /* _IECM_ALLOC_H_ */
diff --git a/drivers/net/idpf/base/iecm_common.c b/drivers/net/idpf/base/iecm_common.c
new file mode 100644
index 0000000000..418fd99298
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_common.c
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#include "iecm_type.h"
+#include "iecm_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * iecm_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+int iecm_set_mac_type(struct iecm_hw *hw)
+{
+ int status = IECM_SUCCESS;
+
+ DEBUGFUNC("iecm_set_mac_type\n");
+
+ if (hw->vendor_id == IECM_INTEL_VENDOR_ID) {
+ switch (hw->device_id) {
+ case IECM_DEV_ID_PF:
+ hw->mac.type = IECM_MAC_PF;
+ break;
+ default:
+ hw->mac.type = IECM_MAC_GENERIC;
+ break;
+ }
+ } else {
+ status = IECM_ERR_DEVICE_NOT_SUPPORTED;
+ }
+
+ DEBUGOUT2("iecm_set_mac_type found mac: %d, returns: %d\n",
+ hw->mac.type, status);
+ return status;
+}
+
+/**
+ * iecm_init_hw - main initialization routine
+ * @hw: pointer to the hardware structure
+ * @ctlq_size: struct to pass ctlq size data
+ */
+int iecm_init_hw(struct iecm_hw *hw, struct iecm_ctlq_size ctlq_size)
+{
+ struct iecm_ctlq_create_info *q_info;
+ int status = IECM_SUCCESS;
+ struct iecm_ctlq_info *cq = NULL;
+
+ /* Setup initial control queues */
+ q_info = (struct iecm_ctlq_create_info *)
+ iecm_calloc(hw, 2, sizeof(struct iecm_ctlq_create_info));
+ if (!q_info)
+ return IECM_ERR_NO_MEMORY;
+
+ q_info[0].type = IECM_CTLQ_TYPE_MAILBOX_TX;
+ q_info[0].buf_size = ctlq_size.asq_buf_size;
+ q_info[0].len = ctlq_size.asq_ring_size;
+ q_info[0].id = -1; /* default queue */
+
+ if (hw->mac.type == IECM_MAC_PF) {
+ q_info[0].reg.head = PF_FW_ATQH;
+ q_info[0].reg.tail = PF_FW_ATQT;
+ q_info[0].reg.len = PF_FW_ATQLEN;
+ q_info[0].reg.bah = PF_FW_ATQBAH;
+ q_info[0].reg.bal = PF_FW_ATQBAL;
+ q_info[0].reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
+ q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
+ q_info[0].reg.head_mask = PF_FW_ATQH_ATQH_M;
+ } else {
+ q_info[0].reg.head = VF_ATQH;
+ q_info[0].reg.tail = VF_ATQT;
+ q_info[0].reg.len = VF_ATQLEN;
+ q_info[0].reg.bah = VF_ATQBAH;
+ q_info[0].reg.bal = VF_ATQBAL;
+ q_info[0].reg.len_mask = VF_ATQLEN_ATQLEN_M;
+ q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
+ q_info[0].reg.head_mask = VF_ATQH_ATQH_M;
+ }
+
+ q_info[1].type = IECM_CTLQ_TYPE_MAILBOX_RX;
+ q_info[1].buf_size = ctlq_size.arq_buf_size;
+ q_info[1].len = ctlq_size.arq_ring_size;
+ q_info[1].id = -1; /* default queue */
+
+ if (hw->mac.type == IECM_MAC_PF) {
+ q_info[1].reg.head = PF_FW_ARQH;
+ q_info[1].reg.tail = PF_FW_ARQT;
+ q_info[1].reg.len = PF_FW_ARQLEN;
+ q_info[1].reg.bah = PF_FW_ARQBAH;
+ q_info[1].reg.bal = PF_FW_ARQBAL;
+ q_info[1].reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
+ q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
+ q_info[1].reg.head_mask = PF_FW_ARQH_ARQH_M;
+ } else {
+ q_info[1].reg.head = VF_ARQH;
+ q_info[1].reg.tail = VF_ARQT;
+ q_info[1].reg.len = VF_ARQLEN;
+ q_info[1].reg.bah = VF_ARQBAH;
+ q_info[1].reg.bal = VF_ARQBAL;
+ q_info[1].reg.len_mask = VF_ARQLEN_ARQLEN_M;
+ q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
+ q_info[1].reg.head_mask = VF_ARQH_ARQH_M;
+ }
+
+ status = iecm_ctlq_init(hw, 2, q_info);
+ if (status != IECM_SUCCESS) {
+ /* TODO return error */
+ iecm_free(hw, q_info);
+ return status;
+ }
+
+ LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, iecm_ctlq_info, cq_list) {
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ hw->asq = cq;
+ else if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX)
+ hw->arq = cq;
+ }
+
+ /* TODO hardcode a mac addr for now */
+ hw->mac.addr[0] = 0x00;
+ hw->mac.addr[1] = 0x00;
+ hw->mac.addr[2] = 0x00;
+ hw->mac.addr[3] = 0x00;
+ hw->mac.addr[4] = 0x03;
+ hw->mac.addr[5] = 0x14;
+
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_send_msg_to_cp
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send a message to the CP. The message is sent asynchronously, i.e.
+ * iecm_ctlq_send() does not wait for completion before returning.
+ */
+int iecm_send_msg_to_cp(struct iecm_hw *hw, enum virtchnl_ops v_opcode,
+ int v_retval, u8 *msg, u16 msglen)
+{
+ struct iecm_ctlq_msg ctlq_msg = { 0 };
+ struct iecm_dma_mem dma_mem = { 0 };
+ int status;
+
+ ctlq_msg.opcode = iecm_mbq_opc_send_msg_to_pf;
+ ctlq_msg.func_id = 0;
+ ctlq_msg.data_len = msglen;
+ ctlq_msg.cookie.mbx.chnl_retval = v_retval;
+ ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
+
+ if (msglen > 0) {
+ dma_mem.va = (struct iecm_dma_mem *)
+ iecm_alloc_dma_mem(hw, &dma_mem, msglen);
+ if (!dma_mem.va)
+ return IECM_ERR_NO_MEMORY;
+
+ iecm_memcpy(dma_mem.va, msg, msglen, IECM_NONDMA_TO_DMA);
+ ctlq_msg.ctx.indirect.payload = &dma_mem;
+ }
+ status = iecm_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
+
+ if (dma_mem.va)
+ iecm_free_dma_mem(hw, &dma_mem);
+
+ return status;
+}
+
+/**
+ * iecm_asq_done - check if FW has processed the Admin Send Queue
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool iecm_asq_done(struct iecm_hw *hw)
+{
+ /* AQ designers suggest use of head for better
+ * timing reliability than DD bit
+ */
+ return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
+}
+
+/**
+ * iecm_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if Queue is enabled else false.
+ */
+bool iecm_check_asq_alive(struct iecm_hw *hw)
+{
+ if (hw->asq->reg.len)
+ return !!(rd32(hw, hw->asq->reg.len) &
+ PF_FW_ATQLEN_ATQENABLE_M);
+
+ return false;
+}
+
+/**
+ * iecm_clean_arq_element
+ * @hw: pointer to the hw struct
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'
+ */
+int iecm_clean_arq_element(struct iecm_hw *hw,
+ struct iecm_arq_event_info *e, u16 *pending)
+{
+ struct iecm_ctlq_msg msg = { 0 };
+ int status;
+
+ *pending = 1;
+
+ status = iecm_ctlq_recv(hw->arq, pending, &msg);
+
+ /* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
+ e->desc.opcode = msg.opcode;
+ e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
+ e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
+ e->desc.ret_val = msg.status;
+ e->desc.datalen = msg.data_len;
+ if (msg.data_len > 0) {
+ e->buf_len = msg.data_len;
+ iecm_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg.data_len,
+ IECM_DMA_TO_NONDMA);
+ }
+ return status;
+}
+
+/**
+ * iecm_deinit_hw - shutdown routine
+ * @hw: pointer to the hardware structure
+ */
+int iecm_deinit_hw(struct iecm_hw *hw)
+{
+ hw->asq = NULL;
+ hw->arq = NULL;
+
+ return iecm_ctlq_deinit(hw);
+}
+
+/**
+ * iecm_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a RESET message to the CPF. Does not wait for response from CPF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the control queue should be shut down and (optionally) reinitialized.
+ */
+int iecm_reset(struct iecm_hw *hw)
+{
+ return iecm_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
+ IECM_SUCCESS, NULL, 0);
+}
+
+/**
+ * iecm_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ */
+STATIC int iecm_get_set_rss_lut(struct iecm_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size,
+ bool set)
+{
+ /* TODO fill out command */
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+int iecm_get_rss_lut(struct iecm_hw *hw, u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size)
+{
+ return iecm_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
+}
+
+/**
+ * iecm_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+int iecm_set_rss_lut(struct iecm_hw *hw, u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size)
+{
+ return iecm_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * iecm_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key per VSI
+ */
+STATIC int iecm_get_set_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key,
+ bool set)
+{
+ /* TODO fill out command */
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+int iecm_get_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key)
+{
+ return iecm_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * iecm_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+int iecm_set_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key)
+{
+ return iecm_get_set_rss_key(hw, vsi_id, key, true);
+}
diff --git a/drivers/net/idpf/base/iecm_controlq.c b/drivers/net/idpf/base/iecm_controlq.c
new file mode 100644
index 0000000000..3a877bbf74
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq.c
@@ -0,0 +1,662 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#include "iecm_controlq.h"
+
+/**
+ * iecm_ctlq_setup_regs - initialize control queue registers
+ * @cq: pointer to the specific control queue
+ * @q_create_info: structs containing info for each queue to be initialized
+ */
+static void
+iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq,
+ struct iecm_ctlq_create_info *q_create_info)
+{
+ /* set head and tail registers in our local struct */
+ cq->reg.head = q_create_info->reg.head;
+ cq->reg.tail = q_create_info->reg.tail;
+ cq->reg.len = q_create_info->reg.len;
+ cq->reg.bah = q_create_info->reg.bah;
+ cq->reg.bal = q_create_info->reg.bal;
+ cq->reg.len_mask = q_create_info->reg.len_mask;
+ cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask;
+ cq->reg.head_mask = q_create_info->reg.head_mask;
+}
+
+/**
+ * iecm_ctlq_init_regs - Initialize control queue registers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ * @is_rxq: true if receive control queue, false otherwise
+ *
+ * Initialize registers. The caller is expected to have already initialized the
+ * descriptor ring memory and buffer memory.
+ */
+static void iecm_ctlq_init_regs(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ bool is_rxq)
+{
+ /* Update tail to post pre-allocated buffers for rx queues */
+ if (is_rxq)
+ wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));
+
+ /* For non-Mailbox control queues only TAIL needs to be set */
+ if (cq->q_id != -1)
+ return;
+
+ /* Clear Head for both send or receive */
+ wr32(hw, cq->reg.head, 0);
+
+ /* set starting point */
+ wr32(hw, cq->reg.bal, IECM_LO_DWORD(cq->desc_ring.pa));
+ wr32(hw, cq->reg.bah, IECM_HI_DWORD(cq->desc_ring.pa));
+ wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));
+}
+
+/**
+ * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
+ * @cq: pointer to the specific Control queue
+ *
+ * Record the address of the receive queue DMA buffers in the descriptors.
+ * The buffers must have been previously allocated.
+ */
+static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
+{
+ int i = 0;
+
+ for (i = 0; i < cq->ring_size; i++) {
+ struct iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i);
+ struct iecm_dma_mem *bi = cq->bi.rx_buff[i];
+
+ /* No buffer to post to descriptor, continue */
+ if (!bi)
+ continue;
+
+ desc->flags =
+ CPU_TO_LE16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+ desc->opcode = 0;
+ desc->datalen = (__le16)CPU_TO_LE16(bi->size);
+ desc->ret_val = 0;
+ desc->cookie_high = 0;
+ desc->cookie_low = 0;
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(bi->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(bi->pa));
+ desc->params.indirect.param0 = 0;
+ desc->params.indirect.param1 = 0;
+ }
+}
+
+/**
+ * iecm_ctlq_shutdown - shutdown the CQ
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for any control queue
+ */
+static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ iecm_acquire_lock(&cq->cq_lock);
+
+ if (!cq->ring_size)
+ goto shutdown_sq_out;
+
+
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_dealloc_ring_res(hw, cq);
+
+ /* Set ring_size to 0 to indicate uninitialized queue */
+ cq->ring_size = 0;
+
+shutdown_sq_out:
+ iecm_release_lock(&cq->cq_lock);
+ iecm_destroy_lock(&cq->cq_lock);
+}
+
+/**
+ * iecm_ctlq_add - add one control queue
+ * @hw: pointer to hardware struct
+ * @qinfo: info for queue to be created
+ * @cq_out: (output) double pointer to control queue to be created
+ *
+ * Allocate and initialize a control queue and add it to the control queue list.
+ * The cq parameter will be allocated/initialized and passed back to the caller
+ * if no errors occur.
+ *
+ * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add
+ */
+int iecm_ctlq_add(struct iecm_hw *hw,
+ struct iecm_ctlq_create_info *qinfo,
+ struct iecm_ctlq_info **cq_out)
+{
+ bool is_rxq = false;
+ int status = IECM_SUCCESS;
+
+ if (!qinfo->len || !qinfo->buf_size ||
+ qinfo->len > IECM_CTLQ_MAX_RING_SIZE ||
+ qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN)
+ return IECM_ERR_CFG;
+
+ *cq_out = (struct iecm_ctlq_info *)
+ iecm_calloc(hw, 1, sizeof(struct iecm_ctlq_info));
+ if (!(*cq_out))
+ return IECM_ERR_NO_MEMORY;
+
+ (*cq_out)->cq_type = qinfo->type;
+ (*cq_out)->q_id = qinfo->id;
+ (*cq_out)->buf_size = qinfo->buf_size;
+ (*cq_out)->ring_size = qinfo->len;
+
+ (*cq_out)->next_to_use = 0;
+ (*cq_out)->next_to_clean = 0;
+ (*cq_out)->next_to_post = (*cq_out)->ring_size - 1;
+
+ switch (qinfo->type) {
+ case IECM_CTLQ_TYPE_MAILBOX_RX:
+ is_rxq = true;
+ fallthrough;
+ case IECM_CTLQ_TYPE_MAILBOX_TX:
+ status = iecm_ctlq_alloc_ring_res(hw, *cq_out);
+ break;
+ default:
+ status = IECM_ERR_PARAM;
+ break;
+ }
+
+ if (status)
+ goto init_free_q;
+
+ if (is_rxq) {
+ iecm_ctlq_init_rxq_bufs(*cq_out);
+ } else {
+ /* Allocate the array of msg pointers for TX queues */
+ (*cq_out)->bi.tx_msg = (struct iecm_ctlq_msg **)
+ iecm_calloc(hw, qinfo->len,
+ sizeof(struct iecm_ctlq_msg *));
+ if (!(*cq_out)->bi.tx_msg) {
+ status = IECM_ERR_NO_MEMORY;
+ goto init_dealloc_q_mem;
+ }
+ }
+
+ iecm_ctlq_setup_regs(*cq_out, qinfo);
+
+ iecm_ctlq_init_regs(hw, *cq_out, is_rxq);
+
+ iecm_init_lock(&(*cq_out)->cq_lock);
+
+ LIST_INSERT_HEAD(&hw->cq_list_head, (*cq_out), cq_list);
+
+ return status;
+
+init_dealloc_q_mem:
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_dealloc_ring_res(hw, *cq_out);
+init_free_q:
+ iecm_free(hw, *cq_out);
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_remove - deallocate and remove specified control queue
+ * @hw: pointer to hardware struct
+ * @cq: pointer to control queue to be removed
+ */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ LIST_REMOVE(cq, cq_list);
+ iecm_ctlq_shutdown(hw, cq);
+ iecm_free(hw, cq);
+}
+
+/**
+ * iecm_ctlq_init - main initialization routine for all control queues
+ * @hw: pointer to hardware struct
+ * @num_q: number of queues to initialize
+ * @q_info: array of structs containing info for each queue to be initialized
+ *
+ * This initializes any number and any type of control queues. This is an all
+ * or nothing routine; if one fails, all previously allocated queues will be
+ * destroyed. This must be called prior to using the individual add/remove
+ * APIs.
+ */
+int iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+ struct iecm_ctlq_create_info *q_info)
+{
+ struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+ int ret_code = IECM_SUCCESS;
+ int i = 0;
+
+ LIST_INIT(&hw->cq_list_head);
+
+ for (i = 0; i < num_q; i++) {
+ struct iecm_ctlq_create_info *qinfo = q_info + i;
+
+ ret_code = iecm_ctlq_add(hw, qinfo, &cq);
+ if (ret_code)
+ goto init_destroy_qs;
+ }
+
+ return ret_code;
+
+init_destroy_qs:
+ LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
+ iecm_ctlq_info, cq_list)
+ iecm_ctlq_remove(hw, cq);
+
+ return ret_code;
+}
+
+/**
+ * iecm_ctlq_deinit - destroy all control queues
+ * @hw: pointer to hw struct
+ */
+int iecm_ctlq_deinit(struct iecm_hw *hw)
+{
+ struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+ int ret_code = IECM_SUCCESS;
+
+ LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
+ iecm_ctlq_info, cq_list)
+ iecm_ctlq_remove(hw, cq);
+
+ return ret_code;
+}
+
+/**
+ * iecm_ctlq_send - send command to Control Queue (CTQ)
+ * @hw: pointer to hw struct
+ * @cq: handle to control queue struct to send on
+ * @num_q_msg: number of messages to send on control queue
+ * @q_msg: pointer to array of queue messages to be sent
+ *
+ * The caller is expected to allocate DMAable buffers and pass them to the
+ * send routine via the q_msg struct / control queue specific data struct.
+ * The control queue will hold a reference to each send message until
+ * the completion for that message has been cleaned.
+ */
+int iecm_ctlq_send(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ u16 num_q_msg, struct iecm_ctlq_msg q_msg[])
+{
+ struct iecm_ctlq_desc *desc;
+ int num_desc_avail = 0;
+ int status = IECM_SUCCESS;
+ int i = 0;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ /* Ensure there are enough descriptors to send all messages */
+ num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq);
+ if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
+ status = IECM_ERR_CTLQ_FULL;
+ goto sq_send_command_out;
+ }
+
+ for (i = 0; i < num_q_msg; i++) {
+ struct iecm_ctlq_msg *msg = &q_msg[i];
+ u64 msg_cookie;
+
+ desc = IECM_CTLQ_DESC(cq, cq->next_to_use);
+
+ desc->opcode = CPU_TO_LE16(msg->opcode);
+ desc->pfid_vfid = CPU_TO_LE16(msg->func_id);
+
+ msg_cookie = *(u64 *)&msg->cookie;
+ desc->cookie_high =
+ CPU_TO_LE32(IECM_HI_DWORD(msg_cookie));
+ desc->cookie_low =
+ CPU_TO_LE32(IECM_LO_DWORD(msg_cookie));
+
+ desc->flags = CPU_TO_LE16((msg->host_id & IECM_HOST_ID_MASK) <<
+ IECM_CTLQ_FLAG_HOST_ID_S);
+ if (msg->data_len) {
+ struct iecm_dma_mem *buff = msg->ctx.indirect.payload;
+
+ desc->datalen |= CPU_TO_LE16(msg->data_len);
+ desc->flags |= CPU_TO_LE16(IECM_CTLQ_FLAG_BUF);
+ desc->flags |= CPU_TO_LE16(IECM_CTLQ_FLAG_RD);
+
+ /* Update the address values in the desc with the pa
+ * value for respective buffer
+ */
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(buff->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(buff->pa));
+
+ iecm_memcpy(&desc->params, msg->ctx.indirect.context,
+ IECM_INDIRECT_CTX_SIZE, IECM_NONDMA_TO_DMA);
+ } else {
+ iecm_memcpy(&desc->params, msg->ctx.direct,
+ IECM_DIRECT_CTX_SIZE, IECM_NONDMA_TO_DMA);
+ }
+
+ /* Store buffer info */
+ cq->bi.tx_msg[cq->next_to_use] = msg;
+
+ (cq->next_to_use)++;
+ if (cq->next_to_use == cq->ring_size)
+ cq->next_to_use = 0;
+ }
+
+ /* Force memory write to complete before letting hardware
+ * know that there are new descriptors to fetch.
+ */
+ iecm_wmb();
+
+ wr32(hw, cq->reg.tail, cq->next_to_use);
+
+sq_send_command_out:
+ iecm_release_lock(&cq->cq_lock);
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors. The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+int iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count,
+ struct iecm_ctlq_msg *msg_status[])
+{
+ struct iecm_ctlq_desc *desc;
+ u16 i = 0, num_to_clean;
+ u16 ntc, desc_err;
+ int ret = IECM_SUCCESS;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ if (*clean_count == 0)
+ return IECM_SUCCESS;
+ if (*clean_count > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *clean_count;
+
+ for (i = 0; i < num_to_clean; i++) {
+ /* Fetch next descriptor and check if marked as done */
+ desc = IECM_CTLQ_DESC(cq, ntc);
+ if (!(LE16_TO_CPU(desc->flags) & IECM_CTLQ_FLAG_DD))
+ break;
+
+ desc_err = LE16_TO_CPU(desc->ret_val);
+ if (desc_err) {
+ /* strip off FW internal code */
+ desc_err &= 0xff;
+ }
+
+ msg_status[i] = cq->bi.tx_msg[ntc];
+ msg_status[i]->status = desc_err;
+
+ cq->bi.tx_msg[ntc] = NULL;
+
+ /* Zero out any stale data */
+ iecm_memset(desc, 0, sizeof(*desc), IECM_DMA_MEM);
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ iecm_release_lock(&cq->cq_lock);
+
+ /* Return number of descriptors actually cleaned */
+ *clean_count = i;
+
+ return ret;
+}
+
+/**
+ * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue handle
+ * @buff_count: (input|output) input is number of buffers caller is trying to
+ * return; output is number of buffers that were not posted
+ * @buffs: array of pointers to dma mem structs to be given to hardware
+ *
+ * Caller uses this function to return DMA buffers to the descriptor ring after
+ * consuming them; buff_count will be the number of buffers.
+ *
+ * Note: this function needs to be called after a receive call even
+ * if there are no DMA buffers to be returned, i.e. with buff_count = 0
+ * and buffs = NULL, to support direct commands.
+ */
+int iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ u16 *buff_count, struct iecm_dma_mem **buffs)
+{
+ struct iecm_ctlq_desc *desc;
+ u16 ntp = cq->next_to_post;
+ bool buffs_avail = false;
+ u16 tbp = ntp + 1;
+ int status = IECM_SUCCESS;
+ int i = 0;
+
+ if (*buff_count > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ if (*buff_count > 0)
+ buffs_avail = true;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ if (tbp == cq->next_to_clean)
+ /* Nothing to do */
+ goto post_buffs_out;
+
+ /* Post buffers for as many as provided or up until the last one used */
+ while (ntp != cq->next_to_clean) {
+ desc = IECM_CTLQ_DESC(cq, ntp);
+
+ if (cq->bi.rx_buff[ntp])
+ goto fill_desc;
+ if (!buffs_avail) {
+ /* If the caller hasn't given us any buffers or
+ * there are none left, search the ring itself
+ * for an available buffer to move to this
+ * entry starting at the next entry in the ring
+ */
+ tbp = ntp + 1;
+
+ /* Wrap ring if necessary */
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ while (tbp != cq->next_to_clean) {
+ if (cq->bi.rx_buff[tbp]) {
+ cq->bi.rx_buff[ntp] =
+ cq->bi.rx_buff[tbp];
+ cq->bi.rx_buff[tbp] = NULL;
+
+ /* Found a buffer, no need to
+ * search anymore
+ */
+ break;
+ }
+
+ /* Wrap ring if necessary */
+ tbp++;
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+ }
+
+ if (tbp == cq->next_to_clean)
+ goto post_buffs_out;
+ } else {
+ /* Give back pointer to DMA buffer */
+ cq->bi.rx_buff[ntp] = buffs[i];
+ i++;
+
+ if (i >= *buff_count)
+ buffs_avail = false;
+ }
+
+fill_desc:
+ desc->flags =
+ CPU_TO_LE16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+
+ /* Post buffers to descriptor */
+ desc->datalen = CPU_TO_LE16(cq->bi.rx_buff[ntp]->size);
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(cq->bi.rx_buff[ntp]->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(cq->bi.rx_buff[ntp]->pa));
+
+ ntp++;
+ if (ntp == cq->ring_size)
+ ntp = 0;
+ }
+
+post_buffs_out:
+ /* Only update tail if buffers were actually posted */
+ if (cq->next_to_post != ntp) {
+ if (ntp)
+ /* Update next_to_post to ntp - 1 since current ntp
+ * will not have a buffer
+ */
+ cq->next_to_post = ntp - 1;
+ else
+ /* Wrap to the end of the ring since current ntp is 0 */
+ cq->next_to_post = cq->ring_size - 1;
+
+ wr32(hw, cq->reg.tail, cq->next_to_post);
+ }
+
+ iecm_release_lock(&cq->cq_lock);
+
+ /* return the number of buffers that were not posted */
+ *buff_count = *buff_count - i;
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_recv - receive control queue message call back
+ * @cq: pointer to control queue handle to receive on
+ * @num_q_msg: (input|output) input number of messages that should be received;
+ * output number of messages actually received
+ * @q_msg: (output) array of received control queue messages on this q;
+ * needs to be pre-allocated by caller for as many messages as requested
+ *
+ * Called by interrupt handler or polling mechanism. Caller is expected
+ * to free buffers
+ */
+int iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg,
+ struct iecm_ctlq_msg *q_msg)
+{
+ u16 num_to_clean, ntc, ret_val, flags;
+ struct iecm_ctlq_desc *desc;
+ int ret_code = IECM_SUCCESS;
+ u16 i = 0;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ if (*num_q_msg == 0)
+ return IECM_SUCCESS;
+ else if (*num_q_msg > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ /* take the lock before we start messing with the ring */
+ iecm_acquire_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *num_q_msg;
+
+ for (i = 0; i < num_to_clean; i++) {
+ u64 msg_cookie;
+
+ /* Fetch next descriptor and check if marked as done */
+ desc = IECM_CTLQ_DESC(cq, ntc);
+ flags = LE16_TO_CPU(desc->flags);
+
+ if (!(flags & IECM_CTLQ_FLAG_DD))
+ break;
+
+ ret_val = LE16_TO_CPU(desc->ret_val);
+
+ q_msg[i].vmvf_type = (flags &
+ (IECM_CTLQ_FLAG_FTYPE_VM |
+ IECM_CTLQ_FLAG_FTYPE_PF)) >>
+ IECM_CTLQ_FLAG_FTYPE_S;
+
+ if (flags & IECM_CTLQ_FLAG_ERR)
+ ret_code = IECM_ERR_CTLQ_ERROR;
+
+ msg_cookie = (u64)LE32_TO_CPU(desc->cookie_high) << 32;
+ msg_cookie |= (u64)LE32_TO_CPU(desc->cookie_low);
+ iecm_memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64),
+ IECM_NONDMA_TO_NONDMA);
+
+ q_msg[i].opcode = LE16_TO_CPU(desc->opcode);
+ q_msg[i].data_len = LE16_TO_CPU(desc->datalen);
+ q_msg[i].status = ret_val;
+
+ if (desc->datalen) {
+ iecm_memcpy(q_msg[i].ctx.indirect.context,
+ &desc->params.indirect,
+ IECM_INDIRECT_CTX_SIZE,
+ IECM_DMA_TO_NONDMA);
+
+ /* Assign pointer to dma buffer to ctlq_msg array
+ * to be given to upper layer
+ */
+ q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
+
+ /* Zero out pointer to DMA buffer info;
+ * will be repopulated by post buffers API
+ */
+ cq->bi.rx_buff[ntc] = NULL;
+ } else {
+ iecm_memcpy(q_msg[i].ctx.direct,
+ desc->params.raw,
+ IECM_DIRECT_CTX_SIZE,
+ IECM_DMA_TO_NONDMA);
+ }
+
+ /* Zero out stale data in descriptor */
+ iecm_memset(desc, 0, sizeof(struct iecm_ctlq_desc),
+ IECM_DMA_MEM);
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ iecm_release_lock(&cq->cq_lock);
+
+ *num_q_msg = i;
+ if (*num_q_msg == 0)
+ ret_code = IECM_ERR_CTLQ_NO_WORK;
+
+ return ret_code;
+}
diff --git a/drivers/net/idpf/base/iecm_controlq.h b/drivers/net/idpf/base/iecm_controlq.h
new file mode 100644
index 0000000000..0964146b49
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq.h
@@ -0,0 +1,214 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_CONTROLQ_H_
+#define _IECM_CONTROLQ_H_
+
+#ifdef __KERNEL__
+#include <linux/slab.h>
+#endif
+
+#ifndef __KERNEL__
+#include "iecm_osdep.h"
+#include "iecm_alloc.h"
+/* This is used to explicitly annotate when a switch case falls through to the
+ * next case.
+ */
+#define fallthrough do {} while (0)
+#endif
+#include "iecm_controlq_api.h"
+
+/* Maximum buffer lengths for all control queue types */
+#define IECM_CTLQ_MAX_RING_SIZE 1024
+#define IECM_CTLQ_MAX_BUF_LEN 4096
+
+#define IECM_CTLQ_DESC(R, i) \
+ (&(((struct iecm_ctlq_desc *)((R)->desc_ring.va))[i]))
+
+#define IECM_CTLQ_DESC_UNUSED(R) \
+ (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
+ (R)->next_to_clean - (R)->next_to_use - 1)
+
+#ifndef __KERNEL__
+/* Data type manipulation macros. */
+#define IECM_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define IECM_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF))
+#define IECM_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF))
+#define IECM_LO_WORD(x) ((u16)((x) & 0xFFFF))
+
+#endif
+/* Control Queue default settings */
+#define IECM_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */
+
+struct iecm_ctlq_desc {
+ __le16 flags;
+ __le16 opcode;
+ __le16 datalen; /* 0 for direct commands */
+ union {
+ __le16 ret_val;
+ __le16 pfid_vfid;
+#define IECM_CTLQ_DESC_VF_ID_S 0
+#define IECM_CTLQ_DESC_VF_ID_M (0x7FF << IECM_CTLQ_DESC_VF_ID_S)
+#define IECM_CTLQ_DESC_PF_ID_S 11
+#define IECM_CTLQ_DESC_PF_ID_M (0x1F << IECM_CTLQ_DESC_PF_ID_S)
+ };
+ __le32 cookie_high;
+ __le32 cookie_low;
+ union {
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 param2;
+ __le32 param3;
+ } direct;
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 addr_high;
+ __le32 addr_low;
+ } indirect;
+ u8 raw[16];
+ } params;
+};
+
+/* Flags sub-structure
+ * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| HOST_ID |
+ */
+/* command flags and offsets */
+#define IECM_CTLQ_FLAG_DD_S 0
+#define IECM_CTLQ_FLAG_CMP_S 1
+#define IECM_CTLQ_FLAG_ERR_S 2
+#define IECM_CTLQ_FLAG_FTYPE_S 6
+#define IECM_CTLQ_FLAG_RD_S 10
+#define IECM_CTLQ_FLAG_VFC_S 11
+#define IECM_CTLQ_FLAG_BUF_S 12
+#define IECM_CTLQ_FLAG_HOST_ID_S 13
+
+#define IECM_CTLQ_FLAG_DD BIT(IECM_CTLQ_FLAG_DD_S) /* 0x1 */
+#define IECM_CTLQ_FLAG_CMP BIT(IECM_CTLQ_FLAG_CMP_S) /* 0x2 */
+#define IECM_CTLQ_FLAG_ERR BIT(IECM_CTLQ_FLAG_ERR_S) /* 0x4 */
+#define IECM_CTLQ_FLAG_FTYPE_VM BIT(IECM_CTLQ_FLAG_FTYPE_S) /* 0x40 */
+#define IECM_CTLQ_FLAG_FTYPE_PF BIT(IECM_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */
+#define IECM_CTLQ_FLAG_RD BIT(IECM_CTLQ_FLAG_RD_S) /* 0x400 */
+#define IECM_CTLQ_FLAG_VFC BIT(IECM_CTLQ_FLAG_VFC_S) /* 0x800 */
+#define IECM_CTLQ_FLAG_BUF BIT(IECM_CTLQ_FLAG_BUF_S) /* 0x1000 */
+
+/* Host ID is a special field that has 3b and not a 1b flag */
+#define IECM_CTLQ_FLAG_HOST_ID_M MAKE_MASK(0x7000UL, IECM_CTLQ_FLAG_HOST_ID_S)
+
+struct iecm_mbxq_desc {
+ u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */
+ u32 chnl_opcode; /* avoid confusion with desc->opcode */
+ u32 chnl_retval; /* ditto for desc->retval */
+ u32 pf_vf_id; /* used by CP when sending to PF */
+};
+
+enum iecm_mac_type {
+ IECM_MAC_UNKNOWN = 0,
+ IECM_MAC_PF,
+ IECM_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct iecm_mac_info {
+ enum iecm_mac_type type;
+ u8 addr[ETH_ALEN];
+ u8 perm_addr[ETH_ALEN];
+};
+
+#define IECM_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum iecm_bus_type {
+ iecm_bus_type_unknown = 0,
+ iecm_bus_type_pci,
+ iecm_bus_type_pcix,
+ iecm_bus_type_pci_express,
+ iecm_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum iecm_bus_speed {
+ iecm_bus_speed_unknown = 0,
+ iecm_bus_speed_33 = 33,
+ iecm_bus_speed_66 = 66,
+ iecm_bus_speed_100 = 100,
+ iecm_bus_speed_120 = 120,
+ iecm_bus_speed_133 = 133,
+ iecm_bus_speed_2500 = 2500,
+ iecm_bus_speed_5000 = 5000,
+ iecm_bus_speed_8000 = 8000,
+ iecm_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum iecm_bus_width {
+ iecm_bus_width_unknown = 0,
+ iecm_bus_width_pcie_x1 = 1,
+ iecm_bus_width_pcie_x2 = 2,
+ iecm_bus_width_pcie_x4 = 4,
+ iecm_bus_width_pcie_x8 = 8,
+ iecm_bus_width_32 = 32,
+ iecm_bus_width_64 = 64,
+ iecm_bus_width_reserved
+};
+
+/* Bus parameters */
+struct iecm_bus_info {
+ enum iecm_bus_speed speed;
+ enum iecm_bus_width width;
+ enum iecm_bus_type type;
+
+ u16 func;
+ u16 device;
+ u16 lan_id;
+ u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct iecm_hw_func_caps {
+ u32 num_alloc_vfs;
+ u32 vf_base_id;
+};
+
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct iecm_hw {
+ u8 *hw_addr;
+ u64 hw_addr_len;
+ void *back;
+
+ /* control queue - send and receive */
+ struct iecm_ctlq_info *asq;
+ struct iecm_ctlq_info *arq;
+
+ /* subsystem structs */
+ struct iecm_mac_info mac;
+ struct iecm_bus_info bus;
+ struct iecm_hw_func_caps func_caps;
+
+ /* pci info */
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+ bool adapter_stopped;
+
+ LIST_HEAD_TYPE(list_head, iecm_ctlq_info) cq_list_head;
+};
+
+int iecm_ctlq_alloc_ring_res(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq);
+
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq);
+
+/* prototype for functions used for dynamic memory allocation */
+void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem,
+ u64 size);
+void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem);
+#endif /* _IECM_CONTROLQ_H_ */
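
For reference, the flag bits above are meant to be combined with plain bit
arithmetic; a minimal sketch of composing the flags for a command that
attaches a DMA buffer the hardware should read, and of checking a written-back
descriptor (the example takes a flags value already converted to host order,
since the descriptor fields themselves are little-endian):

/* Sketch: BUF | RD marks an indirect command with a readable DMA buffer. */
static inline u16 example_cmd_flags(void)
{
	return IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD;	/* 0x1400 */
}

/* Sketch: DD means the descriptor was written back; ERR means it failed. */
static inline bool example_desc_ok(u16 flags)
{
	return (flags & IECM_CTLQ_FLAG_DD) && !(flags & IECM_CTLQ_FLAG_ERR);
}
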
diff --git a/drivers/net/idpf/base/iecm_controlq_api.h b/drivers/net/idpf/base/iecm_controlq_api.h
new file mode 100644
index 0000000000..27511ffd51
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq_api.h
@@ -0,0 +1,227 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_CONTROLQ_API_H_
+#define _IECM_CONTROLQ_API_H_
+
+#ifdef __KERNEL__
+#include "iecm_mem.h"
+#else /* !__KERNEL__ */
+/* Error Codes */
+/* Linux kernel driver can't directly use these. Instead, they are mapped to
+ * linux compatible error codes which get translated in the build script.
+ */
+#define IECM_SUCCESS 0
+#define IECM_ERR_PARAM -53 /* -EBADR */
+#define IECM_ERR_NOT_IMPL -95 /* -EOPNOTSUPP */
+#define IECM_ERR_NOT_READY -16 /* -EBUSY */
+#define IECM_ERR_BAD_PTR -14 /* -EFAULT */
+#define IECM_ERR_INVAL_SIZE -90 /* -EMSGSIZE */
+#define IECM_ERR_DEVICE_NOT_SUPPORTED -19 /* -ENODEV */
+#define IECM_ERR_FW_API_VER -13 /* -EACCES */
+#define IECM_ERR_NO_MEMORY -12 /* -ENOMEM */
+#define IECM_ERR_CFG -22 /* -EINVAL */
+#define IECM_ERR_OUT_OF_RANGE -34 /* -ERANGE */
+#define IECM_ERR_ALREADY_EXISTS -17 /* -EEXIST */
+#define IECM_ERR_DOES_NOT_EXIST -6 /* -ENXIO */
+#define IECM_ERR_IN_USE -114 /* -EALREADY */
+#define IECM_ERR_MAX_LIMIT -109 /* -ETOOMANYREFS */
+#define IECM_ERR_RESET_ONGOING -104 /* -ECONNRESET */
+
+/* CRQ/CSQ specific error codes */
+#define IECM_ERR_CTLQ_ERROR -74 /* -EBADMSG */
+#define IECM_ERR_CTLQ_TIMEOUT -110 /* -ETIMEDOUT */
+#define IECM_ERR_CTLQ_FULL -28 /* -ENOSPC */
+#define IECM_ERR_CTLQ_NO_WORK -42 /* -ENOMSG */
+#define IECM_ERR_CTLQ_EMPTY -105 /* -ENOBUFS */
+#endif /* !__KERNEL__ */
+
+struct iecm_hw;
+
+/* Used for queue init, response and events */
+enum iecm_ctlq_type {
+ IECM_CTLQ_TYPE_MAILBOX_TX = 0,
+ IECM_CTLQ_TYPE_MAILBOX_RX = 1,
+ IECM_CTLQ_TYPE_CONFIG_TX = 2,
+ IECM_CTLQ_TYPE_CONFIG_RX = 3,
+ IECM_CTLQ_TYPE_EVENT_RX = 4,
+ IECM_CTLQ_TYPE_RDMA_TX = 5,
+ IECM_CTLQ_TYPE_RDMA_RX = 6,
+ IECM_CTLQ_TYPE_RDMA_COMPL = 7
+};
+
+/*
+ * Generic Control Queue Structures
+ */
+
+struct iecm_ctlq_reg {
+ /* used for queue tracking */
+ u32 head;
+ u32 tail;
+ /* Below applies only to default mb (if present) */
+ u32 len;
+ u32 bah;
+ u32 bal;
+ u32 len_mask;
+ u32 len_ena_mask;
+ u32 head_mask;
+};
+
+/* Generic queue msg structure */
+struct iecm_ctlq_msg {
+ u8 vmvf_type; /* represents the source of the message on recv */
+#define IECM_VMVF_TYPE_VF 0
+#define IECM_VMVF_TYPE_VM 1
+#define IECM_VMVF_TYPE_PF 2
+ u8 host_id;
+ /* 3b field used only when sending a message to peer - to be used in
+ * combination with target func_id to route the message
+ */
+#define IECM_HOST_ID_MASK 0x7
+
+ u16 opcode;
+ u16 data_len; /* data_len = 0 when no payload is attached */
+ union {
+ u16 func_id; /* when sending a message */
+ u16 status; /* when receiving a message */
+ };
+ union {
+ struct {
+ u32 chnl_retval;
+ u32 chnl_opcode;
+ } mbx;
+ } cookie;
+ union {
+#define IECM_DIRECT_CTX_SIZE 16
+#define IECM_INDIRECT_CTX_SIZE 8
+ /* 16 bytes of context can be provided or 8 bytes of context
+ * plus the address of a DMA buffer
+ */
+ u8 direct[IECM_DIRECT_CTX_SIZE];
+ struct {
+ u8 context[IECM_INDIRECT_CTX_SIZE];
+ struct iecm_dma_mem *payload;
+ } indirect;
+ } ctx;
+};
+
+/* Generic queue info structures */
+/* MB, CONFIG and EVENT q do not have extended info */
+struct iecm_ctlq_create_info {
+ enum iecm_ctlq_type type;
+ int id; /* absolute queue offset passed as input
+ * -1 for default mailbox if present
+ */
+ u16 len; /* Queue length passed as input */
+ u16 buf_size; /* buffer size passed as input */
+ u64 base_address; /* output, HPA of the Queue start */
+ struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+
+ int ext_info_size;
+ void *ext_info; /* Specific to q type */
+};
+
+/* Control Queue information */
+struct iecm_ctlq_info {
+ LIST_ENTRY_TYPE(iecm_ctlq_info) cq_list;
+
+ enum iecm_ctlq_type cq_type;
+ int q_id;
+ iecm_lock cq_lock; /* queue lock
+ * iecm_lock is defined in OSdep.h
+ */
+ /* used for interrupt processing */
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 next_to_post; /* starting descriptor to post buffers
+					 * to after receive
+ */
+
+ struct iecm_dma_mem desc_ring; /* descriptor ring memory
+ * iecm_dma_mem is defined in OSdep.h
+ */
+ union {
+ struct iecm_dma_mem **rx_buff;
+ struct iecm_ctlq_msg **tx_msg;
+ } bi;
+
+ u16 buf_size; /* queue buffer size */
+ u16 ring_size; /* Number of descriptors */
+ struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+};
+
+/* PF/VF mailbox commands */
+enum iecm_mbx_opc {
+ /* iecm_mbq_opc_send_msg_to_pf:
+ * usage: used by PF or VF to send a message to its CPF
+ * target: RX queue and function ID of parent PF taken from HW
+ */
+ iecm_mbq_opc_send_msg_to_pf = 0x0801,
+
+ /* iecm_mbq_opc_send_msg_to_vf:
+ * usage: used by PF to send message to a VF
+ * target: VF control queue ID must be specified in descriptor
+ */
+ iecm_mbq_opc_send_msg_to_vf = 0x0802,
+
+ /* iecm_mbq_opc_send_msg_to_peer_pf:
+ * usage: used by any function to send message to any peer PF
+ * target: RX queue and host of parent PF taken from HW
+ */
+ iecm_mbq_opc_send_msg_to_peer_pf = 0x0803,
+
+ /* iecm_mbq_opc_send_msg_to_peer_drv:
+ * usage: used by any function to send message to any peer driver
+	 * target: RX queue and target host must be specified in descriptor
+ */
+ iecm_mbq_opc_send_msg_to_peer_drv = 0x0804,
+};
+
+/*
+ * API supported for control queue management
+ */
+
+/* Will init all required q including default mb. "q_info" is an array of
+ * create_info structs equal to the number of control queues to be created.
+ */
+int iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+ struct iecm_ctlq_create_info *q_info);
+
+/* Allocate and initialize a single control queue, which will be added to the
+ * control queue list; returns a handle to the created control queue
+ */
+int iecm_ctlq_add(struct iecm_hw *hw,
+ struct iecm_ctlq_create_info *qinfo,
+ struct iecm_ctlq_info **cq);
+
+/* Deinitialize and deallocate a single control queue */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq);
+
+/* Sends messages to HW and will also free the buffer */
+int iecm_ctlq_send(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq,
+ u16 num_q_msg,
+ struct iecm_ctlq_msg q_msg[]);
+
+/* Receives messages; called from the interrupt handler or from polling
+ * initiated by the app/process. The caller is expected to free the buffers
+ */
+int iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg,
+ struct iecm_ctlq_msg *q_msg);
+
+/* Reclaims send descriptors on HW write back */
+int iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count,
+ struct iecm_ctlq_msg *msg_status[]);
+
+/* Indicate RX buffers are done being processed */
+int iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq,
+ u16 *buff_count,
+ struct iecm_dma_mem **buffs);
+
+/* Will destroy all q including the default mb */
+int iecm_ctlq_deinit(struct iecm_hw *hw);
+
+#endif /* _IECM_CONTROLQ_API_H_ */
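
A minimal usage sketch of the API above, showing how a driver could bring up
the default mailbox pair and post a single message. The ring length, buffer
size and the omitted register offsets are illustrative placeholders, and the
use of hw->asq assumes the caller recorded the TX mailbox handle during setup;
none of this is mandated by the patch itself:

/* Sketch only: create the default TX/RX mailbox and send one message. */
static int example_mbx_init_and_send(struct iecm_hw *hw)
{
	struct iecm_ctlq_create_info q_info[2] = {
		{
			.type = IECM_CTLQ_TYPE_MAILBOX_TX,
			.id = -1,		/* default mailbox */
			.len = 64,		/* example ring length */
			.buf_size = 4096,	/* example buffer size */
		},
		{
			.type = IECM_CTLQ_TYPE_MAILBOX_RX,
			.id = -1,
			.len = 64,
			.buf_size = 4096,
		},
	};
	struct iecm_ctlq_msg msg = {
		.opcode = iecm_mbq_opc_send_msg_to_pf,
		.data_len = 0,	/* no payload attached */
	};
	int err;

	err = iecm_ctlq_init(hw, 2, q_info);
	if (err)
		return err;

	/* assumes hw->asq was set to the TX mailbox handle during setup */
	return iecm_ctlq_send(hw, hw->asq, 1, &msg);
}
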
diff --git a/drivers/net/idpf/base/iecm_controlq_setup.c b/drivers/net/idpf/base/iecm_controlq_setup.c
new file mode 100644
index 0000000000..eb6cf7651d
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq_setup.c
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+
+#include "iecm_controlq.h"
+
+
+/**
+ * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ */
+static int
+iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc);
+
+ cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size);
+ if (!cq->desc_ring.va)
+ return IECM_ERR_NO_MEMORY;
+
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Allocate the buffer head for all control queues, and if it's a receive
+ * queue, allocate DMA buffers
+ */
+static int iecm_ctlq_alloc_bufs(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ int i = 0;
+
+ /* Do not allocate DMA buffers for transmit queues */
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ return IECM_SUCCESS;
+
+ /* We'll be allocating the buffer info memory first, then we can
+ * allocate the mapped buffers for the event processing
+ */
+ cq->bi.rx_buff = (struct iecm_dma_mem **)
+ iecm_calloc(hw, cq->ring_size,
+ sizeof(struct iecm_dma_mem *));
+ if (!cq->bi.rx_buff)
+ return IECM_ERR_NO_MEMORY;
+
+ /* allocate the mapped buffers (except for the last one) */
+ for (i = 0; i < cq->ring_size - 1; i++) {
+ struct iecm_dma_mem *bi;
+ int num = 1; /* number of iecm_dma_mem to be allocated */
+
+ cq->bi.rx_buff[i] = (struct iecm_dma_mem *)iecm_calloc(hw, num,
+ sizeof(struct iecm_dma_mem));
+ if (!cq->bi.rx_buff[i])
+ goto unwind_alloc_cq_bufs;
+
+ bi = cq->bi.rx_buff[i];
+
+ bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size);
+ if (!bi->va) {
+ /* unwind will not free the failed entry */
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ goto unwind_alloc_cq_bufs;
+ }
+ }
+
+ return IECM_SUCCESS;
+
+unwind_alloc_cq_bufs:
+ /* don't try to free the one that failed... */
+ i--;
+ for (; i >= 0; i--) {
+ iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ }
+ iecm_free(hw, cq->bi.rx_buff);
+
+ return IECM_ERR_NO_MEMORY;
+}
+
+/**
+ * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ */
+static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ iecm_free_dma_mem(hw, &cq->desc_ring);
+}
+
+/**
+ * iecm_ctlq_free_bufs - Free CQ buffer info elements
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
+ * queues. The upper layers are expected to manage freeing of TX DMA buffers
+ */
+static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ void *bi;
+
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) {
+ int i;
+
+ /* free DMA buffers for rx queues*/
+ for (i = 0; i < cq->ring_size; i++) {
+ if (cq->bi.rx_buff[i]) {
+ iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ }
+ }
+
+ bi = (void *)cq->bi.rx_buff;
+ } else {
+ bi = (void *)cq->bi.tx_msg;
+ }
+
+ /* free the buffer header */
+ iecm_free(hw, bi);
+}
+
+/**
+ * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the memory used by the ring, buffers and other related structures
+ */
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_free_bufs(hw, cq);
+ iecm_ctlq_free_desc_ring(hw, cq);
+}
+
+/**
+ * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue struct
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+int iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ int ret_code;
+
+ /* verify input for valid configuration */
+ if (!cq->ring_size || !cq->buf_size)
+ return IECM_ERR_CFG;
+
+ /* allocate the ring memory */
+ ret_code = iecm_ctlq_alloc_desc_ring(hw, cq);
+ if (ret_code)
+ return ret_code;
+
+ /* allocate buffers in the rings */
+ ret_code = iecm_ctlq_alloc_bufs(hw, cq);
+ if (ret_code)
+ goto iecm_init_cq_free_ring;
+
+ /* success! */
+ return IECM_SUCCESS;
+
+iecm_init_cq_free_ring:
+ iecm_free_dma_mem(hw, &cq->desc_ring);
+ return ret_code;
+}
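
As a usage sketch of the two exported helpers above: cq_type, ring_size and
buf_size must be filled in before the allocation call, and (per the comment on
iecm_ctlq_alloc_ring_res()) the queue lock must not be held around it. The
sizes below are illustrative only:

/* Sketch: allocate, use and release a control queue's ring and buffers. */
static int example_ring_setup(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
{
	int err;

	cq->cq_type = IECM_CTLQ_TYPE_MAILBOX_RX; /* RX queues also get DMA bufs */
	cq->ring_size = 64;	/* example descriptor count */
	cq->buf_size = 4096;	/* example per-buffer size */

	err = iecm_ctlq_alloc_ring_res(hw, cq);
	if (err)
		return err;	/* IECM_ERR_CFG or IECM_ERR_NO_MEMORY */

	/* ... queue is used here ... */

	iecm_ctlq_dealloc_ring_res(hw, cq);
	return IECM_SUCCESS;
}
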
diff --git a/drivers/net/idpf/base/iecm_devids.h b/drivers/net/idpf/base/iecm_devids.h
new file mode 100644
index 0000000000..839214cb40
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_DEVIDS_H_
+#define _IECM_DEVIDS_H_
+
+/* Vendor ID */
+#define IECM_INTEL_VENDOR_ID 0x8086
+
+/* Device IDs */
+#define IECM_DEV_ID_PF 0x1452
+
+
+
+
+#endif /* _IECM_DEVIDS_H_ */
diff --git a/drivers/net/idpf/base/iecm_lan_pf_regs.h b/drivers/net/idpf/base/iecm_lan_pf_regs.h
new file mode 100644
index 0000000000..c6c460dab0
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_pf_regs.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_PF_REGS_H_
+#define _IECM_LAN_PF_REGS_H_
+
+
+/* Receive queues */
+#define PF_QRX_BASE 0x00000000
+#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000)))
+#define PF_QRX_BUFFQ_BASE 0x03000000
+#define PF_QRX_BUFFQ_TAIL(_QRX) (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
+
+/* Transmit queues */
+#define PF_QTX_BASE 0x05000000
+#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000))
+
+
+/* Control(PF Mailbox) Queue */
+#define PF_FW_BASE 0x08400000
+
+#define PF_FW_ARQBAL (PF_FW_BASE)
+#define PF_FW_ARQBAH (PF_FW_BASE + 0x4)
+#define PF_FW_ARQLEN (PF_FW_BASE + 0x8)
+#define PF_FW_ARQLEN_ARQLEN_S 0
+#define PF_FW_ARQLEN_ARQLEN_M MAKEMASK(0x1FFF, PF_FW_ARQLEN_ARQLEN_S)
+#define PF_FW_ARQLEN_ARQVFE_S 28
+#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S)
+#define PF_FW_ARQLEN_ARQOVFL_S 29
+#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S)
+#define PF_FW_ARQLEN_ARQCRIT_S 30
+#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S)
+#define PF_FW_ARQLEN_ARQENABLE_S 31
+#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S)
+#define PF_FW_ARQH (PF_FW_BASE + 0xC)
+#define PF_FW_ARQH_ARQH_S 0
+#define PF_FW_ARQH_ARQH_M MAKEMASK(0x1FFF, PF_FW_ARQH_ARQH_S)
+#define PF_FW_ARQT (PF_FW_BASE + 0x10)
+
+#define PF_FW_ATQBAL (PF_FW_BASE + 0x14)
+#define PF_FW_ATQBAH (PF_FW_BASE + 0x18)
+#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C)
+#define PF_FW_ATQLEN_ATQLEN_S 0
+#define PF_FW_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, PF_FW_ATQLEN_ATQLEN_S)
+#define PF_FW_ATQLEN_ATQVFE_S 28
+#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S)
+#define PF_FW_ATQLEN_ATQOVFL_S 29
+#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S)
+#define PF_FW_ATQLEN_ATQCRIT_S 30
+#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S)
+#define PF_FW_ATQLEN_ATQENABLE_S 31
+#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S)
+#define PF_FW_ATQH (PF_FW_BASE + 0x20)
+#define PF_FW_ATQH_ATQH_S 0
+#define PF_FW_ATQH_ATQH_M MAKEMASK(0x3FF, PF_FW_ATQH_ATQH_S)
+#define PF_FW_ATQT (PF_FW_BASE + 0x24)
+
+/* Interrupts */
+#define PF_GLINT_BASE 0x08900000
+#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000))
+#define PF_GLINT_DYN_CTL_INTENA_S 0
+#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S)
+#define PF_GLINT_DYN_CTL_CLEARPBA_S 1
+#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
+#define PF_GLINT_DYN_CTL_ITR_INDX_S 3
+#define PF_GLINT_DYN_CTL_ITR_INDX_M MAKEMASK(0x3, PF_GLINT_DYN_CTL_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_INTERVAL_S 5
+#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_S 30
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_M BIT(PF_GLINT_DYN_CTL_WB_ON_ITR_S)
+#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31
+#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
+#define PF_GLINT_ITR_V2(_i, _reg_start) (((_i) * 4) + (_reg_start))
+#define PF_GLINT_ITR(_i, _INT) (PF_GLINT_BASE + (((_i) + 1) * 4) + ((_INT) * 0x1000))
+#define PF_GLINT_ITR_MAX_INDEX 2
+#define PF_GLINT_ITR_INTERVAL_S 0
+#define PF_GLINT_ITR_INTERVAL_M MAKEMASK(0xFFF, PF_GLINT_ITR_INTERVAL_S)
+
+/* Timesync registers */
+#define PF_TIMESYNC_BASE 0x08404000
+#define PF_GLTSYN_CMD_SYNC (PF_TIMESYNC_BASE)
+#define PF_GLTSYN_CMD_SYNC_EXEC_CMD_S 0
+#define PF_GLTSYN_CMD_SYNC_EXEC_CMD_M MAKEMASK(0x3, PF_GLTSYN_CMD_SYNC_EXEC_CMD_S)
+#define PF_GLTSYN_CMD_SYNC_SHTIME_EN_S 2
+#define PF_GLTSYN_CMD_SYNC_SHTIME_EN_M BIT(PF_GLTSYN_CMD_SYNC_SHTIME_EN_S)
+#define PF_GLTSYN_SHTIME_0 (PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L (PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H (PF_TIMESYNC_BASE + 0xC)
+#define PF_GLTSYN_ART_L (PF_TIMESYNC_BASE + 0x10)
+#define PF_GLTSYN_ART_H (PF_TIMESYNC_BASE + 0x14)
+
+/* Generic registers */
+#define PF_INT_DIR_OICR_ENA 0x08406000
+#define PF_INT_DIR_OICR_ENA_S 0
+#define PF_INT_DIR_OICR_ENA_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_ENA_S)
+#define PF_INT_DIR_OICR 0x08406004
+#define PF_INT_DIR_OICR_TSYN_EVNT 0
+#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1)
+#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2)
+#define PF_INT_DIR_OICR_CAUSE 0x08406008
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_CAUSE_CAUSE_S)
+#define PF_INT_PBA_CLEAR 0x0840600C
+
+#define PF_FUNC_RID 0x08406010
+#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M MAKEMASK(0x7, PF_FUNC_RID_FUNCTION_NUMBER_S)
+#define PF_FUNC_RID_DEVICE_NUMBER_S 3
+#define PF_FUNC_RID_DEVICE_NUMBER_M MAKEMASK(0x1F, PF_FUNC_RID_DEVICE_NUMBER_S)
+#define PF_FUNC_RID_BUS_NUMBER_S 8
+#define PF_FUNC_RID_BUS_NUMBER_M MAKEMASK(0xFF, PF_FUNC_RID_BUS_NUMBER_S)
+
+/* Reset registers */
+#define PFGEN_RTRIG 0x08407000
+#define PFGEN_RTRIG_CORER_S 0
+#define PFGEN_RTRIG_CORER_M BIT(0)
+#define PFGEN_RTRIG_LINKR_S 1
+#define PFGEN_RTRIG_LINKR_M BIT(1)
+#define PFGEN_RTRIG_IMCR_S 2
+#define PFGEN_RTRIG_IMCR_M BIT(2)
+#define PFGEN_RSTAT 0x08407008 /* PFR Status */
+#define PFGEN_RSTAT_PFR_STATE_S 0
+#define PFGEN_RSTAT_PFR_STATE_M MAKEMASK(0x3, PFGEN_RSTAT_PFR_STATE_S)
+#define PFGEN_CTRL 0x0840700C
+#define PFGEN_CTRL_PFSWR BIT(0)
+
+#endif
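
For illustration, the _S/_M pairs above are shift/mask pairs over 32-bit
registers; a sketch of decoding the PF reset state follows. rd32() stands in
for the OS-dependent register read helper and is an assumption, not something
defined in this header:

/* Sketch: extract the PFR state field from PFGEN_RSTAT. */
static inline u32 example_pf_reset_state(struct iecm_hw *hw)
{
	u32 reg = rd32(hw, PFGEN_RSTAT);	/* assumed osdep read helper */

	return (reg & PFGEN_RSTAT_PFR_STATE_M) >> PFGEN_RSTAT_PFR_STATE_S;
}
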
diff --git a/drivers/net/idpf/base/iecm_lan_txrx.h b/drivers/net/idpf/base/iecm_lan_txrx.h
new file mode 100644
index 0000000000..3e5320975d
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_txrx.h
@@ -0,0 +1,428 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_TXRX_H_
+#define _IECM_LAN_TXRX_H_
+#ifndef __KERNEL__
+#include "iecm_osdep.h"
+#endif
+
+enum iecm_rss_hash {
+ /* Values 0 - 28 are reserved for future use */
+ IECM_HASH_INVALID = 0,
+ IECM_HASH_NONF_UNICAST_IPV4_UDP = 29,
+ IECM_HASH_NONF_MULTICAST_IPV4_UDP,
+ IECM_HASH_NONF_IPV4_UDP,
+ IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
+ IECM_HASH_NONF_IPV4_TCP,
+ IECM_HASH_NONF_IPV4_SCTP,
+ IECM_HASH_NONF_IPV4_OTHER,
+ IECM_HASH_FRAG_IPV4,
+ /* Values 37-38 are reserved */
+ IECM_HASH_NONF_UNICAST_IPV6_UDP = 39,
+ IECM_HASH_NONF_MULTICAST_IPV6_UDP,
+ IECM_HASH_NONF_IPV6_UDP,
+ IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
+ IECM_HASH_NONF_IPV6_TCP,
+ IECM_HASH_NONF_IPV6_SCTP,
+ IECM_HASH_NONF_IPV6_OTHER,
+ IECM_HASH_FRAG_IPV6,
+ IECM_HASH_NONF_RSVD47,
+ IECM_HASH_NONF_FCOE_OX,
+ IECM_HASH_NONF_FCOE_RX,
+ IECM_HASH_NONF_FCOE_OTHER,
+ /* Values 51-62 are reserved */
+ IECM_HASH_L2_PAYLOAD = 63,
+ IECM_HASH_MAX
+};
+
+/* Supported RSS offloads */
+#define IECM_DEFAULT_RSS_HASH ( \
+ BIT_ULL(IECM_HASH_NONF_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_SCTP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_TCP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_OTHER) | \
+ BIT_ULL(IECM_HASH_FRAG_IPV4) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_TCP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_SCTP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_OTHER) | \
+ BIT_ULL(IECM_HASH_FRAG_IPV6) | \
+ BIT_ULL(IECM_HASH_L2_PAYLOAD))
+
+ /* TODO: Wrap below comment under internal flag
+  * Below 6 packet types are not supported by FVL or older products
+  * They are supported by FPK and future products
+  */
+#define IECM_DEFAULT_RSS_HASH_EXPANDED (IECM_DEFAULT_RSS_HASH | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IECM_HASH_NONF_UNICAST_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IECM_HASH_NONF_UNICAST_IPV6_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV6_UDP))
+
+/* For iecm_splitq_base_tx_compl_desc */
+#define IECM_TXD_COMPLQ_GEN_S 15
+#define IECM_TXD_COMPLQ_GEN_M BIT_ULL(IECM_TXD_COMPLQ_GEN_S)
+#define IECM_TXD_COMPLQ_COMPL_TYPE_S 11
+#define IECM_TXD_COMPLQ_COMPL_TYPE_M \
+ MAKEMASK(0x7UL, IECM_TXD_COMPLQ_COMPL_TYPE_S)
+#define IECM_TXD_COMPLQ_QID_S 0
+#define IECM_TXD_COMPLQ_QID_M MAKEMASK(0x3FFUL, IECM_TXD_COMPLQ_QID_S)
+
+/* For base mode TX descriptors */
+
+#define IECM_TXD_CTX_QW0_TUNN_L4T_CS_S 23
+#define IECM_TXD_CTX_QW0_TUNN_L4T_CS_M BIT_ULL(IECM_TXD_CTX_QW0_TUNN_L4T_CS_S)
+#define IECM_TXD_CTX_QW0_TUNN_DECTTL_S 19
+#define IECM_TXD_CTX_QW0_TUNN_DECTTL_M \
+ (0xFULL << IECM_TXD_CTX_QW0_TUNN_DECTTL_S)
+#define IECM_TXD_CTX_QW0_TUNN_NATLEN_S 12
+#define IECM_TXD_CTX_QW0_TUNN_NATLEN_M \
+ (0X7FULL << IECM_TXD_CTX_QW0_TUNN_NATLEN_S)
+#define IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_S 11
+#define IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_M \
+ BIT_ULL(IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_S)
+#define IECM_TXD_CTX_EIP_NOINC_IPID_CONST \
+ IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_M
+#define IECM_TXD_CTX_QW0_TUNN_NATT_S 9
+#define IECM_TXD_CTX_QW0_TUNN_NATT_M (0x3ULL << IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_UDP_TUNNELING BIT_ULL(IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_GRE_TUNNELING (0x2ULL << IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_S 2
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_M \
+ (0x3FULL << IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_S)
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IP_S 0
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IP_M \
+ (0x3ULL << IECM_TXD_CTX_QW0_TUNN_EXT_IP_S)
+
+#define IECM_TXD_CTX_QW1_MSS_S 50
+#define IECM_TXD_CTX_QW1_MSS_M \
+ MAKEMASK(0x3FFFULL, IECM_TXD_CTX_QW1_MSS_S)
+#define IECM_TXD_CTX_QW1_TSO_LEN_S 30
+#define IECM_TXD_CTX_QW1_TSO_LEN_M \
+ MAKEMASK(0x3FFFFULL, IECM_TXD_CTX_QW1_TSO_LEN_S)
+#define IECM_TXD_CTX_QW1_CMD_S 4
+#define IECM_TXD_CTX_QW1_CMD_M \
+ MAKEMASK(0xFFFUL, IECM_TXD_CTX_QW1_CMD_S)
+#define IECM_TXD_CTX_QW1_DTYPE_S 0
+#define IECM_TXD_CTX_QW1_DTYPE_M \
+ MAKEMASK(0xFUL, IECM_TXD_CTX_QW1_DTYPE_S)
+#define IECM_TXD_QW1_L2TAG1_S 48
+#define IECM_TXD_QW1_L2TAG1_M \
+ MAKEMASK(0xFFFFULL, IECM_TXD_QW1_L2TAG1_S)
+#define IECM_TXD_QW1_TX_BUF_SZ_S 34
+#define IECM_TXD_QW1_TX_BUF_SZ_M \
+ MAKEMASK(0x3FFFULL, IECM_TXD_QW1_TX_BUF_SZ_S)
+#define IECM_TXD_QW1_OFFSET_S 16
+#define IECM_TXD_QW1_OFFSET_M \
+ MAKEMASK(0x3FFFFULL, IECM_TXD_QW1_OFFSET_S)
+#define IECM_TXD_QW1_CMD_S 4
+#define IECM_TXD_QW1_CMD_M MAKEMASK(0xFFFUL, IECM_TXD_QW1_CMD_S)
+#define IECM_TXD_QW1_DTYPE_S 0
+#define IECM_TXD_QW1_DTYPE_M MAKEMASK(0xFUL, IECM_TXD_QW1_DTYPE_S)
+
+/* TX Completion Descriptor Completion Types */
+#define IECM_TXD_COMPLT_ITR_FLUSH 0
+#define IECM_TXD_COMPLT_RULE_MISS 1
+#define IECM_TXD_COMPLT_RS 2
+#define IECM_TXD_COMPLT_REINJECTED 3
+#define IECM_TXD_COMPLT_RE 4
+#define IECM_TXD_COMPLT_SW_MARKER 5
+
+enum iecm_tx_desc_dtype_value {
+ IECM_TX_DESC_DTYPE_DATA = 0,
+ IECM_TX_DESC_DTYPE_CTX = 1,
+ IECM_TX_DESC_DTYPE_REINJECT_CTX = 2,
+ IECM_TX_DESC_DTYPE_FLEX_DATA = 3,
+ IECM_TX_DESC_DTYPE_FLEX_CTX = 4,
+ IECM_TX_DESC_DTYPE_FLEX_TSO_CTX = 5,
+ IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 = 6,
+ IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7,
+ IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX = 8,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_TSO_CTX = 9,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_CTX = 10,
+ IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX = 11,
+ IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_TSO_CTX = 13,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_CTX = 14,
+ /* DESC_DONE - HW has completed write-back of descriptor */
+ IECM_TX_DESC_DTYPE_DESC_DONE = 15,
+};
+
+enum iecm_tx_ctx_desc_cmd_bits {
+ IECM_TX_CTX_DESC_TSO = 0x01,
+ IECM_TX_CTX_DESC_TSYN = 0x02,
+ IECM_TX_CTX_DESC_IL2TAG2 = 0x04,
+ IECM_TX_CTX_DESC_RSVD = 0x08,
+ IECM_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
+ IECM_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
+ IECM_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
+ IECM_TX_CTX_DESC_SWTCH_VSI = 0x30,
+ IECM_TX_CTX_DESC_FILT_AU_EN = 0x40,
+ IECM_TX_CTX_DESC_FILT_AU_EVICT = 0x80,
+ IECM_TX_CTX_DESC_RSVD1 = 0xF00
+};
+
+enum iecm_tx_desc_len_fields {
+ /* Note: These are predefined bit offsets */
+ IECM_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */
+ IECM_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */
+ IECM_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */
+};
+
+#define IECM_TXD_QW1_MACLEN_M MAKEMASK(0x7FUL, IECM_TX_DESC_LEN_MACLEN_S)
+#define IECM_TXD_QW1_IPLEN_M MAKEMASK(0x7FUL, IECM_TX_DESC_LEN_IPLEN_S)
+#define IECM_TXD_QW1_L4LEN_M MAKEMASK(0xFUL, IECM_TX_DESC_LEN_L4_LEN_S)
+#define IECM_TXD_QW1_FCLEN_M MAKEMASK(0xFUL, IECM_TX_DESC_LEN_L4_LEN_S)
+
+enum iecm_tx_base_desc_cmd_bits {
+ IECM_TX_DESC_CMD_EOP = 0x0001,
+ IECM_TX_DESC_CMD_RS = 0x0002,
+ /* only on VFs else RSVD */
+ IECM_TX_DESC_CMD_ICRC = 0x0004,
+ IECM_TX_DESC_CMD_IL2TAG1 = 0x0008,
+ IECM_TX_DESC_CMD_RSVD1 = 0x0010,
+ IECM_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */
+ IECM_TX_DESC_CMD_RSVD2 = 0x0080,
+ IECM_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */
+ IECM_TX_DESC_CMD_RSVD3 = 0x0400,
+ IECM_TX_DESC_CMD_RSVD4 = 0x0800,
+};
+
+/* Transmit descriptors */
+/* splitq tx buf, singleq tx buf and singleq compl desc */
+struct iecm_base_tx_desc {
+ __le64 buf_addr; /* Address of descriptor's data buf */
+ __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
+};/* read used with buffer queues*/
+
+struct iecm_splitq_tx_compl_desc {
+ /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
+ __le16 qid_comptype_gen;
+ union {
+ __le16 q_head; /* Queue head */
+ __le16 compl_tag; /* Completion tag */
+ } q_head_compl_tag;
+ u32 rsvd;
+
+};/* writeback used with completion queues*/
+
+/* Context descriptors */
+struct iecm_base_tx_ctx_desc {
+ struct {
+ __le32 tunneling_params;
+ __le16 l2tag2;
+ __le16 rsvd1;
+ } qw0;
+ __le64 qw1; /* type_cmd_tlen_mss/rt_hint */
+};
+
+/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
+enum iecm_tx_flex_desc_cmd_bits {
+ IECM_TX_FLEX_DESC_CMD_EOP = 0x01,
+ IECM_TX_FLEX_DESC_CMD_RS = 0x02,
+ IECM_TX_FLEX_DESC_CMD_RE = 0x04,
+ IECM_TX_FLEX_DESC_CMD_IL2TAG1 = 0x08,
+ IECM_TX_FLEX_DESC_CMD_DUMMY = 0x10,
+ IECM_TX_FLEX_DESC_CMD_CS_EN = 0x20,
+ IECM_TX_FLEX_DESC_CMD_FILT_AU_EN = 0x40,
+ IECM_TX_FLEX_DESC_CMD_FILT_AU_EVICT = 0x80,
+};
+
+struct iecm_flex_tx_desc {
+ __le64 buf_addr; /* Packet buffer address */
+ struct {
+ __le16 cmd_dtype;
+#define IECM_FLEX_TXD_QW1_DTYPE_S 0
+#define IECM_FLEX_TXD_QW1_DTYPE_M \
+ MAKEMASK(0x1FUL, IECM_FLEX_TXD_QW1_DTYPE_S)
+#define IECM_FLEX_TXD_QW1_CMD_S 5
+#define IECM_FLEX_TXD_QW1_CMD_M MAKEMASK(0x7FFUL, IECM_TXD_QW1_CMD_S)
+ union {
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_DATA_(0x03) */
+ u8 raw[4];
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 (0x06) */
+ struct {
+ __le16 l2tag1;
+ u8 flex;
+ u8 tsync;
+ } tsync;
+
+ /* DTYPE=IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
+ struct {
+ __le16 l2tag1;
+ __le16 l2tag2;
+ } l2tags;
+ } flex;
+ __le16 buf_size;
+ } qw1;
+};
+
+struct iecm_flex_tx_sched_desc {
+ __le64 buf_addr; /* Packet buffer address */
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */
+ struct {
+ u8 cmd_dtype;
+#define IECM_TXD_FLEX_FLOW_DTYPE_M 0x1F
+#define IECM_TXD_FLEX_FLOW_CMD_EOP 0x20
+#define IECM_TXD_FLEX_FLOW_CMD_CS_EN 0x40
+#define IECM_TXD_FLEX_FLOW_CMD_RE 0x80
+
+ u8 rsvd[3];
+
+ __le16 compl_tag;
+ __le16 rxr_bufsize;
+#define IECM_TXD_FLEX_FLOW_RXR 0x4000
+#define IECM_TXD_FLEX_FLOW_BUFSIZE_M 0x3FFF
+ } qw1;
+};
+
+/* Common cmd fields for all flex context descriptors
+ * Note: these defines already account for the 5 bit dtype in the cmd_dtype
+ * field
+ */
+enum iecm_tx_flex_ctx_desc_cmd_bits {
+ IECM_TX_FLEX_CTX_DESC_CMD_TSO = 0x0020,
+ IECM_TX_FLEX_CTX_DESC_CMD_TSYN_EN = 0x0040,
+ IECM_TX_FLEX_CTX_DESC_CMD_L2TAG2 = 0x0080,
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = 0x0200, /* 2 bits */
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = 0x0400, /* 2 bits */
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = 0x0600, /* 2 bits */
+};
+
+/* Standard flex descriptor TSO context quad word */
+struct iecm_flex_tx_tso_ctx_qw {
+ __le32 flex_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF
+#define IECM_TXD_FLEX_TSO_CTX_FLEX_S 24
+ __le16 mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF
+ u8 hdr_len;
+ u8 flex;
+};
+
+union iecm_flex_tx_ctx_desc {
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_CTX (0x04) */
+ struct {
+ u8 qw0_flex[8];
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag1;
+ u8 qw1_flex[4];
+ } qw1;
+ } gen;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
+ struct {
+ struct iecm_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ u8 flex[6];
+ } qw1;
+ } tso;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX (0x08) */
+ struct {
+ struct iecm_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag2;
+ u8 flex0;
+ u8 ptag;
+ u8 flex1[2];
+ } qw1;
+ } tso_l2tag2_ptag;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX (0x0B) */
+ struct {
+ u8 qw0_flex[8];
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag2;
+ u8 flex[4];
+ } qw1;
+ } l2tag2;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_REINJECT_CTX (0x02) */
+ struct {
+ struct {
+ __le32 sa_domain;
+#define IECM_TXD_FLEX_CTX_SA_DOM_M 0xFFFF
+#define IECM_TXD_FLEX_CTX_SA_DOM_VAL 0x10000
+ __le32 sa_idx;
+#define IECM_TXD_FLEX_CTX_SAIDX_M 0x1FFFFF
+ } qw0;
+ struct {
+ __le16 cmd_dtype;
+ __le16 txr2comp;
+#define IECM_TXD_FLEX_CTX_TXR2COMP 0x1
+ __le16 miss_txq_comp_tag;
+ __le16 miss_txq_id;
+ } qw1;
+ } reinjection_pkt;
+};
+
+/* Host Split Context Descriptors */
+struct iecm_flex_tx_hs_ctx_desc {
+ union {
+ struct {
+ __le32 host_fnum_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_S 0
+#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF
+#define IECM_TXD_FLEX_CTX_FNUM_S 18
+#define IECM_TXD_FLEX_CTX_FNUM_M 0x7FF
+#define IECM_TXD_FLEX_CTX_HOST_S 29
+#define IECM_TXD_FLEX_CTX_HOST_M 0x7
+ __le16 ftype_mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_0 0
+#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF
+#define IECM_TXD_FLEX_CTX_FTYPE_S 14
+#define IECM_TXD_FLEX_CTX_FTYPE_VF MAKEMASK(0x0, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_VDEV MAKEMASK(0x1, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_PF MAKEMASK(0x2, IECM_TXD_FLEX_CTX_FTYPE_S)
+ u8 hdr_len;
+ u8 ptag;
+ } tso;
+ struct {
+ u8 flex0[2];
+ __le16 host_fnum_ftype;
+ u8 flex1[3];
+ u8 ptag;
+ } no_tso;
+ } qw0;
+
+ __le64 qw1_cmd_dtype;
+#define IECM_TXD_FLEX_CTX_QW1_PASID_S 16
+#define IECM_TXD_FLEX_CTX_QW1_PASID_M 0xFFFFF
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S 36
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID \
+	MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S)
+#define IECM_TXD_FLEX_CTX_QW1_TPH_S 37
+#define IECM_TXD_FLEX_CTX_QW1_TPH \
+	MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_TPH_S)
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_S 38
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_M 0xF
+/* The following are only valid for DTYPE = 0x09 and DTYPE = 0x0A */
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_S 42
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_M 0x1FFFFF
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S 63
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VALID \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S)
+/* The following are only valid for DTYPE = 0x0D and DTYPE = 0x0E */
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_S 48
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_M 0xFF
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_S 56
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_M 0xFF
+};
+#endif /* _IECM_LAN_TXRX_H_ */
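
To show how the QW1 fields above combine, a sketch of building qw1 for a base
(singleq) TX data descriptor covering one buffer; the length units for maclen
and iplen follow the hardware spec and are not restated here, so treat the
parameters as illustrative:

/* Sketch: qw1 = DTYPE | CMD | OFFSET(maclen, iplen) | buffer size. */
static inline u64 example_base_txd_qw1(u64 maclen, u64 iplen, u64 buf_sz)
{
	u64 offset = (maclen << IECM_TX_DESC_LEN_MACLEN_S) |
		     (iplen << IECM_TX_DESC_LEN_IPLEN_S);
	u64 cmd = IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS;

	return ((u64)IECM_TX_DESC_DTYPE_DATA << IECM_TXD_QW1_DTYPE_S) |
	       (cmd << IECM_TXD_QW1_CMD_S) |
	       (offset << IECM_TXD_QW1_OFFSET_S) |
	       (buf_sz << IECM_TXD_QW1_TX_BUF_SZ_S);
}
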
diff --git a/drivers/net/idpf/base/iecm_lan_vf_regs.h b/drivers/net/idpf/base/iecm_lan_vf_regs.h
new file mode 100644
index 0000000000..1ba1a8dea6
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_vf_regs.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_VF_REGS_H_
+#define _IECM_LAN_VF_REGS_H_
+
+
+/* Reset */
+#define VFGEN_RSTAT 0x00008800
+#define VFGEN_RSTAT_VFR_STATE_S 0
+#define VFGEN_RSTAT_VFR_STATE_M MAKEMASK(0x3, VFGEN_RSTAT_VFR_STATE_S)
+
+/* Control(VF Mailbox) Queue */
+#define VF_BASE 0x00006000
+
+#define VF_ATQBAL (VF_BASE + 0x1C00)
+#define VF_ATQBAH (VF_BASE + 0x1800)
+#define VF_ATQLEN (VF_BASE + 0x0800)
+#define VF_ATQLEN_ATQLEN_S 0
+#define VF_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, VF_ATQLEN_ATQLEN_S)
+#define VF_ATQLEN_ATQVFE_S 28
+#define VF_ATQLEN_ATQVFE_M BIT(VF_ATQLEN_ATQVFE_S)
+#define VF_ATQLEN_ATQOVFL_S 29
+#define VF_ATQLEN_ATQOVFL_M BIT(VF_ATQLEN_ATQOVFL_S)
+#define VF_ATQLEN_ATQCRIT_S 30
+#define VF_ATQLEN_ATQCRIT_M BIT(VF_ATQLEN_ATQCRIT_S)
+#define VF_ATQLEN_ATQENABLE_S 31
+#define VF_ATQLEN_ATQENABLE_M BIT(VF_ATQLEN_ATQENABLE_S)
+#define VF_ATQH (VF_BASE + 0x0400)
+#define VF_ATQH_ATQH_S 0
+#define VF_ATQH_ATQH_M MAKEMASK(0x3FF, VF_ATQH_ATQH_S)
+#define VF_ATQT (VF_BASE + 0x2400)
+
+#define VF_ARQBAL (VF_BASE + 0x0C00)
+#define VF_ARQBAH (VF_BASE)
+#define VF_ARQLEN (VF_BASE + 0x2000)
+#define VF_ARQLEN_ARQLEN_S 0
+#define VF_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, VF_ARQLEN_ARQLEN_S)
+#define VF_ARQLEN_ARQVFE_S 28
+#define VF_ARQLEN_ARQVFE_M BIT(VF_ARQLEN_ARQVFE_S)
+#define VF_ARQLEN_ARQOVFL_S 29
+#define VF_ARQLEN_ARQOVFL_M BIT(VF_ARQLEN_ARQOVFL_S)
+#define VF_ARQLEN_ARQCRIT_S 30
+#define VF_ARQLEN_ARQCRIT_M BIT(VF_ARQLEN_ARQCRIT_S)
+#define VF_ARQLEN_ARQENABLE_S 31
+#define VF_ARQLEN_ARQENABLE_M BIT(VF_ARQLEN_ARQENABLE_S)
+#define VF_ARQH (VF_BASE + 0x1400)
+#define VF_ARQH_ARQH_S 0
+#define VF_ARQH_ARQH_M MAKEMASK(0x1FFF, VF_ARQH_ARQH_S)
+#define VF_ARQT (VF_BASE + 0x1000)
+
+/* Transmit queues */
+#define VF_QTX_TAIL_BASE 0x00000000
+#define VF_QTX_TAIL(_QTX) (VF_QTX_TAIL_BASE + (_QTX) * 0x4)
+#define VF_QTX_TAIL_EXT_BASE 0x00040000
+#define VF_QTX_TAIL_EXT(_QTX) (VF_QTX_TAIL_EXT_BASE + ((_QTX) * 4))
+
+/* Receive queues */
+#define VF_QRX_TAIL_BASE 0x00002000
+#define VF_QRX_TAIL(_QRX) (VF_QRX_TAIL_BASE + ((_QRX) * 4))
+#define VF_QRX_TAIL_EXT_BASE 0x00050000
+#define VF_QRX_TAIL_EXT(_QRX) (VF_QRX_TAIL_EXT_BASE + ((_QRX) * 4))
+#define VF_QRXB_TAIL_BASE 0x00060000
+#define VF_QRXB_TAIL(_QRX) (VF_QRXB_TAIL_BASE + ((_QRX) * 4))
+
+/* Interrupts */
+#define VF_INT_DYN_CTL0 0x00005C00
+#define VF_INT_DYN_CTL0_INTENA_S 0
+#define VF_INT_DYN_CTL0_INTENA_M BIT(VF_INT_DYN_CTL0_INTENA_S)
+#define VF_INT_DYN_CTL0_ITR_INDX_S 3
+#define VF_INT_DYN_CTL0_ITR_INDX_M MAKEMASK(0x3, VF_INT_DYN_CTL0_ITR_INDX_S)
+#define VF_INT_DYN_CTLN(_INT) (0x00003800 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_EXT(_INT) (0x00070000 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_INTENA_S 0
+#define VF_INT_DYN_CTLN_INTENA_M BIT(VF_INT_DYN_CTLN_INTENA_S)
+#define VF_INT_DYN_CTLN_CLEARPBA_S 1
+#define VF_INT_DYN_CTLN_CLEARPBA_M BIT(VF_INT_DYN_CTLN_CLEARPBA_S)
+#define VF_INT_DYN_CTLN_SWINT_TRIG_S 2
+#define VF_INT_DYN_CTLN_SWINT_TRIG_M BIT(VF_INT_DYN_CTLN_SWINT_TRIG_S)
+#define VF_INT_DYN_CTLN_ITR_INDX_S 3
+#define VF_INT_DYN_CTLN_ITR_INDX_M MAKEMASK(0x3, VF_INT_DYN_CTLN_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_INTERVAL_S 5
+#define VF_INT_DYN_CTLN_INTERVAL_M BIT(VF_INT_DYN_CTLN_INTERVAL_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S 24
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_S 25
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_WB_ON_ITR_S 30
+#define VF_INT_DYN_CTLN_WB_ON_ITR_M BIT(VF_INT_DYN_CTLN_WB_ON_ITR_S)
+#define VF_INT_DYN_CTLN_INTENA_MSK_S 31
+#define VF_INT_DYN_CTLN_INTENA_MSK_M BIT(VF_INT_DYN_CTLN_INTENA_MSK_S)
+#define VF_INT_ITR0(_i) (0x00004C00 + ((_i) * 4))
+#define VF_INT_ITRN_V2(_i, _reg_start) ((_reg_start) + (((_i)) * 4))
+#define VF_INT_ITRN(_i, _INT) (0x00002800 + ((_i) * 4) + ((_INT) * 0x40))
+#define VF_INT_ITRN_64(_i, _INT) (0x00002C00 + ((_i) * 4) + ((_INT) * 0x100))
+#define VF_INT_ITRN_2K(_i, _INT) (0x00072000 + ((_i) * 4) + ((_INT) * 0x100))
+#define VF_INT_ITRN_MAX_INDEX 2
+#define VF_INT_ITRN_INTERVAL_S 0
+#define VF_INT_ITRN_INTERVAL_M MAKEMASK(0xFFF, VF_INT_ITRN_INTERVAL_S)
+#define VF_INT_PBA_CLEAR 0x00008900
+
+#define VF_INT_ICR0_ENA1 0x00005000
+#define VF_INT_ICR0_ENA1_ADMINQ_S 30
+#define VF_INT_ICR0_ENA1_ADMINQ_M BIT(VF_INT_ICR0_ENA1_ADMINQ_S)
+#define VF_INT_ICR0_ENA1_RSVD_S 31
+#define VF_INT_ICR01 0x00004800
+#define VF_QF_HENA(_i) (0x0000C400 + ((_i) * 4))
+#define VF_QF_HENA_MAX_INDX 1
+#define VF_QF_HKEY(_i) (0x0000CC00 + ((_i) * 4))
+#define VF_QF_HKEY_MAX_INDX 12
+#define VF_QF_HLUT(_i) (0x0000D000 + ((_i) * 4))
+#define VF_QF_HLUT_MAX_INDX 15
+#endif
diff --git a/drivers/net/idpf/base/iecm_prototype.h b/drivers/net/idpf/base/iecm_prototype.h
new file mode 100644
index 0000000000..cd3ee8dcbc
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_prototype.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_PROTOTYPE_H_
+#define _IECM_PROTOTYPE_H_
+
+/* Include generic macros and types first */
+#include "iecm_osdep.h"
+#include "iecm_controlq.h"
+#include "iecm_type.h"
+#include "iecm_alloc.h"
+#include "iecm_devids.h"
+#include "iecm_controlq_api.h"
+#include "iecm_lan_pf_regs.h"
+#include "iecm_lan_vf_regs.h"
+#include "iecm_lan_txrx.h"
+#include "virtchnl.h"
+
+#define APF
+
+int iecm_init_hw(struct iecm_hw *hw, struct iecm_ctlq_size ctlq_size);
+int iecm_deinit_hw(struct iecm_hw *hw);
+
+int iecm_clean_arq_element(struct iecm_hw *hw,
+ struct iecm_arq_event_info *e,
+ u16 *events_pending);
+bool iecm_asq_done(struct iecm_hw *hw);
+bool iecm_check_asq_alive(struct iecm_hw *hw);
+
+int iecm_get_rss_lut(struct iecm_hw *hw, u16 seid, bool pf_lut,
+ u8 *lut, u16 lut_size);
+int iecm_set_rss_lut(struct iecm_hw *hw, u16 seid, bool pf_lut,
+ u8 *lut, u16 lut_size);
+int iecm_get_rss_key(struct iecm_hw *hw, u16 seid,
+ struct iecm_get_set_rss_key_data *key);
+int iecm_set_rss_key(struct iecm_hw *hw, u16 seid,
+ struct iecm_get_set_rss_key_data *key);
+
+int iecm_set_mac_type(struct iecm_hw *hw);
+
+int iecm_reset(struct iecm_hw *hw);
+int iecm_send_msg_to_cp(struct iecm_hw *hw, enum virtchnl_ops v_opcode,
+ int v_retval, u8 *msg, u16 msglen);
+#endif /* _IECM_PROTOTYPE_H_ */
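
As a rough usage sketch of the prototypes above, driver bring-up sizes the
mailbox rings and hands them to iecm_init_hw() before any virtchnl traffic;
the sizes are placeholders and error handling is trimmed:

/* Sketch: initialize the HW layer, then tear it down again. */
static int example_hw_bringup(struct iecm_hw *hw)
{
	struct iecm_ctlq_size ctlq_size = {
		.asq_buf_size = 4096,	/* example values only */
		.asq_ring_size = 64,
		.arq_buf_size = 4096,
		.arq_ring_size = 64,
	};
	int err;

	err = iecm_init_hw(hw, ctlq_size);
	if (err)
		return err;

	/* ... virtchnl messages go out via iecm_send_msg_to_cp() ... */

	return iecm_deinit_hw(hw);
}
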
diff --git a/drivers/net/idpf/base/iecm_type.h b/drivers/net/idpf/base/iecm_type.h
new file mode 100644
index 0000000000..fdde9c6e61
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_type.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_TYPE_H_
+#define _IECM_TYPE_H_
+
+#include "iecm_controlq.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#define MAKEMASK(m, s) ((m) << (s))
+
+struct iecm_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+};
+
+/* Statistics collected by the MAC */
+struct iecm_hw_port_stats {
+ /* eth stats collected by the port */
+ struct iecm_eth_stats eth;
+
+ /* additional port specific stats */
+ u64 tx_dropped_link_down; /* tdold */
+ u64 crc_errors; /* crcerrs */
+ u64 illegal_bytes; /* illerrc */
+ u64 error_bytes; /* errbc */
+ u64 mac_local_faults; /* mlfc */
+ u64 mac_remote_faults; /* mrfc */
+ u64 rx_length_errors; /* rlec */
+ u64 link_xon_rx; /* lxonrxc */
+ u64 link_xoff_rx; /* lxoffrxc */
+ u64 priority_xon_rx[8]; /* pxonrxc[8] */
+ u64 priority_xoff_rx[8]; /* pxoffrxc[8] */
+ u64 link_xon_tx; /* lxontxc */
+ u64 link_xoff_tx; /* lxofftxc */
+ u64 priority_xon_tx[8]; /* pxontxc[8] */
+ u64 priority_xoff_tx[8]; /* pxofftxc[8] */
+ u64 priority_xon_2_xoff[8]; /* pxon2offc[8] */
+ u64 rx_size_64; /* prc64 */
+ u64 rx_size_127; /* prc127 */
+ u64 rx_size_255; /* prc255 */
+ u64 rx_size_511; /* prc511 */
+ u64 rx_size_1023; /* prc1023 */
+ u64 rx_size_1522; /* prc1522 */
+ u64 rx_size_big; /* prc9522 */
+ u64 rx_undersize; /* ruc */
+ u64 rx_fragments; /* rfc */
+ u64 rx_oversize; /* roc */
+ u64 rx_jabber; /* rjc */
+ u64 tx_size_64; /* ptc64 */
+ u64 tx_size_127; /* ptc127 */
+ u64 tx_size_255; /* ptc255 */
+ u64 tx_size_511; /* ptc511 */
+ u64 tx_size_1023; /* ptc1023 */
+ u64 tx_size_1522; /* ptc1522 */
+ u64 tx_size_big; /* ptc9522 */
+ u64 mac_short_packet_dropped; /* mspdc */
+ u64 checksum_error; /* xec */
+};
+/* Static buffer size to initialize control queue */
+struct iecm_ctlq_size {
+ u16 asq_buf_size;
+ u16 asq_ring_size;
+ u16 arq_buf_size;
+ u16 arq_ring_size;
+};
+
+/* Temporary definition to compile - TBD if needed */
+struct iecm_arq_event_info {
+ struct iecm_ctlq_desc desc;
+ u16 msg_len;
+ u16 buf_len;
+ u8 *msg_buf;
+};
+
+struct iecm_get_set_rss_key_data {
+ u8 standard_rss_key[0x28];
+ u8 extended_hash_key[0xc];
+};
+
+struct iecm_aq_get_phy_abilities_resp {
+ __le32 phy_type;
+};
+
+struct iecm_filter_program_desc {
+ __le32 qid;
+};
+
+#endif /* _IECM_TYPE_H_ */
diff --git a/drivers/net/idpf/base/meson.build b/drivers/net/idpf/base/meson.build
new file mode 100644
index 0000000000..1ad9a87d9d
--- /dev/null
+++ b/drivers/net/idpf/base/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+sources = [
+ 'iecm_common.c',
+ 'iecm_controlq.c',
+ 'iecm_controlq_setup.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+ '-Wno-unused-but-set-variable',
+ '-Wno-unused-variable',
+ '-Wno-unused-parameter',
+]
+
+c_args = cflags
+
+foreach flag: error_cflags
+ if cc.has_argument(flag)
+ c_args += flag
+ endif
+endforeach
+
+base_lib = static_library('idpf_base', sources,
+ dependencies: static_rte_eal,
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects()
\ No newline at end of file
diff --git a/drivers/net/idpf/base/siov_regs.h b/drivers/net/idpf/base/siov_regs.h
new file mode 100644
index 0000000000..bb7b2daac0
--- /dev/null
+++ b/drivers/net/idpf/base/siov_regs.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+#ifndef _SIOV_REGS_H_
+#define _SIOV_REGS_H_
+#define VDEV_MBX_START 0x20000 /* Begin at 128KB */
+#define VDEV_MBX_ATQBAL (VDEV_MBX_START + 0x0000)
+#define VDEV_MBX_ATQBAH (VDEV_MBX_START + 0x0004)
+#define VDEV_MBX_ATQLEN (VDEV_MBX_START + 0x0008)
+#define VDEV_MBX_ATQH (VDEV_MBX_START + 0x000C)
+#define VDEV_MBX_ATQT (VDEV_MBX_START + 0x0010)
+#define VDEV_MBX_ARQBAL (VDEV_MBX_START + 0x0014)
+#define VDEV_MBX_ARQBAH (VDEV_MBX_START + 0x0018)
+#define VDEV_MBX_ARQLEN (VDEV_MBX_START + 0x001C)
+#define VDEV_MBX_ARQH (VDEV_MBX_START + 0x0020)
+#define VDEV_MBX_ARQT (VDEV_MBX_START + 0x0024)
+#define VDEV_GET_RSTAT 0x21000 /* 132KB for RSTAT */
+
+/* Begin at offset after 1MB (after 256 4k pages) */
+#define VDEV_QRX_TAIL_START 0x100000
+#define VDEV_QRX_TAIL(_i) (VDEV_QRX_TAIL_START + ((_i) * 0x1000)) /* 2k Rx queues */
+
+#define VDEV_QRX_BUFQ_TAIL_START 0x900000 /* Begin at offset of 9MB for Rx buffer queue tail register pages */
+#define VDEV_QRX_BUFQ_TAIL(_i) (VDEV_QRX_BUFQ_TAIL_START + ((_i) * 0x1000)) /* 2k Rx buffer queues */
+
+#define VDEV_QTX_TAIL_START 0x1100000 /* Begin at offset of 17MB for 2k Tx queues */
+#define VDEV_QTX_TAIL(_i) (VDEV_QTX_TAIL_START + ((_i) * 0x1000)) /* 2k Tx queues */
+
+#define VDEV_QTX_COMPL_TAIL_START 0x1900000 /* Begin at offset of 25MB for 2k Tx completion queues */
+#define VDEV_QTX_COMPL_TAIL(_i) (VDEV_QTX_COMPL_TAIL_START + ((_i) * 0x1000)) /* 2k Tx completion queues */
+
+#define VDEV_INT_DYN_CTL01 0x2100000 /* Begin at offset 33MB */
+
+#define VDEV_INT_DYN_START (VDEV_INT_DYN_CTL01 + 0x1000) /* Begin at offset of 33MB + 4k to accommodate CTL01 register */
+#define VDEV_INT_DYN_CTL(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000))
+#define VDEV_INT_ITR_0(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x04)
+#define VDEV_INT_ITR_1(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x08)
+#define VDEV_INT_ITR_2(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x0C)
+
+/* Next offset to begin at 42MB (0x2A00000) */
+#endif /* _SIOV_REGS_H_ */
diff --git a/drivers/net/idpf/base/virtchnl.h b/drivers/net/idpf/base/virtchnl.h
new file mode 100644
index 0000000000..b5d0d5ffd3
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl.h
@@ -0,0 +1,2743 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the Virtual Function (VF) - Physical Function
+ * (PF) communication protocol used by the drivers for all devices starting
+ * from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI. Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The returned value
+ * is of virtchnl_status_code type, defined here.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS 6
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+ { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL_CHECK_UNION_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+ { virtchnl_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+
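
As a concrete illustration of the size check, the macro only compiles when the
stated length matches sizeof(); with a wrong length the divisor becomes zero
(the structure here is hypothetical):

/* Illustration: passes because the struct really is 8 bytes. */
struct example_two_u32 {
	u32 a;
	u32 b;
};
VIRTCHNL_CHECK_STRUCT_LEN(8, example_two_u32);
/* VIRTCHNL_CHECK_STRUCT_LEN(12, example_two_u32); would fail to build with a
 * divide-by-zero in the enum initializer.
 */
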
+
+/* Error Codes
+ * Note that many older versions of various iAVF drivers convert the reported
+ * status code directly into an iavf_status enumeration. For this reason, it
+ * is important that the values of these enumerations line up.
+ */
+enum virtchnl_status_code {
+ VIRTCHNL_STATUS_SUCCESS = 0,
+ VIRTCHNL_STATUS_ERR_PARAM = -5,
+ VIRTCHNL_STATUS_ERR_NO_MEMORY = -18,
+ VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH = -38,
+ VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR = -39,
+ VIRTCHNL_STATUS_ERR_INVALID_VF_ID = -40,
+ VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR = -53,
+ VIRTCHNL_STATUS_ERR_NOT_SUPPORTED = -64,
+};
+
+/* Backward compatibility */
+#define VIRTCHNL_ERR_PARAM VIRTCHNL_STATUS_ERR_PARAM
+#define VIRTCHNL_STATUS_NOT_SUPPORTED VIRTCHNL_STATUS_ERR_NOT_SUPPORTED
+
+#define VIRTCHNL_LINK_SPEED_2_5GB_SHIFT 0x0
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT 0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT 0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT 0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT 0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT 0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT 0x6
+#define VIRTCHNL_LINK_SPEED_5GB_SHIFT 0x7
+
+enum virtchnl_link_speed {
+ VIRTCHNL_LINK_SPEED_UNKNOWN = 0,
+ VIRTCHNL_LINK_SPEED_100MB = BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+ VIRTCHNL_LINK_SPEED_1GB = BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+ VIRTCHNL_LINK_SPEED_10GB = BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_40GB = BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_20GB = BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_25GB = BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_2_5GB = BIT(VIRTCHNL_LINK_SPEED_2_5GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_5GB = BIT(VIRTCHNL_LINK_SPEED_5GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+ VIRTCHNL_RX_HSPLIT_NO_SPLIT = 0,
+ VIRTCHNL_RX_HSPLIT_SPLIT_L2 = 1,
+ VIRTCHNL_RX_HSPLIT_SPLIT_IP = 2,
+ VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+ VIRTCHNL_RX_HSPLIT_SPLIT_SCTP = 8,
+};
+
+enum virtchnl_bw_limit_type {
+ VIRTCHNL_BW_SHAPER = 0,
+};
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and are not considered part of base mode feature set.
+ *
+ */
+ VIRTCHNL_OP_UNKNOWN = 0,
+ VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+ VIRTCHNL_OP_RESET_VF = 2,
+ VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+ VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+ VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+ VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+ VIRTCHNL_OP_ENABLE_QUEUES = 8,
+ VIRTCHNL_OP_DISABLE_QUEUES = 9,
+ VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+ VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+ VIRTCHNL_OP_ADD_VLAN = 12,
+ VIRTCHNL_OP_DEL_VLAN = 13,
+ VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+ VIRTCHNL_OP_GET_STATS = 15,
+ VIRTCHNL_OP_RSVD = 16,
+ VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+ /* opcode 19 is reserved */
+ /* opcodes 20, 21, and 22 are reserved */
+ VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+ VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+ VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+ VIRTCHNL_OP_SET_RSS_HENA = 26,
+ VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+ VIRTCHNL_OP_REQUEST_QUEUES = 29,
+ VIRTCHNL_OP_ENABLE_CHANNELS = 30,
+ VIRTCHNL_OP_DISABLE_CHANNELS = 31,
+ VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
+ VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+ /* opcode 34 is reserved */
+ /* opcodes 38, 39, 40, 41, 42 and 43 are reserved */
+ /* opcode 44 is reserved */
+ VIRTCHNL_OP_ADD_RSS_CFG = 45,
+ VIRTCHNL_OP_DEL_RSS_CFG = 46,
+ VIRTCHNL_OP_ADD_FDIR_FILTER = 47,
+ VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
+ VIRTCHNL_OP_GET_MAX_RSS_QREGION = 50,
+ VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51,
+ VIRTCHNL_OP_ADD_VLAN_V2 = 52,
+ VIRTCHNL_OP_DEL_VLAN_V2 = 53,
+ VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 = 54,
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55,
+ VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56,
+ VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57,
+ VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2 = 58,
+ VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 = 59,
+ VIRTCHNL_OP_1588_PTP_GET_CAPS = 60,
+ VIRTCHNL_OP_1588_PTP_GET_TIME = 61,
+ VIRTCHNL_OP_1588_PTP_SET_TIME = 62,
+ VIRTCHNL_OP_1588_PTP_ADJ_TIME = 63,
+ VIRTCHNL_OP_1588_PTP_ADJ_FREQ = 64,
+ VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP = 65,
+ VIRTCHNL_OP_GET_QOS_CAPS = 66,
+ VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP = 67,
+ VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS = 68,
+ VIRTCHNL_OP_1588_PTP_SET_PIN_CFG = 69,
+ VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP = 70,
+ VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
+ VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
+ VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+ VIRTCHNL_OP_MAX,
+};
+
+static inline const char *virtchnl_op_str(enum virtchnl_ops v_opcode)
+{
+ switch (v_opcode) {
+ case VIRTCHNL_OP_UNKNOWN:
+ return "VIRTCHNL_OP_UNKNOWN";
+ case VIRTCHNL_OP_VERSION:
+ return "VIRTCHNL_OP_VERSION";
+ case VIRTCHNL_OP_RESET_VF:
+ return "VIRTCHNL_OP_RESET_VF";
+ case VIRTCHNL_OP_GET_VF_RESOURCES:
+ return "VIRTCHNL_OP_GET_VF_RESOURCES";
+ case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+ return "VIRTCHNL_OP_CONFIG_TX_QUEUE";
+ case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+ return "VIRTCHNL_OP_CONFIG_RX_QUEUE";
+ case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+ return "VIRTCHNL_OP_CONFIG_VSI_QUEUES";
+ case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+ return "VIRTCHNL_OP_CONFIG_IRQ_MAP";
+ case VIRTCHNL_OP_ENABLE_QUEUES:
+ return "VIRTCHNL_OP_ENABLE_QUEUES";
+ case VIRTCHNL_OP_DISABLE_QUEUES:
+ return "VIRTCHNL_OP_DISABLE_QUEUES";
+ case VIRTCHNL_OP_ADD_ETH_ADDR:
+ return "VIRTCHNL_OP_ADD_ETH_ADDR";
+ case VIRTCHNL_OP_DEL_ETH_ADDR:
+ return "VIRTCHNL_OP_DEL_ETH_ADDR";
+ case VIRTCHNL_OP_ADD_VLAN:
+ return "VIRTCHNL_OP_ADD_VLAN";
+ case VIRTCHNL_OP_DEL_VLAN:
+ return "VIRTCHNL_OP_DEL_VLAN";
+ case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+ return "VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE";
+ case VIRTCHNL_OP_GET_STATS:
+ return "VIRTCHNL_OP_GET_STATS";
+ case VIRTCHNL_OP_RSVD:
+ return "VIRTCHNL_OP_RSVD";
+ case VIRTCHNL_OP_EVENT:
+ return "VIRTCHNL_OP_EVENT";
+ case VIRTCHNL_OP_CONFIG_RSS_KEY:
+ return "VIRTCHNL_OP_CONFIG_RSS_KEY";
+ case VIRTCHNL_OP_CONFIG_RSS_LUT:
+ return "VIRTCHNL_OP_CONFIG_RSS_LUT";
+ case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ return "VIRTCHNL_OP_GET_RSS_HENA_CAPS";
+ case VIRTCHNL_OP_SET_RSS_HENA:
+ return "VIRTCHNL_OP_SET_RSS_HENA";
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+ return "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING";
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ return "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING";
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ return "VIRTCHNL_OP_REQUEST_QUEUES";
+ case VIRTCHNL_OP_ENABLE_CHANNELS:
+ return "VIRTCHNL_OP_ENABLE_CHANNELS";
+ case VIRTCHNL_OP_DISABLE_CHANNELS:
+ return "VIRTCHNL_OP_DISABLE_CHANNELS";
+ case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+ return "VIRTCHNL_OP_ADD_CLOUD_FILTER";
+ case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+ return "VIRTCHNL_OP_DEL_CLOUD_FILTER";
+ case VIRTCHNL_OP_ADD_RSS_CFG:
+ return "VIRTCHNL_OP_ADD_RSS_CFG";
+ case VIRTCHNL_OP_DEL_RSS_CFG:
+ return "VIRTCHNL_OP_DEL_RSS_CFG";
+ case VIRTCHNL_OP_ADD_FDIR_FILTER:
+ return "VIRTCHNL_OP_ADD_FDIR_FILTER";
+ case VIRTCHNL_OP_DEL_FDIR_FILTER:
+ return "VIRTCHNL_OP_DEL_FDIR_FILTER";
+ case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
+ return "VIRTCHNL_OP_GET_MAX_RSS_QREGION";
+ case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+ return "VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS";
+ case VIRTCHNL_OP_ADD_VLAN_V2:
+ return "VIRTCHNL_OP_ADD_VLAN_V2";
+ case VIRTCHNL_OP_DEL_VLAN_V2:
+ return "VIRTCHNL_OP_DEL_VLAN_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2";
+ case VIRTCHNL_OP_1588_PTP_GET_CAPS:
+ return "VIRTCHNL_OP_1588_PTP_GET_CAPS";
+ case VIRTCHNL_OP_1588_PTP_GET_TIME:
+ return "VIRTCHNL_OP_1588_PTP_GET_TIME";
+ case VIRTCHNL_OP_1588_PTP_SET_TIME:
+ return "VIRTCHNL_OP_1588_PTP_SET_TIME";
+ case VIRTCHNL_OP_1588_PTP_ADJ_TIME:
+ return "VIRTCHNL_OP_1588_PTP_ADJ_TIME";
+ case VIRTCHNL_OP_1588_PTP_ADJ_FREQ:
+ return "VIRTCHNL_OP_1588_PTP_ADJ_FREQ";
+ case VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP:
+ return "VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP";
+ case VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS:
+ return "VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS";
+ case VIRTCHNL_OP_1588_PTP_SET_PIN_CFG:
+ return "VIRTCHNL_OP_1588_PTP_SET_PIN_CFG";
+ case VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP:
+ return "VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP";
+ case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+ return "VIRTCHNL_OP_ENABLE_QUEUES_V2";
+ case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+ return "VIRTCHNL_OP_DISABLE_QUEUES_V2";
+ case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+ return "VIRTCHNL_OP_MAP_QUEUE_VECTOR";
+ case VIRTCHNL_OP_MAX:
+ return "VIRTCHNL_OP_MAX";
+ default:
+ return "Unsupported (update virtchnl.h)";
+ }
+}
+
+static inline const char *virtchnl_stat_str(enum virtchnl_status_code v_status)
+{
+ switch (v_status) {
+ case VIRTCHNL_STATUS_SUCCESS:
+ return "VIRTCHNL_STATUS_SUCCESS";
+ case VIRTCHNL_STATUS_ERR_PARAM:
+ return "VIRTCHNL_STATUS_ERR_PARAM";
+ case VIRTCHNL_STATUS_ERR_NO_MEMORY:
+ return "VIRTCHNL_STATUS_ERR_NO_MEMORY";
+ case VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH:
+ return "VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH";
+ case VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR:
+ return "VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR";
+ case VIRTCHNL_STATUS_ERR_INVALID_VF_ID:
+ return "VIRTCHNL_STATUS_ERR_INVALID_VF_ID";
+ case VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR:
+ return "VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR";
+ case VIRTCHNL_STATUS_ERR_NOT_SUPPORTED:
+ return "VIRTCHNL_STATUS_ERR_NOT_SUPPORTED";
+ default:
+ return "Unknown status code (update virtchnl.h)";
+ }
+}
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+ u8 pad[8]; /* AQ flags/opcode/len/retval fields */
+
+ /* avoid confusion with desc->opcode */
+ enum virtchnl_ops v_opcode;
+
+ /* ditto for desc->retval */
+ enum virtchnl_status_code v_retval;
+ u32 vfid; /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures. */
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR 1
+#define VIRTCHNL_VERSION_MINOR 1
+#define VIRTCHNL_VERSION_MAJOR_2 2
+#define VIRTCHNL_VERSION_MINOR_0 0
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS 0
+
+struct virtchnl_version_info {
+ u32 major;
+ u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_ver) (((_ver)->major == 1) && ((_ver)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+#define VF_IS_V20(_ver) (((_ver)->major == 2) && ((_ver)->minor == 0))
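+
+/* Usage sketch (illustrative only, not part of the virtchnl definitions):
+ * one way a VF driver could act on the PF's VIRTCHNL_OP_VERSION reply,
+ * following the major/minor rules described above. The helper name is
+ * hypothetical.
+ */
+static inline int
+virtchnl_example_check_api_version(const struct virtchnl_version_info *pf_ver)
+{
+	if (pf_ver->major != VIRTCHNL_VERSION_MAJOR)
+		return -1; /* major mismatch: the VF cannot operate */
+	if (pf_ver->minor != VIRTCHNL_VERSION_MINOR) {
+		/* minor mismatch: the VF can operate, but should add a
+		 * warning to the system log.
+		 */
+	}
+	return 0;
+}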
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+ VIRTCHNL_VSI_TYPE_INVALID = 0,
+ VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ */
+
+struct virtchnl_vsi_resource {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+
+ /* see enum virtchnl_vsi_type */
+ s32 vsi_type;
+ u16 qset_handle;
+ u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2 BIT(0)
+#define VIRTCHNL_VF_OFFLOAD_IWARP BIT(1)
+#define VIRTCHNL_VF_CAP_RDMA VIRTCHNL_VF_OFFLOAD_IWARP
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ BIT(3)
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG BIT(4)
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR BIT(5)
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6)
+/* used to negotiate communicating link speeds in Mbps */
+#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7)
+ /* BIT(8) is reserved */
+#define VIRTCHNL_VF_LARGE_NUM_QPAIRS BIT(9)
+#define VIRTCHNL_VF_OFFLOAD_CRC BIT(10)
+#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15)
+#define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16)
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17)
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 BIT(18)
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF BIT(19)
+#define VIRTCHNL_VF_OFFLOAD_ENCAP BIT(20)
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM BIT(21)
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM BIT(22)
+#define VIRTCHNL_VF_OFFLOAD_ADQ BIT(23)
+#define VIRTCHNL_VF_OFFLOAD_ADQ_V2 BIT(24)
+#define VIRTCHNL_VF_OFFLOAD_USO BIT(25)
+ /* BIT(26) is reserved */
+#define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF BIT(27)
+#define VIRTCHNL_VF_OFFLOAD_FDIR_PF BIT(28)
+#define VIRTCHNL_VF_OFFLOAD_QOS BIT(29)
+ /* BIT(30) is reserved */
+#define VIRTCHNL_VF_CAP_PTP BIT(31)
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+ VIRTCHNL_VF_OFFLOAD_VLAN | \
+ VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+ u16 num_vsis;
+ u16 num_queue_pairs;
+ u16 max_vectors;
+ u16 max_mtu;
+
+ u32 vf_cap_flags;
+ u32 rss_key_size;
+ u32 rss_lut_size;
+
+ struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
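+
+/* Usage sketch (illustrative only, not part of the virtchnl definitions):
+ * checking a negotiated capability flag in the VIRTCHNL_OP_GET_VF_RESOURCES
+ * reply before relying on the corresponding offload. The helper name is
+ * hypothetical.
+ */
+static inline bool
+virtchnl_example_rss_pf_negotiated(const struct virtchnl_vf_resource *res)
+{
+	return (res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) != 0;
+}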
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u16 ring_len; /* number of descriptors, multiple of 8 */
+ u16 headwb_enabled; /* deprecated with AVF 1.0 */
+ u64 dma_ring_addr;
+ u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* RX descriptor IDs (range from 0 to 63) */
+enum virtchnl_rx_desc_ids {
+ VIRTCHNL_RXDID_0_16B_BASE = 0,
+ VIRTCHNL_RXDID_1_32B_BASE = 1,
+ VIRTCHNL_RXDID_2_FLEX_SQ_NIC = 2,
+ VIRTCHNL_RXDID_3_FLEX_SQ_SW = 3,
+ VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB = 4,
+ VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL = 5,
+ VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2 = 6,
+ VIRTCHNL_RXDID_7_HW_RSVD = 7,
+ /* 8 through 15 are reserved */
+ VIRTCHNL_RXDID_16_COMMS_GENERIC = 16,
+ VIRTCHNL_RXDID_17_COMMS_AUX_VLAN = 17,
+ VIRTCHNL_RXDID_18_COMMS_AUX_IPV4 = 18,
+ VIRTCHNL_RXDID_19_COMMS_AUX_IPV6 = 19,
+ VIRTCHNL_RXDID_20_COMMS_AUX_FLOW = 20,
+ VIRTCHNL_RXDID_21_COMMS_AUX_TCP = 21,
+ /* 22 through 63 are reserved */
+};
+
+/* RX descriptor ID bitmasks */
+enum virtchnl_rx_desc_id_bitmasks {
+ VIRTCHNL_RXDID_0_16B_BASE_M = BIT(VIRTCHNL_RXDID_0_16B_BASE),
+ VIRTCHNL_RXDID_1_32B_BASE_M = BIT(VIRTCHNL_RXDID_1_32B_BASE),
+ VIRTCHNL_RXDID_2_FLEX_SQ_NIC_M = BIT(VIRTCHNL_RXDID_2_FLEX_SQ_NIC),
+ VIRTCHNL_RXDID_3_FLEX_SQ_SW_M = BIT(VIRTCHNL_RXDID_3_FLEX_SQ_SW),
+ VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB_M = BIT(VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB),
+ VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL_M = BIT(VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL),
+ VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2_M = BIT(VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2),
+ VIRTCHNL_RXDID_7_HW_RSVD_M = BIT(VIRTCHNL_RXDID_7_HW_RSVD),
+	/* 8 through 15 are reserved */

+ VIRTCHNL_RXDID_16_COMMS_GENERIC_M = BIT(VIRTCHNL_RXDID_16_COMMS_GENERIC),
+ VIRTCHNL_RXDID_17_COMMS_AUX_VLAN_M = BIT(VIRTCHNL_RXDID_17_COMMS_AUX_VLAN),
+ VIRTCHNL_RXDID_18_COMMS_AUX_IPV4_M = BIT(VIRTCHNL_RXDID_18_COMMS_AUX_IPV4),
+ VIRTCHNL_RXDID_19_COMMS_AUX_IPV6_M = BIT(VIRTCHNL_RXDID_19_COMMS_AUX_IPV6),
+ VIRTCHNL_RXDID_20_COMMS_AUX_FLOW_M = BIT(VIRTCHNL_RXDID_20_COMMS_AUX_FLOW),
+ VIRTCHNL_RXDID_21_COMMS_AUX_TCP_M = BIT(VIRTCHNL_RXDID_21_COMMS_AUX_TCP),
+ /* 22 through 63 are reserved */
+};
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code. The
+ * crc_disable flag disables CRC stripping on the VF. Setting
+ * the crc_disable flag to 1 will disable CRC stripping for each
+ * queue in the VF where the flag is set. The VIRTCHNL_VF_OFFLOAD_CRC
+ * offload must have been set prior to sending this info or the PF
+ * will ignore the request. This flag should be set the same for
+ * all of the queues for a VF.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u32 ring_len; /* number of descriptors, multiple of 32 */
+ u16 hdr_size;
+ u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+ u32 databuffer_size;
+ u32 max_pkt_size;
+ u8 crc_disable;
+ u8 pad1[3];
+ u64 dma_ring_addr;
+
+ /* see enum virtchnl_rx_hsplit; deprecated with AVF 1.0 */
+ s32 rx_split_pos;
+ u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ * NOTE: The VF is not required to configure all queues in a single request.
+ * It may send multiple messages. PF drivers must correctly handle all VF
+ * requests.
+ */
+struct virtchnl_queue_pair_info {
+ /* NOTE: vsi_id and queue_id should be identical for both queues. */
+ struct virtchnl_txq_info txq;
+ struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+ u32 pad;
+ struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
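+
+/* Usage sketch (illustrative only): qpair[1] is a variable-length tail, so
+ * the external buffer for VIRTCHNL_OP_CONFIG_VSI_QUEUES is sized as the base
+ * structure plus (num_queue_pairs - 1) extra entries. Assumes
+ * num_queue_pairs >= 1; the helper name is hypothetical.
+ */
+static inline u32
+virtchnl_example_vsi_qcfg_msg_size(u16 num_queue_pairs)
+{
+	return (u32)(sizeof(struct virtchnl_vsi_queue_config_info) +
+		     (num_queue_pairs - 1) *
+		     sizeof(struct virtchnl_queue_pair_info));
+}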
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF. Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated. This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support. If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+ u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0. The VF may not request
+ * that vector 0 be used for traffic.
+ * PF configures interrupt mapping and returns status.
+ * NOTE: due to hardware requirements, all active queues (both TX and RX)
+ * should be mapped to interrupts, even if the driver intends to operate
+ * only in polling mode. In this case the interrupt may be disabled, but
+ * the ITR timer will still run to trigger writebacks.
+ */
+struct virtchnl_vector_map {
+ u16 vsi_id;
+ u16 vector_id;
+ u16 rxq_map;
+ u16 txq_map;
+ u16 rxitr_idx;
+ u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+ u16 num_vectors;
+ struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ * NOTE: The VF is not required to enable/disable all queues in a single
+ * request. It may send multiple messages.
+ * PF drivers must correctly handle all VF requests.
+ */
+struct virtchnl_queue_select {
+ u16 vsi_id;
+ u16 pad;
+ u32 rx_queues;
+ u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
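+
+/* Usage sketch (illustrative only): VIRTCHNL_OP_ENABLE_QUEUES and
+ * VIRTCHNL_OP_DISABLE_QUEUES take per-queue bitmaps, so acting on queue
+ * pairs 0..(n - 1) sets the low n bits of each field. The helper name is
+ * hypothetical.
+ */
+static inline void
+virtchnl_example_select_first_n_pairs(struct virtchnl_queue_select *qs,
+				      u16 vsi_id, u32 n)
+{
+	qs->vsi_id = vsi_id;
+	qs->pad = 0;
+	qs->rx_queues = (n >= 32) ? 0xFFFFFFFF : (u32)(BIT(n) - 1);
+	qs->tx_queues = qs->rx_queues;
+}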
+
+/* VIRTCHNL_OP_GET_MAX_RSS_QREGION
+ *
+ * if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated in VIRTCHNL_OP_GET_VF_RESOURCES
+ * then this op must be supported.
+ *
+ * VF sends this message in order to query the max RSS queue region
+ * size supported by PF, when VIRTCHNL_VF_LARGE_NUM_QPAIRS is enabled.
+ * This information should be used when configuring the RSS LUT and/or
+ * configuring queue region based filters.
+ *
+ * The maximum RSS queue region is 2^qregion_width. So, a qregion_width
+ * of 6 would inform the VF that the PF supports a maximum RSS queue region
+ * of 64.
+ *
+ * A queue region represents a range of queues that can be used to configure
+ * a RSS LUT. For example, if a VF is given 64 queues, but only a max queue
+ * region size of 16 (i.e. 2^qregion_width = 16) then it will only be able
+ * to configure the RSS LUT with queue indices from 0 to 15. However, other
+ * filters can be used to direct packets to queues >15 via specifying a queue
+ * base/offset and queue region width.
+ */
+struct virtchnl_max_rss_qregion {
+ u16 vport_id;
+ u16 qregion_width;
+ u8 pad[4];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_max_rss_qregion);
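+
+/* Usage sketch (illustrative only): as described above, the usable RSS queue
+ * region size is 2^qregion_width. The helper name is hypothetical.
+ */
+static inline u32
+virtchnl_example_rss_qregion_size(const struct virtchnl_max_rss_qregion *q)
+{
+	return (u32)BIT(q->qregion_width);
+}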
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_LEGACY
+ * Prior to adding the @type member to virtchnl_ether_addr, there were 2 pad
+ * bytes. Moving forward all VF drivers should not set type to
+ * VIRTCHNL_ETHER_ADDR_LEGACY. This is only here to not break previous/legacy
+ * behavior. The control plane function (i.e. PF) can use a best effort method
+ * of tracking the primary/device unicast in this case, but there is no
+ * guarantee and functionality depends on the implementation of the PF.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_PRIMARY
+ * All VF drivers should set @type to VIRTCHNL_ETHER_ADDR_PRIMARY for the
+ * primary/device unicast MAC address filter for VIRTCHNL_OP_ADD_ETH_ADDR and
+ * VIRTCHNL_OP_DEL_ETH_ADDR. This allows for the underlying control plane
+ * function (i.e. PF) to accurately track and use this MAC address for
+ * displaying on the host and for VM/function reset.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_EXTRA
+ * All VF drivers should set @type to VIRTCHNL_ETHER_ADDR_EXTRA for any extra
+ * unicast and/or multicast filters that are being added/deleted via
+ * VIRTCHNL_OP_DEL_ETH_ADDR/VIRTCHNL_OP_ADD_ETH_ADDR respectively.
+ */
+struct virtchnl_ether_addr {
+ u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ u8 type;
+#define VIRTCHNL_ETHER_ADDR_LEGACY 0
+#define VIRTCHNL_ETHER_ADDR_PRIMARY 1
+#define VIRTCHNL_ETHER_ADDR_EXTRA 2
+#define VIRTCHNL_ETHER_ADDR_TYPE_MASK 3 /* first two bits of type are valid */
+ u8 pad;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+ u16 vsi_id;
+ u16 num_elements;
+ struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
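+
+/* Usage sketch (illustrative only): per the VIRTCHNL_ETHER_ADDR_* notes
+ * above, the device/primary unicast MAC should be added with type set to
+ * VIRTCHNL_ETHER_ADDR_PRIMARY so the PF can track it accurately. The helper
+ * name is hypothetical.
+ */
+static inline void
+virtchnl_example_fill_primary_mac(struct virtchnl_ether_addr *entry,
+				  const u8 *mac)
+{
+	int i;
+
+	for (i = 0; i < VIRTCHNL_ETH_LENGTH_OF_ADDRESS; i++)
+		entry->addr[i] = mac[i];
+	entry->type = VIRTCHNL_ETHER_ADDR_PRIMARY;
+	entry->pad = 0;
+}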
+
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+ u16 vsi_id;
+ u16 num_elements;
+ u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* This enum is used for all of the VIRTCHNL_VF_OFFLOAD_VLAN_V2_CAPS related
+ * structures and opcodes.
+ *
+ * VIRTCHNL_VLAN_UNSUPPORTED - This field is not supported and if a VF driver
+ * populates it the PF should return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 - This field supports 0x8100 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 - This field supports 0x88A8 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 - This field supports 0x9100 ethertype.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_AND - Used when multiple ethertypes can be supported
+ * by the PF concurrently. For example, if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 AND VIRTCHNL_VLAN_ETHERTYPE_88A8 filters it
+ * would OR the following bits:
+ *
+ *	VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * The VF would interpret this as VLAN filtering can be supported on both 0x8100
+ * and 0x88A8 VLAN ethertypes.
+ *
+ * VIRTCHNL_ETHERTYPE_XOR - Used when only a single ethertype can be supported
+ * by the PF concurrently. For example if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 XOR VIRTCHNL_VLAN_ETHERTYPE_88A8 stripping
+ * offload it would OR the following bits:
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * The VF would interpret this as VLAN stripping can be supported on either
+ * 0x8100 or 0x88a8 VLAN ethertypes. So when requesting VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 the specified ethertype will override
+ * the previously set value.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 - Used to tell the VF to insert and/or
+ * strip the VLAN tag using the L2TAG1 field of the Tx/Rx descriptors.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to insert hardware
+ * offloaded VLAN tags using the L2TAG2 field of the Tx descriptor.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 - Used to tell the VF to strip hardware
+ * offloaded VLAN tags using the L2TAG2_2 field of the Rx descriptor.
+ *
+ * VIRTCHNL_VLAN_PRIO - This field supports VLAN priority bits. This is used for
+ * VLAN filtering if the underlying PF supports it.
+ *
+ * VIRTCHNL_VLAN_TOGGLE_ALLOWED - This field is used to say whether a
+ * certain VLAN capability can be toggled. For example if the underlying PF/CP
+ * allows the VF to toggle VLAN filtering, stripping, and/or insertion it should
+ * set this bit along with the supported ethertypes.
+ */
+enum virtchnl_vlan_support {
+ VIRTCHNL_VLAN_UNSUPPORTED = 0,
+ VIRTCHNL_VLAN_ETHERTYPE_8100 = 0x00000001,
+ VIRTCHNL_VLAN_ETHERTYPE_88A8 = 0x00000002,
+ VIRTCHNL_VLAN_ETHERTYPE_9100 = 0x00000004,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 = 0x00000100,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 = 0x00000200,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 = 0x00000400,
+ VIRTCHNL_VLAN_PRIO = 0x01000000,
+ VIRTCHNL_VLAN_FILTER_MASK = 0x10000000,
+ VIRTCHNL_VLAN_ETHERTYPE_AND = 0x20000000,
+ VIRTCHNL_VLAN_ETHERTYPE_XOR = 0x40000000,
+ VIRTCHNL_VLAN_TOGGLE = 0x80000000
+};
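+
+/* Usage sketch (illustrative only): per the comment above, a capability
+ * field supports a given ethertype only if it is not
+ * VIRTCHNL_VLAN_UNSUPPORTED, and the VF may toggle the capability only if
+ * VIRTCHNL_VLAN_TOGGLE is also set. The helper name is hypothetical.
+ */
+static inline bool
+virtchnl_example_vlan_cap_toggleable(u32 caps_field, u32 ethertype_bit)
+{
+	if (caps_field == VIRTCHNL_VLAN_UNSUPPORTED)
+		return false;
+	return (caps_field & VIRTCHNL_VLAN_TOGGLE) &&
+	       (caps_field & ethertype_bit);
+}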
+
+/* This structure is used as part of the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * for filtering, insertion, and stripping capabilities.
+ *
+ * If only outer capabilities are supported (for filtering, insertion, and/or
+ * stripping) then this refers to the outer most or single VLAN from the VF's
+ * perspective.
+ *
+ * If only inner capabilities are supported (for filtering, insertion, and/or
+ * stripping) then this refers to the outer most or single VLAN from the VF's
+ * perspective. Functionally this is the same as if only outer capabilities are
+ * supported. The VF driver is just forced to use the inner fields when
+ * adding/deleting filters and enabling/disabling offloads (if supported).
+ *
+ * If both outer and inner capabilities are supported (for filtering, insertion,
+ * and/or stripping) then outer refers to the outer most or single VLAN and
+ * inner refers to the second VLAN, if it exists, in the packet.
+ *
+ * There is no support for tunneled VLAN offloads, so outer or inner are never
+ * referring to a tunneled packet from the VF's perspective.
+ */
+struct virtchnl_vlan_supported_caps {
+ u32 outer;
+ u32 inner;
+};
+
+/* The PF populates these fields based on the supported VLAN filtering. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ADD_VLAN_V2 or VIRTCHNL_OP_DEL_VLAN_V2 messages using
+ * the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN filtering setting if the
+ * VIRTCHNL_VLAN_TOGGLE bit is set.
+ *
+ * The ethertype(s) specified in the ethertype_init field are the ethertypes
+ * enabled for VLAN filtering. VLAN filtering in this case refers to the outer
+ * most VLAN from the VF's perspective. If both inner and outer filtering are
+ * allowed then ethertype_init only refers to the outer most VLAN, as the only
+ * VLAN ethertype supported for inner VLAN filtering is
+ * VIRTCHNL_VLAN_ETHERTYPE_8100. By default, inner VLAN filtering is disabled
+ * when both inner and outer filtering are allowed.
+ *
+ * The max_filters field tells the VF how many VLAN filters it's allowed to have
+ * at any one time. If it exceeds this amount and tries to add another filter,
+ * then the request will be rejected by the PF. To prevent failures, the VF
+ * should keep track of how many VLAN filters it has added and not attempt to
+ * add more than max_filters.
+ */
+struct virtchnl_vlan_filtering_caps {
+ struct virtchnl_vlan_supported_caps filtering_support;
+ u32 ethertype_init;
+ u16 max_filters;
+ u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_filtering_caps);
+
+/* This enum is used for the virtchnl_vlan_offload_caps structure to specify
+ * if the PF supports a different ethertype for stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION - The ethertype(s) specified
+ * for stripping also affect the ethertype(s) specified for insertion and vice
+ * versa. If the VF tries to configure VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 with VIRTCHNL_VLAN_ETHERTYPE_8100 then
+ * that will be the ethertype for both stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED - The ethertype(s) specified for
+ * stripping do not affect the ethertype(s) specified for insertion and vice
+ * versa.
+ */
+enum virtchnl_vlan_ethertype_match {
+ VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION = 0,
+ VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED = 1,
+};
+
+/* The PF populates these fields based on the supported VLAN offloads. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 or
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 messages using the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN offload setting if the
+ * VIRTCHNL_VLAN_TOGGLE_ALLOWED bit is set.
+ *
+ * The VF driver needs to be aware of how the tags are stripped by hardware and
+ * inserted by the VF driver based on the level of offload support. The PF will
+ * populate these fields based on where the VLAN tags are expected to be
+ * offloaded via the VIRTCHNL_VLAN_TAG_LOCATION_* bits. The VF will need to
+ * interpret these fields. See the definition of the
+ * VIRTCHNL_VLAN_TAG_LOCATION_* bits above the virtchnl_vlan_support
+ * enumeration.
+ */
+struct virtchnl_vlan_offload_caps {
+ struct virtchnl_vlan_supported_caps stripping_support;
+ struct virtchnl_vlan_supported_caps insertion_support;
+ u32 ethertype_init;
+ u8 ethertype_match;
+ u8 pad[3];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_vlan_offload_caps);
+
+/* VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * VF sends this message to determine its VLAN capabilities.
+ *
+ * PF will mark which capabilities it supports based on hardware support and
+ * current configuration. For example, if a port VLAN is configured the PF will
+ * not allow outer VLAN filtering, stripping, or insertion to be configured so
+ * it will block these features from the VF.
+ *
+ * The VF will need to cross-reference its capabilities with the PF's
+ * capabilities in the response message from the PF to determine the VLAN
+ * support.
+ */
+struct virtchnl_vlan_caps {
+ struct virtchnl_vlan_filtering_caps filtering;
+ struct virtchnl_vlan_offload_caps offloads;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_caps);
+
+struct virtchnl_vlan {
+ u16 tci; /* tci[15:13] = PCP and tci[11:0] = VID */
+ u16 tci_mask; /* only valid if VIRTCHNL_VLAN_FILTER_MASK set in
+ * filtering caps
+ */
+ u16 tpid; /* 0x8100, 0x88a8, etc. and only type(s) set in
+ * filtering caps. Note that tpid here does not refer to
+ * VIRTCHNL_VLAN_ETHERTYPE_*, but it refers to the
+ * actual 2-byte VLAN TPID
+ */
+ u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan);
+
+struct virtchnl_vlan_filter {
+ struct virtchnl_vlan inner;
+ struct virtchnl_vlan outer;
+ u8 pad[16];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(32, virtchnl_vlan_filter);
+
+/* VIRTCHNL_OP_ADD_VLAN_V2
+ * VIRTCHNL_OP_DEL_VLAN_V2
+ *
+ * VF sends these messages to add/del one or more VLAN tag filters for Rx
+ * traffic.
+ *
+ * The PF attempts to add the filters and returns status.
+ *
+ * The VF should only ever attempt to add/del virtchnl_vlan_filter(s) using the
+ * supported fields negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS.
+ */
+struct virtchnl_vlan_filter_list_v2 {
+ u16 vport_id;
+ u16 num_elements;
+ u8 pad[4];
+ struct virtchnl_vlan_filter filters[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_filter_list_v2);
+
+/* VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2
+ *
+ * VF sends this message to enable or disable VLAN stripping or insertion. It
+ * also needs to specify an ethertype. The VF knows which VLAN ethertypes are
+ * allowed and whether or not it's allowed to enable/disable the specific
+ * offload via the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to
+ * parse the virtchnl_vlan_caps.offloads fields to determine which offload
+ * messages are allowed.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.offloads in the
+ * following manner the VF will be allowed to enable and/or disable 0x8100 inner
+ * VLAN insertion and/or stripping via the opcodes listed above. Inner in this
+ * case means the outer most or single VLAN from the VF's perspective. This is
+ * because no outer offloads are supported. See the comments above the
+ * virtchnl_vlan_supported_caps structure for more details.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.inner =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.inner =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * In order to enable inner (again note that in this case inner is the outer
+ * most or single VLAN from the VF's perspective) VLAN stripping for 0x8100
+ * VLANs, the VF would populate the virtchnl_vlan_setting structure in the
+ * following manner and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message.
+ *
+ * virtchnl_vlan_setting.inner_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * The reason that VLAN TPID(s) are not being used for the
+ * outer_ethertype_setting and inner_ethertype_setting fields is that it's
+ * possible a device could support VLAN insertion and/or stripping offload on
+ * multiple ethertypes concurrently, so this method allows a VF to request
+ * multiple ethertypes in one message using the virtchnl_vlan_support
+ * enumeration.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.offloads in the
+ * following manner the VF will be allowed to enable 0x8100 and 0x88a8 outer
+ * VLAN insertion and stripping simultaneously. The
+ * virtchnl_vlan_caps.offloads.ethertype_match field will also have to be
+ * populated based on what the PF can support.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * In order to enable outer VLAN stripping for 0x8100 and 0x88a8 VLANs, the VF
+ * would populate the virtchnl_vlan_setting structure in the following manner
+ * and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting =
+ *			VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ *			VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * There is also the case where a PF and the underlying hardware can support
+ * VLAN offloads on multiple ethertypes, but not concurrently. For example, if
+ * the PF populates the virtchnl_vlan_caps.offloads in the following manner the
+ * VF will be allowed to enable and/or disable 0x8100 XOR 0x88a8 outer VLAN
+ * offloads. The ethertypes must match for stripping and insertion.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * virtchnl_vlan_caps.offloads.ethertype_match =
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION;
+ *
+ * In order to enable outer VLAN stripping for 0x88a8 VLANs, the VF would
+ * populate the virtchnl_vlan_setting structure in the following manner and send
+ * the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2. Also, this will change the
+ * ethertype for VLAN insertion if it's enabled. So, for completeness, a
+ * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 with the same ethertype should be sent.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting = VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2
+ *
+ * VF sends this message to enable or disable VLAN filtering. It also needs to
+ * specify an ethertype. The VF knows which VLAN ethertypes are allowed and
+ * whether or not it's allowed to enable/disable filtering via the
+ * VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to
+ * parse the virtchnl_vlan_caps.filtering fields to determine which, if any,
+ * filtering messages are allowed.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.filtering in the
+ * following manner the VF will be allowed to enable/disable 0x8100 and 0x88a8
+ * outer VLAN filtering together. Note, that the VIRTCHNL_VLAN_ETHERTYPE_AND
+ * means that all filtering ethertypes will be enabled and disabled together
+ * regardless of the request from the VF. This means that the underlying
+ * hardware only supports VLAN filtering for all of the specified ethertypes
+ * or none of them.
+ *
+ * virtchnl_vlan_caps.filtering.filtering_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ *			VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * In order to enable outer VLAN filtering for 0x88a8 and 0x8100 VLANs (0x9100
+ * VLANs aren't supported by the VF driver), the VF would populate the
+ * virtchnl_vlan_setting structure in the following manner and send the
+ * VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2. The same message format would be used
+ * to disable outer VLAN filtering for 0x88a8 and 0x8100 VLANs, but the
+ * VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 opcode is used.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ */
+struct virtchnl_vlan_setting {
+ u32 outer_ethertype_setting;
+ u32 inner_ethertype_setting;
+ u16 vport_id;
+ u8 pad[6];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_setting);
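+
+/* Usage sketch (illustrative only): populating virtchnl_vlan_setting for the
+ * 0x8100 inner-stripping example described above, before sending
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2. The helper name is hypothetical.
+ */
+static inline void
+virtchnl_example_vlan_strip_8100(struct virtchnl_vlan_setting *s, u16 vport_id)
+{
+	int i;
+
+	s->outer_ethertype_setting = 0;
+	s->inner_ethertype_setting = VIRTCHNL_VLAN_ETHERTYPE_8100;
+	s->vport_id = vport_id;
+	for (i = 0; i < 6; i++)
+		s->pad[i] = 0;
+}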
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+ u16 vsi_id;
+ u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC 0x00000001
+#define FLAG_VF_MULTICAST_PROMISC 0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+ u64 rx_bytes; /* received bytes */
+ u64 rx_unicast; /* received unicast pkts */
+ u64 rx_multicast; /* received multicast pkts */
+ u64 rx_broadcast; /* received broadcast pkts */
+ u64 rx_discards;
+ u64 rx_unknown_protocol;
+ u64 tx_bytes; /* transmitted bytes */
+ u64 tx_unicast; /* transmitted unicast pkts */
+ u64 tx_multicast; /* transmitted multicast pkts */
+ u64 tx_broadcast; /* transmitted broadcast pkts */
+ u64 tx_discards;
+ u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+ u16 vsi_id;
+ u16 key_len;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+ u16 vsi_id;
+ u16 lut_entries;
+ u8 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
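+
+/* Usage sketch (illustrative only): key[1] and lut[1] are variable-length
+ * tails, so the VIRTCHNL_OP_CONFIG_RSS_KEY / VIRTCHNL_OP_CONFIG_RSS_LUT
+ * buffers are sized as the base structure plus (len - 1) extra bytes, with
+ * the lengths taken from rss_key_size/rss_lut_size in the
+ * VIRTCHNL_OP_GET_VF_RESOURCES reply. Assumes len >= 1; the helper names are
+ * hypothetical.
+ */
+static inline u32
+virtchnl_example_rss_key_msg_size(u16 key_len)
+{
+	return (u32)(sizeof(struct virtchnl_rss_key) + key_len - 1);
+}
+
+static inline u32
+virtchnl_example_rss_lut_msg_size(u16 lut_entries)
+{
+	return (u32)(sizeof(struct virtchnl_rss_lut) + lut_entries - 1);
+}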
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+ u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* Type of RSS algorithm */
+enum virtchnl_rss_algorithm {
+ VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0,
+ VIRTCHNL_RSS_ALG_R_ASYMMETRIC = 1,
+ VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC = 2,
+ VIRTCHNL_RSS_ALG_XOR_SYMMETRIC = 3,
+};
+
+/* This is used by PF driver to enforce how many channels can be supported.
+ * When the ADQ_V2 capability is negotiated, it will allow 16 channels;
+ * otherwise the PF driver will allow only a maximum of 4 channels.
+ */
+#define VIRTCHNL_MAX_ADQ_CHANNELS 4
+#define VIRTCHNL_MAX_ADQ_V2_CHANNELS 16
+
+/* VIRTCHNL_OP_ENABLE_CHANNELS
+ * VIRTCHNL_OP_DISABLE_CHANNELS
+ * VF sends these messages to enable or disable channels based on
+ * the user specified queue count and queue offset for each traffic class.
+ * This struct encompasses all the information that the PF needs from
+ * VF to create a channel.
+ */
+struct virtchnl_channel_info {
+ u16 count; /* number of queues in a channel */
+ u16 offset; /* queues in a channel start from 'offset' */
+ u32 pad;
+ u64 max_tx_rate;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_channel_info);
+
+struct virtchnl_tc_info {
+ u32 num_tc;
+ u32 pad;
+ struct virtchnl_channel_info list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_tc_info);
+
+/* VIRTCHNL_OP_ADD_CLOUD_FILTER
+ * VIRTCHNL_OP_DEL_CLOUD_FILTER
+ * VF sends these messages to add or delete a cloud filter based on the
+ * user specified match and action filters. These structures encompass
+ * all the information that the PF needs from the VF to add/delete a
+ * cloud filter.
+ */
+
+struct virtchnl_l4_spec {
+ u8 src_mac[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ u8 dst_mac[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+	/* vlan_prio is part of this 16-bit field even from the OS perspective:
+	 * bits 11..0 hold the actual vlan_id and bits 14..12 hold vlan_prio.
+	 * If vlan_prio offload is added in the future, pass that information
+	 * in bits 14..12 of the "vlan_id" field.
+	 */
+ __be16 vlan_id;
+ __be16 pad; /* reserved for future use */
+ __be32 src_ip[4];
+ __be32 dst_ip[4];
+ __be16 src_port;
+ __be16 dst_port;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(52, virtchnl_l4_spec);
+
+union virtchnl_flow_spec {
+ struct virtchnl_l4_spec tcp_spec;
+ u8 buffer[128]; /* reserved for future use */
+};
+
+VIRTCHNL_CHECK_UNION_LEN(128, virtchnl_flow_spec);
+
+enum virtchnl_action {
+ /* action types */
+ VIRTCHNL_ACTION_DROP = 0,
+ VIRTCHNL_ACTION_TC_REDIRECT,
+ VIRTCHNL_ACTION_PASSTHRU,
+ VIRTCHNL_ACTION_QUEUE,
+ VIRTCHNL_ACTION_Q_REGION,
+ VIRTCHNL_ACTION_MARK,
+ VIRTCHNL_ACTION_COUNT,
+};
+
+enum virtchnl_flow_type {
+ /* flow types */
+ VIRTCHNL_TCP_V4_FLOW = 0,
+ VIRTCHNL_TCP_V6_FLOW,
+ VIRTCHNL_UDP_V4_FLOW,
+ VIRTCHNL_UDP_V6_FLOW,
+};
+
+struct virtchnl_filter {
+ union virtchnl_flow_spec data;
+ union virtchnl_flow_spec mask;
+
+ /* see enum virtchnl_flow_type */
+ s32 flow_type;
+
+ /* see enum virtchnl_action */
+ s32 action;
+ u32 action_meta;
+ u8 field_flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(272, virtchnl_filter);
+
+struct virtchnl_shaper_bw {
+ /* Unit is Kbps */
+ u32 committed;
+ u32 peak;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_shaper_bw);
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+ VIRTCHNL_EVENT_UNKNOWN = 0,
+ VIRTCHNL_EVENT_LINK_CHANGE,
+ VIRTCHNL_EVENT_RESET_IMPENDING,
+ VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO 0
+#define PF_EVENT_SEVERITY_ATTENTION 1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED 2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM 255
+
+struct virtchnl_pf_event {
+ /* see enum virtchnl_event_codes */
+ s32 event;
+ union {
+ /* If the PF driver does not support the new speed reporting
+ * capabilities then use link_event else use link_event_adv to
+ * get the speed and link information. The ability to understand
+ * new speeds is indicated by setting the capability flag
+ * VIRTCHNL_VF_CAP_ADV_LINK_SPEED in vf_cap_flags parameter
+ * in virtchnl_vf_resource struct and can be used to determine
+ * which link event struct to use below.
+ */
+ struct {
+ enum virtchnl_link_speed link_speed;
+ bool link_status;
+ u8 pad[3];
+ } link_event;
+ struct {
+ /* link_speed provided in Mbps */
+ u32 link_speed;
+ u8 link_status;
+ u8 pad[3];
+ } link_event_adv;
+ } event_data;
+
+ s32 severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
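+
+/* Usage sketch (illustrative only): as noted in the union comment above, the
+ * VF should read link_event_adv only when VIRTCHNL_VF_CAP_ADV_LINK_SPEED was
+ * negotiated; otherwise it must fall back to the legacy link_event field.
+ * The helper name is hypothetical.
+ */
+static inline bool
+virtchnl_example_link_is_up(const struct virtchnl_pf_event *ev,
+			    u32 vf_cap_flags)
+{
+	if (vf_cap_flags & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
+		return ev->event_data.link_event_adv.link_status != 0;
+	return ev->event_data.link_event.link_status;
+}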
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+ VIRTCHNL_VFR_INPROGRESS = 0,
+ VIRTCHNL_VFR_COMPLETED,
+ VIRTCHNL_VFR_VFACTIVE,
+};
+
+#define VIRTCHNL_MAX_NUM_PROTO_HDRS 32
+#define PROTO_HDR_SHIFT 5
+#define PROTO_HDR_FIELD_START(proto_hdr_type) \
+ (proto_hdr_type << PROTO_HDR_SHIFT)
+#define PROTO_HDR_FIELD_MASK ((1UL << PROTO_HDR_SHIFT) - 1)
+
+/* The VF uses these macros to configure each protocol header.
+ * They specify which protocol headers and protocol header fields to use,
+ * based on virtchnl_proto_hdr_type and virtchnl_proto_hdr_field.
+ * @param hdr: a struct of virtchnl_proto_hdr
+ * @param hdr_type: ETH/IPV4/TCP, etc
+ * @param field: SRC/DST/TEID/SPI, etc
+ */
+#define VIRTCHNL_ADD_PROTO_HDR_FIELD(hdr, field) \
+ ((hdr)->field_selector |= BIT((field) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_DEL_PROTO_HDR_FIELD(hdr, field) \
+ ((hdr)->field_selector &= ~BIT((field) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_TEST_PROTO_HDR_FIELD(hdr, val) \
+ ((hdr)->field_selector & BIT((val) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_GET_PROTO_HDR_FIELD(hdr) ((hdr)->field_selector)
+
+#define VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, hdr_type, field) \
+ (VIRTCHNL_ADD_PROTO_HDR_FIELD(hdr, \
+ VIRTCHNL_PROTO_HDR_ ## hdr_type ## _ ## field))
+#define VIRTCHNL_DEL_PROTO_HDR_FIELD_BIT(hdr, hdr_type, field) \
+ (VIRTCHNL_DEL_PROTO_HDR_FIELD(hdr, \
+ VIRTCHNL_PROTO_HDR_ ## hdr_type ## _ ## field))
+
+#define VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, hdr_type) \
+ ((hdr)->type = VIRTCHNL_PROTO_HDR_ ## hdr_type)
+#define VIRTCHNL_GET_PROTO_HDR_TYPE(hdr) \
+ (((hdr)->type) >> PROTO_HDR_SHIFT)
+#define VIRTCHNL_TEST_PROTO_HDR_TYPE(hdr, val) \
+ ((hdr)->type == ((s32)((val) >> PROTO_HDR_SHIFT)))
+#define VIRTCHNL_TEST_PROTO_HDR(hdr, val) \
+ (VIRTCHNL_TEST_PROTO_HDR_TYPE(hdr, val) && \
+ VIRTCHNL_TEST_PROTO_HDR_FIELD(hdr, val))
+
+/* Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers encapsulates or is encapsulated using/by
+ * tunneling or encapsulation protocols for network virtualization.
+ */
+enum virtchnl_proto_hdr_type {
+ VIRTCHNL_PROTO_HDR_NONE,
+ VIRTCHNL_PROTO_HDR_ETH,
+ VIRTCHNL_PROTO_HDR_S_VLAN,
+ VIRTCHNL_PROTO_HDR_C_VLAN,
+ VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_TCP,
+ VIRTCHNL_PROTO_HDR_UDP,
+ VIRTCHNL_PROTO_HDR_SCTP,
+ VIRTCHNL_PROTO_HDR_GTPU_IP,
+ VIRTCHNL_PROTO_HDR_GTPU_EH,
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN,
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP,
+ VIRTCHNL_PROTO_HDR_PPPOE,
+ VIRTCHNL_PROTO_HDR_L2TPV3,
+ VIRTCHNL_PROTO_HDR_ESP,
+ VIRTCHNL_PROTO_HDR_AH,
+ VIRTCHNL_PROTO_HDR_PFCP,
+ VIRTCHNL_PROTO_HDR_GTPC,
+ VIRTCHNL_PROTO_HDR_ECPRI,
+ VIRTCHNL_PROTO_HDR_L2TPV2,
+ VIRTCHNL_PROTO_HDR_PPP,
+ /* IPv4 and IPv6 Fragment header types are only associated to
+ * VIRTCHNL_PROTO_HDR_IPV4 and VIRTCHNL_PROTO_HDR_IPV6 respectively,
+ * cannot be used independently.
+ */
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
+ VIRTCHNL_PROTO_HDR_GRE,
+};
+
+/* Protocol header field within a protocol header. */
+enum virtchnl_proto_hdr_field {
+ /* ETHER */
+ VIRTCHNL_PROTO_HDR_ETH_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ETH),
+ VIRTCHNL_PROTO_HDR_ETH_DST,
+ VIRTCHNL_PROTO_HDR_ETH_ETHERTYPE,
+ /* S-VLAN */
+ VIRTCHNL_PROTO_HDR_S_VLAN_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_S_VLAN),
+ /* C-VLAN */
+ VIRTCHNL_PROTO_HDR_C_VLAN_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_C_VLAN),
+ /* IPV4 */
+ VIRTCHNL_PROTO_HDR_IPV4_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4),
+ VIRTCHNL_PROTO_HDR_IPV4_DST,
+ VIRTCHNL_PROTO_HDR_IPV4_DSCP,
+ VIRTCHNL_PROTO_HDR_IPV4_TTL,
+ VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_CHKSUM,
+ /* IPV6 */
+ VIRTCHNL_PROTO_HDR_IPV6_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
+ VIRTCHNL_PROTO_HDR_IPV6_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_TC,
+ VIRTCHNL_PROTO_HDR_IPV6_HOP_LIMIT,
+ VIRTCHNL_PROTO_HDR_IPV6_PROT,
+ /* IPV6 Prefix */
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX32_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX32_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX40_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX40_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX48_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX48_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX56_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX56_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* TCP */
+ VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
+ VIRTCHNL_PROTO_HDR_TCP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_TCP_CHKSUM,
+ /* UDP */
+ VIRTCHNL_PROTO_HDR_UDP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_UDP),
+ VIRTCHNL_PROTO_HDR_UDP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_UDP_CHKSUM,
+ /* SCTP */
+ VIRTCHNL_PROTO_HDR_SCTP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_SCTP),
+ VIRTCHNL_PROTO_HDR_SCTP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_SCTP_CHKSUM,
+ /* GTPU_IP */
+ VIRTCHNL_PROTO_HDR_GTPU_IP_TEID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_IP),
+ /* GTPU_EH */
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH),
+ VIRTCHNL_PROTO_HDR_GTPU_EH_QFI,
+ /* PPPOE */
+ VIRTCHNL_PROTO_HDR_PPPOE_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_PPPOE),
+ /* L2TPV3 */
+ VIRTCHNL_PROTO_HDR_L2TPV3_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_L2TPV3),
+ /* ESP */
+ VIRTCHNL_PROTO_HDR_ESP_SPI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ESP),
+ /* AH */
+ VIRTCHNL_PROTO_HDR_AH_SPI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_AH),
+ /* PFCP */
+ VIRTCHNL_PROTO_HDR_PFCP_S_FIELD =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_PFCP),
+ VIRTCHNL_PROTO_HDR_PFCP_SEID,
+ /* GTPC */
+ VIRTCHNL_PROTO_HDR_GTPC_TEID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPC),
+ /* ECPRI */
+ VIRTCHNL_PROTO_HDR_ECPRI_MSG_TYPE =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ECPRI),
+ VIRTCHNL_PROTO_HDR_ECPRI_PC_RTC_ID,
+ /* IPv4 Dummy Fragment */
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
+ /* IPv6 Extension Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
+ /* GTPU_DWN/UP */
+ VIRTCHNL_PROTO_HDR_GTPU_DWN_QFI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN),
+ VIRTCHNL_PROTO_HDR_GTPU_UP_QFI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP),
+ /* L2TPv2 */
+ VIRTCHNL_PROTO_HDR_L2TPV2_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_L2TPV2),
+ VIRTCHNL_PROTO_HDR_L2TPV2_LEN_SESS_ID,
+};
+
+struct virtchnl_proto_hdr {
+ /* see enum virtchnl_proto_hdr_type */
+ s32 type;
+ u32 field_selector; /* a bit mask to select field for header type */
+ u8 buffer[64];
+ /**
+ * binary buffer in network order for specific header type.
+	 * For example, if type = VIRTCHNL_PROTO_HDR_IPV4, an IPv4
+ * header is expected to be copied into the buffer.
+ */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
+
+struct virtchnl_proto_hdrs {
+ u8 tunnel_level;
+ /**
+	 * specifies where the protocol headers start from.
+ * 0 - from the outer layer
+ * 1 - from the first inner layer
+ * 2 - from the second inner layer
+ * ....
+ **/
+	int count; /* the proto layers must be < VIRTCHNL_MAX_NUM_PROTO_HDRS */
+ struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
+
+struct virtchnl_rss_cfg {
+ struct virtchnl_proto_hdrs proto_hdrs; /* protocol headers */
+
+ /* see enum virtchnl_rss_algorithm; rss algorithm type */
+ s32 rss_algorithm;
+ u8 reserved[128]; /* reserve for future */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2444, virtchnl_rss_cfg);
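+
+/* Usage sketch (illustrative only): using the VIRTCHNL_SET_PROTO_HDR_TYPE and
+ * VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT helper macros to build a minimal RSS
+ * configuration that hashes on the IPv4 source and destination addresses.
+ * Assumes the caller has zeroed the structure; the helper name is
+ * hypothetical.
+ */
+static inline void
+virtchnl_example_rss_ipv4_cfg(struct virtchnl_rss_cfg *cfg)
+{
+	struct virtchnl_proto_hdr *hdr = &cfg->proto_hdrs.proto_hdr[0];
+
+	cfg->proto_hdrs.tunnel_level = 0; /* start from the outer layer */
+	cfg->proto_hdrs.count = 1;
+	VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
+	VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
+	VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
+	cfg->rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
+}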
+
+/* action configuration for FDIR */
+struct virtchnl_filter_action {
+ /* see enum virtchnl_action type */
+ s32 type;
+ union {
+ /* used for queue and qgroup action */
+ struct {
+ u16 index;
+ u8 region;
+ } queue;
+ /* used for count action */
+ struct {
+ /* share counter ID with other flow rules */
+ u8 shared;
+ u32 id; /* counter ID */
+ } count;
+ /* used for mark action */
+ u32 mark_id;
+ u8 reserve[32];
+ } act_conf;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_filter_action);
+
+#define VIRTCHNL_MAX_NUM_ACTIONS 8
+
+struct virtchnl_filter_action_set {
+ /* action number must be less then VIRTCHNL_MAX_NUM_ACTIONS */
+ int count;
+ struct virtchnl_filter_action actions[VIRTCHNL_MAX_NUM_ACTIONS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(292, virtchnl_filter_action_set);
+
+/* pattern and action for FDIR rule */
+struct virtchnl_fdir_rule {
+ struct virtchnl_proto_hdrs proto_hdrs;
+ struct virtchnl_filter_action_set action_set;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2604, virtchnl_fdir_rule);
+
+/* Status returned to VF after VF requests FDIR commands
+ * VIRTCHNL_FDIR_SUCCESS
+ * The VF's FDIR-related request was successfully handled by the PF.
+ * The request can be OP_ADD/DEL/QUERY_FDIR_FILTER.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE
+ * OP_ADD_FDIR_FILTER request failed due to a lack of hardware resources.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_EXIST
+ * OP_ADD_FDIR_FILTER request failed because the rule already exists.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT
+ * OP_ADD_FDIR_FILTER request failed due to a conflict with an existing rule.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST
+ * OP_DEL_FDIR_FILTER request failed because the rule doesn't exist.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_INVALID
+ * OP_ADD_FDIR_FILTER request failed because parameter validation failed
+ * or the hardware doesn't support the rule.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT
+ * OP_ADD/DEL_FDIR_FILTER request failed because rule programming timed out.
+ *
+ * VIRTCHNL_FDIR_FAILURE_QUERY_INVALID
+ * OP_QUERY_FDIR_FILTER request failed because parameter validation failed,
+ * for example, the VF queried the counter of a rule that has no counter
+ * action.
+ */
+enum virtchnl_fdir_prgm_status {
+ VIRTCHNL_FDIR_SUCCESS = 0,
+ VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE,
+ VIRTCHNL_FDIR_FAILURE_RULE_EXIST,
+ VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT,
+ VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST,
+ VIRTCHNL_FDIR_FAILURE_RULE_INVALID,
+ VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT,
+ VIRTCHNL_FDIR_FAILURE_QUERY_INVALID,
+};
+
+/* VIRTCHNL_OP_ADD_FDIR_FILTER
+ * VF sends this request to PF by filling out vsi_id,
+ * validate_only and rule_cfg. PF will return flow_id
+ * if the request is successfully done and return add_status to VF.
+ */
+struct virtchnl_fdir_add {
+ u16 vsi_id; /* INPUT */
+ /*
+	 * 1 for validating an FDIR rule, 0 for creating an FDIR rule.
+	 * Validate and create share one opcode: VIRTCHNL_OP_ADD_FDIR_FILTER.
+ */
+ u16 validate_only; /* INPUT */
+ u32 flow_id; /* OUTPUT */
+ struct virtchnl_fdir_rule rule_cfg; /* INPUT */
+
+ /* see enum virtchnl_fdir_prgm_status; OUTPUT */
+ s32 status;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2616, virtchnl_fdir_add);
+
+/* VIRTCHNL_OP_DEL_FDIR_FILTER
+ * VF sends this request to PF by filling out vsi_id
+ * and flow_id. PF will return del_status to VF.
+ */
+struct virtchnl_fdir_del {
+ u16 vsi_id; /* INPUT */
+ u16 pad;
+ u32 flow_id; /* INPUT */
+
+ /* see enum virtchnl_fdir_prgm_status; OUTPUT */
+ s32 status;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del);
+
+/* VIRTCHNL_OP_GET_QOS_CAPS
+ * VF sends this message to get its QoS Caps, such as
+ * TC number, Arbiter and Bandwidth.
+ */
+struct virtchnl_qos_cap_elem {
+ u8 tc_num;
+ u8 tc_prio;
+#define VIRTCHNL_ABITER_STRICT 0
+#define VIRTCHNL_ABITER_ETS 2
+ u8 arbiter;
+#define VIRTCHNL_STRICT_WEIGHT 1
+ u8 weight;
+ enum virtchnl_bw_limit_type type;
+ union {
+ struct virtchnl_shaper_bw shaper;
+ u8 pad2[32];
+ };
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_qos_cap_elem);
+
+struct virtchnl_qos_cap_list {
+ u16 vsi_id;
+ u16 num_elem;
+ struct virtchnl_qos_cap_elem cap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(44, virtchnl_qos_cap_list);
+
+/* VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP
+ * VF sends the virtchnl_queue_tc_mapping message to set the queue-to-TC
+ * mapping for all the Tx and Rx queues of a specified VSI, and receives a
+ * response with the bitmap of valid user priorities associated with the
+ * queues.
+ */
+struct virtchnl_queue_tc_mapping {
+ u16 vsi_id;
+ u16 num_tc;
+ u16 num_queue_pairs;
+ u8 pad[2];
+ union {
+ struct {
+ u16 start_queue_id;
+ u16 queue_count;
+ } req;
+ struct {
+#define VIRTCHNL_USER_PRIO_TYPE_UP 0
+#define VIRTCHNL_USER_PRIO_TYPE_DSCP 1
+ u16 prio_type;
+ u16 valid_prio_bitmap;
+ } resp;
+ } tc[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_tc_mapping);
+
+/* queue types */
+enum virtchnl_queue_type {
+ VIRTCHNL_QUEUE_TYPE_TX = 0,
+ VIRTCHNL_QUEUE_TYPE_RX = 1,
+};
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl_queue_chunk {
+ /* see enum virtchnl_queue_type */
+ s32 type;
+ u16 start_queue_id;
+ u16 num_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl_queue_chunks {
+ u16 num_chunks;
+ u16 rsvd;
+ struct virtchnl_queue_chunk chunks[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_chunks);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES_V2
+ * VIRTCHNL_OP_DISABLE_QUEUES_V2
+ *
+ * These opcodes can be used if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated in
+ * VIRTCHNL_OP_GET_VF_RESOURCES
+ *
+ * VF sends the virtchnl_del_ena_dis_queues struct to specify the queues to be
+ * enabled/disabled in chunks. Also applicable to single queue RX or
+ * TX. PF performs requested action and returns status.
+ */
+struct virtchnl_del_ena_dis_queues {
+ u16 vport_id;
+ u16 pad;
+ struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_del_ena_dis_queues);
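+
+/* Usage sketch (illustrative only): describing a single contiguous range of
+ * Tx queues in the chunk list used by VIRTCHNL_OP_ENABLE_QUEUES_V2 and
+ * VIRTCHNL_OP_DISABLE_QUEUES_V2. The helper name is hypothetical.
+ */
+static inline void
+virtchnl_example_one_tx_chunk(struct virtchnl_del_ena_dis_queues *msg,
+			      u16 vport_id, u16 start_queue_id, u16 num_queues)
+{
+	msg->vport_id = vport_id;
+	msg->pad = 0;
+	msg->chunks.num_chunks = 1;
+	msg->chunks.rsvd = 0;
+	msg->chunks.chunks[0].type = VIRTCHNL_QUEUE_TYPE_TX;
+	msg->chunks.chunks[0].start_queue_id = start_queue_id;
+	msg->chunks.chunks[0].num_queues = num_queues;
+}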
+
+/* Virtchannel interrupt throttling rate index */
+enum virtchnl_itr_idx {
+ VIRTCHNL_ITR_IDX_0 = 0,
+ VIRTCHNL_ITR_IDX_1 = 1,
+ VIRTCHNL_ITR_IDX_NO_ITR = 3,
+};
+
+/* Queue to vector mapping */
+struct virtchnl_queue_vector {
+ u16 queue_id;
+ u16 vector_id;
+ u8 pad[4];
+
+ /* see enum virtchnl_itr_idx */
+ s32 itr_idx;
+
+ /* see enum virtchnl_queue_type */
+ s32 queue_type;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_queue_vector);
+
+/* VIRTCHNL_OP_MAP_QUEUE_VECTOR
+ *
+ * This opcode can be used only if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated
+ * in VIRTCHNL_OP_GET_VF_RESOURCES
+ *
+ * VF sends this message to map queues to vectors and ITR index registers.
+ * External data buffer contains virtchnl_queue_vector_maps structure
+ * that contains num_qv_maps of virtchnl_queue_vector structures.
+ * PF maps the requested queue vector maps after validating the queue and vector
+ * ids and returns a status code.
+ */
+struct virtchnl_queue_vector_maps {
+ u16 vport_id;
+ u16 num_qv_maps;
+ u8 pad[4];
+ struct virtchnl_queue_vector qv_maps[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_queue_vector_maps);
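+
+/* Usage sketch (illustrative only): a single Rx queue to vector mapping
+ * entry for VIRTCHNL_OP_MAP_QUEUE_VECTOR, using ITR index 0. The helper name
+ * is hypothetical.
+ */
+static inline void
+virtchnl_example_map_rx_queue(struct virtchnl_queue_vector *qv,
+			      u16 queue_id, u16 vector_id)
+{
+	int i;
+
+	qv->queue_id = queue_id;
+	qv->vector_id = vector_id;
+	for (i = 0; i < 4; i++)
+		qv->pad[i] = 0;
+	qv->itr_idx = VIRTCHNL_ITR_IDX_0;
+	qv->queue_type = VIRTCHNL_QUEUE_TYPE_RX;
+}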
+
+/* VIRTCHNL_VF_CAP_PTP
+ * VIRTCHNL_OP_1588_PTP_GET_CAPS
+ * VIRTCHNL_OP_1588_PTP_GET_TIME
+ * VIRTCHNL_OP_1588_PTP_SET_TIME
+ * VIRTCHNL_OP_1588_PTP_ADJ_TIME
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ
+ * VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP
+ * VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS
+ * VIRTCHNL_OP_1588_PTP_SET_PIN_CFG
+ * VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP
+ *
+ * Support for offloading control of the device PTP hardware clock (PHC) is enabled
+ * by VIRTCHNL_VF_CAP_PTP. This capability allows a VF to request that PF
+ * enable Tx and Rx timestamps, and request access to read and/or write the
+ * PHC on the device, as well as query if the VF has direct access to the PHC
+ * time registers.
+ *
+ * The VF must set VIRTCHNL_VF_CAP_PTP in its capabilities when requesting
+ * resources. If the capability is set in reply, the VF must then send
+ * a VIRTCHNL_OP_1588_PTP_GET_CAPS request during initialization. The VF indicates
+ * what extended capabilities it wants by setting the appropriate flags in the
+ * caps field. The PF reply will indicate what features are enabled for
+ * that VF.
+ */
+#define VIRTCHNL_1588_PTP_CAP_TX_TSTAMP BIT(0)
+#define VIRTCHNL_1588_PTP_CAP_RX_TSTAMP BIT(1)
+#define VIRTCHNL_1588_PTP_CAP_READ_PHC BIT(2)
+#define VIRTCHNL_1588_PTP_CAP_WRITE_PHC BIT(3)
+#define VIRTCHNL_1588_PTP_CAP_PHC_REGS BIT(4)
+#define VIRTCHNL_1588_PTP_CAP_PIN_CFG BIT(5)
+
+/**
+ * virtchnl_phc_regs
+ *
+ * Structure defines how the VF should access PHC related registers. The VF
+ * must request VIRTCHNL_1588_PTP_CAP_PHC_REGS. If the VF has access to PHC
+ * registers, the PF will reply with the capability flag set, and with this
+ * structure detailing what PCIe region and what offsets to use. If direct
+ * access is not available, this entire structure is reserved and the fields
+ * will be zero.
+ *
+ * If necessary in a future extension, a separate capability mutually
+ * exclusive with VIRTCHNL_1588_PTP_CAP_PHC_REGS might be used to change the
+ * entire format of this structure within virtchnl_ptp_caps.
+ *
+ * @clock_hi: Register offset of the high 32 bits of clock time
+ * @clock_lo: Register offset of the low 32 bits of clock time
+ * @pcie_region: The PCIe region the registers are located in.
+ * @rsvd: Reserved bits for future extension
+ */
+struct virtchnl_phc_regs {
+ u32 clock_hi;
+ u32 clock_lo;
+ u8 pcie_region;
+ u8 rsvd[15];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_phc_regs);
+
+/* timestamp format enumeration
+ *
+ * VIRTCHNL_1588_PTP_TSTAMP_40BIT
+ *
+ * This format indicates a timestamp that uses the 40bit format from the
+ * flexible Rx descriptors. It is also the default Tx timestamp format used
+ * today.
+ *
+ * Such a timestamp has the following 40bit format:
+ *
+ * *--------------------------------*-------------------------------*-----------*
+ * | 32 bits of time in nanoseconds | 7 bits of sub-nanosecond time | valid bit |
+ * *--------------------------------*-------------------------------*-----------*
+ *
+ * The timestamp is passed in a u64, with the upper 24bits of the field
+ * reserved as zero.
+ *
+ * With this format, in order to report a full 64bit timestamp to userspace
+ * applications, the VF is responsible for performing timestamp extension by
+ * carefully comparing the timestamp with the PHC time. This can correctly
+ * be achieved with a recent cached copy of the PHC time by doing delta
+ * comparison between the 32bits of nanoseconds in the timestamp with the
+ * lower 32 bits of the clock time. For this to work, the cached PHC time
+ * must be from within 2^31 nanoseconds (~2.1 seconds) of when the timestamp
+ * was captured.
+ *
+ * VIRTCHNL_1588_PTP_TSTAMP_64BIT_NS
+ *
+ * This format indicates a timestamp that is 64 bits of nanoseconds.
+ */
+enum virtchnl_ptp_tstamp_format {
+ VIRTCHNL_1588_PTP_TSTAMP_40BIT = 0,
+ VIRTCHNL_1588_PTP_TSTAMP_64BIT_NS = 1,
+};
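+
+/* Illustrative sketch, not part of the virtchnl ABI: one way a VF driver
+ * could extend the 40-bit timestamp format described above into a full
+ * 64-bit nanosecond value, assuming cached_phc_ns was read within 2^31 ns
+ * of the capture. The helper name is hypothetical.
+ */
+static inline u64
+virtchnl_example_extend_40b_tstamp(u64 cached_phc_ns, u64 tstamp_40b)
+{
+	u32 ts_ns, delta;
+
+	/* Bit 0 is the valid bit; bits 8..39 hold 32 bits of nanoseconds. */
+	if (!(tstamp_40b & 0x1))
+		return 0;
+	ts_ns = (u32)(tstamp_40b >> 8);
+
+	delta = ts_ns - (u32)cached_phc_ns;
+	if (delta > 0x7FFFFFFFU) {
+		/* Capture happened just before the cached PHC time. */
+		delta = (u32)cached_phc_ns - ts_ns;
+		return cached_phc_ns - delta;
+	}
+
+	return cached_phc_ns + delta;
+}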
+
+/**
+ * virtchnl_ptp_caps
+ *
+ * Structure that defines the PTP capabilities available to the VF. The VF
+ * sends VIRTCHNL_OP_1588_PTP_GET_CAPS, and must fill in the ptp_caps field
+ * indicating what capabilities it is requesting. The PF will respond with the
+ * same message with the virtchnl_ptp_caps structure indicating what is
+ * enabled for the VF.
+ *
+ * @phc_regs: If VIRTCHNL_1588_PTP_CAP_PHC_REGS is set, contains information
+ * on the PHC related registers available to the VF.
+ * @caps: On send, VF sets what capabilities it requests. On reply, PF
+ * indicates what has been enabled for this VF. The PF shall not set
+ * bits which were not requested by the VF.
+ * @max_adj: The maximum adjustment capable of being requested by
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ, in parts per billion. Note that 1 ppb
+ * is approximately 65.5 scaled_ppm. The PF shall clamp any
+ * frequency adjustment in VIRTCHNL_OP_1588_PTP_ADJ_FREQ to +/- max_adj.
+ * Use of ppb in this field allows fitting the value into 4 bytes
+ * instead of potentially requiring 8 if scaled_ppm units were used.
+ * @tx_tstamp_idx: The Tx timestamp index to set in the transmit descriptor
+ * when requesting a timestamp for an outgoing packet.
+ * Reserved if VIRTCHNL_1588_PTP_CAP_TX_TSTAMP is not enabled.
+ * @n_ext_ts: Number of external timestamp functions available. Reserved
+ * if VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @n_per_out: Number of periodic output functions available. Reserved if
+ * VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @n_pins: Number of physical programmable pins able to be controlled.
+ * Reserved if VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @tx_tstamp_format: Format of the Tx timestamps. Valid formats are defined
+ * by the virtchnl_ptp_tstamp enumeration. Note that Rx
+ * timestamps are tied to the descriptor format, and do not
+ * have a separate format field.
+ * @rsvd: Reserved bits for future extension.
+ *
+ * PTP capabilities
+ *
+ * VIRTCHNL_1588_PTP_CAP_TX_TSTAMP indicates that the VF can request transmit
+ * timestamps for packets in its transmit descriptors. If this is unset,
+ * transmit timestamp requests are ignored. Note that only one outstanding Tx
+ * timestamp request will be honored at a time. The PF shall handle receipt of
+ * the timestamp from the hardware, and will forward this to the VF by sending
+ * a VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP message.
+ *
+ * VIRTCHNL_1588_PTP_CAP_RX_TSTAMP indicates that the VF receive queues have
+ * receive timestamps enabled in the flexible descriptors. Note that this
+ * requires a VF to also negotiate to enable advanced flexible descriptors in
+ * the receive path instead of the default legacy descriptor format.
+ *
+ * For a detailed description of the current Tx and Rx timestamp format, see
+ * the section on virtchnl_phc_tx_tstamp. Future extensions may indicate
+ * timestamp format in the capability structure.
+ *
+ * VIRTCHNL_1588_PTP_CAP_READ_PHC indicates that the VF may read the PHC time
+ * via the VIRTCHNL_OP_1588_PTP_GET_TIME command, or by directly reading PHC
+ * registers if VIRTCHNL_1588_PTP_CAP_PHC_REGS is also set.
+ *
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC indicates that the VF may request updates
+ * to the PHC time via VIRTCHNL_OP_1588_PTP_SET_TIME,
+ * VIRTCHNL_OP_1588_PTP_ADJ_TIME, and VIRTCHNL_OP_1588_PTP_ADJ_FREQ.
+ *
+ * VIRTCHNL_1588_PTP_CAP_PHC_REGS indicates that the VF has direct access to
+ * certain PHC related registers, primarily for lower latency access to the
+ * PHC time. If this is set, the VF shall read the virtchnl_phc_regs section
+ * of the capabilities to determine the location of the clock registers. If
+ * this capability is not set, the entire 24 bytes of virtchnl_phc_regs is
+ * reserved as zero. Future extensions may define alternative formats for this
+ * data, in which case they will be mutually exclusive with this capability.
+ *
+ * VIRTCHNL_1588_PTP_CAP_PIN_CFG indicates that the VF has the capability to
+ * control software defined pins. These pins can be assigned either as an
+ * input to timestamp external events, or as an output to cause a periodic
+ * signal output.
+ *
+ * Note that in the future, additional capability flags may be added which
+ * indicate additional extended support. All fields marked as reserved by this
+ * header will be set to zero. VF implementations should verify this to ensure
+ * that future extensions do not break compatibility.
+ */
+struct virtchnl_ptp_caps {
+ struct virtchnl_phc_regs phc_regs;
+ u32 caps;
+ s32 max_adj;
+ u8 tx_tstamp_idx;
+ u8 n_ext_ts;
+ u8 n_per_out;
+ u8 n_pins;
+ /* see enum virtchnl_ptp_tstamp_format */
+ u8 tx_tstamp_format;
+ u8 rsvd[11];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_ptp_caps);
+
+/**
+ * virtchnl_phc_time
+ * @time: PHC time in nanoseconds
+ * @rsvd: Reserved for future extension
+ *
+ * Structure sent with VIRTCHNL_OP_1588_PTP_SET_TIME and received with
+ * VIRTCHNL_OP_1588_PTP_GET_TIME. Contains the 64bits of PHC clock time in
+ * nanoseconds.
+ *
+ * VIRTCHNL_OP_1588_PTP_SET_TIME may be sent by the VF if
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC is set. This will request that the PHC time
+ * be set to the requested value. This operation is non-atomic and thus does
+ * not adjust for the delay between request and completion. It is recommended
+ * that the VF use VIRTCHNL_OP_1588_PTP_ADJ_TIME and
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ when possible to steer the PHC clock.
+ *
+ * VIRTCHNL_OP_1588_PTP_GET_TIME may be sent to request the current time of
+ * the PHC. This op is available in case direct access via the PHC registers
+ * is not available.
+ */
+struct virtchnl_phc_time {
+ u64 time;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_time);
+
+/**
+ * virtchnl_phc_adj_time
+ * @delta: offset requested to adjust clock by
+ * @rsvd: reserved for future extension
+ *
+ * Sent with VIRTCHNL_OP_1588_PTP_ADJ_TIME. Used to request an adjustment of
+ * the clock time by the provided delta, with negative values representing
+ * subtraction. VIRTCHNL_OP_1588_PTP_ADJ_TIME may not be sent unless
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC is set.
+ *
+ * The atomicity of this operation is not guaranteed. The PF should perform
+ * an atomic update using appropriate mechanisms where possible, but the VF
+ * must not rely on the adjustment being applied atomically.
+ */
+struct virtchnl_phc_adj_time {
+ s64 delta;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_adj_time);
+
+/**
+ * virtchnl_phc_adj_freq
+ * @scaled_ppm: frequency adjustment represented in scaled parts per million
+ * @rsvd: Reserved for future extension
+ *
+ * Sent with the VIRTCHNL_OP_1588_PTP_ADJ_FREQ to request an adjustment to the
+ * clock frequency. The adjustment is in scaled_ppm, which is parts per
+ * million with a 16bit binary fractional portion. 1 part per billion is
+ * approximately 65.5 scaled_ppm.
+ *
+ * ppm = scaled_ppm / 2^16
+ *
+ * ppb = scaled_ppm * 1000 / 2^16 or
+ *
+ * ppb = scaled_ppm * 125 / 2^13
+ *
+ * The PF shall clamp any adjustment request to plus or minus the specified
+ * max_adj in the PTP capabilities.
+ *
+ * Requests for adjustment are always based off of nominal clock frequency and
+ * not compounding. To reset clock frequency, send a request with a scaled_ppm
+ * of 0.
+ */
+struct virtchnl_phc_adj_freq {
+ s64 scaled_ppm;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_adj_freq);
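+
+/* Illustrative sketch, not part of the virtchnl ABI: converting a desired
+ * adjustment in parts per billion into the scaled_ppm units used above and
+ * clamping it to the negotiated max_adj (itself in ppb). The helper name is
+ * hypothetical.
+ */
+static inline s64
+virtchnl_example_ppb_to_scaled_ppm(s32 ppb, s32 max_adj)
+{
+	if (ppb > max_adj)
+		ppb = max_adj;
+	else if (ppb < -max_adj)
+		ppb = -max_adj;
+
+	/* scaled_ppm = ppm * 2^16 = ppb * 2^16 / 1000 (~65.5 per ppb) */
+	return (s64)ppb * 65536 / 1000;
+}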
+
+/**
+ * virtchnl_phc_tx_tstamp
+ * @tstamp: timestamp value
+ * @rsvd: Reserved for future extension
+ *
+ * Sent along with VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP from the PF when a Tx
+ * timestamp for the index associated with this VF in the tx_tstamp_idx field
+ * is captured by hardware.
+ *
+ * If VIRTCHNL_1588_PTP_CAP_TX_TSTAMP is set, the VF may request a timestamp
+ * for a packet in its transmit context descriptor by setting the appropriate
+ * flag and setting the timestamp index provided by the PF. On transmission,
+ * the timestamp will be captured and sent to the PF. The PF will forward this
+ * timestamp to the VF via the VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP op.
+ *
+ * The timestamp format is defined by the tx_tstamp_format field of the
+ * virtchnl_ptp_caps structure.
+ */
+struct virtchnl_phc_tx_tstamp {
+ u64 tstamp;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_tx_tstamp);
+
+enum virtchnl_phc_pin_func {
+ VIRTCHNL_PHC_PIN_FUNC_NONE = 0, /* Not assigned to any function */
+ VIRTCHNL_PHC_PIN_FUNC_EXT_TS = 1, /* Assigned to external timestamp */
+ VIRTCHNL_PHC_PIN_FUNC_PER_OUT = 2, /* Assigned to periodic output */
+};
+
+/* Length of the pin configuration data. All pin configurations belong within
+ * the same union and *must* have this length in bytes.
+ */
+#define VIRTCHNL_PIN_CFG_LEN 64
+
+/* virtchnl_phc_ext_ts_mode
+ *
+ * Mode of the external timestamp, indicating which edges of the input signal
+ * to timestamp.
+ */
+enum virtchnl_phc_ext_ts_mode {
+ VIRTCHNL_PHC_EXT_TS_NONE = 0,
+ VIRTCHNL_PHC_EXT_TS_RISING_EDGE = 1,
+ VIRTCHNL_PHC_EXT_TS_FALLING_EDGE = 2,
+ VIRTCHNL_PHC_EXT_TS_BOTH_EDGES = 3,
+};
+
+/**
+ * virtchnl_phc_ext_ts
+ * @mode: mode of external timestamp request
+ * @rsvd: reserved for future extension
+ *
+ * External timestamp configuration. Defines the configuration for this
+ * external timestamp function.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_NONE, the function is essentially disabled,
+ * timestamping nothing.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_RISING_EDGE, the function shall timestamp
+ * the rising edge of the input when it transitions from low to high signal.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_FALLING_EDGE, the function shall timestamp
+ * the falling edge of the input when it transitions from high to low signal.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_BOTH_EDGES, the function shall timestamp
+ * both the rising and falling edge of the signal whenever it changes.
+ *
+ * The PF shall return an error if the requested mode cannot be implemented on
+ * the function.
+ */
+struct virtchnl_phc_ext_ts {
+ u8 mode; /* see virtchnl_phc_ext_ts_mode */
+ u8 rsvd[63];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(VIRTCHNL_PIN_CFG_LEN, virtchnl_phc_ext_ts);
+
+/* virtchnl_phc_per_out_flags
+ *
+ * Flags defining periodic output functionality.
+ */
+enum virtchnl_phc_per_out_flags {
+ VIRTCHNL_PHC_PER_OUT_PHASE_START = BIT(0),
+};
+
+/**
+ * virtchnl_phc_per_out
+ * @start: absolute start time (if VIRTCHNL_PHC_PER_OUT_PHASE_START unset)
+ * @phase: phase offset to start (if VIRTCHNL_PHC_PER_OUT_PHASE_START set)
+ * @period: time to complete a full clock cycle (low -> high -> low)
+ * @on: length of time the signal should stay high
+ * @flags: flags defining the periodic output operation.
+ * @rsvd: reserved for future extension
+ *
+ * Configuration for a periodic output signal. Used to define the signal that
+ * should be generated on a given function.
+ *
+ * The period field determines the full length of the clock cycle, in
+ * nanoseconds, including both the time the signal is held high and the time
+ * it is held low.
+ *
+ * The on field determines how long the signal should remain high. For
+ * a traditional square wave clock that is on for some duration and off for
+ * the same duration, use an on length of precisely half the period. The duty
+ * cycle of the clock is on/period.
+ *
+ * If VIRTCHNL_PHC_PER_OUT_PHASE_START is unset, then the request is to start
+ * the clock at an absolute time. This means that the clock should start precisely
+ * at the specified time in the start field. If the start time is in the past,
+ * then the periodic output should start at the next valid multiple of the
+ * period plus the start time:
+ *
+ * new_start = (n * period) + start
+ * (choose n such that new start is in the future)
+ *
+ * Note that the PF should not reject a start time in the past because it is
+ * possible that such a start time was valid when the request was made, but
+ * became invalid due to delay in programming the pin.
+ *
+ * If VIRTCHNL_PHC_PER_OUT_PHASE_START is set, then the request is to start
+ * at the next multiple of the period plus the phase offset. The phase must be
+ * less than the period. In this case, the clock should start as soon as
+ * possible at the next available multiple of the period. To calculate a start time
+ * when programming this mode, use:
+ *
+ * start = (n * period) + phase
+ * (choose n such that start is in the future)
+ *
+ * A period of zero should be treated as a request to disable the clock
+ * output.
+ */
+struct virtchnl_phc_per_out {
+ union {
+ u64 start;
+ u64 phase;
+ };
+ u64 period;
+ u64 on;
+ u32 flags;
+ u8 rsvd[36];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(VIRTCHNL_PIN_CFG_LEN, virtchnl_phc_per_out);
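+
+/* Illustrative sketch, not part of the virtchnl ABI: computing the adjusted
+ * start time for a periodic output whose requested absolute start is already
+ * in the past, following new_start = (n * period) + start as described
+ * above. The helper name is hypothetical.
+ */
+static inline u64
+virtchnl_example_per_out_next_start(u64 now_ns, u64 start, u64 period)
+{
+	u64 n;
+
+	if (!period)
+		return 0;	/* a period of zero disables the output */
+	if (start >= now_ns)
+		return start;
+
+	/* Smallest n such that (n * period) + start is not in the past. */
+	n = (now_ns - start + period - 1) / period;
+	return start + n * period;
+}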
+
+/* virtchnl_phc_pin_cfg_flags
+ *
+ * Definition of bits in the flags field of the virtchnl_phc_pin_cfg
+ * structure.
+ */
+enum virtchnl_phc_pin_cfg_flags {
+ /* Valid for VIRTCHNL_OP_1588_PTP_SET_PIN_CFG. If set, indicates this
+ * is a request to verify if the function can be assigned to the
+ * provided pin. In this case, the ext_ts and per_out fields are
+ * ignored, and the PF response must be an error if the pin cannot be
+ * assigned to that function index.
+ */
+ VIRTCHNL_PHC_PIN_CFG_VERIFY = BIT(0),
+};
+
+/**
+ * virtchnl_phc_set_pin
+ * @pin_index: The pin to get or set
+ * @func: the function type the pin is assigned to
+ * @func_index: the index of the function the pin is assigned to
+ * @ext_ts: external timestamp configuration
+ * @per_out: periodic output configuration
+ * @rsvd1: Reserved for future extension
+ * @rsvd2: Reserved for future extension
+ *
+ * Sent along with the VIRTCHNL_OP_1588_PTP_SET_PIN_CFG op.
+ *
+ * The VF issues a VIRTCHNL_OP_1588_PTP_SET_PIN_CFG to assign the pin to one
+ * of the functions. It must set the pin_index field, the func field, and
+ * the func_index field. The pin_index must be less than n_pins, and the
+ * func_index must be less than the n_ext_ts or n_per_out depending on which
+ * function type is selected. If func is for an external timestamp, the
+ * ext_ts field must be filled in with the desired configuration. Similarly,
+ * if the function is for a periodic output, the per_out field must be
+ * configured.
+ *
+ * If the VIRTCHNL_PHC_PIN_CFG_VERIFY bit of the flag field is set, this is
+ * a request only to verify the configuration, not to set it. In this case,
+ * the PF should simply report an error if the requested pin cannot be
+ * assigned to the requested function. This allows VF to determine whether or
+ * not a given function can be assigned to a specific pin. Other flag bits are
+ * currently reserved and must be verified as zero on both sides. They may be
+ * extended in the future.
+ */
+struct virtchnl_phc_set_pin {
+ u32 flags; /* see virtchnl_phc_pin_cfg_flags */
+ u8 pin_index;
+ u8 func; /* see virtchnl_phc_pin_func */
+ u8 func_index;
+ u8 rsvd1;
+ union {
+ struct virtchnl_phc_ext_ts ext_ts;
+ struct virtchnl_phc_per_out per_out;
+ };
+ u8 rsvd2[8];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_phc_set_pin);
+
+/**
+ * virtchnl_phc_pin
+ * @pin_index: The pin to get or set
+ * @func: the function type the pin is assigned to
+ * @func_index: the index of the function the pin is assigned to
+ * @rsvd: Reserved for future extension
+ * @name: human readable pin name, supplied by PF on GET_PIN_CFGS
+ *
+ * Sent by the PF as part of the VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS response.
+ *
+ * The VF issues a VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS request to the PF in
+ * order to obtain the current pin configuration for all of the pins that were
+ * assigned to this VF.
+ *
+ * This structure details the pin configuration state, including a pin name
+ * and which function is assigned to the pin currently.
+ */
+struct virtchnl_phc_pin {
+ u8 pin_index;
+ u8 func; /* see virtchnl_phc_pin_func */
+ u8 func_index;
+ u8 rsvd[5];
+ char name[64];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_phc_pin);
+
+/**
+ * virtchnl_phc_get_pins
+ * @len: length of the variable pin config array
+ * @pins: variable length pin configuration array
+ *
+ * Variable structure sent by the PF in reply to
+ * VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS. The VF does not send this structure with
+ * its request of the operation.
+ *
+ * It is possible that the PF may need to send more pin configuration data
+ * than can be sent in one virtchnl message. To handle this, the PF should
+ * issue multiple VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS responses. Each response
+ * will indicate the number of pins it covers. The VF should be ready to wait
+ * for multiple responses until the total number of pins received equals the
+ * n_pins value negotiated during the extended PTP capabilities exchange.
+ */
+struct virtchnl_phc_get_pins {
+ u8 len;
+ u8 rsvd[7];
+ struct virtchnl_phc_pin pins[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_phc_get_pins);
+
+/**
+ * virtchnl_phc_ext_stamp
+ * @tstamp: timestamp value
+ * @tstamp_rsvd: Reserved for future extension of the timestamp value.
+ * @tstamp_format: format of the timestamp
+ * @func_index: external timestamp function this timestamp is for
+ * @rsvd2: Reserved for future extension
+ *
+ * Sent along with the VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP from the PF when an
+ * external timestamp function is triggered.
+ *
+ * This will be sent only if one of the external timestamp functions is
+ * configured by the VF, and is only valid if VIRTCHNL_1588_PTP_CAP_PIN_CFG is
+ * negotiated with the PF.
+ *
+ * The timestamp format is defined by the tstamp_format field using the
+ * virtchnl_ptp_tstamp_format enumeration. The tstamp_rsvd field is
+ * exclusively reserved for possible future variants of the timestamp format,
+ * and its access will be controlled by the tstamp_format field.
+ */
+struct virtchnl_phc_ext_tstamp {
+ u64 tstamp;
+ u8 tstamp_rsvd[8];
+ u8 tstamp_format;
+ u8 func_index;
+ u8 rsvd2[6];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_phc_ext_tstamp);
+
+/* Since VF messages are limited by u16 size, precalculate the maximum possible
+ * number of nested elements in virtchnl structures that the virtual channel
+ * can handle in a single message.
+ */
+enum virtchnl_vector_limits {
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vsi_queue_config_info)) /
+ sizeof(struct virtchnl_queue_pair_info),
+
+ VIRTCHNL_OP_CONFIG_IRQ_MAP_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_irq_map_info)) /
+ sizeof(struct virtchnl_vector_map),
+
+ VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_ether_addr_list)) /
+ sizeof(struct virtchnl_ether_addr),
+
+ VIRTCHNL_OP_ADD_DEL_VLAN_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list)) /
+ sizeof(u16),
+
+
+ VIRTCHNL_OP_ENABLE_CHANNELS_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_tc_info)) /
+ sizeof(struct virtchnl_channel_info),
+
+ VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_del_ena_dis_queues)) /
+ sizeof(struct virtchnl_queue_chunk),
+
+ VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_queue_vector_maps)) /
+ sizeof(struct virtchnl_queue_vector),
+
+ VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list_v2)) /
+ sizeof(struct virtchnl_vlan_filter),
+};
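+
+/* Illustrative sketch, not part of the virtchnl ABI: sizing a variable-length
+ * VIRTCHNL_OP_ENABLE_QUEUES_V2 / VIRTCHNL_OP_DISABLE_QUEUES_V2 message and
+ * checking it against the precomputed limit above, mirroring the check the
+ * PF-side validator below performs. The helper name is hypothetical.
+ */
+static inline u16
+virtchnl_example_ena_dis_queues_msglen(u16 num_chunks)
+{
+	if (num_chunks == 0 ||
+	    num_chunks > VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX)
+		return 0;	/* the PF validator would reject this count */
+
+	/* The structure already carries one virtchnl_queue_chunk. */
+	return (u16)(sizeof(struct virtchnl_del_ena_dis_queues) +
+		     (num_chunks - 1) * sizeof(struct virtchnl_queue_chunk));
+}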
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+ u8 *msg, u16 msglen)
+{
+ bool err_msg_format = false;
+ u32 valid_len = 0;
+
+ /* Validate message length. */
+ switch (v_opcode) {
+ case VIRTCHNL_OP_VERSION:
+ valid_len = sizeof(struct virtchnl_version_info);
+ break;
+ case VIRTCHNL_OP_RESET_VF:
+ break;
+ case VIRTCHNL_OP_GET_VF_RESOURCES:
+ if (VF_IS_V11(ver))
+ valid_len = sizeof(u32);
+ break;
+ case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+ valid_len = sizeof(struct virtchnl_txq_info);
+ break;
+ case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+ valid_len = sizeof(struct virtchnl_rxq_info);
+ break;
+ case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+ valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_vsi_queue_config_info *vqc =
+ (struct virtchnl_vsi_queue_config_info *)msg;
+
+ if (vqc->num_queue_pairs == 0 || vqc->num_queue_pairs >
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vqc->num_queue_pairs *
+ sizeof(struct
+ virtchnl_queue_pair_info));
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+ valid_len = sizeof(struct virtchnl_irq_map_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_irq_map_info *vimi =
+ (struct virtchnl_irq_map_info *)msg;
+
+ if (vimi->num_vectors == 0 || vimi->num_vectors >
+ VIRTCHNL_OP_CONFIG_IRQ_MAP_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vimi->num_vectors *
+ sizeof(struct virtchnl_vector_map));
+ }
+ break;
+ case VIRTCHNL_OP_ENABLE_QUEUES:
+ case VIRTCHNL_OP_DISABLE_QUEUES:
+ valid_len = sizeof(struct virtchnl_queue_select);
+ break;
+ case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
+ break;
+ case VIRTCHNL_OP_ADD_ETH_ADDR:
+ case VIRTCHNL_OP_DEL_ETH_ADDR:
+ valid_len = sizeof(struct virtchnl_ether_addr_list);
+ if (msglen >= valid_len) {
+ struct virtchnl_ether_addr_list *veal =
+ (struct virtchnl_ether_addr_list *)msg;
+
+ if (veal->num_elements == 0 || veal->num_elements >
+ VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += veal->num_elements *
+ sizeof(struct virtchnl_ether_addr);
+ }
+ break;
+ case VIRTCHNL_OP_ADD_VLAN:
+ case VIRTCHNL_OP_DEL_VLAN:
+ valid_len = sizeof(struct virtchnl_vlan_filter_list);
+ if (msglen >= valid_len) {
+ struct virtchnl_vlan_filter_list *vfl =
+ (struct virtchnl_vlan_filter_list *)msg;
+
+ if (vfl->num_elements == 0 || vfl->num_elements >
+ VIRTCHNL_OP_ADD_DEL_VLAN_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += vfl->num_elements * sizeof(u16);
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+ valid_len = sizeof(struct virtchnl_promisc_info);
+ break;
+ case VIRTCHNL_OP_GET_STATS:
+ valid_len = sizeof(struct virtchnl_queue_select);
+ break;
+ case VIRTCHNL_OP_CONFIG_RSS_KEY:
+ valid_len = sizeof(struct virtchnl_rss_key);
+ if (msglen >= valid_len) {
+ struct virtchnl_rss_key *vrk =
+ (struct virtchnl_rss_key *)msg;
+
+ if (vrk->key_len == 0) {
+ /* zero length is allowed as input */
+ break;
+ }
+
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_RSS_LUT:
+ valid_len = sizeof(struct virtchnl_rss_lut);
+ if (msglen >= valid_len) {
+ struct virtchnl_rss_lut *vrl =
+ (struct virtchnl_rss_lut *)msg;
+
+ if (vrl->lut_entries == 0) {
+ /* zero entries is allowed as input */
+ break;
+ }
+
+ valid_len += vrl->lut_entries - 1;
+ }
+ break;
+ case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ break;
+ case VIRTCHNL_OP_SET_RSS_HENA:
+ valid_len = sizeof(struct virtchnl_rss_hena);
+ break;
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ break;
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ valid_len = sizeof(struct virtchnl_vf_res_request);
+ break;
+ case VIRTCHNL_OP_ENABLE_CHANNELS:
+ valid_len = sizeof(struct virtchnl_tc_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_tc_info *vti =
+ (struct virtchnl_tc_info *)msg;
+
+ if (vti->num_tc == 0 || vti->num_tc >
+ VIRTCHNL_OP_ENABLE_CHANNELS_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vti->num_tc - 1) *
+ sizeof(struct virtchnl_channel_info);
+ }
+ break;
+ case VIRTCHNL_OP_DISABLE_CHANNELS:
+ break;
+ case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+ case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+ valid_len = sizeof(struct virtchnl_filter);
+ break;
+ case VIRTCHNL_OP_ADD_RSS_CFG:
+ case VIRTCHNL_OP_DEL_RSS_CFG:
+ valid_len = sizeof(struct virtchnl_rss_cfg);
+ break;
+ case VIRTCHNL_OP_ADD_FDIR_FILTER:
+ valid_len = sizeof(struct virtchnl_fdir_add);
+ break;
+ case VIRTCHNL_OP_DEL_FDIR_FILTER:
+ valid_len = sizeof(struct virtchnl_fdir_del);
+ break;
+ case VIRTCHNL_OP_GET_QOS_CAPS:
+ break;
+ case VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP:
+ valid_len = sizeof(struct virtchnl_queue_tc_mapping);
+ if (msglen >= valid_len) {
+ struct virtchnl_queue_tc_mapping *q_tc =
+ (struct virtchnl_queue_tc_mapping *)msg;
+ if (q_tc->num_tc == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (q_tc->num_tc - 1) *
+ sizeof(q_tc->tc[0]);
+ }
+ break;
+ case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+ break;
+ case VIRTCHNL_OP_ADD_VLAN_V2:
+ case VIRTCHNL_OP_DEL_VLAN_V2:
+ valid_len = sizeof(struct virtchnl_vlan_filter_list_v2);
+ if (msglen >= valid_len) {
+ struct virtchnl_vlan_filter_list_v2 *vfl =
+ (struct virtchnl_vlan_filter_list_v2 *)msg;
+
+ if (vfl->num_elements == 0 || vfl->num_elements >
+ VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vfl->num_elements - 1) *
+ sizeof(struct virtchnl_vlan_filter);
+ }
+ break;
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+ case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
+ case VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2:
+ valid_len = sizeof(struct virtchnl_vlan_setting);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_CAPS:
+ valid_len = sizeof(struct virtchnl_ptp_caps);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_TIME:
+ case VIRTCHNL_OP_1588_PTP_SET_TIME:
+ valid_len = sizeof(struct virtchnl_phc_time);
+ break;
+ case VIRTCHNL_OP_1588_PTP_ADJ_TIME:
+ valid_len = sizeof(struct virtchnl_phc_adj_time);
+ break;
+ case VIRTCHNL_OP_1588_PTP_ADJ_FREQ:
+ valid_len = sizeof(struct virtchnl_phc_adj_freq);
+ break;
+ case VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP:
+ valid_len = sizeof(struct virtchnl_phc_tx_tstamp);
+ break;
+ case VIRTCHNL_OP_1588_PTP_SET_PIN_CFG:
+ valid_len = sizeof(struct virtchnl_phc_set_pin);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS:
+ break;
+ case VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP:
+ valid_len = sizeof(struct virtchnl_phc_ext_tstamp);
+ break;
+ case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+ case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+ valid_len = sizeof(struct virtchnl_del_ena_dis_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl_del_ena_dis_queues *qs =
+ (struct virtchnl_del_ena_dis_queues *)msg;
+ if (qs->chunks.num_chunks == 0 ||
+ qs->chunks.num_chunks > VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (qs->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl_queue_chunk);
+ }
+ break;
+ case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+ valid_len = sizeof(struct virtchnl_queue_vector_maps);
+ if (msglen >= valid_len) {
+ struct virtchnl_queue_vector_maps *v_qp =
+ (struct virtchnl_queue_vector_maps *)msg;
+ if (v_qp->num_qv_maps == 0 ||
+ v_qp->num_qv_maps > VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_qp->num_qv_maps - 1) *
+ sizeof(struct virtchnl_queue_vector);
+ }
+ break;
+ /* These are always errors coming from the VF. */
+ case VIRTCHNL_OP_EVENT:
+ case VIRTCHNL_OP_UNKNOWN:
+ default:
+ return VIRTCHNL_STATUS_ERR_PARAM;
+ }
+ /* few more checks */
+ if (err_msg_format || valid_len != msglen)
+ return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+ return 0;
+}
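+
+/* Illustrative usage sketch, not part of the virtchnl ABI: a PF control-path
+ * receive routine would typically gate message dispatch on the validator
+ * above. The dispatch logic is omitted; the helper name is hypothetical.
+ */
+static inline int
+virtchnl_example_check_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+			      u8 *msg, u16 msglen)
+{
+	int err;
+
+	/* Reject malformed opcode/length pairs before touching the payload. */
+	err = virtchnl_vc_validate_vf_msg(ver, v_opcode, msg, msglen);
+	if (err)
+		return err;	/* status to send back to the VF */
+
+	/* ...dispatch on v_opcode here... */
+	return 0;
+}
+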
+#endif /* _VIRTCHNL_H_ */
diff --git a/drivers/net/idpf/base/virtchnl2.h b/drivers/net/idpf/base/virtchnl2.h
new file mode 100644
index 0000000000..d0af6ef7c7
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl2.h
@@ -0,0 +1,1411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL2_H_
+#define _VIRTCHNL2_H_
+
+/* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
+ * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
+ * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ */
+
+#include "virtchnl2_lan_desc.h"
+
+/* Error Codes
+ * Note that many older versions of various iAVF drivers convert the reported
+ * status code directly into an iavf_status enumeration. For this reason, it
+ * is important that the values of these enumerations line up.
+ */
+#define VIRTCHNL2_STATUS_SUCCESS 0
+#define VIRTCHNL2_STATUS_ERR_PARAM -5
+#define VIRTCHNL2_STATUS_ERR_OPCODE_MISMATCH -38
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
+ { virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
+ { virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+
+/* A new major set of opcodes is introduced here, leaving room for old
+ * miscellaneous opcodes to be added in the future. These opcodes may only be
+ * used if both the PF and VF have successfully negotiated VIRTCHNL version
+ * 2.0 during the VIRTCHNL2_OP_VERSION exchange.
+ */
+#define VIRTCHNL2_OP_UNKNOWN 0
+#define VIRTCHNL2_OP_VERSION 1
+#define VIRTCHNL2_OP_GET_CAPS 500
+#define VIRTCHNL2_OP_CREATE_VPORT 501
+#define VIRTCHNL2_OP_DESTROY_VPORT 502
+#define VIRTCHNL2_OP_ENABLE_VPORT 503
+#define VIRTCHNL2_OP_DISABLE_VPORT 504
+#define VIRTCHNL2_OP_CONFIG_TX_QUEUES 505
+#define VIRTCHNL2_OP_CONFIG_RX_QUEUES 506
+#define VIRTCHNL2_OP_ENABLE_QUEUES 507
+#define VIRTCHNL2_OP_DISABLE_QUEUES 508
+#define VIRTCHNL2_OP_ADD_QUEUES 509
+#define VIRTCHNL2_OP_DEL_QUEUES 510
+#define VIRTCHNL2_OP_MAP_QUEUE_VECTOR 511
+#define VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR 512
+#define VIRTCHNL2_OP_GET_RSS_KEY 513
+#define VIRTCHNL2_OP_SET_RSS_KEY 514
+#define VIRTCHNL2_OP_GET_RSS_LUT 515
+#define VIRTCHNL2_OP_SET_RSS_LUT 516
+#define VIRTCHNL2_OP_GET_RSS_HASH 517
+#define VIRTCHNL2_OP_SET_RSS_HASH 518
+#define VIRTCHNL2_OP_SET_SRIOV_VFS 519
+#define VIRTCHNL2_OP_ALLOC_VECTORS 520
+#define VIRTCHNL2_OP_DEALLOC_VECTORS 521
+#define VIRTCHNL2_OP_EVENT 522
+#define VIRTCHNL2_OP_GET_STATS 523
+#define VIRTCHNL2_OP_RESET_VF 524
+ /* opcode 525 is reserved */
+#define VIRTCHNL2_OP_GET_PTYPE_INFO 526
+ /* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+ * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ */
+ /* opcodes 529, 530, and 531 are reserved */
+#define VIRTCHNL2_OP_CREATE_ADI 532
+#define VIRTCHNL2_OP_DESTROY_ADI 533
+
+#define VIRTCHNL2_MAX_NUM_PROTO_HDRS 32
+
+#define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX 0xFFFF
+
+/* VIRTCHNL2_VPORT_TYPE
+ * Type of virtual port
+ */
+#define VIRTCHNL2_VPORT_TYPE_DEFAULT 0
+#define VIRTCHNL2_VPORT_TYPE_SRIOV 1
+#define VIRTCHNL2_VPORT_TYPE_SIOV 2
+#define VIRTCHNL2_VPORT_TYPE_SUBDEV 3
+#define VIRTCHNL2_VPORT_TYPE_MNG 4
+
+/* VIRTCHNL2_QUEUE_MODEL
+ * Type of queue model
+ *
+ * In the single queue model, the same transmit descriptor queue is used by
+ * software to post descriptors to hardware and by hardware to post completed
+ * descriptors to software.
+ * Likewise, the same receive descriptor queue is used by hardware to post
+ * completions to software and by software to post buffers to hardware.
+ */
+#define VIRTCHNL2_QUEUE_MODEL_SINGLE 0
+/* In the split queue model, hardware uses transmit completion queues to post
+ * descriptor/buffer completions to software, while software uses transmit
+ * descriptor queues to post descriptors to hardware.
+ * Likewise, hardware posts descriptor completions to the receive descriptor
+ * queue, while software uses receive buffer queues to post buffers to hardware.
+ */
+#define VIRTCHNL2_QUEUE_MODEL_SPLIT 1
+
+/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
+ * Checksum offload capability flags
+ */
+#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 BIT(0)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP BIT(1)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP BIT(2)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP BIT(3)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP BIT(4)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP BIT(5)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP BIT(6)
+#define VIRTCHNL2_CAP_TX_CSUM_GENERIC BIT(7)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 BIT(8)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP BIT(9)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP BIT(10)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP BIT(11)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP BIT(12)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP BIT(13)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP BIT(14)
+#define VIRTCHNL2_CAP_RX_CSUM_GENERIC BIT(15)
+#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL BIT(16)
+#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL BIT(17)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL BIT(18)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL BIT(19)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL BIT(20)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL BIT(21)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL BIT(22)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL BIT(23)
+
+/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
+ * Segmentation offload capability flags
+ */
+#define VIRTCHNL2_CAP_SEG_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_SEG_IPV4_UDP BIT(1)
+#define VIRTCHNL2_CAP_SEG_IPV4_SCTP BIT(2)
+#define VIRTCHNL2_CAP_SEG_IPV6_TCP BIT(3)
+#define VIRTCHNL2_CAP_SEG_IPV6_UDP BIT(4)
+#define VIRTCHNL2_CAP_SEG_IPV6_SCTP BIT(5)
+#define VIRTCHNL2_CAP_SEG_GENERIC BIT(6)
+#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL BIT(7)
+#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL BIT(8)
+
+/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
+ * Receive Side Scaling Flow type capability flags
+ */
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP BIT(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP BIT(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER BIT(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP BIT(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP BIT(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP BIT(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER BIT(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH BIT(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP BIT(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP BIT(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH BIT(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP BIT(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP BIT(13)
+
+/* VIRTCHNL2_HEADER_SPLIT_CAPS
+ * Header split capability flags
+ */
+/* for prepended metadata */
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 BIT(0)
+/* all VLANs go into header buffer */
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 BIT(1)
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 BIT(2)
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6 BIT(3)
+
+/* VIRTCHNL2_RSC_OFFLOAD_CAPS
+ * Receive Side Coalescing offload capability flags
+ */
+#define VIRTCHNL2_CAP_RSC_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_RSC_IPV4_SCTP BIT(1)
+#define VIRTCHNL2_CAP_RSC_IPV6_TCP BIT(2)
+#define VIRTCHNL2_CAP_RSC_IPV6_SCTP BIT(3)
+
+/* VIRTCHNL2_OTHER_CAPS
+ * Other capability flags
+ * SPLITQ_QSCHED: Queue based scheduling using split queue model
+ * TX_VLAN: VLAN tag insertion
+ * RX_VLAN: VLAN tag stripping
+ */
+#define VIRTCHNL2_CAP_RDMA BIT(0)
+#define VIRTCHNL2_CAP_SRIOV BIT(1)
+#define VIRTCHNL2_CAP_MACFILTER BIT(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR BIT(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED BIT(4)
+#define VIRTCHNL2_CAP_CRC BIT(5)
+#define VIRTCHNL2_CAP_ADQ BIT(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR BIT(7)
+#define VIRTCHNL2_CAP_PROMISC BIT(8)
+#define VIRTCHNL2_CAP_LINK_SPEED BIT(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC BIT(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES BIT(11)
+/* require additional info */
+#define VIRTCHNL2_CAP_VLAN BIT(12)
+#define VIRTCHNL2_CAP_PTP BIT(13)
+#define VIRTCHNL2_CAP_ADV_RSS BIT(15)
+#define VIRTCHNL2_CAP_FDIR BIT(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC BIT(17)
+#define VIRTCHNL2_CAP_PTYPE BIT(18)
+
+/* VIRTCHNL2_DEVICE_TYPE */
+/* underlying device type */
+#define VIRTCHNL2_MEV_DEVICE 0
+
+/* VIRTCHNL2_TXQ_SCHED_MODE
+ * Transmit Queue Scheduling Modes - Queue mode is the legacy mode, i.e. in-order
+ * completions where descriptors and buffers are completed at the same time.
+ * Flow scheduling mode allows for out-of-order packet processing where
+ * descriptors are cleaned in order, but buffers can be completed out of order.
+ */
+#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE 0
+#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW 1
+
+/* VIRTCHNL2_TXQ_FLAGS
+ * Transmit Queue feature flags
+ *
+ * Enable rule miss completion type; packet completion for a packet
+ * sent on exception path; only relevant in flow scheduling mode
+ */
+#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL BIT(0)
+
+/* VIRTCHNL2_PEER_TYPE
+ * Transmit mailbox peer type
+ */
+#define VIRTCHNL2_RDMA_CPF 0
+#define VIRTCHNL2_NVME_CPF 1
+#define VIRTCHNL2_ATE_CPF 2
+#define VIRTCHNL2_LCE_CPF 3
+
+/* VIRTCHNL2_RXQ_FLAGS
+ * Receive Queue Feature flags
+ */
+#define VIRTCHNL2_RXQ_RSC BIT(0)
+#define VIRTCHNL2_RXQ_HDR_SPLIT BIT(1)
+/* When set, packet descriptors are flushed by hardware immediately after
+ * processing each packet.
+ */
+#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK BIT(2)
+#define VIRTCHNL2_RX_DESC_SIZE_16BYTE BIT(3)
+#define VIRTCHNL2_RX_DESC_SIZE_32BYTE BIT(4)
+
+/* VIRTCHNL2_RSS_ALGORITHM
+ * Type of RSS algorithm
+ */
+#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC 0
+#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC 1
+#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC 2
+#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC 3
+
+/* VIRTCHNL2_EVENT_CODES
+ * Type of event
+ */
+#define VIRTCHNL2_EVENT_UNKNOWN 0
+#define VIRTCHNL2_EVENT_LINK_CHANGE 1
+
+/* VIRTCHNL2_QUEUE_TYPE
+ * Transmit and receive queue types are valid in both the legacy and split
+ * queue models. The split queue model introduces two additional types:
+ * TX_COMPLETION and RX_BUFFER. In the split queue model, the receive queue
+ * is the queue where hardware posts completions.
+ */
+#define VIRTCHNL2_QUEUE_TYPE_TX 0
+#define VIRTCHNL2_QUEUE_TYPE_RX 1
+#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION 2
+#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER 3
+#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX 4
+#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX 5
+
+/* VIRTCHNL2_ITR_IDX
+ * Virtchannel interrupt throttling rate index
+ */
+#define VIRTCHNL2_ITR_IDX_0 0
+#define VIRTCHNL2_ITR_IDX_1 1
+#define VIRTCHNL2_ITR_IDX_2 2
+#define VIRTCHNL2_ITR_IDX_NO_ITR 3
+
+/* VIRTCHNL2_VECTOR_LIMITS
+ * Since PF/VF messages are limited by __le16 size, precalculate the maximum
+ * possible values of nested elements in virtchnl structures that virtual
+ * channel can possibly handle in a single message.
+ */
+
+#define VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX (\
+ ((__le16)(~0) - sizeof(struct virtchnl2_del_ena_dis_queues)) / \
+ sizeof(struct virtchnl2_queue_chunk))
+
+#define VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX (\
+ ((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
+ sizeof(struct virtchnl2_queue_vector))
+
+/* VIRTCHNL2_PROTO_HDR_TYPE
+ * Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers either encapsulates, or is encapsulated by,
+ * tunneling or encapsulation protocols used for network virtualization.
+ */
+/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ANY 0
+#define VIRTCHNL2_PROTO_HDR_PRE_MAC 1
+/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_MAC 2
+#define VIRTCHNL2_PROTO_HDR_POST_MAC 3
+#define VIRTCHNL2_PROTO_HDR_ETHERTYPE 4
+#define VIRTCHNL2_PROTO_HDR_VLAN 5
+#define VIRTCHNL2_PROTO_HDR_SVLAN 6
+#define VIRTCHNL2_PROTO_HDR_CVLAN 7
+#define VIRTCHNL2_PROTO_HDR_MPLS 8
+#define VIRTCHNL2_PROTO_HDR_UMPLS 9
+#define VIRTCHNL2_PROTO_HDR_MMPLS 10
+#define VIRTCHNL2_PROTO_HDR_PTP 11
+#define VIRTCHNL2_PROTO_HDR_CTRL 12
+#define VIRTCHNL2_PROTO_HDR_LLDP 13
+#define VIRTCHNL2_PROTO_HDR_ARP 14
+#define VIRTCHNL2_PROTO_HDR_ECP 15
+#define VIRTCHNL2_PROTO_HDR_EAPOL 16
+#define VIRTCHNL2_PROTO_HDR_PPPOD 17
+#define VIRTCHNL2_PROTO_HDR_PPPOE 18
+/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV4 19
+/* IPv4 and IPv6 Fragment header types are only associated with
+ * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively, and
+ * cannot be used independently.
+ */
+/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG 20
+/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV6 21
+/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG 22
+#define VIRTCHNL2_PROTO_HDR_IPV6_EH 23
+/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_UDP 24
+/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_TCP 25
+/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_SCTP 26
+/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ICMP 27
+/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ICMPV6 28
+#define VIRTCHNL2_PROTO_HDR_IGMP 29
+#define VIRTCHNL2_PROTO_HDR_AH 30
+#define VIRTCHNL2_PROTO_HDR_ESP 31
+#define VIRTCHNL2_PROTO_HDR_IKE 32
+#define VIRTCHNL2_PROTO_HDR_NATT_KEEP 33
+/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_PAY 34
+#define VIRTCHNL2_PROTO_HDR_L2TPV2 35
+#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL 36
+#define VIRTCHNL2_PROTO_HDR_L2TPV3 37
+#define VIRTCHNL2_PROTO_HDR_GTP 38
+#define VIRTCHNL2_PROTO_HDR_GTP_EH 39
+#define VIRTCHNL2_PROTO_HDR_GTPCV2 40
+#define VIRTCHNL2_PROTO_HDR_GTPC_TEID 41
+#define VIRTCHNL2_PROTO_HDR_GTPU 42
+#define VIRTCHNL2_PROTO_HDR_GTPU_UL 43
+#define VIRTCHNL2_PROTO_HDR_GTPU_DL 44
+#define VIRTCHNL2_PROTO_HDR_ECPRI 45
+#define VIRTCHNL2_PROTO_HDR_VRRP 46
+#define VIRTCHNL2_PROTO_HDR_OSPF 47
+/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_TUN 48
+#define VIRTCHNL2_PROTO_HDR_GRE 49
+#define VIRTCHNL2_PROTO_HDR_NVGRE 50
+#define VIRTCHNL2_PROTO_HDR_VXLAN 51
+#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE 52
+#define VIRTCHNL2_PROTO_HDR_GENEVE 53
+#define VIRTCHNL2_PROTO_HDR_NSH 54
+#define VIRTCHNL2_PROTO_HDR_QUIC 55
+#define VIRTCHNL2_PROTO_HDR_PFCP 56
+#define VIRTCHNL2_PROTO_HDR_PFCP_NODE 57
+#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION 58
+#define VIRTCHNL2_PROTO_HDR_RTP 59
+#define VIRTCHNL2_PROTO_HDR_ROCE 60
+#define VIRTCHNL2_PROTO_HDR_ROCEV1 61
+#define VIRTCHNL2_PROTO_HDR_ROCEV2 62
+/* protocol ids up to 32767 are reserved for AVF use */
+/* 32768 - 65534 are used for user-defined protocol ids */
+/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_NO_PROTO 65535
+
+#define VIRTCHNL2_VERSION_MAJOR_2 2
+#define VIRTCHNL2_VERSION_MINOR_0 0
+
+
+/* VIRTCHNL2_OP_VERSION
+ * VF posts its version number to the CP. CP responds with its version number
+ * in the same format, along with a return code.
+ * The CP's reply also carries its major/minor versions in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This version opcode MUST always be specified as == 1, regardless of other
+ * changes in the API. The CP must always respond to this message without
+ * error regardless of version mismatch.
+ */
+struct virtchnl2_version_info {
+ u32 major;
+ u32 minor;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
+
+/* VIRTCHNL2_OP_GET_CAPS
+ * Dataplane driver sends this message to CP to negotiate capabilities and
+ * provides a virtchnl2_get_capabilities structure with its desired
+ * capabilities, max_sriov_vfs and num_allocated_vectors.
+ * CP responds with a virtchnl2_get_capabilities structure updated
+ * with allowed capabilities and the other fields as below.
+ * If PF sets max_sriov_vfs as 0, CP will respond with max number of VFs
+ * that can be created by this PF. For any other value 'n', CP responds
+ * with max_sriov_vfs set to min(n, x) where x is the max number of VFs
+ * allowed by CP's policy. max_sriov_vfs is not applicable for VFs.
+ * If the dataplane driver sets num_allocated_vectors as 0, CP will respond
+ * with 1, which is the default vector associated with the default mailbox.
+ * For any other value 'n', CP responds with a value <= n based on the CP's
+ * policy of max number of vectors for a PF.
+ * CP will respond with the vector ID of the mailbox allocated to the PF in
+ * mailbox_vector_id and the number of ITR index registers in itr_idx_map.
+ * It also responds with the default number of vports that the dataplane
+ * driver should come up with in default_num_vports and the maximum number of
+ * vports that can be supported in max_vports.
+ */
+struct virtchnl2_get_capabilities {
+ /* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
+ __le32 csum_caps;
+
+ /* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
+ __le32 seg_caps;
+
+ /* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
+ __le32 hsplit_caps;
+
+ /* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
+ __le32 rsc_caps;
+
+ /* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions */
+ __le64 rss_caps;
+
+
+ /* see VIRTCHNL2_OTHER_CAPS definitions */
+ __le64 other_caps;
+
+ /* DYN_CTL register offset and vector id for mailbox provided by CP */
+ __le32 mailbox_dyn_ctl;
+ __le16 mailbox_vector_id;
+ /* Maximum number of allocated vectors for the device */
+ __le16 num_allocated_vectors;
+
+ /* Maximum number of queues that can be supported */
+ __le16 max_rx_q;
+ __le16 max_tx_q;
+ __le16 max_rx_bufq;
+ __le16 max_tx_complq;
+
+ /* The PF sends the maximum VFs it is requesting. The CP responds with
+ * the maximum VFs granted.
+ */
+ __le16 max_sriov_vfs;
+
+ /* maximum number of vports that can be supported */
+ __le16 max_vports;
+ /* default number of vports driver should allocate on load */
+ __le16 default_num_vports;
+
+ /* Max header length hardware can parse/checksum, in bytes */
+ __le16 max_tx_hdr_size;
+
+ /* Max number of scatter gather buffers that can be sent per transmit
+ * packet without needing to be linearized
+ */
+ u8 max_sg_bufs_per_tx_pkt;
+
+ /* see VIRTCHNL2_ITR_IDX definition */
+ u8 itr_idx_map;
+
+ __le16 pad1;
+
+ /* version of Control Plane that is running */
+ __le16 oem_cp_ver_major;
+ __le16 oem_cp_ver_minor;
+ /* see VIRTCHNL2_DEVICE_TYPE definitions */
+ __le32 device_type;
+
+ u8 reserved[12];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
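+
+/* Illustrative sketch, not part of the virtchnl2 ABI: filling a minimal
+ * GET_CAPS request as described above, leaving max_sriov_vfs and
+ * num_allocated_vectors at zero so the CP reports its maxima/defaults. The
+ * capability bits chosen are arbitrary examples, CPU-to-little-endian
+ * conversion is elided for brevity, and the helper name is hypothetical.
+ */
+static inline void
+virtchnl2_example_fill_get_caps(struct virtchnl2_get_capabilities *caps)
+{
+	/* Zero everything first; reserved fields must remain zero. */
+	*caps = (struct virtchnl2_get_capabilities){0};
+
+	/* Request a few offloads; the CP clears any unsupported bits. */
+	caps->csum_caps = VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+			  VIRTCHNL2_CAP_RX_CSUM_L3_IPV4;
+	caps->rss_caps = VIRTCHNL2_CAP_RSS_IPV4_TCP |
+			 VIRTCHNL2_CAP_RSS_IPV6_TCP;
+
+	/* 0 asks the CP for the max VFs / default mailbox vector. */
+	caps->max_sriov_vfs = 0;
+	caps->num_allocated_vectors = 0;
+}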
+
+struct virtchnl2_queue_reg_chunk {
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ __le32 pad;
+
+ /* Queue tail register offset and spacing provided by CP */
+ __le64 qtail_reg_start;
+ __le32 qtail_reg_spacing;
+
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl2_queue_reg_chunks {
+ __le16 num_chunks;
+ u8 reserved[6];
+ struct virtchnl2_queue_reg_chunk chunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+
+#define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS 6
+
+/* VIRTCHNL2_OP_CREATE_VPORT
+ * PF sends this message to CP to create a vport by filling in required
+ * fields of virtchnl2_create_vport structure.
+ * CP responds with the updated virtchnl2_create_vport structure containing the
+ * necessary fields, followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_create_vport {
+ /* PF/VF populates the following fields on request */
+ /* see VIRTCHNL2_VPORT_TYPE definitions */
+ __le16 vport_type;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 txq_model;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 rxq_model;
+ __le16 num_tx_q;
+ /* valid only if txq_model is split queue */
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ /* valid only if rxq_model is split queue */
+ __le16 num_rx_bufq;
+ /* relative receive queue index to be used as default */
+ __le16 default_rx_q;
+ /* Used to align the PF and CP when there are multiple default vports. It is
+ * filled by the PF and the CP returns the same value, enabling the driver
+ * to issue multiple asynchronous CREATE_VPORT requests in parallel and
+ * associate each response with a specific request
+ */
+ __le16 vport_index;
+
+ /* CP populates the following fields on response */
+ __le16 max_mtu;
+ __le32 vport_id;
+ u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
+ __le16 pad;
+ /* see VIRTCHNL2_RX_DESC_IDS definitions */
+ __le64 rx_desc_ids;
+ /* see VIRTCHNL2_TX_DESC_IDS definitions */
+ __le64 tx_desc_ids;
+
+#define MAX_Q_REGIONS 16
+ __le32 max_qs_per_qregion[MAX_Q_REGIONS];
+ __le32 qregion_total_qs;
+ __le16 qregion_type;
+ __le16 pad2;
+
+ /* see VIRTCHNL2_RSS_ALGORITHM definitions */
+ __le32 rss_algorithm;
+ __le16 rss_key_size;
+ __le16 rss_lut_size;
+
+ /* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
+ __le32 rx_split_pos;
+
+ u8 reserved[20];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+
+/* VIRTCHNL2_OP_DESTROY_VPORT
+ * VIRTCHNL2_OP_ENABLE_VPORT
+ * VIRTCHNL2_OP_DISABLE_VPORT
+ * PF sends this message to CP to destroy, enable or disable a vport by filling
+ * in the vport_id in virtchnl2_vport structure.
+ * CP responds with the status of the requested operation.
+ */
+struct virtchnl2_vport {
+ __le32 vport_id;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
+
+/* Transmit queue config info */
+struct virtchnl2_txq_info {
+ __le64 dma_ring_addr;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+
+ __le32 queue_id;
+ /* valid only if queue model is split and type is transmit queue. Used
+ * in the many-to-one mapping of transmit queues to a completion queue
+ */
+ __le16 relative_queue_id;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 model;
+
+ /* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
+ __le16 sched_mode;
+
+ /* see VIRTCHNL2_TXQ_FLAGS definitions */
+ __le16 qflags;
+ __le16 ring_len;
+
+ /* valid only if queue model is split and type is transmit queue */
+ __le16 tx_compl_queue_id;
+ /* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
+ /* see VIRTCHNL2_PEER_TYPE definitions */
+ __le16 peer_type;
+ /* valid only if queue type is CONFIG_TX and used to deliver messages
+ * for the respective CONFIG_TX queue
+ */
+ __le16 peer_rx_queue_id;
+
+ /* value ranges from 0 to 15 */
+ __le16 qregion_id;
+ u8 pad[2];
+
+ /* Egress pasid is used for SIOV use case */
+ __le32 egress_pasid;
+ __le32 egress_hdr_pasid;
+ __le32 egress_buf_pasid;
+
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
+
+/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
+ * PF sends this message to set up parameters for one or more transmit queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_txq_info
+ * structures. CP configures requested queues and returns a status code. If
+ * num_qinfo specified is greater than the number of queues associated with the
+ * vport, an error is returned and no queues are configured.
+ */
+struct virtchnl2_config_tx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+
+ u8 reserved[10];
+ struct virtchnl2_txq_info qinfo[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
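+
+/* Illustrative sketch, not part of the virtchnl2 ABI: the CONFIG_TX_QUEUES
+ * message is variable length, carrying num_qinfo copies of virtchnl2_txq_info
+ * after the fixed header, so the send buffer can be sized as below (the
+ * structure already holds one qinfo entry). The helper name is hypothetical.
+ */
+static inline u32
+virtchnl2_example_config_txq_msglen(u16 num_qinfo)
+{
+	if (num_qinfo == 0)
+		return 0;
+
+	return (u32)(sizeof(struct virtchnl2_config_tx_queues) +
+		     (num_qinfo - 1) * sizeof(struct virtchnl2_txq_info));
+}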
+
+/* Receive queue config info */
+struct virtchnl2_rxq_info {
+ /* see VIRTCHNL2_RX_DESC_IDS definitions */
+ __le64 desc_ids;
+ __le64 dma_ring_addr;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 queue_id;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 model;
+
+ __le16 hdr_buffer_size;
+ __le32 data_buffer_size;
+ __le32 max_pkt_size;
+
+ __le16 ring_len;
+ u8 buffer_notif_stride;
+ u8 pad[1];
+
+ /* Applicable only for receive buffer queues */
+ __le64 dma_head_wb_addr;
+
+ /* Applicable only for receive completion queues */
+ /* see VIRTCHNL2_RXQ_FLAGS definitions */
+ __le16 qflags;
+
+ __le16 rx_buffer_low_watermark;
+
+ /* valid only in split queue model */
+ __le16 rx_bufq1_id;
+ /* valid only in split queue model */
+ __le16 rx_bufq2_id;
+ /* indicates whether there is a second buffer queue; rx_bufq2_id is valid
+ * only if this field is set
+ */
+ u8 bufq2_ena;
+ u8 pad2;
+
+ /* value ranges from 0 to 15 */
+ __le16 qregion_id;
+
+ /* Ingress pasid is used for SIOV use case */
+ __le32 ingress_pasid;
+ __le32 ingress_hdr_pasid;
+ __le32 ingress_buf_pasid;
+
+ u8 reserved[16];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
+
+/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
+ * PF sends this message to set up parameters for one or more receive queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
+ * structures. CP configures requested queues and returns a status code.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ */
+struct virtchnl2_config_rx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+
+ u8 reserved[18];
+ struct virtchnl2_rxq_info qinfo[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+
+/* VIRTCHNL2_OP_ADD_QUEUES
+ * PF sends this message to request additional transmit/receive queues beyond
+ * the ones that were assigned via the CREATE_VPORT request. The
+ * virtchnl2_add_queues structure is used to specify the number of each type
+ * of queue. CP responds with the same structure, with the actual number of
+ * queues assigned, followed by num_chunks of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_add_queues {
+ __le32 vport_id;
+ __le16 num_tx_q;
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ __le16 num_rx_bufq;
+ u8 reserved[4];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+
+/* Structure to specify a chunk of contiguous interrupt vectors */
+struct virtchnl2_vector_chunk {
+ __le16 start_vector_id;
+ __le16 start_evv_id;
+ __le16 num_vectors;
+ __le16 pad1;
+
+ /* Register offsets and spacing provided by CP.
+ * dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
+ __le32 dynctl_reg_start;
+ __le32 dynctl_reg_spacing;
+
+ __le32 itrn_reg_start;
+ __le32 itrn_reg_spacing;
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
+
+/* Structure to specify several chunks of contiguous interrupt vectors */
+struct virtchnl2_vector_chunks {
+ __le16 num_vchunks;
+ u8 reserved[14];
+ struct virtchnl2_vector_chunk vchunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+
+/* VIRTCHNL2_OP_ALLOC_VECTORS
+ * PF sends this message to request additional interrupt vectors beyond the
+ * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
+ * structure is used to specify the number of vectors requested. CP responds
+ * with the same structure with the actual number of vectors assigned followed
+ * by virtchnl2_vector_chunks structure identifying the vector ids.
+ */
+struct virtchnl2_alloc_vectors {
+ __le16 num_vectors;
+ u8 reserved[14];
+ struct virtchnl2_vector_chunks vchunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+
+/* VIRTCHNL2_OP_DEALLOC_VECTORS
+ * PF sends this message to release the vectors.
+ * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
+ * away. CP performs requested action and returns status.
+ */
+
+/* VIRTCHNL2_OP_GET_RSS_LUT
+ * VIRTCHNL2_OP_SET_RSS_LUT
+ * PF sends this message to get or set RSS lookup table. Only supported if
+ * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl2_rss_lut structure
+ */
+struct virtchnl2_rss_lut {
+ __le32 vport_id;
+ __le16 lut_entries_start;
+ __le16 lut_entries;
+ u8 reserved[4];
+ __le32 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+
+struct virtchnl2_proto_hdr {
+ /* see VIRTCHNL2_PROTO_HDR_TYPE definitions */
+ __le32 type;
+ __le32 field_selector; /* a bit mask to select field for header type */
+ u8 buffer[64];
+ /*
+ * binary buffer in network order for specific header type.
+ * For example, if type = VIRTCHNL2_PROTO_HDR_IPV4, an IPv4
+ * header is expected to be copied into the buffer.
+ */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_proto_hdr);
+
+struct virtchnl2_proto_hdrs {
+ u8 tunnel_level;
+ /*
+ * specifies where the protocol headers start from.
+ * 0 - from the outer layer
+ * 1 - from the first inner layer
+ * 2 - from the second inner layer
+ * ....
+ */
+ __le32 count; /* the number of proto layers must be < VIRTCHNL2_MAX_NUM_PROTO_HDRS */
+ struct virtchnl2_proto_hdr proto_hdr[VIRTCHNL2_MAX_NUM_PROTO_HDRS];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(2312, virtchnl2_proto_hdrs);
+
+struct virtchnl2_rss_cfg {
+ struct virtchnl2_proto_hdrs proto_hdrs;
+
+ /* see VIRTCHNL2_RSS_ALGORITHM definitions */
+ __le32 rss_algorithm;
+ u8 reserved[128];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(2444, virtchnl2_rss_cfg);
+
+/* VIRTCHNL2_OP_GET_RSS_KEY
+ * PF sends this message to get RSS key. Only supported if both PF and CP
+ * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
+ * the virtchnl2_rss_key structure
+ */
+
+/* VIRTCHNL2_OP_GET_RSS_HASH
+ * VIRTCHNL2_OP_SET_RSS_HASH
+ * PF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the CP sets these to all possible traffic types that the
+ * hardware supports. The PF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
+ * during configuration negotiation.
+ */
+struct virtchnl2_rss_hash {
+ /* Packet Type Groups bitmap */
+ __le64 ptype_groups;
+ __le32 vport_id;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
+
+/* VIRTCHNL2_OP_SET_SRIOV_VFS
+ * This message is used to set the number of SRIOV VFs to be created. The actual
+ * allocation of resources for the VFs in terms of vport, queues and interrupts
+ * is done by CP. When this call completes, the APF driver calls
+ * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
+ * Setting the number of VFs to 0 will destroy all the VFs of this function.
+ */
+
+struct virtchnl2_sriov_vfs_info {
+ __le16 num_vfs;
+ __le16 pad;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
+
+/* VIRTCHNL2_OP_CREATE_ADI
+ * PF sends this message to HMA to create ADI by filling in required
+ * fields of virtchnl2_create_adi structure.
+ * HMA responds with the updated virtchnl2_create_adi structure containing the
+ * necessary fields followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_chunk structures.
+ */
+struct virtchnl2_create_adi {
+ /* PF sends PASID to HMA */
+ __le32 pasid;
+ /*
+ * mbx_id is set to 1 by PF when requesting HMA to provide HW mailbox
+ * id, otherwise it is set to 0 by PF
+ */
+ __le16 mbx_id;
+ /* PF sends mailbox vector id to HMA */
+ __le16 mbx_vec_id;
+ /* HMA populates ADI id */
+ __le16 adi_id;
+ u8 reserved[64];
+ u8 pad[6];
+ /* HMA populates queue chunks */
+ struct virtchnl2_queue_reg_chunks chunks;
+ /* PF sends vector chunks to HMA */
+ struct virtchnl2_vector_chunks vchunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_create_adi);
+
+/* VIRTCHNL2_OP_DESTROY_ADI
+ * PF sends this message to HMA to destroy ADI by filling
+ * in the adi_id in the virtchnl2_destroy_adi structure.
+ * HMA responds with the status of the requested operation.
+ */
+struct virtchnl2_destroy_adi {
+ __le16 adi_id;
+ u8 reserved[2];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_destroy_adi);
+
+/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+ * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
+ * is set to 0xFFFF, PF should consider this ptype as a dummy one; it is the
+ * last ptype.
+ */
+struct virtchnl2_ptype {
+ __le16 ptype_id_10;
+ u8 ptype_id_8;
+ /* number of protocol ids the packet supports, maximum of 32
+ * protocol ids are supported
+ */
+ u8 proto_id_count;
+ __le16 pad;
+ /* proto_id_count decides the allocation of protocol id array */
+ /* see VIRTCHNL2_PROTO_HDR_TYPE */
+ __le16 proto_id[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+
+/* VIRTCHNL2_OP_GET_PTYPE_INFO
+ * PF sends this message to CP to get all supported packet types. It does so by
+ * filling in start_ptype_id and num_ptypes. Depending on the descriptor type
+ * the PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
+ * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
+ * populating start_ptype_id, num_ptypes and an array of ptypes. If all the
+ * ptypes don't fit into one mailbox buffer, CP splits the ptype info into multiple
+ * messages, where each message will have the start ptype id, number of ptypes
+ * sent in that message and the ptype array itself. When CP is done updating
+ * all ptype information it extracted from the package (number of ptypes
+ * extracted might be less than what PF expects), it will append a dummy ptype
+ * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
+ * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
+ * messages.
+ */
+struct virtchnl2_get_ptype_info {
+ __le16 start_ptype_id;
+ __le16 num_ptypes;
+ __le32 pad;
+ struct virtchnl2_ptype ptype[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
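
A minimal sketch of how a PF driver could walk one such response is shown below; only the two structs above are assumed, the helper name is hypothetical, and the logic simply follows the "stop at the 0xFFFF dummy ptype" rule described in the comment.

/* Illustrative sketch (not part of this patch): walk one GET_PTYPE_INFO
 * response buffer. Entries are variable length because of the trailing
 * proto_id[] array, and a ptype_id_10 of 0xFFFF marks the final entry.
 */
static int example_walk_ptype_info(struct virtchnl2_get_ptype_info *info)
{
	u8 *pos = (u8 *)info->ptype;
	u16 i;

	for (i = 0; i < info->num_ptypes; i++) {
		struct virtchnl2_ptype *p = (struct virtchnl2_ptype *)pos;

		if (p->ptype_id_10 == 0xFFFF)
			return 1; /* dummy terminator: last ptype seen */

		/* a real driver records p->ptype_id_10 / p->ptype_id_8
		 * and the p->proto_id[] list here
		 */

		/* each entry is sizeof(struct virtchnl2_ptype) plus the
		 * extra proto ids beyond the one declared in the struct
		 */
		pos += sizeof(struct virtchnl2_ptype) +
		       (p->proto_id_count - 1) * sizeof(__le16);
	}

	return 0; /* expect more GET_PTYPE_INFO messages */
}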
+
+/* VIRTCHNL2_OP_GET_STATS
+ * PF/VF sends this message to CP to get the updated stats by specifying the
+ * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ */
+struct virtchnl2_vport_stats {
+ __le32 vport_id;
+ u8 pad[4];
+
+ __le64 rx_bytes; /* received bytes */
+ __le64 rx_unicast; /* received unicast pkts */
+ __le64 rx_multicast; /* received multicast pkts */
+ __le64 rx_broadcast; /* received broadcast pkts */
+ __le64 rx_discards;
+ __le64 rx_errors;
+ __le64 rx_unknown_protocol;
+ __le64 tx_bytes; /* transmitted bytes */
+ __le64 tx_unicast; /* transmitted unicast pkts */
+ __le64 tx_multicast; /* transmitted multicast pkts */
+ __le64 tx_broadcast; /* transmitted broadcast pkts */
+ __le64 tx_discards;
+ __le64 tx_errors;
+ u8 reserved[16];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
+
+/* VIRTCHNL2_OP_EVENT
+ * CP sends this message to inform the PF/VF driver of events that may affect
+ * it. No direct response is expected from the driver, though it may generate
+ * other messages in response to this one.
+ */
+struct virtchnl2_event {
+ /* see VIRTCHNL2_EVENT_CODES definitions */
+ __le32 event;
+ /* link_speed provided in Mbps */
+ __le32 link_speed;
+ __le32 vport_id;
+ u8 link_status;
+ u8 pad[3];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
+
+/* VIRTCHNL2_OP_GET_RSS_KEY
+ * VIRTCHNL2_OP_SET_RSS_KEY
+ * PF/VF sends this message to get or set RSS key. Only supported if both
+ * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl2_rss_key structure
+ */
+struct virtchnl2_rss_key {
+ __le32 vport_id;
+ __le16 key_len;
+ u8 pad;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl2_queue_chunk {
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl2_queue_chunks {
+ __le16 num_chunks;
+ u8 reserved[6];
+ struct virtchnl2_queue_chunk chunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+
+/* VIRTCHNL2_OP_ENABLE_QUEUES
+ * VIRTCHNL2_OP_DISABLE_QUEUES
+ * VIRTCHNL2_OP_DEL_QUEUES
+ *
+ * PF sends these messages to enable, disable or delete queues specified in
+ * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * to be enabled/disabled/deleted. Also applicable to single queue receive or
+ * transmit. CP performs requested action and returns status.
+ */
+struct virtchnl2_del_ena_dis_queues {
+ __le32 vport_id;
+ u8 reserved[4];
+ struct virtchnl2_queue_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+
+/* Queue to vector mapping */
+struct virtchnl2_queue_vector {
+ __le32 queue_id;
+ __le16 vector_id;
+ u8 pad[2];
+
+ /* see VIRTCHNL2_ITR_IDX definitions */
+ __le32 itr_idx;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 queue_type;
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
+
+/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+ *
+ * PF sends this message to map or unmap queues to vectors and interrupt
+ * throttling rate index registers. External data buffer contains
+ * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
+ * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
+ * after validating the queue and vector ids and returns a status code.
+ */
+struct virtchnl2_queue_vector_maps {
+ __le32 vport_id;
+ __le16 num_qv_maps;
+ u8 pad[10];
+ struct virtchnl2_queue_vector qv_maps[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
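
Since most of these messages end in a one-element trailing array, the mailbox buffer has to be sized for the extra entries. A hedged sketch of that sizing pattern for a MAP_QUEUE_VECTOR request follows; the helper name, calloc() and the vport_id/first_qid/first_vec parameters are illustrative only, while VIRTCHNL2_ITR_IDX_0 and VIRTCHNL2_QUEUE_TYPE_RX come from the definitions earlier in this header.

/* Illustrative sketch: build a VIRTCHNL2_OP_MAP_QUEUE_VECTOR request for
 * num_maps Rx queues. The declared struct already holds one entry, so
 * only (num_maps - 1) extra virtchnl2_queue_vector entries are added.
 * calloc() stands in for whatever allocator the driver actually uses.
 */
static struct virtchnl2_queue_vector_maps *
example_build_qv_maps(u32 vport_id, u16 num_maps, u32 first_qid,
		      u16 first_vec, size_t *msg_len)
{
	size_t len = sizeof(struct virtchnl2_queue_vector_maps) +
		     (num_maps - 1) * sizeof(struct virtchnl2_queue_vector);
	struct virtchnl2_queue_vector_maps *maps = calloc(1, len);
	u16 i;

	if (maps == NULL)
		return NULL;

	maps->vport_id = vport_id;
	maps->num_qv_maps = num_maps;
	for (i = 0; i < num_maps; i++) {
		maps->qv_maps[i].queue_id = first_qid + i;
		maps->qv_maps[i].vector_id = first_vec + i;
		maps->qv_maps[i].itr_idx = VIRTCHNL2_ITR_IDX_0;
		maps->qv_maps[i].queue_type = VIRTCHNL2_QUEUE_TYPE_RX;
	}

	*msg_len = len; /* 'maps' and 'len' go to the mailbox send path */
	return maps;
}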
+
+
+static inline const char *virtchnl2_op_str(__le32 v_opcode)
+{
+ switch (v_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ return "VIRTCHNL2_OP_VERSION";
+ case VIRTCHNL2_OP_GET_CAPS:
+ return "VIRTCHNL2_OP_GET_CAPS";
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ return "VIRTCHNL2_OP_CREATE_VPORT";
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ return "VIRTCHNL2_OP_DESTROY_VPORT";
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ return "VIRTCHNL2_OP_ENABLE_VPORT";
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ return "VIRTCHNL2_OP_DISABLE_VPORT";
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ return "VIRTCHNL2_OP_CONFIG_TX_QUEUES";
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ return "VIRTCHNL2_OP_CONFIG_RX_QUEUES";
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ return "VIRTCHNL2_OP_ENABLE_QUEUES";
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ return "VIRTCHNL2_OP_DISABLE_QUEUES";
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ return "VIRTCHNL2_OP_ADD_QUEUES";
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ return "VIRTCHNL2_OP_DEL_QUEUES";
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ return "VIRTCHNL2_OP_MAP_QUEUE_VECTOR";
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ return "VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR";
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ return "VIRTCHNL2_OP_GET_RSS_KEY";
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ return "VIRTCHNL2_OP_SET_RSS_KEY";
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ return "VIRTCHNL2_OP_GET_RSS_LUT";
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ return "VIRTCHNL2_OP_SET_RSS_LUT";
+ case VIRTCHNL2_OP_GET_RSS_HASH:
+ return "VIRTCHNL2_OP_GET_RSS_HASH";
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ return "VIRTCHNL2_OP_SET_RSS_HASH";
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ return "VIRTCHNL2_OP_SET_SRIOV_VFS";
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ return "VIRTCHNL2_OP_ALLOC_VECTORS";
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ return "VIRTCHNL2_OP_DEALLOC_VECTORS";
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ return "VIRTCHNL2_OP_GET_PTYPE_INFO";
+ case VIRTCHNL2_OP_GET_STATS:
+ return "VIRTCHNL2_OP_GET_STATS";
+ case VIRTCHNL2_OP_EVENT:
+ return "VIRTCHNL2_OP_EVENT";
+ case VIRTCHNL2_OP_RESET_VF:
+ return "VIRTCHNL2_OP_RESET_VF";
+ case VIRTCHNL2_OP_CREATE_ADI:
+ return "VIRTCHNL2_OP_CREATE_ADI";
+ case VIRTCHNL2_OP_DESTROY_ADI:
+ return "VIRTCHNL2_OP_DESTROY_ADI";
+ default:
+ return "Unsupported (update virtchnl2.h)";
+ }
+}
+
+/**
+ * virtchnl2_vc_validate_vf_msg
+ * @ver: Virtchnl2 version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl2_vc_validate_vf_msg(struct virtchnl2_version_info *ver, u32 v_opcode,
+ u8 *msg, __le16 msglen)
+{
+ bool err_msg_format = false;
+ __le32 valid_len = 0;
+
+ /* Validate message length. */
+ switch (v_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ valid_len = sizeof(struct virtchnl2_version_info);
+ break;
+ case VIRTCHNL2_OP_GET_CAPS:
+ valid_len = sizeof(struct virtchnl2_get_capabilities);
+ break;
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ valid_len = sizeof(struct virtchnl2_create_vport);
+ if (msglen >= valid_len) {
+ struct virtchnl2_create_vport *cvport =
+ (struct virtchnl2_create_vport *)msg;
+
+ if (cvport->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (cvport->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_CREATE_ADI:
+ valid_len = sizeof(struct virtchnl2_create_adi);
+ if (msglen >= valid_len) {
+ struct virtchnl2_create_adi *cadi =
+ (struct virtchnl2_create_adi *)msg;
+
+ if (cadi->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ if (cadi->vchunks.num_vchunks == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (cadi->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ valid_len += (cadi->vchunks.num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_DESTROY_ADI:
+ valid_len = sizeof(struct virtchnl2_destroy_adi);
+ break;
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ valid_len = sizeof(struct virtchnl2_vport);
+ break;
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ valid_len = sizeof(struct virtchnl2_config_tx_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_config_tx_queues *ctq =
+ (struct virtchnl2_config_tx_queues *)msg;
+ if (ctq->num_qinfo == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (ctq->num_qinfo - 1) *
+ sizeof(struct virtchnl2_txq_info);
+ }
+ break;
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ valid_len = sizeof(struct virtchnl2_config_rx_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_config_rx_queues *crq =
+ (struct virtchnl2_config_rx_queues *)msg;
+ if (crq->num_qinfo == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (crq->num_qinfo - 1) *
+ sizeof(struct virtchnl2_rxq_info);
+ }
+ break;
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ valid_len = sizeof(struct virtchnl2_add_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)msg;
+
+ if (add_q->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (add_q->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_del_ena_dis_queues *qs =
+ (struct virtchnl2_del_ena_dis_queues *)msg;
+ if (qs->chunks.num_chunks == 0 ||
+ qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (qs->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ valid_len = sizeof(struct virtchnl2_queue_vector_maps);
+ if (msglen >= valid_len) {
+ struct virtchnl2_queue_vector_maps *v_qp =
+ (struct virtchnl2_queue_vector_maps *)msg;
+ if (v_qp->num_qv_maps == 0 ||
+ v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_qp->num_qv_maps - 1) *
+ sizeof(struct virtchnl2_queue_vector);
+ }
+ break;
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ valid_len = sizeof(struct virtchnl2_alloc_vectors);
+ if (msglen >= valid_len) {
+ struct virtchnl2_alloc_vectors *v_av =
+ (struct virtchnl2_alloc_vectors *)msg;
+
+ if (v_av->vchunks.num_vchunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (v_av->vchunks.num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ valid_len = sizeof(struct virtchnl2_vector_chunks);
+ if (msglen >= valid_len) {
+ struct virtchnl2_vector_chunks *v_chunks =
+ (struct virtchnl2_vector_chunks *)msg;
+ if (v_chunks->num_vchunks == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_chunks->num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ valid_len = sizeof(struct virtchnl2_rss_key);
+ if (msglen >= valid_len) {
+ struct virtchnl2_rss_key *vrk =
+ (struct virtchnl2_rss_key *)msg;
+
+ if (vrk->key_len == 0) {
+ /* zero length is allowed as input */
+ break;
+ }
+
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ valid_len = sizeof(struct virtchnl2_rss_lut);
+ if (msglen >= valid_len) {
+ struct virtchnl2_rss_lut *vrl =
+ (struct virtchnl2_rss_lut *)msg;
+
+ if (vrl->lut_entries == 0) {
+ /* zero entries is allowed as input */
+ break;
+ }
+
+ valid_len += (vrl->lut_entries - 1) * sizeof(__le32);
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_HASH:
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ valid_len = sizeof(struct virtchnl2_rss_hash);
+ break;
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
+ break;
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ valid_len = sizeof(struct virtchnl2_get_ptype_info);
+ break;
+ case VIRTCHNL2_OP_GET_STATS:
+ valid_len = sizeof(struct virtchnl2_vport_stats);
+ break;
+ case VIRTCHNL2_OP_RESET_VF:
+ break;
+ /* These are always errors coming from the VF. */
+ case VIRTCHNL2_OP_EVENT:
+ case VIRTCHNL2_OP_UNKNOWN:
+ default:
+ return VIRTCHNL2_STATUS_ERR_PARAM;
+ }
+ /* few more checks */
+ if (err_msg_format || valid_len != msglen)
+ return VIRTCHNL2_STATUS_ERR_OPCODE_MISMATCH;
+
+ return 0;
+}
+
+#endif /* _VIRTCHNL_2_H_ */
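
A short usage sketch of the validator defined at the end of this header: before dispatching a received mailbox message, the receiving side can reject malformed buffers. Only virtchnl2_vc_validate_vf_msg() and the status codes come from the header; ver/opcode/buf/buf_len and the two handlers are hypothetical placeholders for the mailbox receive path.

/* Illustrative sketch: guard a received mailbox message before dispatch.
 * 'ver', 'opcode', 'buf' and 'buf_len' come from the receive path;
 * send_nack() and handle_msg() are placeholders.
 */
int status = virtchnl2_vc_validate_vf_msg(&ver, opcode, buf, buf_len);

if (status) /* VIRTCHNL2_STATUS_ERR_PARAM or _ERR_OPCODE_MISMATCH */
	send_nack(opcode, status);
else
	handle_msg(opcode, buf, buf_len);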
diff --git a/drivers/net/idpf/base/virtchnl2_lan_desc.h b/drivers/net/idpf/base/virtchnl2_lan_desc.h
new file mode 100644
index 0000000000..2243b17673
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl2_lan_desc.h
@@ -0,0 +1,603 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+/*
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * For licensing information, see the file 'LICENSE' in the root folder
+ */
+#ifndef _VIRTCHNL2_LAN_DESC_H_
+#define _VIRTCHNL2_LAN_DESC_H_
+
+/* VIRTCHNL2_TX_DESC_IDS
+ * Transmit descriptor ID flags
+ */
+#define VIRTCHNL2_TXDID_DATA BIT(0)
+#define VIRTCHNL2_TXDID_CTX BIT(1)
+#define VIRTCHNL2_TXDID_REINJECT_CTX BIT(2)
+#define VIRTCHNL2_TXDID_FLEX_DATA BIT(3)
+#define VIRTCHNL2_TXDID_FLEX_CTX BIT(4)
+#define VIRTCHNL2_TXDID_FLEX_TSO_CTX BIT(5)
+#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1 BIT(6)
+#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2 BIT(7)
+#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX BIT(8)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX BIT(9)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX BIT(10)
+#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX BIT(11)
+#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED BIT(12)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX BIT(13)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX BIT(14)
+#define VIRTCHNL2_TXDID_DESC_DONE BIT(15)
+
+/* VIRTCHNL2_RX_DESC_IDS
+ * Receive descriptor IDs (range from 0 to 63)
+ */
+#define VIRTCHNL2_RXDID_0_16B_BASE 0
+#define VIRTCHNL2_RXDID_1_32B_BASE 1
+/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+ * differentiated based on queue model; e.g. single queue model can
+ * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+ * for DID 2.
+ */
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ 2
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC 2
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW 3
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB 4
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL 5
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2 6
+#define VIRTCHNL2_RXDID_7_HW_RSVD 7
+/* 9 through 15 are reserved */
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC 16
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN 17
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4 18
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6 19
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW 20
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP 21
+/* 22 through 63 are reserved */
+
+/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+ * Receive descriptor ID bitmasks
+ */
+#define VIRTCHNL2_RXDID_0_16B_BASE_M BIT(VIRTCHNL2_RXDID_0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M BIT(VIRTCHNL2_RXDID_1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+/* 9 through 15 are reserved */
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+/* 22 through 63 are reserved */
+
+/* Rx */
+/* For splitq virtchnl2_rx_flex_desc_adv desc members */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M \
+ MAKEMASK(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M \
+ MAKEMASK(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M \
+ MAKEMASK(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M \
+ MAKEMASK(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M \
+ MAKEMASK(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S 7
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST 8 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S 0 /* 2 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST 8 /* this entry must be last!!! */
+
+/* for singleq (flex) virtchnl2_rx_flex_desc fields */
+/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+
+/* for virtchnl2_rx_flex_desc.pkt_length member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M \
+ MAKEMASK(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+
+/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * for singleq (flex) virtchnl2_rx_flex_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S 8
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S 9
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST 16 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * for singleq (flex) virtchnl2_rx_flex_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S 0 /* 4 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S 5
+/* [10:6] reserved */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST 16 /* this entry must be last!!! */
+
+/* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S 63
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S 52
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M \
+ MAKEMASK(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S 38
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M \
+ MAKEMASK(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S 30
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M \
+ MAKEMASK(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S 19
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M \
+ MAKEMASK(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S 0
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M \
+ MAKEMASK(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+
+/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S 0
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S 1
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S 2
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S 3
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S 4
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S 5 /* 3 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S 8
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S 9 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S 11
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S 12 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S 14
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S 15
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S 16 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S 18
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST 19 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S 0
+
+/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S 0
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S 1
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S 2
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S 3 /* 3 bits */
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S 3
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S 4
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S 5
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S 6
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S 7
+
+/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA 0
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID 1
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV 2
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH 3
+
+/* Receive Descriptors */
+/* splitq buf
+ | 16| 0|
+ ----------------------------------------------------------------
+ | RSV | Buffer ID |
+ ----------------------------------------------------------------
+ | Rx packet buffer address |
+ ----------------------------------------------------------------
+ | Rx header buffer address |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | 0|
+ */
+struct virtchnl2_splitq_rx_buf_desc {
+ struct {
+ __le16 buf_id; /* Buffer Identifier */
+ __le16 rsvd0;
+ __le32 rsvd1;
+ } qword0;
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 rsvd2;
+}; /* read used with buffer queues */
+
+/* singleq buf
+ | 0|
+ ----------------------------------------------------------------
+ | Rx packet buffer address |
+ ----------------------------------------------------------------
+ | Rx header buffer address |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | 0|
+ */
+struct virtchnl2_singleq_rx_buf_desc {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 rsvd1;
+ __le64 rsvd2;
+}; /* read used with buffer queues */
+
+union virtchnl2_rx_buf_desc {
+ struct virtchnl2_singleq_rx_buf_desc read;
+ struct virtchnl2_splitq_rx_buf_desc split_rd;
+};
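
An illustrative sketch of filling a splitq buffer descriptor from the union above; the ring/idx/pkt_iova/hdr_iova names are placeholders for the driver's own bookkeeping, and the rte_cpu_to_le_* byte-order helpers are standard DPDK.

/* Illustrative sketch: post one buffer on a splitq buffer queue.
 * 'ring' is the descriptor ring (union virtchnl2_rx_buf_desc *),
 * 'idx' the slot, and pkt_iova/hdr_iova the DMA addresses of the
 * packet and header buffers the driver owns.
 */
volatile struct virtchnl2_splitq_rx_buf_desc *d = &ring[idx].split_rd;

d->qword0.buf_id = rte_cpu_to_le_16(idx);
d->pkt_addr = rte_cpu_to_le_64(pkt_iova);
d->hdr_addr = rte_cpu_to_le_64(hdr_iova);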
+
+/* (0x00) singleq wb(compl) */
+struct virtchnl2_singleq_base_rx_desc {
+ struct {
+ struct {
+ __le16 mirroring_status;
+ __le16 l2tag1;
+ } lo_dword;
+ union {
+ __le32 rss; /* RSS Hash */
+ __le32 fd_id; /* Flow Director filter id */
+ } hi_dword;
+ } qword0;
+ struct {
+ /* status/error/PTYPE/length */
+ __le64 status_error_ptype_len;
+ } qword1;
+ struct {
+ __le16 ext_status; /* extended status */
+ __le16 rsvd;
+ __le16 l2tag2_1;
+ __le16 l2tag2_2;
+ } qword2;
+ struct {
+ __le32 reserved;
+ __le32 fd_id;
+ } qword3;
+}; /* writeback */
+
+/* (0x01) singleq flex compl */
+struct virtchnl2_rx_flex_desc {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile id */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* (0x02) */
+struct virtchnl2_rx_flex_desc_nic {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le32 flow_id;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct virtchnl2_rx_flex_desc_sw {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 src_vsi; /* [10:15] are reserved */
+ __le16 flex_md1_rsvd;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le32 rsvd; /* flex words 2-3 are reserved */
+ __le32 ts_high;
+};
+
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct virtchnl2_rx_flex_desc_nic_2 {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flow_id;
+ __le16 src_vsi;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* Rx Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile Id 7
+ */
+struct virtchnl2_rx_flex_desc_adv {
+ /* Qword 0 */
+ u8 rxdid_ucast; /* profile_id=[3:0] */
+ /* rsvd=[5:4] */
+ /* ucast=[7:6] */
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0; /* ptype=[9:0] */
+ /* ip_hdr_err=[10:10] */
+ /* udp_len_err=[11:11] */
+ /* ff0=[15:12] */
+ __le16 pktlen_gen_bufq_id; /* plen=[13:0] */
+ /* gen=[14:14] only in splitq */
+ /* bufq_id=[15:15] only in splitq */
+ __le16 hdrlen_flags; /* header=[9:0] */
+ /* rsc=[10:10] only in splitq */
+ /* sph=[11:11] only in splitq */
+ /* ext_udp_0=[12:12] */
+ /* int_udp_0=[13:13] */
+ /* trunc_mirr=[14:14] */
+ /* miss_prepend=[15:15] */
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 fmd0;
+ __le16 fmd1;
+ /* Qword 2 */
+ __le16 fmd2;
+ u8 fflags2;
+ u8 hash3;
+ __le16 fmd3;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 fmd5;
+ __le16 fmd6;
+ __le16 fmd7_0;
+ __le16 fmd7_1;
+}; /* writeback */
+
+/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
+ * RxDID Profile Id 8
+ * Flex-field 0: BufferID
+ * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
+ * Flex-field 2: Hash[15:0]
+ * Flex-flags 2: Hash[23:16]
+ * Flex-field 3: L2TAG2
+ * Flex-field 5: L2TAG1
+ * Flex-field 7: Timestamp (upper 32 bits)
+ */
+struct virtchnl2_rx_flex_desc_adv_nic_3 {
+ /* Qword 0 */
+ u8 rxdid_ucast; /* profile_id=[3:0] */
+ /* rsvd=[5:4] */
+ /* ucast=[7:6] */
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0; /* ptype=[9:0] */
+ /* ip_hdr_err=[10:10] */
+ /* udp_len_err=[11:11] */
+ /* ff0=[15:12] */
+ __le16 pktlen_gen_bufq_id; /* plen=[13:0] */
+ /* gen=[14:14] only in splitq */
+ /* bufq_id=[15:15] only in splitq */
+ __le16 hdrlen_flags; /* header=[9:0] */
+ /* rsc=[10:10] only in splitq */
+ /* sph=[11:11] only in splitq */
+ /* ext_udp_0=[12:12] */
+ /* int_udp_0=[13:13] */
+ /* trunc_mirr=[14:14] */
+ /* miss_prepend=[15:15] */
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 buf_id; /* only in splitq */
+ union {
+ __le16 raw_cs;
+ __le16 l2tag1;
+ __le16 rscseglen;
+ } misc;
+ /* Qword 2 */
+ __le16 hash1;
+ union {
+ u8 fflags2;
+ u8 mirrorid;
+ u8 hash2;
+ } ff2_mirrid_hash2;
+ u8 hash3;
+ __le16 l2tag2;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 l2tag1;
+ __le16 fmd6;
+ __le32 ts_high;
+}; /* writeback */
+
+union virtchnl2_rx_desc {
+ struct virtchnl2_singleq_rx_buf_desc read;
+ struct virtchnl2_singleq_base_rx_desc base_wb;
+ struct virtchnl2_rx_flex_desc flex_wb;
+ struct virtchnl2_rx_flex_desc_nic flex_nic_wb;
+ struct virtchnl2_rx_flex_desc_sw flex_sw_wb;
+ struct virtchnl2_rx_flex_desc_nic_2 flex_nic_2_wb;
+ struct virtchnl2_rx_flex_desc_adv flex_adv_wb;
+ struct virtchnl2_rx_flex_desc_adv_nic_3 flex_adv_nic_3_wb;
+};
+
+#endif /* _VIRTCHNL_LAN_DESC_H_ */
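
To make the mask definitions above concrete, here is a minimal sketch of pulling the generation bit, packet length and ptype out of a splitq advanced writeback descriptor. Only the structs and macros from this header plus DPDK's rte_le_to_cpu_16() are assumed; the helper itself and the expected-generation bookkeeping are illustrative.

/* Illustrative sketch: parse a splitq Rx completion with the masks above.
 * 'rx_desc' points at a writeback descriptor; 'expected_gen' is the
 * generation bit the driver currently expects for this ring wrap.
 */
static inline int
example_parse_splitq_desc(volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
			  u8 expected_gen, u16 *pkt_len, u16 *ptype)
{
	u16 pktlen_gen = rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
	u16 gen = (pktlen_gen & VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
		  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;

	if (gen != expected_gen)
		return -1; /* descriptor not yet written back by HW */

	*pkt_len = (pktlen_gen & VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
		   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
	*ptype = (rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
		  VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
		 VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S;
	return 0;
}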
diff --git a/drivers/net/idpf/base/virtchnl_inline_ipsec.h b/drivers/net/idpf/base/virtchnl_inline_ipsec.h
new file mode 100644
index 0000000000..902f63bd51
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl_inline_ipsec.h
@@ -0,0 +1,567 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL_INLINE_IPSEC_H_
+#define _VIRTCHNL_INLINE_IPSEC_H_
+
+#define VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM 3
+#define VIRTCHNL_IPSEC_MAX_ALGO_CAP_NUM 16
+#define VIRTCHNL_IPSEC_MAX_TX_DESC_NUM 128
+#define VIRTCHNL_IPSEC_MAX_CRYPTO_ITEM_NUMBER 2
+#define VIRTCHNL_IPSEC_MAX_KEY_LEN 128
+#define VIRTCHNL_IPSEC_MAX_SA_DESTROY_NUM 8
+#define VIRTCHNL_IPSEC_SA_DESTROY 0
+#define VIRTCHNL_IPSEC_BROADCAST_VFID 0xFFFFFFFF
+#define VIRTCHNL_IPSEC_INVALID_REQ_ID 0xFFFF
+#define VIRTCHNL_IPSEC_INVALID_SA_CFG_RESP 0xFFFFFFFF
+#define VIRTCHNL_IPSEC_INVALID_SP_CFG_RESP 0xFFFFFFFF
+
+/* crypto type */
+#define VIRTCHNL_AUTH 1
+#define VIRTCHNL_CIPHER 2
+#define VIRTCHNL_AEAD 3
+
+/* caps enabled */
+#define VIRTCHNL_IPSEC_ESN_ENA BIT(0)
+#define VIRTCHNL_IPSEC_UDP_ENCAP_ENA BIT(1)
+#define VIRTCHNL_IPSEC_SA_INDEX_SW_ENA BIT(2)
+#define VIRTCHNL_IPSEC_AUDIT_ENA BIT(3)
+#define VIRTCHNL_IPSEC_BYTE_LIMIT_ENA BIT(4)
+#define VIRTCHNL_IPSEC_DROP_ON_AUTH_FAIL_ENA BIT(5)
+#define VIRTCHNL_IPSEC_ARW_CHECK_ENA BIT(6)
+#define VIRTCHNL_IPSEC_24BIT_SPI_ENA BIT(7)
+
+/* algorithm type */
+/* Hash Algorithm */
+#define VIRTCHNL_HASH_NO_ALG 0 /* NULL algorithm */
+#define VIRTCHNL_AES_CBC_MAC 1 /* AES-CBC-MAC algorithm */
+#define VIRTCHNL_AES_CMAC 2 /* AES CMAC algorithm */
+#define VIRTCHNL_AES_GMAC 3 /* AES GMAC algorithm */
+#define VIRTCHNL_AES_XCBC_MAC 4 /* AES XCBC algorithm */
+#define VIRTCHNL_MD5_HMAC 5 /* HMAC using MD5 algorithm */
+#define VIRTCHNL_SHA1_HMAC 6 /* HMAC using 128 bit SHA algorithm */
+#define VIRTCHNL_SHA224_HMAC 7 /* HMAC using 224 bit SHA algorithm */
+#define VIRTCHNL_SHA256_HMAC 8 /* HMAC using 256 bit SHA algorithm */
+#define VIRTCHNL_SHA384_HMAC 9 /* HMAC using 384 bit SHA algorithm */
+#define VIRTCHNL_SHA512_HMAC 10 /* HMAC using 512 bit SHA algorithm */
+#define VIRTCHNL_SHA3_224_HMAC 11 /* HMAC using 224 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_256_HMAC 12 /* HMAC using 256 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_384_HMAC 13 /* HMAC using 384 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_512_HMAC 14 /* HMAC using 512 bit SHA3 algorithm */
+/* Cipher Algorithm */
+#define VIRTCHNL_CIPHER_NO_ALG 15 /* NULL algorithm */
+#define VIRTCHNL_3DES_CBC 16 /* Triple DES algorithm in CBC mode */
+#define VIRTCHNL_AES_CBC 17 /* AES algorithm in CBC mode */
+#define VIRTCHNL_AES_CTR 18 /* AES algorithm in Counter mode */
+/* AEAD Algorithm */
+#define VIRTCHNL_AES_CCM 19 /* AES algorithm in CCM mode */
+#define VIRTCHNL_AES_GCM 20 /* AES algorithm in GCM mode */
+#define VIRTCHNL_CHACHA20_POLY1305 21 /* algorithm of ChaCha20-Poly1305 */
+
+/* protocol type */
+#define VIRTCHNL_PROTO_ESP 1
+#define VIRTCHNL_PROTO_AH 2
+#define VIRTCHNL_PROTO_RSVD1 3
+
+/* sa mode */
+#define VIRTCHNL_SA_MODE_TRANSPORT 1
+#define VIRTCHNL_SA_MODE_TUNNEL 2
+#define VIRTCHNL_SA_MODE_TRAN_TUN 3
+#define VIRTCHNL_SA_MODE_UNKNOWN 4
+
+/* sa direction */
+#define VIRTCHNL_DIR_INGRESS 1
+#define VIRTCHNL_DIR_EGRESS 2
+#define VIRTCHNL_DIR_INGRESS_EGRESS 3
+
+/* sa termination */
+#define VIRTCHNL_TERM_SOFTWARE 1
+#define VIRTCHNL_TERM_HARDWARE 2
+
+/* sa ip type */
+#define VIRTCHNL_IPV4 1
+#define VIRTCHNL_IPV6 2
+
+/* for virtchnl_ipsec_resp */
+enum inline_ipsec_resp {
+ INLINE_IPSEC_SUCCESS = 0,
+ INLINE_IPSEC_FAIL = -1,
+ INLINE_IPSEC_ERR_FIFO_FULL = -2,
+ INLINE_IPSEC_ERR_NOT_READY = -3,
+ INLINE_IPSEC_ERR_VF_DOWN = -4,
+ INLINE_IPSEC_ERR_INVALID_PARAMS = -5,
+ INLINE_IPSEC_ERR_NO_MEM = -6,
+};
+
+/* Detailed opcodes for DPDK and IPsec use */
+enum inline_ipsec_ops {
+ INLINE_IPSEC_OP_GET_CAP = 0,
+ INLINE_IPSEC_OP_GET_STATUS = 1,
+ INLINE_IPSEC_OP_SA_CREATE = 2,
+ INLINE_IPSEC_OP_SA_UPDATE = 3,
+ INLINE_IPSEC_OP_SA_DESTROY = 4,
+ INLINE_IPSEC_OP_SP_CREATE = 5,
+ INLINE_IPSEC_OP_SP_DESTROY = 6,
+ INLINE_IPSEC_OP_SA_READ = 7,
+ INLINE_IPSEC_OP_EVENT = 8,
+ INLINE_IPSEC_OP_RESP = 9,
+};
+
+#pragma pack(1)
+/* Not all fields are valid; if a certain field is invalid, set all its bits to 1 */
+struct virtchnl_algo_cap {
+ u32 algo_type;
+
+ u16 block_size;
+
+ u16 min_key_size;
+ u16 max_key_size;
+ u16 inc_key_size;
+
+ u16 min_iv_size;
+ u16 max_iv_size;
+ u16 inc_iv_size;
+
+ u16 min_digest_size;
+ u16 max_digest_size;
+ u16 inc_digest_size;
+
+ u16 min_aad_size;
+ u16 max_aad_size;
+ u16 inc_aad_size;
+};
+#pragma pack()
+
+/* The VF records the crypto capabilities received from the PF via virtchnl */
+struct virtchnl_sym_crypto_cap {
+ u8 crypto_type;
+ u8 algo_cap_num;
+ struct virtchnl_algo_cap algo_cap_list[VIRTCHNL_IPSEC_MAX_ALGO_CAP_NUM];
+};
+
+/* VIRTCHNL_OP_GET_IPSEC_CAP
+ * VF passes virtchnl_ipsec_cap to the PF
+ * and the PF returns the IPsec capabilities over virtchnl.
+ */
+#pragma pack(1)
+struct virtchnl_ipsec_cap {
+ /* max number of SA per VF */
+ u16 max_sa_num;
+
+ /* IPsec SA Protocol - value ref VIRTCHNL_PROTO_XXX */
+ u8 virtchnl_protocol_type;
+
+ /* IPsec SA Mode - value ref VIRTCHNL_SA_MODE_XXX */
+ u8 virtchnl_sa_mode;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 termination_mode;
+
+ /* number of supported crypto capability */
+ u8 crypto_cap_num;
+
+ /* descriptor ID */
+ u16 desc_id;
+
+ /* capabilities enabled - value ref VIRTCHNL_IPSEC_XXX_ENA */
+ u32 caps_enabled;
+
+ /* crypto capabilities */
+ struct virtchnl_sym_crypto_cap cap[VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM];
+};
+
+/* configuration of crypto function */
+struct virtchnl_ipsec_crypto_cfg_item {
+ u8 crypto_type;
+
+ u32 algo_type;
+
+ /* Length of valid IV data. */
+ u16 iv_len;
+
+ /* Length of digest */
+ u16 digest_len;
+
+ /* SA salt */
+ u32 salt;
+
+ /* The length of the symmetric key */
+ u16 key_len;
+
+ /* key data buffer */
+ u8 key_data[VIRTCHNL_IPSEC_MAX_KEY_LEN];
+};
+#pragma pack()
+
+struct virtchnl_ipsec_sym_crypto_cfg {
+ struct virtchnl_ipsec_crypto_cfg_item
+ items[VIRTCHNL_IPSEC_MAX_CRYPTO_ITEM_NUMBER];
+};
+
+#pragma pack(1)
+/* VIRTCHNL_OP_IPSEC_SA_CREATE
+ * VF sends this SA configuration to the PF using virtchnl;
+ * the PF creates the SA as configured and the PF driver returns
+ * a unique index (sa_idx) for the created SA.
+ */
+struct virtchnl_ipsec_sa_cfg {
+ /* IPsec SA Protocol - AH/ESP */
+ u8 virtchnl_protocol_type;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 virtchnl_termination;
+
+ /* type of outer IP - IPv4/IPv6 */
+ u8 virtchnl_ip_type;
+
+ /* type of esn - !0:enable/0:disable */
+ u8 esn_enabled;
+
+ /* udp encap - !0:enable/0:disable */
+ u8 udp_encap_enabled;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* reserved */
+ u8 reserved1;
+
+ /* SA security parameter index */
+ u32 spi;
+
+ /* outer src ip address */
+ u8 src_addr[16];
+
+ /* outer dst ip address */
+ u8 dst_addr[16];
+
+ /* SPD reference. Used to link an SA with its policy.
+ * PF drivers may ignore this field.
+ */
+ u16 spd_ref;
+
+ /* high 32 bits of esn */
+ u32 esn_hi;
+
+ /* low 32 bits of esn */
+ u32 esn_low;
+
+ /* When enabled, sa_index must be valid */
+ u8 sa_index_en;
+
+ /* SA index when sa_index_en is true */
+ u32 sa_index;
+
+ /* auditing mode - enable/disable */
+ u8 audit_en;
+
+ /* lifetime byte limit - enable/disable
+ * When enabled, byte_limit_hard and byte_limit_soft
+ * must be valid.
+ */
+ u8 byte_limit_en;
+
+ /* hard byte limit count */
+ u64 byte_limit_hard;
+
+ /* soft byte limit count */
+ u64 byte_limit_soft;
+
+ /* drop on authentication failure - enable/disable */
+ u8 drop_on_auth_fail_en;
+
+ /* anti-replay window check - enable/disable
+ * When enabled, arw_size must be valid.
+ */
+ u8 arw_check_en;
+
+ /* size of arw window, offset by 1. Setting to 0
+ * represents ARW window size of 1. Setting to 127
+ * represents ARW window size of 128
+ */
+ u8 arw_size;
+
+ /* no ip offload mode - enable/disable
+ * When enabled, ip type and address must not be valid.
+ */
+ u8 no_ip_offload_en;
+
+ /* SA Domain. Used to logically separate an SADB into groups.
+ * PF drivers supporting a single group ignore this field.
+ */
+ u16 sa_domain;
+
+ /* crypto configuration */
+ struct virtchnl_ipsec_sym_crypto_cfg crypto_cfg;
+};
+#pragma pack()
+
+/* VIRTCHNL_OP_IPSEC_SA_UPDATE
+ * VF sends the configuration for an SA index to the PF;
+ * the PF will update the SA according to the configuration.
+ */
+struct virtchnl_ipsec_sa_update {
+ u32 sa_index; /* SA to update */
+ u32 esn_hi; /* high 32 bits of esn */
+ u32 esn_low; /* low 32 bits of esn */
+};
+
+#pragma pack(1)
+/* VIRTCHNL_OP_IPSEC_SA_DESTROY
+ * VF sends the configuration for an SA index to the PF;
+ * the PF will destroy the SA according to the configuration.
+ * The flag bitmap indicates whether all SAs or only the selected
+ * SAs will be destroyed.
+ */
+struct virtchnl_ipsec_sa_destroy {
+ /* All zero bitmap indicates all SA will be destroyed.
+ * Non-zero bitmap indicates the selected SA in
+ * array sa_index will be destroyed.
+ */
+ u8 flag;
+
+ /* selected SA index */
+ u32 sa_index[VIRTCHNL_IPSEC_MAX_SA_DESTROY_NUM];
+};
+
+/* VIRTCHNL_OP_IPSEC_SA_READ
+ * VF sends this SA configuration to the PF using virtchnl;
+ * the PF reads the SA and returns the configuration for the created SA.
+ */
+struct virtchnl_ipsec_sa_read {
+ /* SA valid - invalid/valid */
+ u8 valid;
+
+ /* SA active - inactive/active */
+ u8 active;
+
+ /* SA SN rollover - not_rollover/rollover */
+ u8 sn_rollover;
+
+ /* IPsec SA Protocol - AH/ESP */
+ u8 virtchnl_protocol_type;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 virtchnl_termination;
+
+ /* auditing mode - enable/disable */
+ u8 audit_en;
+
+ /* lifetime byte limit - enable/disable
+ * When set to limit, byte_limit_hard and byte_limit_soft
+ * must be valid.
+ */
+ u8 byte_limit_en;
+
+ /* hard byte limit count */
+ u64 byte_limit_hard;
+
+ /* soft byte limit count */
+ u64 byte_limit_soft;
+
+ /* drop on authentication failure - enable/disable */
+ u8 drop_on_auth_fail_en;
+
+ /* anti-replay window check - enable/disable
+ * When set to check, arw_size, arw_top, and arw must be valid
+ */
+ u8 arw_check_en;
+
+ /* size of arw window, offset by 1. Setting to 0
+ * represents ARW window size of 1. Setting to 127
+ * represents ARW window size of 128
+ */
+ u8 arw_size;
+
+ /* reserved */
+ u8 reserved1;
+
+ /* top of anti-replay-window */
+ u64 arw_top;
+
+ /* anti-replay-window */
+ u8 arw[16];
+
+ /* packets processed */
+ u64 packets_processed;
+
+ /* bytes processed */
+ u64 bytes_processed;
+
+ /* packets dropped */
+ u32 packets_dropped;
+
+ /* authentication failures */
+ u32 auth_fails;
+
+ /* ARW check failures */
+ u32 arw_fails;
+
+ /* type of esn - enable/disable */
+ u8 esn;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* SA security parameter index */
+ u32 spi;
+
+ /* SA salt */
+ u32 salt;
+
+ /* high 32 bits of esn */
+ u32 esn_hi;
+
+ /* low 32 bits of esn */
+ u32 esn_low;
+
+ /* SA Domain. Used to logically separate an SADB into groups.
+ * PF drivers supporting a single group ignore this field.
+ */
+ u16 sa_domain;
+
+ /* SPD reference. Used to link an SA with its policy.
+ * PF drivers may ignore this field.
+ */
+ u16 spd_ref;
+
+ /* crypto configuration. Salt and keys are set to 0 */
+ struct virtchnl_ipsec_sym_crypto_cfg crypto_cfg;
+};
+#pragma pack()
+
+/* Add allowlist entry in IES */
+struct virtchnl_ipsec_sp_cfg {
+ u32 spi;
+ u32 dip[4];
+
+ /* Drop frame if true or redirect to QAT if false. */
+ u8 drop;
+
+ /* Congestion domain. For future use. */
+ u8 cgd;
+
+ /* 0 for IPv4 table, 1 for IPv6 table. */
+ u8 table_id;
+
+ /* Set TC (congestion domain) if true. For future use. */
+ u8 set_tc;
+
+ /* 0 for NAT-T unsupported, 1 for NAT-T supported */
+ u8 is_udp;
+
+ /* reserved */
+ u8 reserved;
+
+ /* NAT-T UDP port number. Only valid in case NAT-T supported */
+ u16 udp_port;
+};
+
+#pragma pack(1)
+/* Delete allowlist entry in IES */
+struct virtchnl_ipsec_sp_destroy {
+ /* 0 for IPv4 table, 1 for IPv6 table. */
+ u8 table_id;
+ u32 rule_id;
+};
+#pragma pack()
+
+/* Response from IES to allowlist operations */
+struct virtchnl_ipsec_sp_cfg_resp {
+ u32 rule_id;
+};
+
+struct virtchnl_ipsec_sa_cfg_resp {
+ u32 sa_handle;
+};
+
+#define INLINE_IPSEC_EVENT_RESET 0x1
+#define INLINE_IPSEC_EVENT_CRYPTO_ON 0x2
+#define INLINE_IPSEC_EVENT_CRYPTO_OFF 0x4
+
+struct virtchnl_ipsec_event {
+ u32 ipsec_event_data;
+};
+
+#define INLINE_IPSEC_STATUS_AVAILABLE 0x1
+#define INLINE_IPSEC_STATUS_UNAVAILABLE 0x2
+
+struct virtchnl_ipsec_status {
+ u32 status;
+};
+
+struct virtchnl_ipsec_resp {
+ u32 resp;
+};
+
+/* Internal message descriptor for VF <-> IPsec communication */
+struct inline_ipsec_msg {
+ u16 ipsec_opcode;
+ u16 req_id;
+
+ union {
+ /* IPsec request */
+ struct virtchnl_ipsec_sa_cfg sa_cfg[0];
+ struct virtchnl_ipsec_sp_cfg sp_cfg[0];
+ struct virtchnl_ipsec_sa_update sa_update[0];
+ struct virtchnl_ipsec_sa_destroy sa_destroy[0];
+ struct virtchnl_ipsec_sp_destroy sp_destroy[0];
+
+ /* IPsec response */
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp[0];
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp[0];
+ struct virtchnl_ipsec_cap ipsec_cap[0];
+ struct virtchnl_ipsec_status ipsec_status[0];
+ /* response to del_sa, del_sp, update_sa */
+ struct virtchnl_ipsec_resp ipsec_resp[0];
+
+ /* IPsec event (no req_id is required) */
+ struct virtchnl_ipsec_event event[0];
+
+ /* Reserved */
+ struct virtchnl_ipsec_sa_read sa_read[0];
+ } ipsec_data;
+};
+
+static inline u16 virtchnl_inline_ipsec_val_msg_len(u16 opcode)
+{
+ u16 valid_len = sizeof(struct inline_ipsec_msg);
+
+ switch (opcode) {
+ case INLINE_IPSEC_OP_GET_CAP:
+ case INLINE_IPSEC_OP_GET_STATUS:
+ break;
+ case INLINE_IPSEC_OP_SA_CREATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_cfg);
+ break;
+ case INLINE_IPSEC_OP_SP_CREATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sp_cfg);
+ break;
+ case INLINE_IPSEC_OP_SA_UPDATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_update);
+ break;
+ case INLINE_IPSEC_OP_SA_DESTROY:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_destroy);
+ break;
+ case INLINE_IPSEC_OP_SP_DESTROY:
+ valid_len += sizeof(struct virtchnl_ipsec_sp_destroy);
+ break;
+ /* Only for msg length calculation of the response to VF in case of
+ * inline ipsec failure.
+ */
+ case INLINE_IPSEC_OP_RESP:
+ valid_len += sizeof(struct virtchnl_ipsec_resp);
+ break;
+ default:
+ valid_len = 0;
+ break;
+ }
+
+ return valid_len;
+}
+
+#endif /* _VIRTCHNL_INLINE_IPSEC_H_ */
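
For reference, a hedged sketch of how the length helper above pairs with the message layout when a VF builds an SA-create request. Only the structs, opcodes and constants from this header are assumed; the helper name, calloc() and the req_id/spi parameters are placeholders for the driver's own mailbox plumbing.

/* Illustrative sketch: size and fill an SA-create request. Allocation and
 * the eventual mailbox send are placeholders.
 */
static struct inline_ipsec_msg *
example_build_sa_create(u16 req_id, u32 spi, u16 *msg_len)
{
	u16 len = virtchnl_inline_ipsec_val_msg_len(INLINE_IPSEC_OP_SA_CREATE);
	struct inline_ipsec_msg *msg = calloc(1, len);

	if (msg == NULL)
		return NULL;

	msg->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;
	msg->req_id = req_id;
	msg->ipsec_data.sa_cfg[0].virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;
	msg->ipsec_data.sa_cfg[0].virtchnl_ip_type = VIRTCHNL_IPV4;
	msg->ipsec_data.sa_cfg[0].spi = spi;
	/* ...remaining virtchnl_ipsec_sa_cfg fields as required... */

	*msg_len = len; /* 'msg' and 'len' go to the VF->PF mailbox send */
	return msg;
}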
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 2/9] net/idpf/base: add OS specific implementation
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-09 9:11 ` [RFC v2 1/9] net/idpf/base: introduce base code Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 3/9] net/idpf: support device initialization Junfeng Guo
` (6 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add some macro definitions and small functions which are specific
to DPDK.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/base/iecm_osdep.h | 365 +++++++++++++++++++++++++++++
1 file changed, 365 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_osdep.h
diff --git a/drivers/net/idpf/base/iecm_osdep.h b/drivers/net/idpf/base/iecm_osdep.h
new file mode 100644
index 0000000000..60e21fbc1b
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_osdep.h
@@ -0,0 +1,365 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_OSDEP_H_
+#define _IECM_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../idpf_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef int16_t s16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+typedef int64_t s64;
+
+typedef enum iecm_status iecm_status;
+typedef struct iecm_lock iecm_lock;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x) ((x) & 0xFFFF)
+#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN 6
+#endif
+
+#ifndef __le16
+#define __le16 uint16_t
+#endif
+#ifndef __le32
+#define __le32 uint32_t
+#endif
+#ifndef __le64
+#define __le64 uint64_t
+#endif
+#ifndef __be16
+#define __be16 uint16_t
+#endif
+#ifndef __be32
+#define __be32 uint32_t
+#endif
+#ifndef __be64
+#define __be64 uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((__unused__))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((__unused__))
+#endif
+#ifndef __packed
+#define __packed __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1ULL << (a))
+#endif
+
+#define FALSE 0
+#define TRUE 1
+#define false 0
+#define true 1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define iecm_debug(h, m, s, ...) \
+ do { \
+ if (((m) & (h)->debug_mask)) \
+ PMD_DRV_LOG_RAW(DEBUG, "iecm %02x.%x " s, \
+ (h)->bus.device, (h)->bus.func, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#define iecm_info(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_warn(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_debug_array(hw, type, rowsize, groupsize, buf, len) \
+ do { \
+ struct iecm_hw *hw_l = hw; \
+ u16 len_l = len; \
+ u8 *buf_l = buf; \
+ int i; \
+ for (i = 0; i < len_l; i += 8) \
+ iecm_debug(hw_l, type, \
+ "0x%04X 0x%016"PRIx64"\n", \
+ i, *((u64 *)((buf_l) + i))); \
+ } while (0)
+#define iecm_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF iecm_snprintf
+#endif
+
+#define IECM_PCI_REG(reg) rte_read32(reg)
+#define IECM_PCI_REG_ADDR(a, reg) \
+ ((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define IECM_PCI_REG64(reg) rte_read64(reg)
+#define IECM_PCI_REG_ADDR64(a, reg) \
+ ((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
+
+#define iecm_wmb() rte_io_wmb()
+#define iecm_rmb() rte_io_rmb()
+#define iecm_mb() rte_io_mb()
+
+static inline uint32_t iecm_read_addr(volatile void *addr)
+{
+ return rte_le_to_cpu_32(IECM_PCI_REG(addr));
+}
+
+static inline uint64_t iecm_read_addr64(volatile void *addr)
+{
+ return rte_le_to_cpu_64(IECM_PCI_REG64(addr));
+}
+
+#define IECM_PCI_REG_WRITE(reg, value) \
+ rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define IECM_PCI_REG_WRITE64(reg, value) \
+ rte_write64((rte_cpu_to_le_64(value)), reg)
+
+#define IECM_READ_REG(hw, reg) iecm_read_addr(IECM_PCI_REG_ADDR((hw), (reg)))
+#define IECM_WRITE_REG(hw, reg, value) \
+ IECM_PCI_REG_WRITE(IECM_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) iecm_read_addr(IECM_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+ IECM_PCI_REG_WRITE(IECM_PCI_REG_ADDR((a), (reg)), (value))
+#define div64_long(n, d) ((n) / (d))
+#define rd64(a, reg) iecm_read_addr64(IECM_PCI_REG_ADDR64((a), (reg)))
+
+#define BITS_PER_BYTE 8
+
+/* memory allocation tracking */
+struct iecm_dma_mem {
+ void *va;
+ u64 pa;
+ u32 size;
+ const void *zone;
+} __attribute__((packed));
+
+struct iecm_virt_mem {
+ void *va;
+ u32 size;
+} __attribute__((packed));
+
+#define iecm_malloc(h, s) rte_zmalloc(NULL, s, 0)
+#define iecm_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define iecm_free(h, m) rte_free(m)
+
+#define iecm_memset(a, b, c, d) memset((a), (b), (c))
+#define iecm_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define iecm_memdup(a, b, c, d) rte_memcpy(iecm_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+/* SW spinlock */
+struct iecm_lock {
+ rte_spinlock_t spinlock;
+};
+
+static inline void
+iecm_init_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+iecm_acquire_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+iecm_release_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+iecm_destroy_lock(__attribute__((unused)) struct iecm_lock *sp)
+{
+}
+
+struct iecm_hw;
+
+static inline void *
+iecm_alloc_dma_mem(__attribute__((unused)) struct iecm_hw *hw,
+ struct iecm_dma_mem *mem, u64 size)
+{
+ const struct rte_memzone *mz = NULL;
+ char z_name[RTE_MEMZONE_NAMESIZE];
+
+ if (!mem)
+ return NULL;
+
+ snprintf(z_name, sizeof(z_name), "iecm_dma_%"PRIu64, rte_rand());
+ mz = rte_memzone_reserve_aligned(z_name, size, SOCKET_ID_ANY,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_PGSIZE_4K);
+ if (!mz)
+ return NULL;
+
+ mem->size = size;
+ mem->va = mz->addr;
+ mem->pa = mz->iova;
+ mem->zone = (const void *)mz;
+ memset(mem->va, 0, size);
+
+ return mem->va;
+}
+
+static inline void
+iecm_free_dma_mem(__attribute__((unused)) struct iecm_hw *hw,
+ struct iecm_dma_mem *mem)
+{
+ rte_memzone_free((const struct rte_memzone *)mem->zone);
+ mem->size = 0;
+ mem->va = NULL;
+ mem->pa = 0;
+}
+
+static inline u8
+iecm_hweight8(u32 num)
+{
+ u8 bits = 0;
+ u32 i;
+
+ for (i = 0; i < 8; i++) {
+ bits += (u8)(num & 0x1);
+ num >>= 1;
+ }
+
+ return bits;
+}
+
+static inline u8
+iecm_hweight32(u32 num)
+{
+ u8 bits = 0;
+ u32 i;
+
+ for (i = 0; i < 32; i++) {
+ bits += (u8)(num & 0x1);
+ num >>= 1;
+ }
+
+ return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define iecm_usec_delay(x) rte_delay_us(x)
+#define iecm_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#ifndef IECM_DBG_TRACE
+#define IECM_DBG_TRACE BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef IECM_INTEL_VENDOR_ID
+#define IECM_INTEL_VENDOR_ID 0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+ ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+ ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr) \
+ ((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+ (((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+ ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+ ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#ifndef LIST_HEAD_TYPE
+#define LIST_HEAD_TYPE(list_name, type) LIST_HEAD(list_name, type)
+#endif
+
+#ifndef LIST_ENTRY_TYPE
+#define LIST_ENTRY_TYPE(type) LIST_ENTRY(type)
+#endif
+
+#ifndef LIST_FOR_EACH_ENTRY_SAFE
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, temp, head, entry_type, list) \
+ LIST_FOREACH(pos, head, list)
+
+#endif
+
+#ifndef LIST_FOR_EACH_ENTRY
+#define LIST_FOR_EACH_ENTRY(pos, head, entry_type, list) \
+ LIST_FOREACH(pos, head, list)
+
+#endif
+
+#endif /* _IECM_OSDEP_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 3/9] net/idpf: support device initialization
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-09 9:11 ` [RFC v2 1/9] net/idpf/base: introduce base code Junfeng Guo
2022-05-09 9:11 ` [RFC v2 2/9] net/idpf/base: add OS specific implementation Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 4/9] net/idpf: support queue ops Junfeng Guo
` (5 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing
Cc: dev, junfeng.guo, Xiaoyun Li, Xiao Wang
Support device initialization and add dev ops for the IDPF PMD
(see the usage sketch after the list):
dev_configure
dev_start
dev_stop
dev_close
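For reference, an application-side sketch (not part of the patch) showing how
these dev ops are reached through the standard ethdev API once the port has
been probed. The function name, port_id handling and the bare-bones
configuration are illustrative; a real application would also set up Rx/Tx
queues (added in a later patch of this series) between configure and start.

#include <rte_ethdev.h>

static int example_port_bringup(uint16_t port_id)
{
    struct rte_eth_conf conf = { 0 };  /* defaults; RSS config comes later */
    int ret;

    /* dispatches to idpf_dev_configure() via idpf_eth_dev_ops */
    ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
    if (ret < 0)
        return ret;

    /* dispatches to idpf_dev_start(); idpf_dev_stop()/idpf_dev_close()
     * are reached later through rte_eth_dev_stop()/rte_eth_dev_close().
     */
    return rte_eth_dev_start(port_id);
}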
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 652 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 200 ++++++++++
drivers/net/idpf/idpf_logs.h | 38 ++
drivers/net/idpf/idpf_vchnl.c | 465 +++++++++++++++++++++++
drivers/net/idpf/meson.build | 18 +
drivers/net/idpf/version.map | 3 +
drivers/net/meson.build | 1 +
7 files changed, 1377 insertions(+)
create mode 100644 drivers/net/idpf/idpf_ethdev.c
create mode 100644 drivers/net/idpf/idpf_ethdev.h
create mode 100644 drivers/net/idpf/idpf_logs.h
create mode 100644 drivers/net/idpf/idpf_vchnl.c
create mode 100644 drivers/net/idpf/meson.build
create mode 100644 drivers/net/idpf/version.map
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
new file mode 100644
index 0000000000..e34165a87d
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -0,0 +1,652 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "idpf_ethdev.h"
+
+#define VPORT_NUM "vport_num"
+
+struct idpf_adapter *adapter;
+uint16_t vport_num = 1;
+
+static const char * const idpf_valid_args[] = {
+ VPORT_NUM,
+ NULL
+};
+
+static int idpf_dev_configure(struct rte_eth_dev *dev);
+static int idpf_dev_start(struct rte_eth_dev *dev);
+static int idpf_dev_stop(struct rte_eth_dev *dev);
+static int idpf_dev_close(struct rte_eth_dev *dev);
+
+static const struct eth_dev_ops idpf_eth_dev_ops = {
+ .dev_configure = idpf_dev_configure,
+ .dev_start = idpf_dev_start,
+ .dev_stop = idpf_dev_stop,
+ .dev_close = idpf_dev_close,
+};
+
+
+static int
+idpf_init_vport_req_info(struct rte_eth_dev *dev)
+{
+ struct virtchnl2_create_vport *vport_info;
+ uint16_t idx = adapter->next_vport_idx;
+
+ if (!adapter->vport_req_info[idx]) {
+ adapter->vport_req_info[idx] = rte_zmalloc(NULL,
+ sizeof(struct virtchnl2_create_vport), 0);
+ if (!adapter->vport_req_info[idx]) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info");
+ return -1;
+ }
+ }
+
+ vport_info =
+ (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+
+ vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+
+ return 0;
+}
+
+static uint16_t
+idpf_get_next_vport_idx(struct idpf_vport **vports, uint16_t max_vport_nb,
+ uint16_t cur_vport_idx)
+{
+ uint16_t vport_idx;
+ uint16_t i;
+
+ if (cur_vport_idx + 1 < max_vport_nb && !vports[cur_vport_idx + 1]) {
+ vport_idx = cur_vport_idx + 1;
+ return vport_idx;
+ }
+
+ for (i = 0; i < max_vport_nb; i++) {
+ if (!vports[i])
+ break;
+ }
+
+ if (i == max_vport_nb)
+ vport_idx = IDPF_INVALID_VPORT_IDX;
+ else
+ vport_idx = i;
+
+ return vport_idx;
+}
+
+#ifndef IDPF_RSS_KEY_LEN
+#define IDPF_RSS_KEY_LEN 52
+#endif
+
+static int
+idpf_init_vport(struct rte_eth_dev *dev)
+{
+ uint16_t idx = adapter->next_vport_idx;
+ struct virtchnl2_create_vport *vport_info =
+ (struct virtchnl2_create_vport *)adapter->vport_recv_info[idx];
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int i;
+
+ vport->adapter = adapter;
+ vport->vport_id = vport_info->vport_id;
+ vport->txq_model = vport_info->txq_model;
+ vport->rxq_model = vport_info->rxq_model;
+ vport->num_tx_q = vport_info->num_tx_q;
+ vport->num_tx_complq = vport_info->num_tx_complq;
+ vport->num_rx_q = vport_info->num_rx_q;
+ vport->num_rx_bufq = vport_info->num_rx_bufq;
+ vport->max_mtu = vport_info->max_mtu;
+ rte_memcpy(vport->default_mac_addr,
+ vport_info->default_mac_addr, ETH_ALEN);
+ vport->rss_algorithm = vport_info->rss_algorithm;
+ vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+ vport_info->rss_key_size);
+ vport->rss_lut_size = vport_info->rss_lut_size;
+ vport->sw_idx = idx;
+
+ for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+ if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport->chunks_info.tx_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.tx_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.tx_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport->chunks_info.rx_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.rx_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.rx_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) {
+ vport->chunks_info.tx_compl_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.tx_compl_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.tx_compl_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) {
+ vport->chunks_info.rx_buf_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.rx_buf_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.rx_buf_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ }
+ }
+
+ adapter->vports[idx] = vport;
+ adapter->cur_vport_nb++;
+ adapter->next_vport_idx = idpf_get_next_vport_idx(adapter->vports,
+ adapter->max_vport_nb, idx);
+ if (adapter->next_vport_idx == IDPF_INVALID_VPORT_IDX) {
+ PMD_INIT_LOG(ERR, "Failed to get next vport id");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+idpf_dev_configure(struct rte_eth_dev *dev)
+{
+ int ret = 0;
+
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |=
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ ret = idpf_init_vport_req_info(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+ return ret;
+ }
+
+ ret = idpf_create_vport(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create vport.");
+ return ret;
+ }
+
+ ret = idpf_init_vport(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init vports.");
+ return ret;
+ }
+
+ return ret;
+}
+
+static int
+idpf_dev_start(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ vport->stopped = 0;
+
+ if (idpf_ena_dis_vport(vport, true)) {
+ PMD_DRV_LOG(ERR, "Failed to enable vport");
+ goto err_vport;
+ }
+
+ return 0;
+
+err_vport:
+ return -1;
+}
+
+static int
+idpf_dev_stop(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (vport->stopped == 1)
+ return 0;
+
+ if (idpf_ena_dis_vport(vport, false))
+ PMD_DRV_LOG(ERR, "disable vport failed");
+
+ vport->stopped = 1;
+ dev->data->dev_started = 0;
+
+ return 0;
+}
+
+static int
+idpf_dev_close(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ idpf_dev_stop(dev);
+ idpf_destroy_vport(vport);
+
+ return 0;
+}
+
+static void
+idpf_reset_pf(struct iecm_hw *hw)
+{
+ uint32_t reg;
+
+ reg = IECM_READ_REG(hw, PFGEN_CTRL);
+ IECM_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct iecm_hw *hw)
+{
+ uint32_t reg;
+ int i;
+
+ for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+ reg = IECM_READ_REG(hw, PFGEN_RSTAT);
+ if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+ return 0;
+ rte_delay_ms(1000);
+ }
+
+ PMD_INIT_LOG(ERR, "IDPF reset timeout");
+ return -EBUSY;
+}
+
+#define CTLQ_NUM 2
+static int
+idpf_init_mbx(struct iecm_hw *hw)
+{
+ struct iecm_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+ {
+ .type = IECM_CTLQ_TYPE_MAILBOX_TX,
+ .id = IDPF_CTLQ_ID,
+ .len = IDPF_CTLQ_LEN,
+ .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+ .reg = {
+ .head = PF_FW_ATQH,
+ .tail = PF_FW_ATQT,
+ .len = PF_FW_ATQLEN,
+ .bah = PF_FW_ATQBAH,
+ .bal = PF_FW_ATQBAL,
+ .len_mask = PF_FW_ATQLEN_ATQLEN_M,
+ .len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+ .head_mask = PF_FW_ATQH_ATQH_M,
+ }
+ },
+ {
+ .type = IECM_CTLQ_TYPE_MAILBOX_RX,
+ .id = IDPF_CTLQ_ID,
+ .len = IDPF_CTLQ_LEN,
+ .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+ .reg = {
+ .head = PF_FW_ARQH,
+ .tail = PF_FW_ARQT,
+ .len = PF_FW_ARQLEN,
+ .bah = PF_FW_ARQBAH,
+ .bal = PF_FW_ARQBAL,
+ .len_mask = PF_FW_ARQLEN_ARQLEN_M,
+ .len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+ .head_mask = PF_FW_ARQH_ARQH_M,
+ }
+ }
+ };
+ struct iecm_ctlq_info *ctlq;
+ int ret = 0;
+
+ ret = iecm_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+ if (ret)
+ return ret;
+
+ LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+ struct iecm_ctlq_info, cq_list) {
+ if (ctlq->q_id == IDPF_CTLQ_ID && ctlq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ hw->asq = ctlq;
+ if (ctlq->q_id == IDPF_CTLQ_ID && ctlq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX)
+ hw->arq = ctlq;
+ }
+
+ if (!hw->asq || !hw->arq) {
+ iecm_ctlq_deinit(hw);
+ ret = -ENOENT;
+ }
+
+ return ret;
+}
+
+static int
+idpf_adapter_init(struct rte_eth_dev *dev)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ struct rte_pci_device *pci_dev = IDPF_DEV_TO_PCI(dev);
+ int ret = 0;
+
+ if (adapter->initialized)
+ return 0;
+
+ hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+ hw->hw_addr_len = pci_dev->mem_resource[0].len;
+ hw->back = adapter;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->device_id = pci_dev->id.device_id;
+ hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+ idpf_reset_pf(hw);
+ ret = idpf_check_pf_reset_done(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "IDPF is still resetting");
+ goto err;
+ }
+
+ ret = idpf_init_mbx(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init mailbox");
+ goto err;
+ }
+
+ adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp", IDPF_DFLT_MBX_BUF_SIZE, 0);
+ if (!adapter->mbx_resp) {
+ PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+ goto err_mbx;
+ }
+
+ if (idpf_check_api_version(adapter)) {
+ PMD_INIT_LOG(ERR, "Failed to check api version");
+ goto err_api;
+ }
+
+ adapter->caps = rte_zmalloc("idpf_caps",
+ sizeof(struct virtchnl2_get_capabilities), 0);
+ if (!adapter->caps) {
+ PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
+ goto err_api;
+ }
+
+ if (idpf_get_caps(adapter)) {
+ PMD_INIT_LOG(ERR, "Failed to get capabilities");
+ goto err_caps;
+ }
+
+ adapter->max_vport_nb = adapter->caps->max_vports;
+
+ adapter->vport_req_info = rte_zmalloc("vport_req_info",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vport_req_info),
+ 0);
+ if (!adapter->vport_req_info) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info memory");
+ goto err_caps;
+ }
+
+ adapter->vport_recv_info = rte_zmalloc("vport_recv_info",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vport_recv_info),
+ 0);
+ if (!adapter->vport_recv_info) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_recv_info memory");
+ goto err_vport_recv_info;
+ }
+
+ adapter->vports = rte_zmalloc("vports",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vports),
+ 0);
+ if (!adapter->vports) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+ goto err_vports;
+ }
+
+ adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
+ sizeof(struct virtchnl2_config_rx_queues)) /
+ sizeof(struct virtchnl2_rxq_info);
+ adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
+ sizeof(struct virtchnl2_config_tx_queues)) /
+ sizeof(struct virtchnl2_txq_info);
+
+ adapter->cur_vport_nb = 0;
+ adapter->next_vport_idx = 0;
+ adapter->initialized = true;
+
+ return ret;
+
+err_vports:
+ rte_free(adapter->vports);
+ adapter->vports = NULL;
+err_vport_recv_info:
+ rte_free(adapter->vport_req_info);
+ adapter->vport_req_info = NULL;
+err_caps:
+ rte_free(adapter->caps);
+ adapter->caps = NULL;
+err_api:
+ rte_free(adapter->mbx_resp);
+ adapter->mbx_resp = NULL;
+err_mbx:
+ iecm_ctlq_deinit(hw);
+err:
+ return -1;
+}
+
+
+static int
+idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void *init_params)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->dev_ops = &idpf_eth_dev_ops;
+
+ ret = idpf_adapter_init(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init adapter.");
+ return ret;
+ }
+
+ dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+
+ vport->dev_data = dev->data;
+
+ dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+ ret = -ENOMEM;
+ goto err;
+ }
+
+err:
+ return ret;
+}
+
+static int
+idpf_dev_uninit(struct rte_eth_dev *dev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return -EPERM;
+
+ idpf_dev_close(dev);
+
+ return 0;
+}
+
+static const struct rte_pci_id pci_id_idpf_map[] = {
+ { RTE_PCI_DEVICE(IECM_INTEL_VENDOR_ID, IECM_DEV_ID_PF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idpf_handle_vport_num(const char *key, const char *value, void *args)
+{
+ int *i = (int *)args;
+ char *end;
+ int num;
+
+ num = strtoul(value, &end, 10);
+
+ if (num <= 0) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", value must be greater than 0",
+ value, key);
+ return -1;
+ }
+
+ *i = num;
+ return 0;
+}
+
+static int
+idpf_parse_vport_num(struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+ const char *key = "vport_num";
+ int ret = 0;
+
+ if (devargs == NULL)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+ if (kvlist == NULL)
+ return 0;
+
+ ret = rte_kvargs_process(kvlist, key, &idpf_handle_vport_num,
+ &vport_num);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static int
+idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ char name[RTE_ETH_NAME_MAX_LEN];
+ int i, retval;
+
+ retval = idpf_parse_vport_num(pci_dev->device.devargs);
+ if (retval)
+ return retval;
+
+ if (!adapter) {
+ adapter = (struct idpf_adapter *)rte_zmalloc("idpf_adapter",
+ sizeof(struct idpf_adapter), 0);
+ if (!adapter) {
+ PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+ return -1;
+ }
+ }
+
+ for (i = 0; i < vport_num; i++) {
+ snprintf(name, sizeof(name), "idpf_vport_%d", i);
+ retval = rte_eth_dev_create(&pci_dev->device, name,
+ sizeof(struct idpf_vport),
+ NULL, NULL, idpf_dev_init,
+ NULL);
+ if (retval)
+ PMD_DRV_LOG(ERR, "failed to creat vport %d", i);
+ }
+
+ return 0;
+}
+
+static void
+idpf_adapter_rel(struct idpf_adapter *adapter)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ int i;
+
+ iecm_ctlq_deinit(hw);
+
+ if (adapter->caps) {
+ rte_free(adapter->caps);
+ adapter->caps = NULL;
+ }
+
+ if (adapter->mbx_resp) {
+ rte_free(adapter->mbx_resp);
+ adapter->mbx_resp = NULL;
+ }
+
+ if (adapter->vport_req_info) {
+ for (i = 0; i < adapter->max_vport_nb; i++) {
+ if (adapter->vport_req_info[i]) {
+ rte_free(adapter->vport_req_info[i]);
+ adapter->vport_req_info[i] = NULL;
+ }
+ }
+ rte_free(adapter->vport_req_info);
+ adapter->vport_req_info = NULL;
+ }
+
+ if (adapter->vport_recv_info) {
+ for (i = 0; i < adapter->max_vport_nb; i++) {
+ if (adapter->vport_recv_info[i]) {
+ rte_free(adapter->vport_recv_info[i]);
+ adapter->vport_recv_info[i] = NULL;
+ }
+ }
+ }
+
+ if (adapter->vports) {
+ /* Needn't free adapter->vports[i] since it's private data */
+ rte_free(adapter->vports);
+ adapter->vports = NULL;
+ }
+}
+
+static int
+idpf_pci_remove(struct rte_pci_device *pci_dev)
+{
+ if (adapter) {
+ idpf_adapter_rel(adapter);
+ rte_free(adapter);
+ adapter = NULL;
+ }
+
+ return rte_eth_dev_pci_generic_remove(pci_dev, idpf_dev_uninit);
+}
+
+static struct rte_pci_driver rte_idpf_pmd = {
+ .id_table = pci_id_idpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = idpf_pci_probe,
+ .remove = idpf_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
new file mode 100644
index 0000000000..762d5ff66a
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_ETHDEV_H_
+#define _IDPF_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_bus_pci.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+
+#include "base/iecm_osdep.h"
+#include "base/iecm_type.h"
+#include "base/iecm_devids.h"
+#include "base/iecm_lan_txrx.h"
+#include "base/iecm_lan_pf_regs.h"
+#include "base/virtchnl.h"
+#include "base/virtchnl2.h"
+
+#define IDPF_INVALID_VPORT_IDX 0xffff
+#define IDPF_TX_COMPLQ_PER_GRP 1
+#define IDPF_RX_BUFQ_PER_GRP 2
+
+#define IDPF_CTLQ_ID -1
+#define IDPF_CTLQ_LEN 64
+#define IDPF_DFLT_MBX_BUF_SIZE 4096
+
+#define IDPF_MAX_NUM_QUEUES 256
+#define IDPF_MIN_BUF_SIZE 1024
+#define IDPF_MAX_FRAME_SIZE 9728
+
+#define IDPF_NUM_MACADDR_MAX 64
+
+#define IDPF_MAX_PKT_TYPE 1024
+
+#define IDPF_VLAN_TAG_SIZE 4
+#define IDPF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN 6
+#endif
+
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+ IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
+ IDPF_MSG_NON, /* Read nothing from admin queue */
+ IDPF_MSG_SYS, /* Read system msg from admin queue */
+ IDPF_MSG_CMD, /* Read async command result */
+};
+
+struct idpf_chunks_info {
+ uint32_t tx_start_qid;
+ uint32_t rx_start_qid;
+ /* Valid only if split queue model */
+ uint32_t tx_compl_start_qid;
+ uint32_t rx_buf_start_qid;
+
+ uint64_t tx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint64_t rx_qtail_start;
+ uint32_t rx_qtail_spacing;
+ uint64_t tx_compl_qtail_start;
+ uint32_t tx_compl_qtail_spacing;
+ uint64_t rx_buf_qtail_start;
+ uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+ struct idpf_adapter *adapter; /* Backreference to associated adapter */
+ uint16_t vport_id;
+ uint32_t txq_model;
+ uint32_t rxq_model;
+ uint16_t num_tx_q;
+ /* valid only if txq_model is split Q */
+ uint16_t num_tx_complq;
+ uint16_t num_rx_q;
+ /* valid only if rxq_model is split Q */
+ uint16_t num_rx_bufq;
+
+ uint16_t max_mtu;
+ uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+ enum virtchnl_rss_algorithm rss_algorithm;
+ uint16_t rss_key_size;
+ uint16_t rss_lut_size;
+
+ uint16_t sw_idx; /* SW idx */
+
+ struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+
+ /* RSS info */
+ uint32_t *rss_lut;
+ uint8_t *rss_key;
+ uint64_t rss_hf;
+
+ /* Chunk info */
+ struct idpf_chunks_info chunks_info;
+
+ /* Event from ipf */
+ bool link_up;
+ uint32_t link_speed;
+
+ bool stopped;
+};
+
+struct idpf_adapter {
+ struct iecm_hw hw;
+
+ struct virtchnl_version_info virtchnl_version;
+ struct virtchnl2_get_capabilities *caps;
+
+ volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+ uint32_t cmd_retval; /* return value of the cmd response from ipf */
+ uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+
+ uint32_t txq_model;
+ uint32_t rxq_model;
+
+ /* Vport info */
+ uint8_t **vport_req_info;
+ uint8_t **vport_recv_info;
+ struct idpf_vport **vports;
+ uint16_t max_vport_nb;
+ uint16_t cur_vport_nb;
+ uint16_t next_vport_idx;
+
+ /* Max config queue number per VC message */
+ uint32_t max_rxq_per_msg;
+ uint32_t max_txq_per_msg;
+
+ uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+ bool initialized;
+ bool stopped;
+};
+
+extern struct idpf_adapter *adapter;
+
+#define IDPF_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+ uint32_t ops;
+ uint8_t *in_args; /* buffer for sending */
+ uint32_t in_args_size; /* buffer size for sending */
+ uint8_t *out_buffer; /* buffer for response */
+ uint32_t out_size; /* buffer size for response */
+};
+
+/* Notify that the current command is done. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+ adapter->cmd_retval = msg_ret;
+ rte_wmb();
+ adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+}
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_clear_cmd(struct idpf_adapter *adapter)
+{
+ rte_wmb();
+ adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+ adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether there is a pending command in execution; if none, set the new command. */
+static inline int
+_atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
+{
+ int ret = rte_atomic32_cmpset(&adapter->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+ if (!ret)
+ PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+ return !ret;
+}
+
+void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int idpf_check_api_version(struct idpf_adapter *adapter);
+int idpf_get_caps(struct idpf_adapter *adapter);
+int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
+int idpf_destroy_vport(struct idpf_vport *vport);
+
+int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
+
+#endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
new file mode 100644
index 0000000000..906aae8463
--- /dev/null
+++ b/drivers/net/idpf/idpf_logs.h
@@ -0,0 +1,38 @@
+#ifndef _IDPF_LOGS_H_
+#define _IDPF_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_logtype_init;
+extern int idpf_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, idpf_logtype_init, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, idpf_logtype_driver, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _IDPF_LOGS_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
new file mode 100644
index 0000000000..77d77b82d8
--- /dev/null
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -0,0 +1,465 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_dev.h>
+
+#include "idpf_ethdev.h"
+
+#include "base/iecm_prototype.h"
+
+#define IDPF_CTLQ_LEN 64
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+ struct iecm_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+ uint16_t num_q_msg = IDPF_CTLQ_LEN;
+ struct iecm_dma_mem *dma_mem;
+ int err = 0;
+ uint32_t i;
+
+ for (i = 0; i < 10; i++) {
+ err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+ msleep(20);
+ if (num_q_msg)
+ break;
+ }
+ if (err)
+ goto error;
+
+ /* Empty queue is not an error */
+ for (i = 0; i < num_q_msg; i++) {
+ dma_mem = q_msg[i]->ctx.indirect.payload;
+ if (dma_mem) {
+ iecm_free_dma_mem(&adapter->hw, dma_mem);
+ rte_free(dma_mem);
+ }
+ rte_free(q_msg[i]);
+ }
+
+error:
+ return err;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, enum virtchnl_ops op,
+ uint16_t msg_size, uint8_t *msg)
+{
+ struct iecm_ctlq_msg *ctlq_msg;
+ struct iecm_dma_mem *dma_mem;
+ int err = 0;
+
+ err = idpf_vc_clean(adapter);
+ if (err)
+ goto err;
+
+ ctlq_msg = (struct iecm_ctlq_msg *)rte_zmalloc(NULL,
+ sizeof(struct iecm_ctlq_msg), 0);
+ if (!ctlq_msg) {
+ err = -ENOMEM;
+ goto err;
+ }
+
+ dma_mem = (struct iecm_dma_mem *)rte_zmalloc(NULL,
+ sizeof(struct iecm_dma_mem), 0);
+ if (!dma_mem) {
+ err = -ENOMEM;
+ goto dma_mem_error;
+ }
+
+ dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+ iecm_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+ if (!dma_mem->va) {
+ err = -ENOMEM;
+ goto dma_alloc_error;
+ }
+
+ memcpy(dma_mem->va, msg, msg_size);
+
+ ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_pf;
+ ctlq_msg->func_id = 0;
+ ctlq_msg->data_len = msg_size;
+ ctlq_msg->cookie.mbx.chnl_opcode = op;
+ ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+ ctlq_msg->ctx.indirect.payload = dma_mem;
+
+ err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+ if (err)
+ goto send_error;
+
+ return err;
+
+send_error:
+ iecm_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+ rte_free(dma_mem);
+dma_mem_error:
+ rte_free(ctlq_msg);
+err:
+ return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_ipf(struct idpf_adapter *adapter, uint16_t buf_len,
+ uint8_t *buf)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ struct iecm_arq_event_info event;
+ enum idpf_vc_result result = IDPF_MSG_NON;
+ enum virtchnl_ops opcode;
+ uint16_t pending = 1;
+ int ret;
+
+ event.buf_len = buf_len;
+ event.msg_buf = buf;
+ ret = iecm_clean_arq_element(hw, &event, &pending);
+ if (ret) {
+ PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+ if (ret != IECM_ERR_CTLQ_NO_WORK)
+ result = IDPF_MSG_ERR;
+ return result;
+ }
+
+ opcode = (enum virtchnl_ops)rte_le_to_cpu_32(event.desc.cookie_high);
+ adapter->cmd_retval =
+ (enum virtchnl_status_code)rte_le_to_cpu_32(event.desc.cookie_low);
+
+ PMD_DRV_LOG(DEBUG, "CQ from ipf carries opcode %u, retval %d",
+ opcode, adapter->cmd_retval);
+
+ if (opcode == VIRTCHNL2_OP_EVENT) {
+ struct virtchnl2_event *ve =
+ (struct virtchnl2_event *)event.msg_buf;
+
+ result = IDPF_MSG_SYS;
+ switch (ve->event) {
+ case VIRTCHNL2_EVENT_LINK_CHANGE:
+ /* TBD */
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "%s: Unknown event %d from ipf",
+ __func__, ve->event);
+ break;
+ }
+ } else {
+ /* async reply msg on command issued by pf previously */
+ result = IDPF_MSG_CMD;
+ if (opcode != adapter->pend_cmd) {
+ PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+ adapter->pend_cmd, opcode);
+ result = IDPF_MSG_ERR;
+ }
+ }
+
+ return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS 10
+
+static int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+ enum idpf_vc_result result;
+ int err = 0;
+ int i = 0;
+ int ret;
+
+ if (_atomic_set_cmd(adapter, args->ops))
+ return -1;
+
+ ret = idpf_send_vc_msg(adapter, args->ops,
+ args->in_args_size,
+ args->in_args);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+ _clear_cmd(adapter);
+ return ret;
+ }
+
+ switch (args->ops) {
+ case VIRTCHNL_OP_VERSION:
+ case VIRTCHNL2_OP_GET_CAPS:
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ /* for init virtchnl ops, need to poll the response */
+ do {
+ result = idpf_read_msg_from_ipf(adapter,
+ args->out_size,
+ args->out_buffer);
+ if (result == IDPF_MSG_CMD)
+ break;
+ rte_delay_ms(ASQ_DELAY_MS);
+ } while (i++ < MAX_TRY_TIMES);
+ if (i >= MAX_TRY_TIMES ||
+ adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+ adapter->cmd_retval, args->ops);
+ }
+ _clear_cmd(adapter);
+ break;
+ default:
+ /* For other virtchnl ops at run time,
+ * wait for the command-done flag.
+ */
+ do {
+ if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+ break;
+ rte_delay_ms(ASQ_DELAY_MS);
+ /* If no msg was read or only a sys event was read, keep polling */
+ } while (i++ < MAX_TRY_TIMES);
+ /* If no response is received, clear the command */
+ if (i >= MAX_TRY_TIMES ||
+ adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+ adapter->cmd_retval, args->ops);
+ _clear_cmd(adapter);
+ }
+ break;
+ }
+
+ return err;
+}
+
+int
+idpf_check_api_version(struct idpf_adapter *adapter)
+{
+ struct virtchnl_version_info version;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&version, 0, sizeof(struct virtchnl_version_info));
+ version.major = VIRTCHNL_VERSION_MAJOR_2;
+ version.minor = VIRTCHNL_VERSION_MINOR_0;
+
+ args.ops = VIRTCHNL_OP_VERSION;
+ args.in_args = (uint8_t *)&version;
+ args.in_args_size = sizeof(version);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_VERSION");
+ return err;
+ }
+
+ return err;
+}
+
+int
+idpf_get_caps(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_get_capabilities caps_msg;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+ caps_msg.csum_caps =
+ VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_GENERIC |
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+ caps_msg.seg_caps =
+ VIRTCHNL2_CAP_SEG_IPV4_TCP |
+ VIRTCHNL2_CAP_SEG_IPV4_UDP |
+ VIRTCHNL2_CAP_SEG_IPV4_SCTP |
+ VIRTCHNL2_CAP_SEG_IPV6_TCP |
+ VIRTCHNL2_CAP_SEG_IPV6_UDP |
+ VIRTCHNL2_CAP_SEG_IPV6_SCTP |
+ VIRTCHNL2_CAP_SEG_GENERIC;
+
+ caps_msg.rss_caps =
+ VIRTCHNL2_CAP_RSS_IPV4_TCP |
+ VIRTCHNL2_CAP_RSS_IPV4_UDP |
+ VIRTCHNL2_CAP_RSS_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV4_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV6_TCP |
+ VIRTCHNL2_CAP_RSS_IPV6_UDP |
+ VIRTCHNL2_CAP_RSS_IPV6_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV6_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV4_AH |
+ VIRTCHNL2_CAP_RSS_IPV4_ESP |
+ VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
+ VIRTCHNL2_CAP_RSS_IPV6_AH |
+ VIRTCHNL2_CAP_RSS_IPV6_ESP |
+ VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+ caps_msg.hsplit_caps =
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6;
+
+ caps_msg.rsc_caps =
+ VIRTCHNL2_CAP_RSC_IPV4_TCP |
+ VIRTCHNL2_CAP_RSC_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSC_IPV6_TCP |
+ VIRTCHNL2_CAP_RSC_IPV6_SCTP;
+
+ caps_msg.other_caps =
+ VIRTCHNL2_CAP_RDMA |
+ VIRTCHNL2_CAP_SRIOV |
+ VIRTCHNL2_CAP_MACFILTER |
+ VIRTCHNL2_CAP_FLOW_DIRECTOR |
+ VIRTCHNL2_CAP_SPLITQ_QSCHED |
+ VIRTCHNL2_CAP_CRC |
+ VIRTCHNL2_CAP_WB_ON_ITR |
+ VIRTCHNL2_CAP_PROMISC |
+ VIRTCHNL2_CAP_LINK_SPEED |
+ VIRTCHNL2_CAP_VLAN;
+
+ args.ops = VIRTCHNL2_OP_GET_CAPS;
+ args.in_args = (uint8_t *)&caps_msg;
+ args.in_args_size = sizeof(caps_msg);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+ return err;
+ }
+
+ rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+ return err;
+}
+
+int
+idpf_create_vport(__rte_unused struct rte_eth_dev *dev)
+{
+ uint16_t idx = adapter->next_vport_idx;
+ struct virtchnl2_create_vport *vport_req_info =
+ (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+ struct virtchnl2_create_vport vport_msg;
+ struct idpf_cmd_info args;
+ int err = -1;
+
+ memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+ vport_msg.vport_type = vport_req_info->vport_type;
+ vport_msg.txq_model = vport_req_info->txq_model;
+ vport_msg.rxq_model = vport_req_info->rxq_model;
+ vport_msg.num_tx_q = vport_req_info->num_tx_q;
+ vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+ vport_msg.num_rx_q = vport_req_info->num_rx_q;
+ vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+ args.in_args = (uint8_t *)&vport_msg;
+ args.in_args_size = sizeof(vport_msg);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+ return err;
+ }
+
+ if (!adapter->vport_recv_info[idx]) {
+ adapter->vport_recv_info[idx] = rte_zmalloc(NULL,
+ IDPF_DFLT_MBX_BUF_SIZE, 0);
+ if (!adapter->vport_recv_info[idx]) {
+ PMD_INIT_LOG(ERR, "Failed to alloc vport_recv_info.");
+ return err;
+ }
+ }
+ rte_memcpy(adapter->vport_recv_info[idx], args.out_buffer,
+ IDPF_DFLT_MBX_BUF_SIZE);
+ return err;
+}
+
+int
+idpf_destroy_vport(struct idpf_vport *vport)
+{
+ struct virtchnl2_vport vc_vport;
+ struct idpf_cmd_info args;
+ int err;
+
+ vc_vport.vport_id = vport->vport_id;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+ args.in_args = (uint8_t *)&vc_vport;
+ args.in_args_size = sizeof(vc_vport);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+ return err;
+ }
+
+ return err;
+}
+
+int
+idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+ struct virtchnl2_vport vc_vport;
+ struct idpf_cmd_info args;
+ int err;
+
+ vc_vport.vport_id = vport->vport_id;
+ args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+ VIRTCHNL2_OP_DISABLE_VPORT;
+ args.in_args = (u8 *)&vc_vport;
+ args.in_args_size = sizeof(vc_vport);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+ enable ? "ENABLE" : "DISABLE");
+ }
+
+ return err;
+}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
new file mode 100644
index 0000000000..262a7aa8c7
--- /dev/null
+++ b/drivers/net/idpf/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+ build = false
+ reason = 'not supported on Windows'
+ subdir_done()
+endif
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+ 'idpf_ethdev.c',
+ 'idpf_vchnl.c',
+)
+
+includes += include_directories('base')
diff --git a/drivers/net/idpf/version.map b/drivers/net/idpf/version.map
new file mode 100644
index 0000000000..b7da224860
--- /dev/null
+++ b/drivers/net/idpf/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index e35652fe63..8910154544 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -28,6 +28,7 @@ drivers = [
'i40e',
'iavf',
'ice',
+ 'idpf',
'igc',
'ionic',
'ipn3ke',
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 4/9] net/idpf: support queue ops
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (2 preceding siblings ...)
2022-05-09 9:11 ` [RFC v2 3/9] net/idpf: support device initialization Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 5/9] net/idpf: support getting device information Junfeng Guo
` (4 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Xiaoyun Li
Add queue ops for the IDPF PMD (see the usage sketch after the list):
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
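For reference, an application-side sketch (not part of the patch) showing how
the new queue ops are reached through the standard ethdev queue API. The queue
id, ring size and mempool are illustrative, and the port is assumed to have
been configured already as in the previous patch.

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int example_queue_bringup(uint16_t port_id, struct rte_mempool *mp)
{
    const uint16_t nb_desc = 1024;   /* assumed ring size */
    int socket = rte_eth_dev_socket_id(port_id);
    int ret;

    /* dispatches to idpf_rx_queue_setup() / idpf_tx_queue_setup() */
    ret = rte_eth_rx_queue_setup(port_id, 0, nb_desc, socket, NULL, mp);
    if (ret < 0)
        return ret;
    ret = rte_eth_tx_queue_setup(port_id, 0, nb_desc, socket, NULL);
    if (ret < 0)
        return ret;

    /* rte_eth_dev_start() starts all non-deferred queues; a deferred-start
     * queue would instead be kicked with rte_eth_rx_queue_start() /
     * rte_eth_tx_queue_start(), backed by the _queue_start ops above.
     */
    return rte_eth_dev_start(port_id);
}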
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 85 +++
drivers/net/idpf/idpf_ethdev.h | 5 +
drivers/net/idpf/idpf_rxtx.c | 1252 ++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 167 +++++
drivers/net/idpf/idpf_vchnl.c | 342 +++++++++
drivers/net/idpf/meson.build | 1 +
6 files changed, 1852 insertions(+)
create mode 100644 drivers/net/idpf/idpf_rxtx.c
create mode 100644 drivers/net/idpf/idpf_rxtx.h
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e34165a87d..511770ed4f 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -12,6 +12,7 @@
#include <rte_dev.h>
#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
#define VPORT_NUM "vport_num"
@@ -33,6 +34,14 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_start = idpf_dev_start,
.dev_stop = idpf_dev_stop,
.dev_close = idpf_dev_close,
+ .rx_queue_start = idpf_rx_queue_start,
+ .rx_queue_stop = idpf_rx_queue_stop,
+ .tx_queue_start = idpf_tx_queue_start,
+ .tx_queue_stop = idpf_tx_queue_stop,
+ .rx_queue_setup = idpf_rx_queue_setup,
+ .rx_queue_release = idpf_dev_rx_queue_release,
+ .tx_queue_setup = idpf_tx_queue_setup,
+ .tx_queue_release = idpf_dev_tx_queue_release,
};
@@ -193,6 +202,65 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+static int
+idpf_config_queues(struct idpf_vport *vport)
+{
+ int err;
+
+ err = idpf_config_rxqs(vport);
+ if (err)
+ return err;
+
+ err = idpf_config_txqs(vport);
+
+ return err;
+}
+
+static int
+idpf_start_queues(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_tx_queue *txq;
+ int i, err = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (txq->tx_deferred_start)
+ continue;
+ if (idpf_tx_queue_init(dev, i) != 0) {
+ PMD_DRV_LOG(ERR, "Fail to init tx queue %u", i);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (rxq->rx_deferred_start)
+ continue;
+ if (idpf_rx_queue_init(dev, i) != 0) {
+ PMD_DRV_LOG(ERR, "Fail to init rx queue %u", i);
+ return -1;
+ }
+ }
+
+ err = idpf_ena_dis_queues(vport, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Fail to start queues");
+ return err;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ dev->data->tx_queue_state[i] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ dev->data->rx_queue_state[i] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
static int
idpf_dev_start(struct rte_eth_dev *dev)
{
@@ -203,6 +271,19 @@ idpf_dev_start(struct rte_eth_dev *dev)
vport->stopped = 0;
+ if (idpf_config_queues(vport)) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues");
+ goto err_queue;
+ }
+
+ idpf_set_rx_function(dev);
+ idpf_set_tx_function(dev);
+
+ if (idpf_start_queues(dev)) {
+ PMD_DRV_LOG(ERR, "Failed to start queues");
+ goto err_queue;
+ }
+
if (idpf_ena_dis_vport(vport, true)) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
goto err_vport;
@@ -211,6 +292,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
return 0;
err_vport:
+ idpf_stop_queues(dev);
+err_queue:
return -1;
}
@@ -228,6 +311,8 @@ idpf_dev_stop(struct rte_eth_dev *dev)
if (idpf_ena_dis_vport(vport, false))
PMD_DRV_LOG(ERR, "disable vport failed");
+ idpf_stop_queues(dev);
+
vport->stopped = 1;
dev->data->dev_started = 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 762d5ff66a..c5aa168d95 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -195,6 +195,11 @@ int idpf_get_caps(struct idpf_adapter *adapter);
int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
int idpf_destroy_vport(struct idpf_vport *vport);
+int idpf_config_rxqs(struct idpf_vport *vport);
+int idpf_config_txqs(struct idpf_vport *vport);
+int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on);
+int idpf_ena_dis_queues(struct idpf_vport *vport, bool enable);
int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
#endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
new file mode 100644
index 0000000000..770ed52281
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -0,0 +1,1252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+
+#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+ /* The following constraints must be satisfied:
+ * thresh < rxq->nb_rx_desc
+ */
+ if (thresh >= nb_desc) {
+ PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+ thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+ uint16_t tx_free_thresh)
+{
+ /* TX descriptors will have their RS bit set after tx_rs_thresh
+ * descriptors have been used. The TX descriptor ring will be cleaned
+ * after tx_free_thresh descriptors are used or if the number of
+ * descriptors required to transmit a packet is greater than the
+ * number of free TX descriptors.
+ *
+ * The following constraints must be satisfied:
+ * - tx_rs_thresh must be less than the size of the ring minus 2.
+ * - tx_free_thresh must be less than the size of the ring minus 3.
+ * - tx_rs_thresh must be less than or equal to tx_free_thresh.
+ * - tx_rs_thresh must be a divisor of the ring size.
+ *
+ * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+ * race condition, hence the maximum threshold constraints. When set
+ * to zero use default values.
+ */
+ if (tx_rs_thresh >= (nb_desc - 2)) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 2",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_free_thresh >= (nb_desc - 3)) {
+ PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 3.",
+ tx_free_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_rs_thresh > tx_free_thresh) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+ "equal to tx_free_thresh (%u).",
+ tx_rs_thresh, tx_free_thresh);
+ return -EINVAL;
+ }
+ if ((nb_desc % tx_rs_thresh) != 0) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+ "number of TX descriptors (%u).",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (!rxq->sw_ring)
+ return;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ if (rxq->sw_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+ rxq->sw_ring[i] = NULL;
+ }
+ }
+}
+
+static inline void
+release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+ uint16_t nb_desc, i;
+
+ if (!txq || !txq->sw_ring) {
+ PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+ return;
+ }
+
+ if (txq->sw_nb_desc) {
+ /* For split queue model, descriptor ring */
+ nb_desc = txq->sw_nb_desc;
+ } else {
+ /* For single queue model */
+ nb_desc = txq->nb_tx_desc;
+ }
+ for (i = 0; i < nb_desc; i++) {
+ if (txq->sw_ring[i].mbuf) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+}
+
+static const struct idpf_rxq_ops def_rxq_ops = {
+ .release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+ .release_mbufs = release_txq_mbufs,
+};
+
+static void
+idpf_rx_queue_release(void *rxq)
+{
+ struct idpf_rx_queue *q = (struct idpf_rx_queue *)rxq;
+
+ if (!q)
+ return;
+
+ /* Split queue */
+ if (q->bufq1 && q->bufq2) {
+ q->bufq1->ops->release_mbufs(q->bufq1);
+ rte_free(q->bufq1->sw_ring);
+ rte_memzone_free(q->bufq1->mz);
+ rte_free(q->bufq1);
+ q->bufq2->ops->release_mbufs(q->bufq2);
+ rte_free(q->bufq2->sw_ring);
+ rte_memzone_free(q->bufq2->mz);
+ rte_free(q->bufq2);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+ return;
+ }
+
+ /* Single queue */
+ q->ops->release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+}
+
+static void
+idpf_tx_queue_release(void *txq)
+{
+ struct idpf_tx_queue *q = (struct idpf_tx_queue *)txq;
+
+ if (!q)
+ return;
+
+ if (q->complq)
+ rte_free(q->complq);
+ q->ops->release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+}
+
+static inline void
+reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ rxq->rx_tail = 0;
+ rxq->expected_gen_id = 1;
+}
+
+static inline void
+reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+ for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+ rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+ /* The next descriptor id which can be received. */
+ rxq->rx_next_avail = 0;
+
+ /* The next descriptor id which can be refilled. */
+ rxq->rx_tail = 0;
+ /* The number of descriptors which can be refilled. */
+ rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+ rxq->bufq1 = NULL;
+ rxq->bufq2 = NULL;
+}
+
+static inline void
+reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+ reset_split_rx_descq(rxq);
+ reset_split_rx_bufq(rxq->bufq1);
+ reset_split_rx_bufq(rxq->bufq2);
+}
+
+static inline void
+reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+ for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+ rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+ rxq->rx_tail = 0;
+ rxq->nb_rx_hold = 0;
+
+ if (rxq->pkt_first_seg != NULL)
+ rte_pktmbuf_free(rxq->pkt_first_seg);
+
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+ struct idpf_tx_entry *txe;
+ uint32_t i, size;
+ uint16_t prev;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ size = sizeof(struct iecm_flex_tx_sched_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->desc_ring)[i] = 0;
+
+ txe = txq->sw_ring;
+ prev = (uint16_t)(txq->sw_nb_desc - 1);
+ for (i = 0; i < txq->sw_nb_desc; i++) {
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ /* Use this as next to clean for split desc queue */
+ txq->last_desc_cleaned = 0;
+ txq->sw_tail = 0;
+ txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+static inline void
+reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+ uint32_t i, size;
+
+ if (!cq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
+ return;
+ }
+
+ size = sizeof(struct iecm_splitq_tx_compl_desc) * cq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)cq->compl_ring)[i] = 0;
+
+ cq->tx_tail = 0;
+ cq->expected_gen_id = 1;
+}
+
+static inline void
+reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+ struct idpf_tx_entry *txe;
+ uint32_t i, size;
+ uint16_t prev;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ txe = txq->sw_ring;
+ size = sizeof(struct iecm_base_tx_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->tx_ring)[i] = 0;
+
+ prev = (uint16_t)(txq->nb_tx_desc - 1);
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ txq->tx_ring[i].qw1 =
+ rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DESC_DONE);
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+ txq->nb_free = txq->nb_tx_desc - 1;
+
+ txq->next_dd = txq->rs_thresh - 1;
+ txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+ uint16_t queue_idx, uint16_t rx_free_thresh,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t len;
+
+ bufq->mp = mp;
+ bufq->nb_rx_desc = nb_desc;
+ bufq->rx_free_thresh = rx_free_thresh;
+ bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+ bufq->port_id = dev->data->port_id;
+ bufq->rx_deferred_start = rx_conf->rx_deferred_start;
+ bufq->rx_hdr_len = 0;
+ bufq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ bufq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ bufq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+ bufq->rx_buf_len = len;
+
+ /* Allocate the software ring. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ bufq->sw_ring =
+ rte_zmalloc_socket("idpf rx bufq sw ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ return -ENOMEM;
+ }
+
+ /* Allocate a little more to support bulk allocation. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_splitq_rx_buf_desc),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
+ rte_free(bufq->sw_ring);
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ bufq->rx_ring_phys_addr = mz->iova;
+ bufq->rx_ring = mz->addr;
+
+ bufq->mz = mz;
+ reset_split_rx_bufq(bufq);
+ bufq->q_set = true;
+ bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+ queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+ bufq->ops = &def_rxq_ops;
+
+ /* TODO: allow bulk or vec */
+
+ return 0;
+}
+
+static int
+idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *bufq1, *bufq2;
+ const struct rte_memzone *mz;
+ uint16_t rx_free_thresh;
+ uint32_t ring_size;
+ uint16_t qid;
+ uint16_t len;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc);
+ return -EINVAL;
+ }
+
+ /* Check free threshold */
+ rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+ IDPF_DEFAULT_RX_FREE_THRESH :
+ rx_conf->rx_free_thresh;
+ if (check_rx_thresh(nb_desc, rx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Set up the Rx descriptor queue */
+ rxq = rte_zmalloc_socket("idpf rxq",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+ return -ENOMEM;
+ }
+
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->rx_hdr_len = 0;
+ rxq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = len;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx_cmpl_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+ ret = -ENOMEM;
+ goto free_rxq;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ rxq->rx_ring_phys_addr = mz->iova;
+ rxq->rx_ring = mz->addr;
+
+ rxq->mz = mz;
+ reset_split_rx_descq(rxq);
+ rxq->q_set = true;
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ /* TODO: allow bulk or vec */
+
+ /* setup Rx buffer queue */
+ bufq1 = rte_zmalloc_socket("idpf bufq1",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq1) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
+ ret = -ENOMEM;
+ goto free_mz;
+ }
+ qid = 2 * queue_idx;
+ ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
+ nb_desc, socket_id, rx_conf, mp);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+ ret = -EINVAL;
+ goto free_bufq1;
+ }
+ rxq->bufq1 = bufq1;
+
+ bufq2 = rte_zmalloc_socket("idpf bufq2",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq2) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
+ rte_free(bufq1->sw_ring);
+ rte_memzone_free(bufq1->mz);
+ ret = -ENOMEM;
+ goto free_bufq1;
+ }
+ qid = 2 * queue_idx + 1;
+ ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
+ nb_desc, socket_id, rx_conf, mp);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+ rte_free(bufq1->sw_ring);
+ rte_memzone_free(bufq1->mz);
+ ret = -EINVAL;
+ goto free_bufq2;
+ }
+ rxq->bufq2 = bufq2;
+
+ return 0;
+
+free_bufq2:
+ rte_free(bufq2);
+free_bufq1:
+ rte_free(bufq1);
+free_mz:
+ rte_memzone_free(mz);
+free_rxq:
+ rte_free(rxq);
+
+ return ret;
+}
+
+static int
+idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_rx_queue *rxq;
+ const struct rte_memzone *mz;
+ uint16_t rx_free_thresh;
+ uint32_t ring_size;
+ uint16_t len;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ /* Check free threshold */
+ rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+ IDPF_DEFAULT_RX_FREE_THRESH :
+ rx_conf->rx_free_thresh;
+ if (check_rx_thresh(nb_desc, rx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Set up the Rx descriptor queue */
+ rxq = rte_zmalloc_socket("idpf rxq",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+ return -ENOMEM;
+ }
+
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->rx_hdr_len = 0;
+ rxq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = len;
+
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ rxq->sw_ring =
+ rte_zmalloc_socket("idpf rxq sw ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ rte_free(rxq);
+ return -ENOMEM;
+ }
+
+ /* Allocate a little more to support bulk allocation. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_singleq_rx_buf_desc),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX ring.");
+ rte_free(rxq->sw_ring);
+ rte_free(rxq);
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ rxq->rx_ring_phys_addr = mz->iova;
+ rxq->rx_ring = mz->addr;
+
+ rxq->mz = mz;
+ reset_single_rx_queue(rxq);
+ rxq->q_set = true;
+ dev->data->rx_queues[queue_idx] = rxq;
+ rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+ queue_idx * vport->chunks_info.rx_qtail_spacing);
+ rxq->ops = &def_rxq_ops;
+
+ return 0;
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+ return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, rx_conf, mp);
+ else
+ return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, rx_conf, mp);
+}
+
+static int
+idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_tx_queue *txq, *cq;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ PMD_INIT_FUNC_TRACE();
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH;
+ tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH;
+ if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("idpf split txq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ txq->nb_tx_desc = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ /* Allocate software ring */
+ txq->sw_nb_desc = 2 * nb_desc;
+ txq->sw_ring =
+ rte_zmalloc_socket("idpf split tx sw ring",
+ sizeof(struct idpf_tx_entry) *
+ txq->sw_nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct iecm_flex_tx_sched_desc) * txq->nb_tx_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->desc_ring = (struct iecm_flex_tx_sched_desc *)mz->addr;
+
+ txq->mz = mz;
+ reset_split_tx_descq(txq);
+ txq->q_set = true;
+ dev->data->tx_queues[queue_idx] = txq;
+ txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+ queue_idx * vport->chunks_info.tx_qtail_spacing);
+ txq->ops = &def_txq_ops;
+
+ /* Allocate the TX completion queue data structure. */
+ txq->complq = rte_zmalloc_socket("idpf splitq cq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ cq = txq->complq;
+ if (!cq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx completion queue structure");
+ return -ENOMEM;
+ }
+ cq->nb_tx_desc = 2 * nb_desc;
+ cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+ cq->port_id = dev->data->port_id;
+ cq->txqs = dev->data->tx_queues;
+ cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+ ring_size = sizeof(struct iecm_splitq_tx_compl_desc) * cq->nb_tx_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+ cq->tx_ring_phys_addr = mz->iova;
+ cq->compl_ring = (struct iecm_splitq_tx_compl_desc *)mz->addr;
+ cq->mz = mz;
+ reset_split_tx_complq(cq);
+
+ return 0;
+}
+
+static int
+idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_tx_queue *txq;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ PMD_INIT_FUNC_TRACE();
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+ tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
+ tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
+ if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("idpf txq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ /* TODO: vlan offload */
+
+ txq->nb_tx_desc = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ /* Allocate software ring */
+ txq->sw_ring =
+ rte_zmalloc_socket("idpf tx sw ring",
+ sizeof(struct idpf_tx_entry) * nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct iecm_base_tx_desc) * nb_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring = (struct iecm_base_tx_desc *)mz->addr;
+
+ txq->mz = mz;
+ reset_single_tx_queue(txq);
+ txq->q_set = true;
+ dev->data->tx_queues[queue_idx] = txq;
+ txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+ queue_idx * vport->chunks_info.tx_qtail_spacing);
+ txq->ops = &def_txq_ops;
+
+ return 0;
+}
+
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+ return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, tx_conf);
+ else
+ return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, tx_conf);
+}
+
+static int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+ rxd->pkt_addr = dma_addr;
+ rxd->hdr_addr = 0;
+#ifndef RTE_LIBRTE_IDPF_16BYTE_RX_DESC
+ rxd->rsvd1 = 0;
+ rxd->rsvd2 = 0;
+#endif
+
+ rxq->sw_ring[i] = mbuf;
+ }
+
+ return 0;
+}
+
+static int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+ rxd->qword0.buf_id = i;
+ rxd->qword0.rsvd0 = 0;
+ rxd->qword0.rsvd1 = 0;
+ rxd->pkt_addr = dma_addr;
+ rxd->hdr_addr = 0;
+ rxd->rsvd2 = 0;
+
+ rxq->sw_ring[i] = mbuf;
+ }
+
+ rxq->nb_rx_hold = 0;
+ rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+ return 0;
+}
+
+int
+idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_rx_queue *rxq;
+ int err;
+
+ if (rx_queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ if (!rxq->bufq1) {
+ /* Single queue */
+ err = idpf_alloc_single_rxq_mbufs(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ IECM_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+ } else {
+ /* Split queue */
+ err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+ return err;
+ }
+ err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ IECM_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->nb_rx_desc - 1);
+ IECM_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->nb_rx_desc - 1);
+ }
+
+ return err;
+}
+
+int
+idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int err = 0;
+
+ PMD_DRV_FUNC_TRACE();
+
+ err = idpf_rx_queue_init(dev, rx_queue_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+ rx_queue_id);
+ return err;
+ }
+
+ /* Ready to switch the queue on */
+ err = idpf_switch_queue(vport, rx_queue_id, true, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+ rx_queue_id);
+ else
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
+int
+idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_tx_queue *txq;
+
+ if (tx_queue_id >= dev->data->nb_tx_queues)
+ return -EINVAL;
+
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /* Init the TX tail register. */
+ IECM_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+ return 0;
+}
+
+int
+idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int err = 0;
+
+ PMD_DRV_FUNC_TRACE();
+
+ err = idpf_tx_queue_init(dev, tx_queue_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+ tx_queue_id);
+ return err;
+ }
+
+ /* Ready to switch the queue on */
+ err = idpf_switch_queue(vport, tx_queue_id, false, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+ tx_queue_id);
+ else
+ dev->data->tx_queue_state[tx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
+int
+idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ int err;
+
+ PMD_DRV_FUNC_TRACE();
+
+ if (rx_queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ err = idpf_switch_queue(vport, rx_queue_id, true, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+ rx_queue_id);
+ return err;
+ }
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ rxq->ops->release_mbufs(rxq);
+ reset_single_rx_queue(rxq);
+ } else {
+ rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+ rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+ reset_split_rx_queue(rxq);
+ }
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+int
+idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_tx_queue *txq;
+ int err;
+
+ PMD_DRV_FUNC_TRACE();
+
+ if (tx_queue_id >= dev->data->nb_tx_queues)
+ return -EINVAL;
+
+ err = idpf_switch_queue(vport, tx_queue_id, false, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+ tx_queue_id);
+ return err;
+ }
+
+ txq = dev->data->tx_queues[tx_queue_id];
+ txq->ops->release_mbufs(txq);
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ reset_single_tx_queue(txq);
+ } else {
+ reset_split_tx_descq(txq);
+ reset_split_tx_complq(txq->complq);
+ }
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+void
+idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
+void
+idpf_stop_queues(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_tx_queue *txq;
+ int ret, i;
+
+ /* Stop All queues */
+ ret = idpf_ena_dis_queues(vport, false);
+ if (ret)
+ PMD_DRV_LOG(WARNING, "Failed to stop queues");
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (!rxq)
+ continue;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ rxq->ops->release_mbufs(rxq);
+ reset_single_rx_queue(rxq);
+ } else {
+ rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+ rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+ reset_split_rx_queue(rxq);
+ }
+ dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (!txq)
+ continue;
+ txq->ops->release_mbufs(txq);
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ reset_split_tx_descq(txq);
+ reset_split_tx_complq(txq->complq);
+ } else {
+ reset_single_tx_queue(txq);
+ }
+ dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+}
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
new file mode 100644
index 0000000000..705f706890
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_RXTX_H_
+#define _IDPF_RXTX_H_
+
+#include "base/iecm_osdep.h"
+#include "base/iecm_type.h"
+#include "base/iecm_devids.h"
+#include "base/iecm_lan_txrx.h"
+#include "base/iecm_lan_pf_regs.h"
+#include "base/virtchnl.h"
+#include "base/virtchnl2.h"
+
+/* The ring length (QLEN) must be a whole multiple of 32 descriptors. */
+#define IDPF_ALIGN_RING_DESC 32
+#define IDPF_MIN_RING_DESC 32
+#define IDPF_MAX_RING_DESC 4096
+#define IDPF_DMA_MEM_ALIGN 4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define IDPF_RING_BASE_ALIGN 128
+
+/* used for Rx Bulk Allocate */
+#define IDPF_RX_MAX_BURST 32
+
+#define IDPF_DEFAULT_RX_FREE_THRESH 32
+
+
+#define IDPF_DEFAULT_TX_RS_THRESH 128
+#define IDPF_DEFAULT_TX_FREE_THRESH 128
+
+#define IDPF_MIN_TSO_MSS 256
+#define IDPF_MAX_TSO_MSS 9668
+#define IDPF_TSO_MAX_SEG UINT8_MAX
+#define IDPF_TX_MAX_MTU_SEG 8
+
+struct idpf_rx_queue {
+ struct idpf_adapter *adapter; /* the adapter this queue belongs to */
+ struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
+ const struct rte_memzone *mz; /* memzone for Rx ring */
+ volatile void *rx_ring;
+ struct rte_mbuf **sw_ring; /* address of SW ring */
+ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
+
+ uint16_t nb_rx_desc; /* ring length */
+ uint16_t rx_tail; /* current value of tail */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
+
+ /* for rx bulk */
+ uint16_t rx_nb_avail; /* number of staged packets ready */
+ uint16_t rx_next_avail; /* index of next staged packets */
+ uint16_t rx_free_trigger; /* triggers rx buffer allocation */
+ struct rte_mbuf *rx_stage[IDPF_RX_MAX_BURST * 2]; /* store mbuf */
+
+ uint16_t port_id; /* device port ID */
+ uint16_t queue_id; /* Rx queue index */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint8_t rxdid;
+
+ bool q_set; /* if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+ const struct idpf_rxq_ops *ops;
+
+ /* only valid for split queue mode */
+ uint8_t expected_gen_id;
+ struct idpf_rx_queue *bufq1;
+ struct idpf_rx_queue *bufq2;
+};
+
+struct idpf_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+ const struct rte_memzone *mz; /* memzone for Tx ring */
+ volatile struct iecm_base_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile union {
+ struct iecm_flex_tx_sched_desc *desc_ring;
+ struct iecm_splitq_tx_compl_desc *compl_ring;
+ };
+ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ struct idpf_tx_entry *sw_ring; /* address array of SW ring */
+
+ uint16_t nb_tx_desc; /* ring length */
+ uint16_t tx_tail; /* current value of tail */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ /* number of used desc since RS bit set */
+ uint16_t nb_used;
+ uint16_t nb_free;
+ uint16_t last_desc_cleaned; /* last descriptor that has been cleaned */
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint64_t offloads;
+ uint16_t next_dd; /* next to check DD, for VPMD */
+ uint16_t next_rs; /* next to set RS, for VPMD */
+
+ bool q_set; /* if tx queue has been configured */
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ const struct idpf_txq_ops *ops;
+#define IDPF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IDPF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+
+ /* only valid for split queue mode */
+ uint16_t sw_nb_desc;
+ uint16_t sw_tail;
+ void **txqs;
+ uint32_t tx_start_qid;
+ uint8_t expected_gen_id;
+ struct idpf_tx_queue *complq;
+};
+
+/* Offload features */
+union idpf_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ /* uint64_t unused : 24; */
+ };
+};
+
+struct idpf_rxq_ops {
+ void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+ void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp);
+int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+
+int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf);
+int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+
+void idpf_stop_queues(struct rte_eth_dev *dev);
+
+#endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 77d77b82d8..74ed555449 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -21,6 +21,7 @@
#include <rte_dev.h>
#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
#include "base/iecm_prototype.h"
@@ -440,6 +441,347 @@ idpf_destroy_vport(struct idpf_vport *vport)
return err;
}
+#define IDPF_RX_BUF_STRIDE 64
+int
+idpf_config_rxqs(struct idpf_vport *vport)
+{
+ struct idpf_rx_queue **rxq =
+ (struct idpf_rx_queue **)vport->dev_data->rx_queues;
+ struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+ struct virtchnl2_rxq_info *rxq_info;
+ struct idpf_cmd_info args;
+ uint16_t total_qs, num_qs;
+ int size, err, i, j;
+ int k = 0;
+
+ total_qs = vport->num_rx_q + vport->num_rx_bufq;
+ while (total_qs) {
+ if (total_qs > adapter->max_rxq_per_msg) {
+ num_qs = adapter->max_rxq_per_msg;
+ total_qs -= adapter->max_rxq_per_msg;
+ } else {
+ num_qs = total_qs;
+ total_qs = 0;
+ }
+
+ size = sizeof(*vc_rxqs) + (num_qs - 1) *
+ sizeof(struct virtchnl2_rxq_info);
+ vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+ if (vc_rxqs == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+ err = -ENOMEM;
+ break;
+ }
+ vc_rxqs->vport_id = vport->vport_id;
+ vc_rxqs->num_qinfo = num_qs;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ for (i = 0; i < num_qs; i++, k++) {
+ rxq_info = &vc_rxqs->qinfo[i];
+ rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
+ rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+ rxq_info->queue_id = rxq[k]->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+ rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
+ rxq_info->max_pkt_size =
+ vport->dev_data->dev_conf.rxmode.mtu;
+
+ rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+ rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+ rxq_info->ring_len = rxq[k]->nb_rx_desc;
+ }
+ } else {
+ for (i = 0; i < num_qs / 3; i++, k++) {
+ /* Rx queue */
+ rxq_info = &vc_rxqs->qinfo[i * 3];
+ rxq_info->dma_ring_addr =
+ rxq[k]->rx_ring_phys_addr;
+ rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+ rxq_info->queue_id = rxq[k]->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
+ rxq_info->max_pkt_size =
+ vport->dev_data->dev_conf.rxmode.mtu;
+
+ rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+ rxq_info->ring_len = rxq[k]->nb_rx_desc;
+ rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
+ rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
+ rxq_info->rx_buffer_low_watermark = 64;
+
+ /* Buffer queue */
+ for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
+ struct idpf_rx_queue *bufq = j == 1 ?
+ rxq[k]->bufq1 : rxq[k]->bufq2;
+ rxq_info = &vc_rxqs->qinfo[i * 3 + j];
+ rxq_info->dma_ring_addr =
+ bufq->rx_ring_phys_addr;
+ rxq_info->type =
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ rxq_info->queue_id = bufq->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ rxq_info->data_buffer_size = bufq->rx_buf_len;
+ rxq_info->desc_ids =
+ VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ rxq_info->ring_len = bufq->nb_rx_desc;
+
+ rxq_info->buffer_notif_stride =
+ IDPF_RX_BUF_STRIDE;
+ rxq_info->rx_buffer_low_watermark = 64;
+ }
+ }
+ }
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+ args.in_args = (uint8_t *)vc_rxqs;
+ args.in_args_size = size;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ rte_free(vc_rxqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+ break;
+ }
+ }
+
+ return err;
+}
+
+int
+idpf_config_txqs(struct idpf_vport *vport)
+{
+ struct idpf_tx_queue **txq =
+ (struct idpf_tx_queue **)vport->dev_data->tx_queues;
+ struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+ struct virtchnl2_txq_info *txq_info;
+ struct idpf_cmd_info args;
+ uint16_t total_qs, num_qs;
+ int size, err, i;
+ int k = 0;
+
+ total_qs = vport->num_tx_q + vport->num_tx_complq;
+ while (total_qs) {
+ if (total_qs > adapter->max_txq_per_msg) {
+ num_qs = adapter->max_txq_per_msg;
+ total_qs -= adapter->max_txq_per_msg;
+ } else {
+ num_qs = total_qs;
+ total_qs = 0;
+ }
+ size = sizeof(*vc_txqs) + (num_qs - 1) *
+ sizeof(struct virtchnl2_txq_info);
+ vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+ if (vc_txqs == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+ err = -ENOMEM;
+ break;
+ }
+ vc_txqs->vport_id = vport->vport_id;
+ vc_txqs->num_qinfo = num_qs;
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ for (i = 0; i < num_qs; i++, k++) {
+ txq_info = &vc_txqs->qinfo[i];
+ txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+ txq_info->queue_id = txq[k]->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+ txq_info->ring_len = txq[k]->nb_tx_desc;
+ }
+ } else {
+ for (i = 0; i < num_qs / 2; i++, k++) {
+ /* txq info */
+ txq_info = &vc_txqs->qinfo[2 * i];
+ txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+ txq_info->queue_id = txq[k]->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ txq_info->ring_len = txq[k]->nb_tx_desc;
+ txq_info->tx_compl_queue_id =
+ txq[k]->complq->queue_id;
+
+ /* tx completion queue info */
+ txq_info = &vc_txqs->qinfo[2 * i + 1];
+ txq_info->dma_ring_addr =
+ txq[k]->complq->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ txq_info->queue_id = txq[k]->complq->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ txq_info->ring_len = txq[k]->complq->nb_tx_desc;
+ }
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+ args.in_args = (uint8_t *)vc_txqs;
+ args.in_args_size = size;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ rte_free(vc_txqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int
+idpf_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = vport->vport_id;
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (u8 *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
+int
+idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on)
+{
+ uint32_t type;
+ int err, queue_id;
+
+ /* switch txq/rxq */
+ type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+ if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+ queue_id = vport->chunks_info.rx_start_qid + qid;
+ else
+ queue_id = vport->chunks_info.tx_start_qid + qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+
+ /* switch tx completion queue */
+ if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ }
+
+ /* switch rx buffer queue */
+ if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ queue_id++;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ }
+
+ return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM 2
+int idpf_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ uint32_t type;
+ struct idpf_cmd_info args;
+ uint16_t num_chunks;
+ int err, len;
+
+ num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ num_chunks++;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ num_chunks++;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+ sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = num_chunks;
+ queue_select->vport_id = vport->vport_id;
+
+ type = VIRTCHNL2_QUEUE_TYPE_RX;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+ queue_chunk[type].num_queues = vport->num_rx_q;
+
+ type = VIRTCHNL2_QUEUE_TYPE_TX;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+ queue_chunk[type].num_queues = vport->num_tx_q;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id =
+ vport->chunks_info.rx_buf_start_qid;
+ queue_chunk[type].num_queues = vport->num_rx_bufq;
+ }
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id =
+ vport->chunks_info.tx_compl_start_qid;
+ queue_chunk[type].num_queues = vport->num_tx_complq;
+ }
+
+ args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (u8 *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ enable ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
int
idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
{
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 262a7aa8c7..9bda251ead 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
sources = files(
'idpf_ethdev.c',
+ 'idpf_rxtx.c',
'idpf_vchnl.c',
)
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 5/9] net/idpf: support getting device information
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (3 preceding siblings ...)
2022-05-09 9:11 ` [RFC v2 4/9] net/idpf: support queue ops Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 6/9] net/idpf: support packet type getting Junfeng Guo
` (3 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops dev_infos_get.
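As a minimal sketch (the helper name is only illustrative), an
application would read the limits reported by this new op through the
standard ethdev API:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	print_queue_limits(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;

		/* rte_eth_dev_info_get() calls the PMD's dev_infos_get op. */
		if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
			return;

		printf("max Rx queues: %u, max Tx queues: %u\n",
		       dev_info.max_rx_queues, dev_info.max_tx_queues);
	}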
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 69 ++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 511770ed4f..c58a40e7ab 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -28,6 +28,8 @@ static int idpf_dev_configure(struct rte_eth_dev *dev);
static int idpf_dev_start(struct rte_eth_dev *dev);
static int idpf_dev_stop(struct rte_eth_dev *dev);
static int idpf_dev_close(struct rte_eth_dev *dev);
+static int idpf_dev_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);
static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure = idpf_dev_configure,
@@ -42,8 +44,75 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.rx_queue_release = idpf_dev_rx_queue_release,
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release = idpf_dev_tx_queue_release,
+ .dev_infos_get = idpf_dev_info_get,
};
+static int
+idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = adapter->caps->max_rx_q;
+ dev_info->max_tx_queues = adapter->caps->max_tx_q;
+ dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
+ dev_info->max_rx_pktlen = IDPF_MAX_FRAME_SIZE;
+
+ dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+ dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+ dev_info->rx_offload_capa =
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa =
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH,
+ .tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = IDPF_MAX_RING_DESC,
+ .nb_min = IDPF_MIN_RING_DESC,
+ .nb_align = IDPF_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = IDPF_MAX_RING_DESC,
+ .nb_min = IDPF_MIN_RING_DESC,
+ .nb_align = IDPF_ALIGN_RING_DESC,
+ };
+
+ return 0;
+}
static int
idpf_init_vport_req_info(struct rte_eth_dev *dev)
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 6/9] net/idpf: support packet type getting
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (4 preceding siblings ...)
2022-05-09 9:11 ` [RFC v2 5/9] net/idpf: support getting device information Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 7/9] net/idpf: support link update Junfeng Guo
` (2 subsequent siblings)
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops dev_supported_ptypes_get.
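As a minimal sketch (the helper name is only illustrative), an
application would retrieve the packet type list exposed by this op
through the standard ethdev API:

	#include <stdio.h>
	#include <rte_common.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf_ptype.h>

	static void
	print_supported_ptypes(uint16_t port_id)
	{
		uint32_t ptypes[32];
		char name[64];
		int i, num;

		/* Calls the PMD's dev_supported_ptypes_get op. */
		num = rte_eth_dev_get_supported_ptypes(port_id,
						       RTE_PTYPE_ALL_MASK,
						       ptypes, RTE_DIM(ptypes));
		if (num > (int)RTE_DIM(ptypes))
			num = RTE_DIM(ptypes);

		for (i = 0; i < num; i++) {
			rte_get_ptype_name(ptypes[i], name, sizeof(name));
			printf("%s\n", name);
		}
	}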
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 3 ++
drivers/net/idpf/idpf_rxtx.c | 51 ++++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 3 ++
3 files changed, 57 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c58a40e7ab..01fd023bfc 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -32,6 +32,7 @@ static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
static const struct eth_dev_ops idpf_eth_dev_ops = {
+ .dev_supported_ptypes_get = idpf_dev_supported_ptypes_get,
.dev_configure = idpf_dev_configure,
.dev_start = idpf_dev_start,
.dev_stop = idpf_dev_stop,
@@ -501,6 +502,8 @@ idpf_adapter_init(struct rte_eth_dev *dev)
if (adapter->initialized)
return 0;
+ idpf_set_default_ptype_table(dev);
+
hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
hw->hw_addr_len = pci_dev->mem_resource[0].len;
hw->back = adapter;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 770ed52281..6b436141c8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -8,6 +8,57 @@
#include "idpf_ethdev.h"
#include "idpf_rxtx.h"
+const uint32_t *
+idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ return ptypes;
+}
+
+static inline uint32_t
+idpf_get_default_pkt_type(uint16_t ptype)
+{
+ static const uint32_t type_table[IDPF_MAX_PKT_TYPE]
+ __rte_cache_aligned = {
+ [1] = RTE_PTYPE_L2_ETHER,
+ [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
+ [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_ICMP,
+ [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6,
+ [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_ICMP,
+ };
+
+ return type_table[ptype];
+}
+
+void __rte_cold
+idpf_set_default_ptype_table(struct rte_eth_dev *dev __rte_unused)
+{
+ int i;
+
+ for (i = 0; i < IDPF_MAX_PKT_TYPE; i++)
+ adapter->ptype_tbl[i] = idpf_get_default_pkt_type(i);
+}
+
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
{
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 705f706890..21b6d8cb84 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -164,4 +164,7 @@ void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
void idpf_stop_queues(struct rte_eth_dev *dev);
+void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+
#endif /* _IDPF_RXTX_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 7/9] net/idpf: support link update
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (5 preceding siblings ...)
2022-05-09 9:11 ` [RFC v2 6/9] net/idpf: support packet type getting Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 8/9] net/idpf: support basic Rx/Tx Junfeng Guo
2022-05-09 9:11 ` [RFC v2 9/9] net/idpf: support RSS Junfeng Guo
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops link_update.
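As a minimal sketch (the helper name is only illustrative), an
application would read the link status maintained by this op through
the standard ethdev API:

	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	print_link_status(uint16_t port_id)
	{
		struct rte_eth_link link;

		/* rte_eth_link_get_nowait() ends up in the PMD's
		 * link_update op with wait_to_complete == 0. */
		if (rte_eth_link_get_nowait(port_id, &link) != 0)
			return;

		printf("port %u link is %s\n", port_id,
		       link.link_status ? "up" : "down");
	}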
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 22 ++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 2 ++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 01fd023bfc..39efb387cf 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -31,6 +31,27 @@ static int idpf_dev_close(struct rte_eth_dev *dev);
static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct rte_eth_link new_link;
+
+ memset(&new_link, 0, sizeof(new_link));
+
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
+ new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+
+ return rte_eth_linkstatus_set(dev, &new_link);
+}
+
static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_supported_ptypes_get = idpf_dev_supported_ptypes_get,
.dev_configure = idpf_dev_configure,
@@ -46,6 +67,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release = idpf_dev_tx_queue_release,
.dev_infos_get = idpf_dev_info_get,
+ .link_update = idpf_dev_link_update,
};
static int
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index c5aa168d95..5520b2d6ce 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -189,6 +189,8 @@ _atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
return !ret;
}
+int idpf_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete);
void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
int idpf_check_api_version(struct idpf_adapter *adapter);
int idpf_get_caps(struct idpf_adapter *adapter);
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 8/9] net/idpf: support basic Rx/Tx
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (6 preceding siblings ...)
2022-05-09 9:11 ` [RFC v2 7/9] net/idpf: support link update Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-09 9:11 ` [RFC v2 9/9] net/idpf: support RSS Junfeng Guo
8 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Xiaoyun Li
Add basic Rx and Tx support in both split queue mode and single queue
mode. Split queue mode is used by default.
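The queue mode is selected at probe time through the new "tx_single"
and "rx_single" devargs (value 0 or 1). As a minimal sketch, assuming a
hypothetical PCI address ca:00.0 and an illustrative helper name, an
application could request single queue mode like this:

	#include <rte_common.h>
	#include <rte_eal.h>

	static int
	init_single_queue_mode(void)
	{
		/* ca:00.0 is only an example address; tx_single=1 and
		 * rx_single=1 select single queue mode instead of the
		 * default split queue mode. */
		char *argv[] = {
			"dpdk-app", "-a", "ca:00.0,tx_single=1,rx_single=1",
		};

		return rte_eal_init((int)RTE_DIM(argv), argv);
	}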
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 93 ++++
drivers/net/idpf/idpf_rxtx.c | 877 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 33 ++
3 files changed, 1003 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 39efb387cf..1a985caf46 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -14,12 +14,16 @@
#include "idpf_ethdev.h"
#include "idpf_rxtx.h"
+#define IDPF_TX_SINGLE_Q "tx_single"
+#define IDPF_RX_SINGLE_Q "rx_single"
#define VPORT_NUM "vport_num"
struct idpf_adapter *adapter;
uint16_t vport_num = 1;
static const char * const idpf_valid_args[] = {
+ IDPF_TX_SINGLE_Q,
+ IDPF_RX_SINGLE_Q,
VPORT_NUM,
NULL
};
@@ -156,6 +160,30 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev)
(struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+ if (!adapter->txq_model) {
+ vport_info->txq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ vport_info->num_tx_q = dev->data->nb_tx_queues;
+ vport_info->num_tx_complq =
+ dev->data->nb_tx_queues * IDPF_TX_COMPLQ_PER_GRP;
+ } else {
+ vport_info->txq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ vport_info->num_tx_q = dev->data->nb_tx_queues;
+ vport_info->num_tx_complq = 0;
+ }
+ if (!adapter->rxq_model) {
+ vport_info->rxq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ vport_info->num_rx_q = dev->data->nb_rx_queues;
+ vport_info->num_rx_bufq =
+ dev->data->nb_rx_queues * IDPF_RX_BUFQ_PER_GRP;
+ } else {
+ vport_info->rxq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ vport_info->num_rx_q = dev->data->nb_rx_queues;
+ vport_info->num_rx_bufq = 0;
+ }
return 0;
}
@@ -426,6 +454,56 @@ idpf_dev_close(struct rte_eth_dev *dev)
return 0;
}
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+ int *i = (int *)args;
+ char *end;
+ int num;
+
+ num = strtoul(value, &end, 10);
+
+ if (num != 0 && num != 1) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+ "value must be 0 or 1",
+ value, key);
+ return -1;
+ }
+
+ *i = num;
+ return 0;
+}
+
+static int idpf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
+ &adapter->txq_model);
+ if (ret)
+ goto bail;
+
+ ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
+ &adapter->rxq_model);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
static void
idpf_reset_pf(struct iecm_hw *hw)
{
@@ -533,6 +611,12 @@ idpf_adapter_init(struct rte_eth_dev *dev)
hw->device_id = pci_dev->id.device_id;
hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+ ret = idpf_parse_devargs(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
idpf_reset_pf(hw);
ret = idpf_check_pf_reset_done(hw);
if (ret) {
@@ -641,6 +725,15 @@ idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void *init_params)
dev->dev_ops = &idpf_eth_dev_ops;
+ /* for secondary processes, we don't initialise any further as primary
+ * has already done this work.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ idpf_set_rx_function(dev);
+ idpf_set_tx_function(dev);
+ return ret;
+ }
+
ret = idpf_adapter_init(dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to init adapter.");
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 6b436141c8..d5613d63d6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1301,3 +1301,880 @@ idpf_stop_queues(struct rte_eth_dev *dev)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
}
+
+#define IDPF_RX_ERR0_QW1 \
+ (BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+ uint64_t flags = 0;
+
+ if (unlikely(!(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S))))
+ return flags;
+
+ if (likely((err & IDPF_RX_ERR0_QW1) == 0)) {
+ flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)))
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)))
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)))
+ flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)))
+ flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+ return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_HASH1_S 0
+#define IDPF_RX_FLEX_DESC_HASH2_S 16
+#define IDPF_RX_FLEX_DESC_HASH3_S 24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+ uint8_t status_err0_qw0;
+ uint64_t flags = 0;
+
+ status_err0_qw0 = rx_desc->status_err0_qw0;
+
+ if (status_err0_qw0 & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) {
+ flags |= RTE_MBUF_F_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_16(rx_desc->hash1) |
+ ((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+ IDPF_RX_FLEX_DESC_HASH2_S) |
+ ((uint32_t)(rx_desc->hash3) <<
+ IDPF_RX_FLEX_DESC_HASH3_S);
+ }
+
+ return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+ volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+ volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+ uint16_t nb_refill = rx_bufq->nb_rx_hold;
+ uint16_t nb_desc = rx_bufq->nb_rx_desc;
+ uint16_t next_avail = rx_bufq->rx_tail;
+ struct rte_mbuf *nmb[nb_refill];
+ struct rte_eth_dev *dev;
+ uint64_t dma_addr;
+ uint16_t delta;
+
+ if (nb_refill <= rx_bufq->rx_free_thresh)
+ return;
+
+ if (nb_refill >= nb_desc)
+ nb_refill = nb_desc - 1;
+
+ rx_buf_ring =
+ (volatile struct virtchnl2_splitq_rx_buf_desc *)rx_bufq->rx_ring;
+ delta = nb_desc - next_avail;
+ if (delta < nb_refill) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta))) {
+ for (int i = 0; i < delta; i++) {
+ rx_buf_desc = &rx_buf_ring[next_avail + i];
+ rx_bufq->sw_ring[next_avail + i] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->hdr_addr = 0;
+ rx_buf_desc->pkt_addr = dma_addr;
+ }
+ nb_refill -= delta;
+ next_avail = 0;
+ rx_bufq->nb_rx_hold -= delta;
+ } else {
+ dev = &rte_eth_devices[rx_bufq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rx_bufq->port_id, rx_bufq->queue_id);
+ return;
+ }
+ }
+
+ if (nb_desc - next_avail >= nb_refill) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill))) {
+ for (int i = 0; i < nb_refill; i++) {
+ rx_buf_desc = &rx_buf_ring[next_avail + i];
+ rx_bufq->sw_ring[next_avail + i] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->hdr_addr = 0;
+ rx_buf_desc->pkt_addr = dma_addr;
+ }
+ next_avail += nb_refill;
+ rx_bufq->nb_rx_hold -= nb_refill;
+ } else {
+ dev = &rte_eth_devices[rx_bufq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rx_bufq->port_id, rx_bufq->queue_id);
+ }
+ }
+
+ IECM_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+ rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+ uint16_t pktlen_gen_bufq_id;
+ struct idpf_rx_queue *rxq;
+ const uint32_t *ptype_tbl;
+ uint8_t status_err0_qw1;
+ struct rte_mbuf *rxm;
+ uint16_t rx_id_bufq1;
+ uint16_t rx_id_bufq2;
+ uint64_t pkt_flags;
+ uint16_t pkt_len;
+ uint16_t bufq_id;
+ uint16_t gen_id;
+ uint16_t rx_id;
+ uint16_t nb_rx;
+
+ nb_rx = 0;
+ rxq = (struct idpf_rx_queue *)rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+ rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+ rx_desc_ring =
+ (volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *)rxq->rx_ring;
+ ptype_tbl = rxq->adapter->ptype_tbl;
+
+ while (nb_rx < nb_pkts) {
+ rx_desc = &rx_desc_ring[rx_id];
+
+ pktlen_gen_bufq_id =
+ rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ gen_id = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+ if (gen_id != rxq->expected_gen_id)
+ break;
+
+ pkt_len = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+ if (!pkt_len)
+ PMD_RX_LOG(ERR, "Packet length is 0");
+
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc)) {
+ rx_id = 0;
+ rxq->expected_gen_id ^= 1;
+ }
+
+ bufq_id = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+ if (!bufq_id) {
+ rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+ rx_id_bufq1++;
+ if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+ rx_id_bufq1 = 0;
+ rxq->bufq1->nb_rx_hold++;
+ } else {
+ rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+ rx_id_bufq2++;
+ if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+ rx_id_bufq2 = 0;
+ rxq->bufq2->nb_rx_hold++;
+ }
+
+ pkt_len -= rxq->crc_len;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->next = NULL;
+ rxm->nb_segs = 1;
+ rxm->port = rxq->port_id;
+ rxm->ol_flags = 0;
+ rxm->packet_type =
+ ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+ status_err0_qw1 = rx_desc->status_err0_qw1;
+ pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+ pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+ rxm->ol_flags |= pkt_flags;
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+
+ if (nb_rx) {
+ rxq->rx_tail = rx_id;
+ if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+ rxq->bufq1->rx_next_avail = rx_id_bufq1;
+ if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+ rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+ idpf_split_rx_bufq_refill(rxq->bufq1);
+ idpf_split_rx_bufq_refill(rxq->bufq2);
+ }
+
+ return nb_rx;
+}
+
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+ volatile struct iecm_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+ volatile struct iecm_splitq_tx_compl_desc *txd;
+ uint16_t next = cq->tx_tail;
+ struct idpf_tx_entry *txe;
+ struct idpf_tx_queue *txq;
+ uint16_t gen, qid, q_head;
+ uint8_t ctype;
+
+ txd = &compl_ring[next];
+ gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
+ if (gen != cq->expected_gen_id)
+ return;
+
+ ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_COMPL_TYPE_M) >> IECM_TXD_COMPLQ_COMPL_TYPE_S;
+ qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
+ q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+ txq = cq->txqs[qid - cq->tx_start_qid];
+
+ switch (ctype) {
+ case IECM_TXD_COMPLT_RE:
+ if (q_head == 0)
+ txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+ else
+ txq->last_desc_cleaned = q_head - 1;
+ if (unlikely(!(txq->last_desc_cleaned % 32))) {
+ PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
+ q_head);
+ return;
+ }
+
+ break;
+ case IECM_TXD_COMPLT_RS:
+ txq->nb_free++;
+ txq->nb_used--;
+ txe = &txq->sw_ring[q_head];
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "unknown completion type.");
+ return;
+ }
+
+ if (++next == cq->nb_tx_desc) {
+ next = 0;
+ cq->expected_gen_id ^= 1;
+ }
+
+ cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+ if (flags & RTE_MBUF_F_TX_TCP_SEG)
+ return 1;
+
+ return 0;
+}
+
+/* set TSO context descriptor */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+ union idpf_tx_offload tx_offload,
+ volatile union iecm_flex_tx_ctx_desc *ctx_desc)
+{
+ uint16_t cmd_dtype;
+ uint32_t tso_len;
+ uint8_t hdr_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return;
+ }
+
+ hdr_len = tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+ cmd_dtype = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX |
+ IECM_TX_FLEX_CTX_DESC_CMD_TSO;
+ tso_len = mbuf->pkt_len - hdr_len;
+
+ ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+ ctx_desc->tso.qw0.hdr_len = hdr_len;
+ ctx_desc->tso.qw0.mss_rt =
+ rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+ IECM_TXD_FLEX_CTX_MSS_RT_M);
+ ctx_desc->tso.qw0.flex_tlen =
+ rte_cpu_to_le_32(tso_len &
+ IECM_TXD_FLEX_CTX_MSS_RT_M);
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+ volatile struct iecm_flex_tx_sched_desc *txr = txq->desc_ring;
+ volatile struct iecm_flex_tx_sched_desc *txd;
+ struct idpf_tx_entry *sw_ring = txq->sw_ring;
+ union idpf_tx_offload tx_offload = {0};
+ struct idpf_tx_entry *txe, *txn;
+ uint16_t nb_used, tx_id, sw_id;
+ struct rte_mbuf *tx_pkt;
+ uint16_t nb_to_clean;
+ uint16_t nb_tx = 0;
+ uint64_t ol_flags;
+ uint16_t nb_ctx;
+
+ tx_id = txq->tx_tail;
+ sw_id = txq->sw_tail;
+ txe = &sw_ring[sw_id];
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ tx_pkt = tx_pkts[nb_tx];
+
+ if (txq->nb_free <= txq->free_thresh) {
+ /* TODO: Needs refinement:
+ * 1. free and clean: better to decide a clean destination instead of
+ * a fixed number of loop iterations, and don't free the mbuf as soon
+ * as RS is received; free it at transmit time or according to the
+ * clean destination. For now, just ignore the RE write-back and free
+ * the mbuf when RS is received.
+ * 2. out-of-order write-back is not supported yet; the SW head and HW
+ * head need to be separated.
+ */
+ nb_to_clean = 2 * txq->rs_thresh;
+ while (nb_to_clean--)
+ idpf_split_tx_free(txq->complq);
+ }
+
+ if (txq->nb_free < tx_pkt->nb_segs)
+ break;
+
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = idpf_calc_context_desc(ol_flags);
+ nb_used = tx_pkt->nb_segs + nb_ctx;
+
+ /* context descriptor */
+ if (nb_ctx) {
+ volatile union iecm_flex_tx_ctx_desc *ctx_desc =
+ (volatile union iecm_flex_tx_ctx_desc *)&txr[tx_id];
+
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+ idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+ ctx_desc);
+
+ tx_id++;
+ if (tx_id == txq->nb_tx_desc)
+ tx_id = 0;
+ }
+
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+ txe->mbuf = tx_pkt;
+
+ /* Setup TX descriptor */
+ txd->buf_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+ txd->qw1.cmd_dtype =
+ rte_cpu_to_le_16(IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+ txd->qw1.rxr_bufsize = tx_pkt->data_len;
+ txd->qw1.compl_tag = sw_id;
+ tx_id++;
+ if (tx_id == txq->nb_tx_desc)
+ tx_id = 0;
+ sw_id = txe->next_id;
+ txe = txn;
+ tx_pkt = tx_pkt->next;
+ } while (tx_pkt);
+
+ /* fill the last descriptor with End of Packet (EOP) bit */
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_EOP;
+
+ if (unlikely(!(tx_id % 32)))
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_RE;
+ if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_CS_EN;
+ txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+ txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+ }
+
+ /* update the tail pointer if any packets were processed */
+ if (likely(nb_tx)) {
+ IECM_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+ txq->sw_tail = sw_id;
+ }
+
+ return nb_tx;
+}
+
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+ uint16_t rx_id)
+{
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+ if (nb_hold > rxq->rx_free_thresh) {
+ PMD_RX_LOG(DEBUG,
+ "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+ rxq->port_id, rxq->queue_id, rx_id, nb_hold);
+ rx_id = (uint16_t)((rx_id == 0) ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ IECM_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+}
+
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile union virtchnl2_rx_desc *rx_ring;
+ volatile union virtchnl2_rx_desc *rxdp;
+ struct idpf_rx_queue *rxq;
+ const uint32_t *ptype_tbl;
+ uint16_t rx_id, nb_hold;
+ struct rte_eth_dev *dev;
+ uint16_t rx_packet_len;
+ struct rte_mbuf *rxe;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *nmb;
+ uint16_t rx_status0;
+ uint64_t dma_addr;
+ uint16_t nb_rx;
+
+ nb_rx = 0;
+ nb_hold = 0;
+ rxq = rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_ring = rxq->rx_ring;
+ ptype_tbl = rxq->adapter->ptype_tbl;
+
+ while (nb_rx < nb_pkts) {
+ rxdp = &rx_ring[rx_id];
+ rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+ /* Check the DD bit first */
+ if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+ break;
+
+ rx_packet_len = (rte_cpu_to_le_16(rxdp->flex_nic_wb.pkt_len)) -
+ rxq->crc_len;
+
+ nmb = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!nmb)) {
+ dev = &rte_eth_devices[rxq->port_id];
+ dev->data->rx_mbuf_alloc_failed++;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+ "queue_id=%u", rxq->port_id, rxq->queue_id);
+ break;
+ }
+
+ nb_hold++;
+ rxe = rxq->sw_ring[rx_id];
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc))
+ rx_id = 0;
+
+ /* Prefetch next mbuf */
+ rte_prefetch0(rxq->sw_ring[rx_id]);
+
+ /* When next RX descriptor is on a cache line boundary,
+ * prefetch the next 4 RX descriptors and next 8 pointers
+ * to mbufs.
+ */
+ if ((rx_id & 0x3) == 0) {
+ rte_prefetch0(&rx_ring[rx_id]);
+ rte_prefetch0(rxq->sw_ring[rx_id]);
+ }
+ rxm = rxe;
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+ rxdp->read.hdr_addr = 0;
+ rxdp->read.pkt_addr = dma_addr;
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = rx_packet_len;
+ rxm->data_len = rx_packet_len;
+ rxm->port = rxq->port_id;
+ rxm->ol_flags = 0;
+ rxm->packet_type =
+ ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxdp->flex_nic_wb.ptype_flex_flags0) &
+ VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+ rxq->rx_tail = rx_id;
+
+ idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+ return nb_rx;
+}
+
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+ uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+ struct idpf_tx_entry *sw_ring = txq->sw_ring;
+ uint16_t nb_tx_desc = txq->nb_tx_desc;
+ uint16_t desc_to_clean_to;
+ uint16_t nb_tx_to_clean;
+
+ volatile struct iecm_base_tx_desc *txd = txq->tx_ring;
+
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ if (desc_to_clean_to >= nb_tx_desc)
+ desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+ desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+ if ((txd[desc_to_clean_to].qw1 &
+ rte_cpu_to_le_64(IECM_TXD_QW1_DTYPE_M)) !=
+ rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DESC_DONE)) {
+ PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
+ "(port=%d queue=%d)", desc_to_clean_to,
+ txq->port_id, txq->queue_id);
+ return -1;
+ }
+
+ if (last_desc_cleaned > desc_to_clean_to)
+ nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+ desc_to_clean_to);
+ else
+ nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+ last_desc_cleaned);
+
+ txd[desc_to_clean_to].qw1 = 0;
+
+ txq->last_desc_cleaned = desc_to_clean_to;
+ txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+ return 0;
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+idpf_set_tso_ctx(struct rte_mbuf *mbuf, union idpf_tx_offload tx_offload)
+{
+ uint64_t ctx_desc = 0;
+ uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return ctx_desc;
+ }
+
+ hdr_len = tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+
+ cd_cmd = IECM_TX_CTX_DESC_TSO;
+ cd_tso_len = mbuf->pkt_len - hdr_len;
+ ctx_desc |= ((uint64_t)cd_cmd << IECM_TXD_CTX_QW1_CMD_S) |
+ ((uint64_t)cd_tso_len << IECM_TXD_CTX_QW1_TSO_LEN_S) |
+ ((uint64_t)mbuf->tso_segsz << IECM_TXD_CTX_QW1_MSS_S);
+
+ return ctx_desc;
+}
+
+/* Construct the Tx data descriptor qw1 field (dtype, command, offset, buffer size) */
+static inline uint64_t
+idpf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size)
+{
+ return rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DATA |
+ ((uint64_t)td_cmd << IECM_TXD_QW1_CMD_S) |
+ ((uint64_t)td_offset <<
+ IECM_TXD_QW1_OFFSET_S) |
+ ((uint64_t)size <<
+ IECM_TXD_QW1_TX_BUF_SZ_S));
+}
+
+/* TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct iecm_base_tx_desc *txd;
+ volatile struct iecm_base_tx_desc *txr;
+ union idpf_tx_offload tx_offload = {0};
+ struct idpf_tx_entry *txe, *txn;
+ struct idpf_tx_entry *sw_ring;
+ struct idpf_tx_queue *txq;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ uint64_t buf_dma_addr;
+ uint32_t td_offset;
+ uint64_t ol_flags;
+ uint16_t tx_last;
+ uint16_t nb_used;
+ uint16_t nb_ctx;
+ uint32_t td_cmd;
+ uint16_t tx_id;
+ uint16_t nb_tx;
+ uint16_t slen;
+
+ txq = tx_queue;
+ sw_ring = txq->sw_ring;
+ txr = txq->tx_ring;
+ tx_id = txq->tx_tail;
+ txe = &sw_ring[tx_id];
+
+ /* Check if the descriptor ring needs to be cleaned. */
+ if (txq->nb_free < txq->free_thresh)
+ (void)idpf_xmit_cleanup(txq);
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ td_cmd = 0;
+ td_offset = 0;
+
+ tx_pkt = *tx_pkts++;
+ RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = idpf_calc_context_desc(ol_flags);
+
+ /* The number of descriptors that must be allocated for
+ * a packet equals the number of segments of that packet,
+ * plus 1 context descriptor if needed.
+ */
+ nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+ tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+ /* Circular ring */
+ if (tx_last >= txq->nb_tx_desc)
+ tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+ " tx_first=%u tx_last=%u",
+ txq->port_id, txq->queue_id, tx_id, tx_last);
+
+ if (nb_used > txq->nb_free) {
+ if (idpf_xmit_cleanup(txq)) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ if (unlikely(nb_used > txq->rs_thresh)) {
+ while (nb_used > txq->nb_free) {
+ if (idpf_xmit_cleanup(txq)) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ }
+ }
+ }
+
+ /* According to the datasheet, bit 2 is reserved and must be
+ * set to 1.
+ */
+ td_cmd |= 0x04;
+
+ if (nb_ctx) {
+ /* Setup TX context descriptor if required */
+ volatile union iecm_flex_tx_ctx_desc *ctx_txd =
+ (volatile union iecm_flex_tx_ctx_desc *)
+ &txr[tx_id];
+
+ txn = &sw_ring[txe->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+
+ /* TSO enabled */
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+ idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+ ctx_txd);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ }
+
+ m_seg = tx_pkt;
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+
+ if (txe->mbuf)
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = m_seg;
+
+ /* Setup TX Descriptor */
+ slen = m_seg->data_len;
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ txd->qw1 = idpf_build_ctob(td_cmd, td_offset, slen);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ m_seg = m_seg->next;
+ } while (m_seg);
+
+ /* The last packet data descriptor needs End Of Packet (EOP) */
+ td_cmd |= IECM_TX_DESC_CMD_EOP;
+ txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+ txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+ if (txq->nb_used >= txq->rs_thresh) {
+ PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+ "%4u (port=%d queue=%d)",
+ tx_last, txq->port_id, txq->queue_id);
+
+ td_cmd |= IECM_TX_DESC_CMD_RS;
+
+ /* Update txq RS bit counters */
+ txq->nb_used = 0;
+ }
+
+ txd->qw1 |=
+ rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+ IECM_TXD_QW1_CMD_S);
+ }
+
+end_of_tx:
+ rte_wmb();
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+ txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+ IECM_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+
+ return nb_tx;
+}
+
+/* TX prep function */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ int i, ret;
+ uint64_t ol_flags;
+ struct rte_mbuf *m;
+
+ for (i = 0; i < nb_pkts; i++) {
+ m = tx_pkts[i];
+ ol_flags = m->ol_flags;
+
+ /* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
+ if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+ rte_errno = EINVAL;
+ return i;
+ }
+ } else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+ (m->tso_segsz > IDPF_MAX_TSO_MSS)) {
+ /* An MSS outside this range is considered malicious */
+ rte_errno = EINVAL;
+ return i;
+ }
+
+ if (ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) {
+ rte_errno = ENOTSUP;
+ return i;
+ }
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+ }
+
+ return i;
+}
+
+void
+idpf_set_rx_function(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+ return;
+ }
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+ return;
+ }
+}
+
+void
+idpf_set_tx_function(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_prep_pkts;
+ return;
+ }
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_prep_pkts;
+ return;
+ }
+}
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 21b6d8cb84..d9451d2e2d 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -35,6 +35,25 @@
#define IDPF_TSO_MAX_SEG UINT8_MAX
#define IDPF_TX_MAX_MTU_SEG 8
+#define IDPF_TX_CKSUM_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_OUTER_IPV6 | \
+ RTE_MBUF_F_TX_OUTER_IPV4 | \
+ RTE_MBUF_F_TX_IPV6 | \
+ RTE_MBUF_F_TX_IPV4 | \
+ RTE_MBUF_F_TX_VLAN | \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_ETH_TX_OFFLOAD_SECURITY)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+ (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
struct idpf_rx_queue {
struct idpf_adapter *adapter; /* the adapter this queue belongs to */
struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
@@ -162,8 +181,22 @@ int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
void idpf_stop_queues(struct rte_eth_dev *dev);
+void idpf_set_rx_function(struct rte_eth_dev *dev);
+void idpf_set_tx_function(struct rte_eth_dev *dev);
+
void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
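For context, not strictly part of the patch: the tx_single/rx_single devargs
added in idpf_ethdev.c above switch the corresponding queue model from the
default split-queue model to the single-queue model at probe time, e.g. by
passing -a 0000:af:00.0,tx_single=1,rx_single=1 on the EAL command line. A
hypothetical hot-plug sketch (the BDF and the helper name are made up, and it
assumes rte_eal_init() has already run):

#include <rte_dev.h>

/* Sketch: attach an idpf device with the single-queue model selected
 * for both Rx and Tx. "0000:af:00.0" is a placeholder BDF; the kvargs
 * after the comma are what idpf_parse_devargs() receives.
 */
static int
attach_idpf_single_queue(void)
{
	return rte_dev_probe("0000:af:00.0,tx_single=1,rx_single=1");
}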
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v2 9/9] net/idpf: support RSS
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
` (7 preceding siblings ...)
2022-05-09 9:11 ` [RFC v2 8/9] net/idpf: support basic Rx/Tx Junfeng Guo
@ 2022-05-09 9:11 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
8 siblings, 1 reply; 33+ messages in thread
From: Junfeng Guo @ 2022-05-09 9:11 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add RSS support: initialize the RSS key, lookup table and hash
configuration for a vport and program them through the
VIRTCHNL2_OP_SET_RSS_KEY/LUT/HASH messages.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
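For reference, not part of the diff itself: idpf_init_rss() below takes the
RSS key from dev_conf.rx_adv_conf.rss_conf (generating a random key when none
is supplied) and, in this revision, always programs the
IECM_DEFAULT_RSS_HASH_EXPANDED hash set. A minimal application-side sketch,
with placeholder port id, queue counts and hash types:

#include <rte_ethdev.h>

/* Sketch: request RSS on an idpf port. With rss_key == NULL the PMD
 * generates a random key in idpf_init_rss(); rss_hf is shown for
 * completeness, but this revision programs IECM_DEFAULT_RSS_HASH_EXPANDED
 * regardless of its value.
 */
static int
configure_idpf_rss(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS, },
		.rx_adv_conf = { .rss_conf = {
			.rss_key = NULL,
			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
				  RTE_ETH_RSS_UDP,
		}, },
	};

	/* 4 Rx / 4 Tx queues are arbitrary example counts */
	return rte_eth_dev_configure(port_id, 4, 4, &conf);
}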
---
drivers/net/idpf/idpf_ethdev.c | 106 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 18 +++++-
drivers/net/idpf/idpf_vchnl.c | 93 +++++++++++++++++++++++++++++
3 files changed, 216 insertions(+), 1 deletion(-)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1a985caf46..2a0304c18e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -85,6 +85,7 @@ idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa =
@@ -292,9 +293,96 @@ idpf_init_vport(struct rte_eth_dev *dev)
return 0;
}
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+ int ret;
+
+ ret = idpf_set_rss_key(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+ return ret;
+ }
+
+ ret = idpf_set_rss_lut(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+ return ret;
+ }
+
+ ret = idpf_set_rss_hash(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+ return ret;
+ }
+
+ return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+ struct rte_eth_rss_conf *rss_conf;
+ uint16_t i, nb_q, lut_size;
+ int ret = 0;
+
+ rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+ nb_q = vport->num_rx_q;
+
+ vport->rss_key = (uint8_t *)rte_zmalloc("rss_key",
+ vport->rss_key_size, 0);
+ if (!vport->rss_key) {
+ PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+ ret = -ENOMEM;
+ goto err_key;
+ }
+
+ lut_size = vport->rss_lut_size;
+ vport->rss_lut = (uint32_t *)rte_zmalloc("rss_lut",
+ sizeof(uint32_t) * lut_size, 0);
+ if (!vport->rss_lut) {
+ PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+ ret = -ENOMEM;
+ goto err_lut;
+ }
+
+ if (!rss_conf->rss_key) {
+ for (i = 0; i < vport->rss_key_size; i++)
+ vport->rss_key[i] = (uint8_t)rte_rand();
+ } else {
+ rte_memcpy(vport->rss_key, rss_conf->rss_key,
+ RTE_MIN(rss_conf->rss_key_len,
+ vport->rss_key_size));
+ }
+
+ for (i = 0; i < lut_size; i++)
+ vport->rss_lut[i] = i % nb_q;
+
+ vport->rss_hf = IECM_DEFAULT_RSS_HASH_EXPANDED;
+
+ ret = idpf_config_rss(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS");
+ goto err_cfg;
+ }
+
+ return ret;
+
+err_cfg:
+ rte_free(vport->rss_lut);
+ vport->rss_lut = NULL;
+err_lut:
+ rte_free(vport->rss_key);
+ vport->rss_key = NULL;
+err_key:
+ return ret;
+}
+
static int
idpf_dev_configure(struct rte_eth_dev *dev)
{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
int ret = 0;
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
@@ -319,6 +407,14 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+ if (adapter->caps->rss_caps) {
+ ret = idpf_init_rss(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init rss");
+ return ret;
+ }
+ }
+
return ret;
}
@@ -451,6 +547,16 @@ idpf_dev_close(struct rte_eth_dev *dev)
idpf_dev_stop(dev);
idpf_destroy_vport(vport);
+ if (vport->rss_lut) {
+ rte_free(vport->rss_lut);
+ vport->rss_lut = NULL;
+ }
+
+ if (vport->rss_key) {
+ rte_free(vport->rss_key);
+ vport->rss_key = NULL;
+ }
+
return 0;
}
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 5520b2d6ce..0b8e163bbb 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -43,6 +43,20 @@
#define IDPF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
+#define IDPF_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+
#ifndef ETH_ADDR_LEN
#define ETH_ADDR_LEN 6
#endif
@@ -196,7 +210,9 @@ int idpf_check_api_version(struct idpf_adapter *adapter);
int idpf_get_caps(struct idpf_adapter *adapter);
int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
int idpf_destroy_vport(struct idpf_vport *vport);
-
+int idpf_set_rss_key(struct idpf_vport *vport);
+int idpf_set_rss_lut(struct idpf_vport *vport);
+int idpf_set_rss_hash(struct idpf_vport *vport);
int idpf_config_rxqs(struct idpf_vport *vport);
int idpf_config_txqs(struct idpf_vport *vport);
int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 74ed555449..fb7cee6915 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -441,6 +441,99 @@ idpf_destroy_vport(struct idpf_vport *vport)
return err;
}
+int
+idpf_set_rss_key(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_key *rss_key;
+ struct idpf_cmd_info args;
+ int len, err;
+
+ len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+ (vport->rss_key_size - 1);
+ rss_key = rte_zmalloc("rss_key", len, 0);
+ if (!rss_key)
+ return -ENOMEM;
+
+ rss_key->vport_id = vport->vport_id;
+ rss_key->key_len = vport->rss_key_size;
+ rte_memcpy(rss_key->key, vport->rss_key,
+ sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+ args.in_args = (uint8_t *)rss_key;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+ rte_free(rss_key);
+ return err;
+}
+
+int
+idpf_set_rss_lut(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_lut *rss_lut;
+ struct idpf_cmd_info args;
+ int len, err;
+
+ len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+ (vport->rss_lut_size - 1);
+ rss_lut = rte_zmalloc("rss_lut", len, 0);
+ if (!rss_lut)
+ return -ENOMEM;
+
+ rss_lut->vport_id = vport->vport_id;
+ rss_lut->lut_entries = vport->rss_lut_size;
+ rte_memcpy(rss_lut->lut, vport->rss_lut,
+ sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+ args.in_args = (uint8_t *)rss_lut;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+ rte_free(rss_lut);
+ return err;
+}
+
+int
+idpf_set_rss_hash(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_hash rss_hash;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&rss_hash, 0, sizeof(rss_hash));
+ rss_hash.ptype_groups = vport->rss_hf;
+ rss_hash.vport_id = vport->vport_id;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+ args.in_args = (uint8_t *)&rss_hash;
+ args.in_args_size = sizeof(rss_hash);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+ return err;
+}
+
#define IDPF_RX_BUF_STRIDE 64
int
idpf_config_rxqs(struct idpf_vport *vport)
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v3 00/11] add support for idpf PMD in DPDK
2022-05-09 9:11 ` [RFC v2 9/9] net/idpf: support RSS Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 01/11] net/idpf/base: introduce base code Junfeng Guo
` (10 more replies)
0 siblings, 11 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
This is a draft of the idpf (Infrastructure Data Path Function) PMD
in DPDK for Intel device IDs 0x1452 and 0x1453.
v3:
fix several small issues and add support for device ID 0x1453.
v2:
fix a code typo in idpf_set_tx_function.
Junfeng Guo (11):
net/idpf/base: introduce base code
net/idpf/base: add OS specific implementation
net/idpf: support device initialization
net/idpf: support queue ops
net/idpf: support getting device information
net/idpf: support packet type getting
net/idpf: support link update
net/idpf: support basic Rx/Tx
net/idpf: support RSS
net/idpf: support MTU configuration
net/idpf: add CPF device ID for idpf map table
drivers/net/idpf/base/iecm_alloc.h | 22 +
drivers/net/idpf/base/iecm_common.c | 359 +++
drivers/net/idpf/base/iecm_controlq.c | 662 ++++
drivers/net/idpf/base/iecm_controlq.h | 214 ++
drivers/net/idpf/base/iecm_controlq_api.h | 227 ++
drivers/net/idpf/base/iecm_controlq_setup.c | 179 ++
drivers/net/idpf/base/iecm_devids.h | 18 +
drivers/net/idpf/base/iecm_lan_pf_regs.h | 134 +
drivers/net/idpf/base/iecm_lan_txrx.h | 428 +++
drivers/net/idpf/base/iecm_lan_vf_regs.h | 114 +
drivers/net/idpf/base/iecm_osdep.h | 365 +++
drivers/net/idpf/base/iecm_prototype.h | 45 +
drivers/net/idpf/base/iecm_type.h | 106 +
drivers/net/idpf/base/meson.build | 27 +
drivers/net/idpf/base/siov_regs.h | 41 +
drivers/net/idpf/base/virtchnl.h | 2743 +++++++++++++++++
drivers/net/idpf/base/virtchnl2.h | 1411 +++++++++
drivers/net/idpf/base/virtchnl2_lan_desc.h | 603 ++++
drivers/net/idpf/base/virtchnl_inline_ipsec.h | 567 ++++
drivers/net/idpf/idpf_ethdev.c | 1055 +++++++
drivers/net/idpf/idpf_ethdev.h | 224 ++
drivers/net/idpf/idpf_logs.h | 38 +
drivers/net/idpf/idpf_rxtx.c | 2180 +++++++++++++
drivers/net/idpf/idpf_rxtx.h | 203 ++
drivers/net/idpf/idpf_vchnl.c | 909 ++++++
drivers/net/idpf/meson.build | 19 +
drivers/net/idpf/version.map | 3 +
drivers/net/meson.build | 1 +
28 files changed, 12897 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_alloc.h
create mode 100644 drivers/net/idpf/base/iecm_common.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
create mode 100644 drivers/net/idpf/base/iecm_devids.h
create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_osdep.h
create mode 100644 drivers/net/idpf/base/iecm_prototype.h
create mode 100644 drivers/net/idpf/base/iecm_type.h
create mode 100644 drivers/net/idpf/base/meson.build
create mode 100644 drivers/net/idpf/base/siov_regs.h
create mode 100644 drivers/net/idpf/base/virtchnl.h
create mode 100644 drivers/net/idpf/base/virtchnl2.h
create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
create mode 100644 drivers/net/idpf/idpf_ethdev.c
create mode 100644 drivers/net/idpf/idpf_ethdev.h
create mode 100644 drivers/net/idpf/idpf_logs.h
create mode 100644 drivers/net/idpf/idpf_rxtx.c
create mode 100644 drivers/net/idpf/idpf_rxtx.h
create mode 100644 drivers/net/idpf/idpf_vchnl.c
create mode 100644 drivers/net/idpf/meson.build
create mode 100644 drivers/net/idpf/version.map
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
* [RFC v3 01/11] net/idpf/base: introduce base code
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 15:26 ` Stephen Hemminger
2022-05-18 8:25 ` [RFC v3 02/11] net/idpf/base: add OS specific implementation Junfeng Guo
` (9 subsequent siblings)
10 siblings, 1 reply; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Introduce base code for IDPF (Infrastructure Data Path Function) PMD.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
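For orientation, not part of the patch: the control-queue API introduced here
is consumed by the PMD roughly as sketched below. The helper name, ring/buffer
sizes and the use of the plain virtchnl version message are illustrative
assumptions only (the PMD's real mailbox setup and version negotiation live in
idpf_ethdev.c/idpf_vchnl.c of later patches), and the hw structure is assumed
to be pre-filled with BAR information.

/* Sketch: bring up the mailbox control queues and post one message. */
static int
example_mailbox_bringup(struct iecm_hw *hw)
{
	struct iecm_ctlq_size ctlq_size = {
		.asq_buf_size = 4096,	/* placeholder buffer size */
		.asq_ring_size = 64,	/* placeholder ring length */
		.arq_buf_size = 4096,
		.arq_ring_size = 64,
	};
	struct virtchnl_version_info ver = {
		.major = VIRTCHNL_VERSION_MAJOR,
		.minor = VIRTCHNL_VERSION_MINOR,
	};
	int err;

	/* Creates the Tx/Rx mailbox queues and stores them in hw->asq/arq */
	err = iecm_init_hw(hw, ctlq_size);
	if (err)
		return err;

	/* The payload is copied into DMA memory and posted on hw->asq; the
	 * reply is later drained from hw->arq via iecm_clean_arq_element().
	 */
	return iecm_send_msg_to_cp(hw, VIRTCHNL_OP_VERSION, IECM_SUCCESS,
				   (u8 *)&ver, sizeof(ver));
}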
---
drivers/net/idpf/base/iecm_alloc.h | 22 +
drivers/net/idpf/base/iecm_common.c | 359 +++
drivers/net/idpf/base/iecm_controlq.c | 662 ++++
drivers/net/idpf/base/iecm_controlq.h | 214 ++
drivers/net/idpf/base/iecm_controlq_api.h | 227 ++
drivers/net/idpf/base/iecm_controlq_setup.c | 179 ++
drivers/net/idpf/base/iecm_devids.h | 17 +
drivers/net/idpf/base/iecm_lan_pf_regs.h | 134 +
drivers/net/idpf/base/iecm_lan_txrx.h | 428 +++
drivers/net/idpf/base/iecm_lan_vf_regs.h | 114 +
drivers/net/idpf/base/iecm_prototype.h | 45 +
drivers/net/idpf/base/iecm_type.h | 106 +
drivers/net/idpf/base/meson.build | 27 +
drivers/net/idpf/base/siov_regs.h | 41 +
drivers/net/idpf/base/virtchnl.h | 2743 +++++++++++++++++
drivers/net/idpf/base/virtchnl2.h | 1411 +++++++++
drivers/net/idpf/base/virtchnl2_lan_desc.h | 603 ++++
drivers/net/idpf/base/virtchnl_inline_ipsec.h | 567 ++++
18 files changed, 7899 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_alloc.h
create mode 100644 drivers/net/idpf/base/iecm_common.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.c
create mode 100644 drivers/net/idpf/base/iecm_controlq.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_api.h
create mode 100644 drivers/net/idpf/base/iecm_controlq_setup.c
create mode 100644 drivers/net/idpf/base/iecm_devids.h
create mode 100644 drivers/net/idpf/base/iecm_lan_pf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_lan_txrx.h
create mode 100644 drivers/net/idpf/base/iecm_lan_vf_regs.h
create mode 100644 drivers/net/idpf/base/iecm_prototype.h
create mode 100644 drivers/net/idpf/base/iecm_type.h
create mode 100644 drivers/net/idpf/base/meson.build
create mode 100644 drivers/net/idpf/base/siov_regs.h
create mode 100644 drivers/net/idpf/base/virtchnl.h
create mode 100644 drivers/net/idpf/base/virtchnl2.h
create mode 100644 drivers/net/idpf/base/virtchnl2_lan_desc.h
create mode 100644 drivers/net/idpf/base/virtchnl_inline_ipsec.h
diff --git a/drivers/net/idpf/base/iecm_alloc.h b/drivers/net/idpf/base/iecm_alloc.h
new file mode 100644
index 0000000000..7ea219c784
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_alloc.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_ALLOC_H_
+#define _IECM_ALLOC_H_
+
+/* Memory types */
+enum iecm_memset_type {
+ IECM_NONDMA_MEM = 0,
+ IECM_DMA_MEM
+};
+
+/* Memcpy types */
+enum iecm_memcpy_type {
+ IECM_NONDMA_TO_NONDMA = 0,
+ IECM_NONDMA_TO_DMA,
+ IECM_DMA_TO_DMA,
+ IECM_DMA_TO_NONDMA
+};
+
+#endif /* _IECM_ALLOC_H_ */
diff --git a/drivers/net/idpf/base/iecm_common.c b/drivers/net/idpf/base/iecm_common.c
new file mode 100644
index 0000000000..418fd99298
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_common.c
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#include "iecm_type.h"
+#include "iecm_prototype.h"
+#include "virtchnl.h"
+
+
+/**
+ * iecm_set_mac_type - Sets MAC type
+ * @hw: pointer to the HW structure
+ *
+ * This function sets the mac type of the adapter based on the
+ * vendor ID and device ID stored in the hw structure.
+ */
+int iecm_set_mac_type(struct iecm_hw *hw)
+{
+ int status = IECM_SUCCESS;
+
+ DEBUGFUNC("iecm_set_mac_type\n");
+
+ if (hw->vendor_id == IECM_INTEL_VENDOR_ID) {
+ switch (hw->device_id) {
+ case IECM_DEV_ID_PF:
+ hw->mac.type = IECM_MAC_PF;
+ break;
+ default:
+ hw->mac.type = IECM_MAC_GENERIC;
+ break;
+ }
+ } else {
+ status = IECM_ERR_DEVICE_NOT_SUPPORTED;
+ }
+
+ DEBUGOUT2("iecm_set_mac_type found mac: %d, returns: %d\n",
+ hw->mac.type, status);
+ return status;
+}
+
+/**
+ * iecm_init_hw - main initialization routine
+ * @hw: pointer to the hardware structure
+ * @ctlq_size: struct to pass ctlq size data
+ */
+int iecm_init_hw(struct iecm_hw *hw, struct iecm_ctlq_size ctlq_size)
+{
+ struct iecm_ctlq_create_info *q_info;
+ int status = IECM_SUCCESS;
+ struct iecm_ctlq_info *cq = NULL;
+
+ /* Setup initial control queues */
+ q_info = (struct iecm_ctlq_create_info *)
+ iecm_calloc(hw, 2, sizeof(struct iecm_ctlq_create_info));
+ if (!q_info)
+ return IECM_ERR_NO_MEMORY;
+
+ q_info[0].type = IECM_CTLQ_TYPE_MAILBOX_TX;
+ q_info[0].buf_size = ctlq_size.asq_buf_size;
+ q_info[0].len = ctlq_size.asq_ring_size;
+ q_info[0].id = -1; /* default queue */
+
+ if (hw->mac.type == IECM_MAC_PF) {
+ q_info[0].reg.head = PF_FW_ATQH;
+ q_info[0].reg.tail = PF_FW_ATQT;
+ q_info[0].reg.len = PF_FW_ATQLEN;
+ q_info[0].reg.bah = PF_FW_ATQBAH;
+ q_info[0].reg.bal = PF_FW_ATQBAL;
+ q_info[0].reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
+ q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
+ q_info[0].reg.head_mask = PF_FW_ATQH_ATQH_M;
+ } else {
+ q_info[0].reg.head = VF_ATQH;
+ q_info[0].reg.tail = VF_ATQT;
+ q_info[0].reg.len = VF_ATQLEN;
+ q_info[0].reg.bah = VF_ATQBAH;
+ q_info[0].reg.bal = VF_ATQBAL;
+ q_info[0].reg.len_mask = VF_ATQLEN_ATQLEN_M;
+ q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
+ q_info[0].reg.head_mask = VF_ATQH_ATQH_M;
+ }
+
+ q_info[1].type = IECM_CTLQ_TYPE_MAILBOX_RX;
+ q_info[1].buf_size = ctlq_size.arq_buf_size;
+ q_info[1].len = ctlq_size.arq_ring_size;
+ q_info[1].id = -1; /* default queue */
+
+ if (hw->mac.type == IECM_MAC_PF) {
+ q_info[1].reg.head = PF_FW_ARQH;
+ q_info[1].reg.tail = PF_FW_ARQT;
+ q_info[1].reg.len = PF_FW_ARQLEN;
+ q_info[1].reg.bah = PF_FW_ARQBAH;
+ q_info[1].reg.bal = PF_FW_ARQBAL;
+ q_info[1].reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
+ q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
+ q_info[1].reg.head_mask = PF_FW_ARQH_ARQH_M;
+ } else {
+ q_info[1].reg.head = VF_ARQH;
+ q_info[1].reg.tail = VF_ARQT;
+ q_info[1].reg.len = VF_ARQLEN;
+ q_info[1].reg.bah = VF_ARQBAH;
+ q_info[1].reg.bal = VF_ARQBAL;
+ q_info[1].reg.len_mask = VF_ARQLEN_ARQLEN_M;
+ q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
+ q_info[1].reg.head_mask = VF_ARQH_ARQH_M;
+ }
+
+ status = iecm_ctlq_init(hw, 2, q_info);
+ if (status != IECM_SUCCESS) {
+ /* TODO return error */
+ iecm_free(hw, q_info);
+ return status;
+ }
+
+ LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, iecm_ctlq_info, cq_list) {
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ hw->asq = cq;
+ else if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX)
+ hw->arq = cq;
+ }
+
+ /* TODO hardcode a mac addr for now */
+ hw->mac.addr[0] = 0x00;
+ hw->mac.addr[1] = 0x00;
+ hw->mac.addr[2] = 0x00;
+ hw->mac.addr[3] = 0x00;
+ hw->mac.addr[4] = 0x03;
+ hw->mac.addr[5] = 0x14;
+
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_send_msg_to_cp
+ * @hw: pointer to the hardware structure
+ * @v_opcode: opcodes for VF-PF communication
+ * @v_retval: return error code
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ * @cmd_details: pointer to command details
+ *
+ * Send a message to the CP. By default, this message
+ * is sent asynchronously, i.e. iecm_ctlq_send() does not wait for
+ * completion before returning.
+ */
+int iecm_send_msg_to_cp(struct iecm_hw *hw, enum virtchnl_ops v_opcode,
+ int v_retval, u8 *msg, u16 msglen)
+{
+ struct iecm_ctlq_msg ctlq_msg = { 0 };
+ struct iecm_dma_mem dma_mem = { 0 };
+ int status;
+
+ ctlq_msg.opcode = iecm_mbq_opc_send_msg_to_pf;
+ ctlq_msg.func_id = 0;
+ ctlq_msg.data_len = msglen;
+ ctlq_msg.cookie.mbx.chnl_retval = v_retval;
+ ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
+
+ if (msglen > 0) {
+ dma_mem.va = (struct iecm_dma_mem *)
+ iecm_alloc_dma_mem(hw, &dma_mem, msglen);
+ if (!dma_mem.va)
+ return IECM_ERR_NO_MEMORY;
+
+ iecm_memcpy(dma_mem.va, msg, msglen, IECM_NONDMA_TO_DMA);
+ ctlq_msg.ctx.indirect.payload = &dma_mem;
+ }
+ status = iecm_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
+
+ if (dma_mem.va)
+ iecm_free_dma_mem(hw, &dma_mem);
+
+ return status;
+}
+
+/**
+ * iecm_asq_done - check if FW has processed the Admin Send Queue
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if the firmware has processed all descriptors on the
+ * admin send queue. Returns false if there are still requests pending.
+ */
+bool iecm_asq_done(struct iecm_hw *hw)
+{
+ /* AQ designers suggest use of head for better
+ * timing reliability than DD bit
+ */
+ return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
+}
+
+/**
+ * iecm_check_asq_alive
+ * @hw: pointer to the hw struct
+ *
+ * Returns true if Queue is enabled else false.
+ */
+bool iecm_check_asq_alive(struct iecm_hw *hw)
+{
+ if (hw->asq->reg.len)
+ return !!(rd32(hw, hw->asq->reg.len) &
+ PF_FW_ATQLEN_ATQENABLE_M);
+
+ return false;
+}
+
+/**
+ * iecm_clean_arq_element
+ * @hw: pointer to the hw struct
+ * @e: event info from the receive descriptor, includes any buffers
+ * @pending: number of events that could be left to process
+ *
+ * This function cleans one Admin Receive Queue element and returns
+ * the contents through e. It can also return how many events are
+ * left to process through 'pending'
+ */
+int iecm_clean_arq_element(struct iecm_hw *hw,
+ struct iecm_arq_event_info *e, u16 *pending)
+{
+ struct iecm_ctlq_msg msg = { 0 };
+ int status;
+
+ *pending = 1;
+
+ status = iecm_ctlq_recv(hw->arq, pending, &msg);
+
+ /* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
+ e->desc.opcode = msg.opcode;
+ e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
+ e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
+ e->desc.ret_val = msg.status;
+ e->desc.datalen = msg.data_len;
+ if (msg.data_len > 0) {
+ e->buf_len = msg.data_len;
+ iecm_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg.data_len,
+ IECM_DMA_TO_NONDMA);
+ }
+ return status;
+}
+
+/**
+ * iecm_deinit_hw - shutdown routine
+ * @hw: pointer to the hardware structure
+ */
+int iecm_deinit_hw(struct iecm_hw *hw)
+{
+ hw->asq = NULL;
+ hw->arq = NULL;
+
+ return iecm_ctlq_deinit(hw);
+}
+
+/**
+ * iecm_reset
+ * @hw: pointer to the hardware structure
+ *
+ * Send a RESET message to the CPF. Does not wait for response from CPF
+ * as none will be forthcoming. Immediately after calling this function,
+ * the control queue should be shut down and (optionally) reinitialized.
+ */
+int iecm_reset(struct iecm_hw *hw)
+{
+ return iecm_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
+ IECM_SUCCESS, NULL, 0);
+}
+
+/**
+ * iecm_get_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ * @set: set true to set the table, false to get the table
+ *
+ * Internal function to get or set RSS look up table
+ */
+STATIC int iecm_get_set_rss_lut(struct iecm_hw *hw, u16 vsi_id,
+ bool pf_lut, u8 *lut, u16 lut_size,
+ bool set)
+{
+ /* TODO fill out command */
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_get_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * get the RSS lookup table, PF or VSI type
+ */
+int iecm_get_rss_lut(struct iecm_hw *hw, u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size)
+{
+ return iecm_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
+}
+
+/**
+ * iecm_set_rss_lut
+ * @hw: pointer to the hardware structure
+ * @vsi_id: vsi fw index
+ * @pf_lut: for PF table set true, for VSI table set false
+ * @lut: pointer to the lut buffer provided by the caller
+ * @lut_size: size of the lut buffer
+ *
+ * set the RSS lookup table, PF or VSI type
+ */
+int iecm_set_rss_lut(struct iecm_hw *hw, u16 vsi_id, bool pf_lut,
+ u8 *lut, u16 lut_size)
+{
+ return iecm_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
+}
+
+/**
+ * iecm_get_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ * @set: set true to set the key, false to get the key
+ *
+ * Internal function to get or set the RSS key per VSI
+ */
+STATIC int iecm_get_set_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key,
+ bool set)
+{
+ /* TODO fill out command */
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_get_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * get the RSS key per VSI
+ */
+int iecm_get_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key)
+{
+ return iecm_get_set_rss_key(hw, vsi_id, key, false);
+}
+
+/**
+ * iecm_set_rss_key
+ * @hw: pointer to the hw struct
+ * @vsi_id: vsi fw index
+ * @key: pointer to key info struct
+ *
+ * set the RSS key per VSI
+ */
+int iecm_set_rss_key(struct iecm_hw *hw, u16 vsi_id,
+ struct iecm_get_set_rss_key_data *key)
+{
+ return iecm_get_set_rss_key(hw, vsi_id, key, true);
+}
diff --git a/drivers/net/idpf/base/iecm_controlq.c b/drivers/net/idpf/base/iecm_controlq.c
new file mode 100644
index 0000000000..3a877bbf74
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq.c
@@ -0,0 +1,662 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#include "iecm_controlq.h"
+
+/**
+ * iecm_ctlq_setup_regs - initialize control queue registers
+ * @cq: pointer to the specific control queue
+ * @q_create_info: structs containing info for each queue to be initialized
+ */
+static void
+iecm_ctlq_setup_regs(struct iecm_ctlq_info *cq,
+ struct iecm_ctlq_create_info *q_create_info)
+{
+ /* set head and tail registers in our local struct */
+ cq->reg.head = q_create_info->reg.head;
+ cq->reg.tail = q_create_info->reg.tail;
+ cq->reg.len = q_create_info->reg.len;
+ cq->reg.bah = q_create_info->reg.bah;
+ cq->reg.bal = q_create_info->reg.bal;
+ cq->reg.len_mask = q_create_info->reg.len_mask;
+ cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask;
+ cq->reg.head_mask = q_create_info->reg.head_mask;
+}
+
+/**
+ * iecm_ctlq_init_regs - Initialize control queue registers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ * @is_rxq: true if receive control queue, false otherwise
+ *
+ * Initialize registers. The caller is expected to have already initialized the
+ * descriptor ring memory and buffer memory
+ */
+static void iecm_ctlq_init_regs(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ bool is_rxq)
+{
+ /* Update tail to post pre-allocated buffers for rx queues */
+ if (is_rxq)
+ wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));
+
+ /* For non-mailbox control queues, only the TAIL register needs to be set */
+ if (cq->q_id != -1)
+ return;
+
+ /* Clear Head for both send or receive */
+ wr32(hw, cq->reg.head, 0);
+
+ /* set starting point */
+ wr32(hw, cq->reg.bal, IECM_LO_DWORD(cq->desc_ring.pa));
+ wr32(hw, cq->reg.bah, IECM_HI_DWORD(cq->desc_ring.pa));
+ wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));
+}
+
+/**
+ * iecm_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
+ * @cq: pointer to the specific Control queue
+ *
+ * Record the address of the receive queue DMA buffers in the descriptors.
+ * The buffers must have been previously allocated.
+ */
+static void iecm_ctlq_init_rxq_bufs(struct iecm_ctlq_info *cq)
+{
+ int i = 0;
+
+ for (i = 0; i < cq->ring_size; i++) {
+ struct iecm_ctlq_desc *desc = IECM_CTLQ_DESC(cq, i);
+ struct iecm_dma_mem *bi = cq->bi.rx_buff[i];
+
+ /* No buffer to post to descriptor, continue */
+ if (!bi)
+ continue;
+
+ desc->flags =
+ CPU_TO_LE16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+ desc->opcode = 0;
+ desc->datalen = (__le16)CPU_TO_LE16(bi->size);
+ desc->ret_val = 0;
+ desc->cookie_high = 0;
+ desc->cookie_low = 0;
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(bi->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(bi->pa));
+ desc->params.indirect.param0 = 0;
+ desc->params.indirect.param1 = 0;
+ }
+}
+
+/**
+ * iecm_ctlq_shutdown - shutdown the CQ
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for any control queue
+ */
+static void iecm_ctlq_shutdown(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ iecm_acquire_lock(&cq->cq_lock);
+
+ if (!cq->ring_size)
+ goto shutdown_sq_out;
+
+
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_dealloc_ring_res(hw, cq);
+
+ /* Set ring_size to 0 to indicate uninitialized queue */
+ cq->ring_size = 0;
+
+shutdown_sq_out:
+ iecm_release_lock(&cq->cq_lock);
+ iecm_destroy_lock(&cq->cq_lock);
+}
+
+/**
+ * iecm_ctlq_add - add one control queue
+ * @hw: pointer to hardware struct
+ * @qinfo: info for queue to be created
+ * @cq_out: (output) double pointer to control queue to be created
+ *
+ * Allocate and initialize a control queue and add it to the control queue list.
+ * The cq parameter will be allocated/initialized and passed back to the caller
+ * if no errors occur.
+ *
+ * Note: iecm_ctlq_init must be called prior to any calls to iecm_ctlq_add
+ */
+int iecm_ctlq_add(struct iecm_hw *hw,
+ struct iecm_ctlq_create_info *qinfo,
+ struct iecm_ctlq_info **cq_out)
+{
+ bool is_rxq = false;
+ int status = IECM_SUCCESS;
+
+ if (!qinfo->len || !qinfo->buf_size ||
+ qinfo->len > IECM_CTLQ_MAX_RING_SIZE ||
+ qinfo->buf_size > IECM_CTLQ_MAX_BUF_LEN)
+ return IECM_ERR_CFG;
+
+ *cq_out = (struct iecm_ctlq_info *)
+ iecm_calloc(hw, 1, sizeof(struct iecm_ctlq_info));
+ if (!(*cq_out))
+ return IECM_ERR_NO_MEMORY;
+
+ (*cq_out)->cq_type = qinfo->type;
+ (*cq_out)->q_id = qinfo->id;
+ (*cq_out)->buf_size = qinfo->buf_size;
+ (*cq_out)->ring_size = qinfo->len;
+
+ (*cq_out)->next_to_use = 0;
+ (*cq_out)->next_to_clean = 0;
+ (*cq_out)->next_to_post = (*cq_out)->ring_size - 1;
+
+ switch (qinfo->type) {
+ case IECM_CTLQ_TYPE_MAILBOX_RX:
+ is_rxq = true;
+ fallthrough;
+ case IECM_CTLQ_TYPE_MAILBOX_TX:
+ status = iecm_ctlq_alloc_ring_res(hw, *cq_out);
+ break;
+ default:
+ status = IECM_ERR_PARAM;
+ break;
+ }
+
+ if (status)
+ goto init_free_q;
+
+ if (is_rxq) {
+ iecm_ctlq_init_rxq_bufs(*cq_out);
+ } else {
+ /* Allocate the array of msg pointers for TX queues */
+ (*cq_out)->bi.tx_msg = (struct iecm_ctlq_msg **)
+ iecm_calloc(hw, qinfo->len,
+ sizeof(struct iecm_ctlq_msg *));
+ if (!(*cq_out)->bi.tx_msg) {
+ status = IECM_ERR_NO_MEMORY;
+ goto init_dealloc_q_mem;
+ }
+ }
+
+ iecm_ctlq_setup_regs(*cq_out, qinfo);
+
+ iecm_ctlq_init_regs(hw, *cq_out, is_rxq);
+
+ iecm_init_lock(&(*cq_out)->cq_lock);
+
+ LIST_INSERT_HEAD(&hw->cq_list_head, (*cq_out), cq_list);
+
+ return status;
+
+init_dealloc_q_mem:
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_dealloc_ring_res(hw, *cq_out);
+init_free_q:
+ iecm_free(hw, *cq_out);
+
+ return status;
+}
+
+/**
+ * iecm_ctlq_remove - deallocate and remove specified control queue
+ * @hw: pointer to hardware struct
+ * @cq: pointer to control queue to be removed
+ */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ LIST_REMOVE(cq, cq_list);
+ iecm_ctlq_shutdown(hw, cq);
+ iecm_free(hw, cq);
+}
+
+/**
+ * iecm_ctlq_init - main initialization routine for all control queues
+ * @hw: pointer to hardware struct
+ * @num_q: number of queues to initialize
+ * @q_info: array of structs containing info for each queue to be initialized
+ *
+ * This initializes any number and any type of control queues. This is an all
+ * or nothing routine; if one fails, all previously allocated queues will be
+ * destroyed. This must be called prior to using the individual add/remove
+ * APIs.
+ */
+int iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+ struct iecm_ctlq_create_info *q_info)
+{
+ struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+ int ret_code = IECM_SUCCESS;
+ int i = 0;
+
+ LIST_INIT(&hw->cq_list_head);
+
+ for (i = 0; i < num_q; i++) {
+ struct iecm_ctlq_create_info *qinfo = q_info + i;
+
+ ret_code = iecm_ctlq_add(hw, qinfo, &cq);
+ if (ret_code)
+ goto init_destroy_qs;
+ }
+
+ return ret_code;
+
+init_destroy_qs:
+ LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
+ iecm_ctlq_info, cq_list)
+ iecm_ctlq_remove(hw, cq);
+
+ return ret_code;
+}
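
As a usage illustration only (not part of this patch): the PF driver is expected to describe the default mailbox pair with iecm_ctlq_create_info entries built from the PF_FW_* register offsets and hand them to iecm_ctlq_init(). The ring length and the wrapper function below are assumptions made for this sketch.

#include "iecm_controlq.h"
#include "iecm_lan_pf_regs.h"

static int example_mbx_init(struct iecm_hw *hw)
{
	struct iecm_ctlq_create_info ctlq_info[2] = {
		{
			.type = IECM_CTLQ_TYPE_MAILBOX_TX,
			.id = -1,		/* default mailbox */
			.len = 64,		/* <= IECM_CTLQ_MAX_RING_SIZE */
			.buf_size = IECM_CTLQ_MAX_BUF_LEN,
			.reg = {
				.head = PF_FW_ATQH,
				.tail = PF_FW_ATQT,
				.len = PF_FW_ATQLEN,
				.bah = PF_FW_ATQBAH,
				.bal = PF_FW_ATQBAL,
				.len_mask = PF_FW_ATQLEN_ATQLEN_M,
				.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
				.head_mask = PF_FW_ATQH_ATQH_M,
			},
		},
		{
			.type = IECM_CTLQ_TYPE_MAILBOX_RX,
			.id = -1,
			.len = 64,
			.buf_size = IECM_CTLQ_MAX_BUF_LEN,
			.reg = {
				.head = PF_FW_ARQH,
				.tail = PF_FW_ARQT,
				.len = PF_FW_ARQLEN,
				.bah = PF_FW_ARQBAH,
				.bal = PF_FW_ARQBAL,
				.len_mask = PF_FW_ARQLEN_ARQLEN_M,
				.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
				.head_mask = PF_FW_ARQH_ARQH_M,
			},
		},
	};

	/* All or nothing: on failure every queue created so far is removed */
	return iecm_ctlq_init(hw, 2, ctlq_info);
}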
+
+/**
+ * iecm_ctlq_deinit - destroy all control queues
+ * @hw: pointer to hw struct
+ */
+int iecm_ctlq_deinit(struct iecm_hw *hw)
+{
+ struct iecm_ctlq_info *cq = NULL, *tmp = NULL;
+ int ret_code = IECM_SUCCESS;
+
+ LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
+ iecm_ctlq_info, cq_list)
+ iecm_ctlq_remove(hw, cq);
+
+ return ret_code;
+}
+
+/**
+ * iecm_ctlq_send - send command to Control Queue (CTQ)
+ * @hw: pointer to hw struct
+ * @cq: handle to control queue struct to send on
+ * @num_q_msg: number of messages to send on control queue
+ * @q_msg: pointer to array of queue messages to be sent
+ *
+ * The caller is expected to allocate DMAable buffers and pass them to the
+ * send routine via the q_msg struct / control queue specific data struct.
+ * The control queue will hold a reference to each send message until
+ * the completion for that message has been cleaned.
+ */
+int iecm_ctlq_send(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ u16 num_q_msg, struct iecm_ctlq_msg q_msg[])
+{
+ struct iecm_ctlq_desc *desc;
+ int num_desc_avail = 0;
+ int status = IECM_SUCCESS;
+ int i = 0;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ /* Ensure there are enough descriptors to send all messages */
+ num_desc_avail = IECM_CTLQ_DESC_UNUSED(cq);
+ if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
+ status = IECM_ERR_CTLQ_FULL;
+ goto sq_send_command_out;
+ }
+
+ for (i = 0; i < num_q_msg; i++) {
+ struct iecm_ctlq_msg *msg = &q_msg[i];
+ u64 msg_cookie;
+
+ desc = IECM_CTLQ_DESC(cq, cq->next_to_use);
+
+ desc->opcode = CPU_TO_LE16(msg->opcode);
+ desc->pfid_vfid = CPU_TO_LE16(msg->func_id);
+
+ msg_cookie = *(u64 *)&msg->cookie;
+ desc->cookie_high =
+ CPU_TO_LE32(IECM_HI_DWORD(msg_cookie));
+ desc->cookie_low =
+ CPU_TO_LE32(IECM_LO_DWORD(msg_cookie));
+
+ desc->flags = CPU_TO_LE16((msg->host_id & IECM_HOST_ID_MASK) <<
+ IECM_CTLQ_FLAG_HOST_ID_S);
+ if (msg->data_len) {
+ struct iecm_dma_mem *buff = msg->ctx.indirect.payload;
+
+ desc->datalen |= CPU_TO_LE16(msg->data_len);
+ desc->flags |= CPU_TO_LE16(IECM_CTLQ_FLAG_BUF);
+ desc->flags |= CPU_TO_LE16(IECM_CTLQ_FLAG_RD);
+
+ /* Update the address values in the desc with the pa
+ * value for respective buffer
+ */
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(buff->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(buff->pa));
+
+ iecm_memcpy(&desc->params, msg->ctx.indirect.context,
+ IECM_INDIRECT_CTX_SIZE, IECM_NONDMA_TO_DMA);
+ } else {
+ iecm_memcpy(&desc->params, msg->ctx.direct,
+ IECM_DIRECT_CTX_SIZE, IECM_NONDMA_TO_DMA);
+ }
+
+ /* Store buffer info */
+ cq->bi.tx_msg[cq->next_to_use] = msg;
+
+ (cq->next_to_use)++;
+ if (cq->next_to_use == cq->ring_size)
+ cq->next_to_use = 0;
+ }
+
+ /* Force memory write to complete before letting hardware
+ * know that there are new descriptors to fetch.
+ */
+ iecm_wmb();
+
+ wr32(hw, cq->reg.tail, cq->next_to_use);
+
+sq_send_command_out:
+ iecm_release_lock(&cq->cq_lock);
+
+ return status;
+}
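
A minimal send-side sketch (illustration only, not part of the patch), assuming a 128-byte payload and a placeholder chnl_opcode. The message and its DMA buffer are heap-allocated because the queue keeps references to both until iecm_ctlq_clean_sq() reports them done; error unwinding is omitted for brevity.

static int example_mbx_send(struct iecm_hw *hw, struct iecm_ctlq_info *asq)
{
	struct iecm_ctlq_msg *msg;
	struct iecm_dma_mem *payload;

	/* The queue stores pointers to both until the completion is
	 * cleaned, so neither may live on this function's stack.
	 */
	msg = (struct iecm_ctlq_msg *)iecm_calloc(hw, 1, sizeof(*msg));
	if (!msg)
		return IECM_ERR_NO_MEMORY;
	payload = (struct iecm_dma_mem *)iecm_calloc(hw, 1, sizeof(*payload));
	if (!payload || !iecm_alloc_dma_mem(hw, payload, 128))
		return IECM_ERR_NO_MEMORY;

	/* ... copy the virtchnl message into payload->va here ... */

	msg->opcode = iecm_mbq_opc_send_msg_to_pf;
	msg->data_len = 128;			/* placeholder length */
	msg->cookie.mbx.chnl_opcode = 1;	/* placeholder opcode */
	msg->ctx.indirect.payload = payload;

	return iecm_ctlq_send(hw, asq, 1, msg);
}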
+
+/**
+ * iecm_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Returns an array of message pointers associated with the cleaned
+ * descriptors. The pointers are to the original ctlq_msgs sent on the cleaned
+ * descriptors. The status will be returned for each; any messages that failed
+ * to send will have a non-zero status. The caller is expected to free original
+ * ctlq_msgs and free or reuse the DMA buffers.
+ */
+int iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count,
+ struct iecm_ctlq_msg *msg_status[])
+{
+ struct iecm_ctlq_desc *desc;
+ u16 i = 0, num_to_clean;
+ u16 ntc, desc_err;
+ int ret = IECM_SUCCESS;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ if (*clean_count == 0)
+ return IECM_SUCCESS;
+ if (*clean_count > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *clean_count;
+
+ for (i = 0; i < num_to_clean; i++) {
+ /* Fetch next descriptor and check if marked as done */
+ desc = IECM_CTLQ_DESC(cq, ntc);
+ if (!(LE16_TO_CPU(desc->flags) & IECM_CTLQ_FLAG_DD))
+ break;
+
+ desc_err = LE16_TO_CPU(desc->ret_val);
+ if (desc_err) {
+ /* strip off FW internal code */
+ desc_err &= 0xff;
+ }
+
+ msg_status[i] = cq->bi.tx_msg[ntc];
+ msg_status[i]->status = desc_err;
+
+ cq->bi.tx_msg[ntc] = NULL;
+
+ /* Zero out any stale data */
+ iecm_memset(desc, 0, sizeof(*desc), IECM_DMA_MEM);
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ iecm_release_lock(&cq->cq_lock);
+
+ /* Return number of descriptors actually cleaned */
+ *clean_count = i;
+
+ return ret;
+}
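
A matching clean-up sketch (illustration only): reclaim completed send descriptors and release the message and payload allocated in the send sketch above. EXAMPLE_TX_BATCH is a made-up bound.

#define EXAMPLE_TX_BATCH 16	/* made-up clean batch size */

static void example_mbx_clean(struct iecm_hw *hw, struct iecm_ctlq_info *asq)
{
	struct iecm_ctlq_msg *done[EXAMPLE_TX_BATCH];
	u16 nb_done = EXAMPLE_TX_BATCH;
	int i;

	if (iecm_ctlq_clean_sq(asq, &nb_done, done))
		return;

	for (i = 0; i < nb_done; i++) {
		/* done[i]->status is non-zero if the message failed */
		if (done[i]->data_len) {
			iecm_free_dma_mem(hw, done[i]->ctx.indirect.payload);
			iecm_free(hw, done[i]->ctx.indirect.payload);
		}
		iecm_free(hw, done[i]);
	}
}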
+
+/**
+ * iecm_ctlq_post_rx_buffs - post buffers to descriptor ring
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue handle
+ * @buff_count: (input|output) input is number of buffers caller is trying to
+ * return; output is number of buffers that were not posted
+ * @buffs: array of pointers to dma mem structs to be given to hardware
+ *
+ * Caller uses this function to return DMA buffers to the descriptor ring after
+ * consuming them; buff_count will be the number of buffers.
+ *
+ * Note: this function needs to be called after a receive call even
+ * if there are no DMA buffers to be returned, i.e. buff_count = 0,
+ * buffs = NULL to support direct commands
+ */
+int iecm_ctlq_post_rx_buffs(struct iecm_hw *hw, struct iecm_ctlq_info *cq,
+ u16 *buff_count, struct iecm_dma_mem **buffs)
+{
+ struct iecm_ctlq_desc *desc;
+ u16 ntp = cq->next_to_post;
+ bool buffs_avail = false;
+ u16 tbp = ntp + 1;
+ int status = IECM_SUCCESS;
+ int i = 0;
+
+ if (*buff_count > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ if (*buff_count > 0)
+ buffs_avail = true;
+
+ iecm_acquire_lock(&cq->cq_lock);
+
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ if (tbp == cq->next_to_clean)
+ /* Nothing to do */
+ goto post_buffs_out;
+
+ /* Post buffers for as many as provided or up until the last one used */
+ while (ntp != cq->next_to_clean) {
+ desc = IECM_CTLQ_DESC(cq, ntp);
+
+ if (cq->bi.rx_buff[ntp])
+ goto fill_desc;
+ if (!buffs_avail) {
+ /* If the caller hasn't given us any buffers or
+ * there are none left, search the ring itself
+ * for an available buffer to move to this
+ * entry starting at the next entry in the ring
+ */
+ tbp = ntp + 1;
+
+ /* Wrap ring if necessary */
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ while (tbp != cq->next_to_clean) {
+ if (cq->bi.rx_buff[tbp]) {
+ cq->bi.rx_buff[ntp] =
+ cq->bi.rx_buff[tbp];
+ cq->bi.rx_buff[tbp] = NULL;
+
+ /* Found a buffer, no need to
+ * search anymore
+ */
+ break;
+ }
+
+ /* Wrap ring if necessary */
+ tbp++;
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+ }
+
+ if (tbp == cq->next_to_clean)
+ goto post_buffs_out;
+ } else {
+ /* Give back pointer to DMA buffer */
+ cq->bi.rx_buff[ntp] = buffs[i];
+ i++;
+
+ if (i >= *buff_count)
+ buffs_avail = false;
+ }
+
+fill_desc:
+ desc->flags =
+ CPU_TO_LE16(IECM_CTLQ_FLAG_BUF | IECM_CTLQ_FLAG_RD);
+
+ /* Post buffers to descriptor */
+ desc->datalen = CPU_TO_LE16(cq->bi.rx_buff[ntp]->size);
+ desc->params.indirect.addr_high =
+ CPU_TO_LE32(IECM_HI_DWORD(cq->bi.rx_buff[ntp]->pa));
+ desc->params.indirect.addr_low =
+ CPU_TO_LE32(IECM_LO_DWORD(cq->bi.rx_buff[ntp]->pa));
+
+ ntp++;
+ if (ntp == cq->ring_size)
+ ntp = 0;
+ }
+
+post_buffs_out:
+ /* Only update tail if buffers were actually posted */
+ if (cq->next_to_post != ntp) {
+ if (ntp)
+ /* Update next_to_post to ntp - 1 since current ntp
+ * will not have a buffer
+ */
+ cq->next_to_post = ntp - 1;
+ else
+ /* Wrap to end of ring since current ntp is 0 */
+ cq->next_to_post = cq->ring_size - 1;
+
+ wr32(hw, cq->reg.tail, cq->next_to_post);
+ }
+
+ iecm_release_lock(&cq->cq_lock);
+
+ /* return the number of buffers that were not posted */
+ *buff_count = *buff_count - i;
+
+ return status;
+}
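
Illustration of the note above (not part of the patch): after processing only direct commands the caller still invokes the post routine with a zero count, so descriptors can be refilled from buffers the ring already owns and the tail register advanced.

static void example_repost_after_direct(struct iecm_hw *hw,
					struct iecm_ctlq_info *arq)
{
	u16 nb_buffs = 0;

	(void)iecm_ctlq_post_rx_buffs(hw, arq, &nb_buffs, NULL);
}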
+
+/**
+ * iecm_ctlq_recv - receive control queue message call back
+ * @cq: pointer to control queue handle to receive on
+ * @num_q_msg: (input|output) input number of messages that should be received;
+ * output number of messages actually received
+ * @q_msg: (output) array of received control queue messages on this q;
+ * needs to be pre-allocated by caller for as many messages as requested
+ *
+ * Called by interrupt handler or polling mechanism. Caller is expected
+ * to free buffers
+ */
+int iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg,
+ struct iecm_ctlq_msg *q_msg)
+{
+ u16 num_to_clean, ntc, ret_val, flags;
+ struct iecm_ctlq_desc *desc;
+ int ret_code = IECM_SUCCESS;
+ u16 i = 0;
+
+ if (!cq || !cq->ring_size)
+ return IECM_ERR_CTLQ_EMPTY;
+
+ if (*num_q_msg == 0)
+ return IECM_SUCCESS;
+ else if (*num_q_msg > cq->ring_size)
+ return IECM_ERR_PARAM;
+
+ /* take the lock before we start messing with the ring */
+ iecm_acquire_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *num_q_msg;
+
+ for (i = 0; i < num_to_clean; i++) {
+ u64 msg_cookie;
+
+ /* Fetch next descriptor and check if marked as done */
+ desc = IECM_CTLQ_DESC(cq, ntc);
+ flags = LE16_TO_CPU(desc->flags);
+
+ if (!(flags & IECM_CTLQ_FLAG_DD))
+ break;
+
+ ret_val = LE16_TO_CPU(desc->ret_val);
+
+ q_msg[i].vmvf_type = (flags &
+ (IECM_CTLQ_FLAG_FTYPE_VM |
+ IECM_CTLQ_FLAG_FTYPE_PF)) >>
+ IECM_CTLQ_FLAG_FTYPE_S;
+
+ if (flags & IECM_CTLQ_FLAG_ERR)
+ ret_code = IECM_ERR_CTLQ_ERROR;
+
+ msg_cookie = (u64)LE32_TO_CPU(desc->cookie_high) << 32;
+ msg_cookie |= (u64)LE32_TO_CPU(desc->cookie_low);
+ iecm_memcpy(&q_msg[i].cookie, &msg_cookie, sizeof(u64),
+ IECM_NONDMA_TO_NONDMA);
+
+ q_msg[i].opcode = LE16_TO_CPU(desc->opcode);
+ q_msg[i].data_len = LE16_TO_CPU(desc->datalen);
+ q_msg[i].status = ret_val;
+
+ if (desc->datalen) {
+ iecm_memcpy(q_msg[i].ctx.indirect.context,
+ &desc->params.indirect,
+ IECM_INDIRECT_CTX_SIZE,
+ IECM_DMA_TO_NONDMA);
+
+ /* Assign pointer to dma buffer to ctlq_msg array
+ * to be given to upper layer
+ */
+ q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
+
+ /* Zero out pointer to DMA buffer info;
+ * will be repopulated by post buffers API
+ */
+ cq->bi.rx_buff[ntc] = NULL;
+ } else {
+ iecm_memcpy(q_msg[i].ctx.direct,
+ desc->params.raw,
+ IECM_DIRECT_CTX_SIZE,
+ IECM_DMA_TO_NONDMA);
+ }
+
+ /* Zero out stale data in descriptor */
+ iecm_memset(desc, 0, sizeof(struct iecm_ctlq_desc),
+ IECM_DMA_MEM);
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ iecm_release_lock(&cq->cq_lock);
+
+ *num_q_msg = i;
+ if (*num_q_msg == 0)
+ ret_code = IECM_ERR_CTLQ_NO_WORK;
+
+ return ret_code;
+}
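
A receive-side polling sketch (illustration only), tying iecm_ctlq_recv() to the buffer-repost contract described above; the batch size and message handling are placeholders.

#define EXAMPLE_RX_BATCH 16	/* made-up polling batch size */

static void example_mbx_poll(struct iecm_hw *hw, struct iecm_ctlq_info *arq)
{
	struct iecm_ctlq_msg msgs[EXAMPLE_RX_BATCH];
	struct iecm_dma_mem *bufs[EXAMPLE_RX_BATCH];
	u16 nb_recv = EXAMPLE_RX_BATCH, nb_bufs = 0;
	int ret, i;

	ret = iecm_ctlq_recv(arq, &nb_recv, msgs);
	if (ret && ret != IECM_ERR_CTLQ_NO_WORK && ret != IECM_ERR_CTLQ_ERROR)
		return;	/* nothing was dequeued */

	for (i = 0; i < nb_recv; i++) {
		/* ... dispatch on msgs[i].cookie.mbx.chnl_opcode here ... */
		if (msgs[i].data_len)
			bufs[nb_bufs++] = msgs[i].ctx.indirect.payload;
	}

	/* Must run after every receive, even when nb_bufs is zero */
	(void)iecm_ctlq_post_rx_buffs(hw, arq, &nb_bufs,
				      nb_bufs ? bufs : NULL);
}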
diff --git a/drivers/net/idpf/base/iecm_controlq.h b/drivers/net/idpf/base/iecm_controlq.h
new file mode 100644
index 0000000000..0964146b49
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq.h
@@ -0,0 +1,214 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_CONTROLQ_H_
+#define _IECM_CONTROLQ_H_
+
+#ifdef __KERNEL__
+#include <linux/slab.h>
+#endif
+
+#ifndef __KERNEL__
+#include "iecm_osdep.h"
+#include "iecm_alloc.h"
+/* This is used to explicitly annotate when a switch case falls through to the
+ * next case.
+ */
+#define fallthrough do {} while (0)
+#endif
+#include "iecm_controlq_api.h"
+
+/* Maximum buffer lengths for all control queue types */
+#define IECM_CTLQ_MAX_RING_SIZE 1024
+#define IECM_CTLQ_MAX_BUF_LEN 4096
+
+#define IECM_CTLQ_DESC(R, i) \
+ (&(((struct iecm_ctlq_desc *)((R)->desc_ring.va))[i]))
+
+#define IECM_CTLQ_DESC_UNUSED(R) \
+ (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
+ (R)->next_to_clean - (R)->next_to_use - 1)
+
+#ifndef __KERNEL__
+/* Data type manipulation macros. */
+#define IECM_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
+#define IECM_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF))
+#define IECM_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF))
+#define IECM_LO_WORD(x) ((u16)((x) & 0xFFFF))
+
+#endif
+/* Control Queue default settings */
+#define IECM_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */
+
+struct iecm_ctlq_desc {
+ __le16 flags;
+ __le16 opcode;
+ __le16 datalen; /* 0 for direct commands */
+ union {
+ __le16 ret_val;
+ __le16 pfid_vfid;
+#define IECM_CTLQ_DESC_VF_ID_S 0
+#define IECM_CTLQ_DESC_VF_ID_M (0x7FF << IECM_CTLQ_DESC_VF_ID_S)
+#define IECM_CTLQ_DESC_PF_ID_S 11
+#define IECM_CTLQ_DESC_PF_ID_M (0x1F << IECM_CTLQ_DESC_PF_ID_S)
+ };
+ __le32 cookie_high;
+ __le32 cookie_low;
+ union {
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 param2;
+ __le32 param3;
+ } direct;
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 addr_high;
+ __le32 addr_low;
+ } indirect;
+ u8 raw[16];
+ } params;
+};
+
+/* Flags sub-structure
+ * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| HOST_ID |
+ */
+/* command flags and offsets */
+#define IECM_CTLQ_FLAG_DD_S 0
+#define IECM_CTLQ_FLAG_CMP_S 1
+#define IECM_CTLQ_FLAG_ERR_S 2
+#define IECM_CTLQ_FLAG_FTYPE_S 6
+#define IECM_CTLQ_FLAG_RD_S 10
+#define IECM_CTLQ_FLAG_VFC_S 11
+#define IECM_CTLQ_FLAG_BUF_S 12
+#define IECM_CTLQ_FLAG_HOST_ID_S 13
+
+#define IECM_CTLQ_FLAG_DD BIT(IECM_CTLQ_FLAG_DD_S) /* 0x1 */
+#define IECM_CTLQ_FLAG_CMP BIT(IECM_CTLQ_FLAG_CMP_S) /* 0x2 */
+#define IECM_CTLQ_FLAG_ERR BIT(IECM_CTLQ_FLAG_ERR_S) /* 0x4 */
+#define IECM_CTLQ_FLAG_FTYPE_VM BIT(IECM_CTLQ_FLAG_FTYPE_S) /* 0x40 */
+#define IECM_CTLQ_FLAG_FTYPE_PF BIT(IECM_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */
+#define IECM_CTLQ_FLAG_RD BIT(IECM_CTLQ_FLAG_RD_S) /* 0x400 */
+#define IECM_CTLQ_FLAG_VFC BIT(IECM_CTLQ_FLAG_VFC_S) /* 0x800 */
+#define IECM_CTLQ_FLAG_BUF BIT(IECM_CTLQ_FLAG_BUF_S) /* 0x1000 */
+
+/* Host ID is a special field that has 3b and not a 1b flag */
+#define IECM_CTLQ_FLAG_HOST_ID_M MAKEMASK(0x7000UL, IECM_CTLQ_FLAG_HOST_ID_S)
+
+struct iecm_mbxq_desc {
+ u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */
+ u32 chnl_opcode; /* avoid confusion with desc->opcode */
+ u32 chnl_retval; /* ditto for desc->retval */
+ u32 pf_vf_id; /* used by CP when sending to PF */
+};
+
+enum iecm_mac_type {
+ IECM_MAC_UNKNOWN = 0,
+ IECM_MAC_PF,
+ IECM_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct iecm_mac_info {
+ enum iecm_mac_type type;
+ u8 addr[ETH_ALEN];
+ u8 perm_addr[ETH_ALEN];
+};
+
+#define IECM_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum iecm_bus_type {
+ iecm_bus_type_unknown = 0,
+ iecm_bus_type_pci,
+ iecm_bus_type_pcix,
+ iecm_bus_type_pci_express,
+ iecm_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum iecm_bus_speed {
+ iecm_bus_speed_unknown = 0,
+ iecm_bus_speed_33 = 33,
+ iecm_bus_speed_66 = 66,
+ iecm_bus_speed_100 = 100,
+ iecm_bus_speed_120 = 120,
+ iecm_bus_speed_133 = 133,
+ iecm_bus_speed_2500 = 2500,
+ iecm_bus_speed_5000 = 5000,
+ iecm_bus_speed_8000 = 8000,
+ iecm_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum iecm_bus_width {
+ iecm_bus_width_unknown = 0,
+ iecm_bus_width_pcie_x1 = 1,
+ iecm_bus_width_pcie_x2 = 2,
+ iecm_bus_width_pcie_x4 = 4,
+ iecm_bus_width_pcie_x8 = 8,
+ iecm_bus_width_32 = 32,
+ iecm_bus_width_64 = 64,
+ iecm_bus_width_reserved
+};
+
+/* Bus parameters */
+struct iecm_bus_info {
+ enum iecm_bus_speed speed;
+ enum iecm_bus_width width;
+ enum iecm_bus_type type;
+
+ u16 func;
+ u16 device;
+ u16 lan_id;
+ u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct iecm_hw_func_caps {
+ u32 num_alloc_vfs;
+ u32 vf_base_id;
+};
+
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct iecm_hw {
+ u8 *hw_addr;
+ u64 hw_addr_len;
+ void *back;
+
+ /* control queue - send and receive */
+ struct iecm_ctlq_info *asq;
+ struct iecm_ctlq_info *arq;
+
+ /* subsystem structs */
+ struct iecm_mac_info mac;
+ struct iecm_bus_info bus;
+ struct iecm_hw_func_caps func_caps;
+
+ /* pci info */
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+ bool adapter_stopped;
+
+ LIST_HEAD_TYPE(list_head, iecm_ctlq_info) cq_list_head;
+};
+
+int iecm_ctlq_alloc_ring_res(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq);
+
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq);
+
+/* prototype for functions used for dynamic memory allocation */
+void *iecm_alloc_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem,
+ u64 size);
+void iecm_free_dma_mem(struct iecm_hw *hw, struct iecm_dma_mem *mem);
+#endif /* _IECM_CONTROLQ_H_ */
diff --git a/drivers/net/idpf/base/iecm_controlq_api.h b/drivers/net/idpf/base/iecm_controlq_api.h
new file mode 100644
index 0000000000..27511ffd51
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq_api.h
@@ -0,0 +1,227 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_CONTROLQ_API_H_
+#define _IECM_CONTROLQ_API_H_
+
+#ifdef __KERNEL__
+#include "iecm_mem.h"
+#else /* !__KERNEL__ */
+/* Error Codes */
+/* Linux kernel driver can't directly use these. Instead, they are mapped to
+ * linux compatible error codes which get translated in the build script.
+ */
+#define IECM_SUCCESS 0
+#define IECM_ERR_PARAM -53 /* -EBADR */
+#define IECM_ERR_NOT_IMPL -95 /* -EOPNOTSUPP */
+#define IECM_ERR_NOT_READY -16 /* -EBUSY */
+#define IECM_ERR_BAD_PTR -14 /* -EFAULT */
+#define IECM_ERR_INVAL_SIZE -90 /* -EMSGSIZE */
+#define IECM_ERR_DEVICE_NOT_SUPPORTED -19 /* -ENODEV */
+#define IECM_ERR_FW_API_VER -13 /* -EACCES */
+#define IECM_ERR_NO_MEMORY -12 /* -ENOMEM */
+#define IECM_ERR_CFG -22 /* -EINVAL */
+#define IECM_ERR_OUT_OF_RANGE -34 /* -ERANGE */
+#define IECM_ERR_ALREADY_EXISTS -17 /* -EEXIST */
+#define IECM_ERR_DOES_NOT_EXIST -6 /* -ENXIO */
+#define IECM_ERR_IN_USE -114 /* -EALREADY */
+#define IECM_ERR_MAX_LIMIT -109 /* -ETOOMANYREFS */
+#define IECM_ERR_RESET_ONGOING -104 /* -ECONNRESET */
+
+/* CRQ/CSQ specific error codes */
+#define IECM_ERR_CTLQ_ERROR -74 /* -EBADMSG */
+#define IECM_ERR_CTLQ_TIMEOUT -110 /* -ETIMEDOUT */
+#define IECM_ERR_CTLQ_FULL -28 /* -ENOSPC */
+#define IECM_ERR_CTLQ_NO_WORK -42 /* -ENOMSG */
+#define IECM_ERR_CTLQ_EMPTY -105 /* -ENOBUFS */
+#endif /* !__KERNEL__ */
+
+struct iecm_hw;
+
+/* Used for queue init, response and events */
+enum iecm_ctlq_type {
+ IECM_CTLQ_TYPE_MAILBOX_TX = 0,
+ IECM_CTLQ_TYPE_MAILBOX_RX = 1,
+ IECM_CTLQ_TYPE_CONFIG_TX = 2,
+ IECM_CTLQ_TYPE_CONFIG_RX = 3,
+ IECM_CTLQ_TYPE_EVENT_RX = 4,
+ IECM_CTLQ_TYPE_RDMA_TX = 5,
+ IECM_CTLQ_TYPE_RDMA_RX = 6,
+ IECM_CTLQ_TYPE_RDMA_COMPL = 7
+};
+
+/*
+ * Generic Control Queue Structures
+ */
+
+struct iecm_ctlq_reg {
+ /* used for queue tracking */
+ u32 head;
+ u32 tail;
+ /* Below applies only to default mb (if present) */
+ u32 len;
+ u32 bah;
+ u32 bal;
+ u32 len_mask;
+ u32 len_ena_mask;
+ u32 head_mask;
+};
+
+/* Generic queue msg structure */
+struct iecm_ctlq_msg {
+ u8 vmvf_type; /* represents the source of the message on recv */
+#define IECM_VMVF_TYPE_VF 0
+#define IECM_VMVF_TYPE_VM 1
+#define IECM_VMVF_TYPE_PF 2
+ u8 host_id;
+ /* 3b field used only when sending a message to peer - to be used in
+ * combination with target func_id to route the message
+ */
+#define IECM_HOST_ID_MASK 0x7
+
+ u16 opcode;
+ u16 data_len; /* data_len = 0 when no payload is attached */
+ union {
+ u16 func_id; /* when sending a message */
+ u16 status; /* when receiving a message */
+ };
+ union {
+ struct {
+ u32 chnl_retval;
+ u32 chnl_opcode;
+ } mbx;
+ } cookie;
+ union {
+#define IECM_DIRECT_CTX_SIZE 16
+#define IECM_INDIRECT_CTX_SIZE 8
+ /* 16 bytes of context can be provided or 8 bytes of context
+ * plus the address of a DMA buffer
+ */
+ u8 direct[IECM_DIRECT_CTX_SIZE];
+ struct {
+ u8 context[IECM_INDIRECT_CTX_SIZE];
+ struct iecm_dma_mem *payload;
+ } indirect;
+ } ctx;
+};
+
+/* Generic queue info structures */
+/* MB, CONFIG and EVENT q do not have extended info */
+struct iecm_ctlq_create_info {
+ enum iecm_ctlq_type type;
+ int id; /* absolute queue offset passed as input
+ * -1 for default mailbox if present
+ */
+ u16 len; /* Queue length passed as input */
+ u16 buf_size; /* buffer size passed as input */
+ u64 base_address; /* output, HPA of the Queue start */
+ struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+
+ int ext_info_size;
+ void *ext_info; /* Specific to q type */
+};
+
+/* Control Queue information */
+struct iecm_ctlq_info {
+ LIST_ENTRY_TYPE(iecm_ctlq_info) cq_list;
+
+ enum iecm_ctlq_type cq_type;
+ int q_id;
+ iecm_lock cq_lock; /* queue lock
+ * iecm_lock is defined in OSdep.h
+ */
+ /* used for interrupt processing */
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 next_to_post; /* starting descriptor to post buffers
+ * to after recv
+ */
+
+ struct iecm_dma_mem desc_ring; /* descriptor ring memory
+ * iecm_dma_mem is defined in OSdep.h
+ */
+ union {
+ struct iecm_dma_mem **rx_buff;
+ struct iecm_ctlq_msg **tx_msg;
+ } bi;
+
+ u16 buf_size; /* queue buffer size */
+ u16 ring_size; /* Number of descriptors */
+ struct iecm_ctlq_reg reg; /* registers accessed by ctlqs */
+};
+
+/* PF/VF mailbox commands */
+enum iecm_mbx_opc {
+ /* iecm_mbq_opc_send_msg_to_pf:
+ * usage: used by PF or VF to send a message to its CPF
+ * target: RX queue and function ID of parent PF taken from HW
+ */
+ iecm_mbq_opc_send_msg_to_pf = 0x0801,
+
+ /* iecm_mbq_opc_send_msg_to_vf:
+ * usage: used by PF to send message to a VF
+ * target: VF control queue ID must be specified in descriptor
+ */
+ iecm_mbq_opc_send_msg_to_vf = 0x0802,
+
+ /* iecm_mbq_opc_send_msg_to_peer_pf:
+ * usage: used by any function to send message to any peer PF
+ * target: RX queue and host of parent PF taken from HW
+ */
+ iecm_mbq_opc_send_msg_to_peer_pf = 0x0803,
+
+ /* iecm_mbq_opc_send_msg_to_peer_drv:
+ * usage: used by any function to send message to any peer driver
+ * target: RX queue and target host must be specified in descriptor
+ */
+ iecm_mbq_opc_send_msg_to_peer_drv = 0x0804,
+};
+
+/*
+ * API supported for control queue management
+ */
+
+/* Will initialize all required queues, including the default mailbox.
+ * "q_info" is an array of create_info structs, one per control queue to be
+ * created.
+ */
+int iecm_ctlq_init(struct iecm_hw *hw, u8 num_q,
+ struct iecm_ctlq_create_info *q_info);
+
+/* Allocate and initialize a single control queue, which will be added to the
+ * control queue list; returns a handle to the created control queue
+ */
+int iecm_ctlq_add(struct iecm_hw *hw,
+ struct iecm_ctlq_create_info *qinfo,
+ struct iecm_ctlq_info **cq);
+
+/* Deinitialize and deallocate a single control queue */
+void iecm_ctlq_remove(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq);
+
+/* Sends messages to HW; the queue holds a reference to each message until
+ * its completion has been cleaned
+ */
+int iecm_ctlq_send(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq,
+ u16 num_q_msg,
+ struct iecm_ctlq_msg q_msg[]);
+
+/* Receives messages; called by the interrupt handler or by polling
+ * initiated by the app/process. The caller is expected to free the buffers.
+ */
+int iecm_ctlq_recv(struct iecm_ctlq_info *cq, u16 *num_q_msg,
+ struct iecm_ctlq_msg *q_msg);
+
+/* Reclaims send descriptors on HW write back */
+int iecm_ctlq_clean_sq(struct iecm_ctlq_info *cq, u16 *clean_count,
+ struct iecm_ctlq_msg *msg_status[]);
+
+/* Indicate RX buffers are done being processed */
+int iecm_ctlq_post_rx_buffs(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq,
+ u16 *buff_count,
+ struct iecm_dma_mem **buffs);
+
+/* Will destroy all queues, including the default mailbox */
+int iecm_ctlq_deinit(struct iecm_hw *hw);
+
+#endif /* _IECM_CONTROLQ_API_H_ */
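
As a usage illustration of the API above (not part of the patch): adding one extra control queue after iecm_ctlq_init() has run and removing it again. The qinfo contents are assumed to be filled in by the caller.

static int example_add_remove(struct iecm_hw *hw,
			      struct iecm_ctlq_create_info *qinfo)
{
	struct iecm_ctlq_info *cq;
	int err;

	err = iecm_ctlq_add(hw, qinfo, &cq);
	if (err)
		return err;

	/* ... exchange messages on cq with iecm_ctlq_send()/recv() ... */

	iecm_ctlq_remove(hw, cq);	/* iecm_ctlq_deinit() would drop all */
	return IECM_SUCCESS;
}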
diff --git a/drivers/net/idpf/base/iecm_controlq_setup.c b/drivers/net/idpf/base/iecm_controlq_setup.c
new file mode 100644
index 0000000000..eb6cf7651d
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_controlq_setup.c
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+
+#include "iecm_controlq.h"
+
+
+/**
+ * iecm_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ */
+static int
+iecm_ctlq_alloc_desc_ring(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ size_t size = cq->ring_size * sizeof(struct iecm_ctlq_desc);
+
+ cq->desc_ring.va = iecm_alloc_dma_mem(hw, &cq->desc_ring, size);
+ if (!cq->desc_ring.va)
+ return IECM_ERR_NO_MEMORY;
+
+ return IECM_SUCCESS;
+}
+
+/**
+ * iecm_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Allocate the buffer head for all control queues, and if it's a receive
+ * queue, allocate DMA buffers
+ */
+static int iecm_ctlq_alloc_bufs(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ int i = 0;
+
+ /* Do not allocate DMA buffers for transmit queues */
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ return IECM_SUCCESS;
+
+ /* We'll be allocating the buffer info memory first, then we can
+ * allocate the mapped buffers for the event processing
+ */
+ cq->bi.rx_buff = (struct iecm_dma_mem **)
+ iecm_calloc(hw, cq->ring_size,
+ sizeof(struct iecm_dma_mem *));
+ if (!cq->bi.rx_buff)
+ return IECM_ERR_NO_MEMORY;
+
+ /* allocate the mapped buffers (except for the last one) */
+ for (i = 0; i < cq->ring_size - 1; i++) {
+ struct iecm_dma_mem *bi;
+ int num = 1; /* number of iecm_dma_mem to be allocated */
+
+ cq->bi.rx_buff[i] = (struct iecm_dma_mem *)iecm_calloc(hw, num,
+ sizeof(struct iecm_dma_mem));
+ if (!cq->bi.rx_buff[i])
+ goto unwind_alloc_cq_bufs;
+
+ bi = cq->bi.rx_buff[i];
+
+ bi->va = iecm_alloc_dma_mem(hw, bi, cq->buf_size);
+ if (!bi->va) {
+ /* unwind will not free the failed entry */
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ goto unwind_alloc_cq_bufs;
+ }
+ }
+
+ return IECM_SUCCESS;
+
+unwind_alloc_cq_bufs:
+ /* don't try to free the one that failed... */
+ i--;
+ for (; i >= 0; i--) {
+ iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ }
+ iecm_free(hw, cq->bi.rx_buff);
+
+ return IECM_ERR_NO_MEMORY;
+}
+
+/**
+ * iecm_ctlq_free_desc_ring - Free Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ */
+static void iecm_ctlq_free_desc_ring(struct iecm_hw *hw,
+ struct iecm_ctlq_info *cq)
+{
+ iecm_free_dma_mem(hw, &cq->desc_ring);
+}
+
+/**
+ * iecm_ctlq_free_bufs - Free CQ buffer info elements
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
+ * queues. The upper layers are expected to manage freeing of TX DMA buffers
+ */
+static void iecm_ctlq_free_bufs(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ void *bi;
+
+ if (cq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX) {
+ int i;
+
+ /* free DMA buffers for rx queues */
+ for (i = 0; i < cq->ring_size; i++) {
+ if (cq->bi.rx_buff[i]) {
+ iecm_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ iecm_free(hw, cq->bi.rx_buff[i]);
+ }
+ }
+
+ bi = (void *)cq->bi.rx_buff;
+ } else {
+ bi = (void *)cq->bi.tx_msg;
+ }
+
+ /* free the buffer header */
+ iecm_free(hw, bi);
+}
+
+/**
+ * iecm_ctlq_dealloc_ring_res - Free memory allocated for control queue
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the memory used by the ring, buffers and other related structures
+ */
+void iecm_ctlq_dealloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ /* free ring buffers and the ring itself */
+ iecm_ctlq_free_bufs(hw, cq);
+ iecm_ctlq_free_desc_ring(hw, cq);
+}
+
+/**
+ * iecm_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue struct
+ *
+ * Do *NOT* hold the lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+int iecm_ctlq_alloc_ring_res(struct iecm_hw *hw, struct iecm_ctlq_info *cq)
+{
+ int ret_code;
+
+ /* verify input for valid configuration */
+ if (!cq->ring_size || !cq->buf_size)
+ return IECM_ERR_CFG;
+
+ /* allocate the ring memory */
+ ret_code = iecm_ctlq_alloc_desc_ring(hw, cq);
+ if (ret_code)
+ return ret_code;
+
+ /* allocate buffers in the rings */
+ ret_code = iecm_ctlq_alloc_bufs(hw, cq);
+ if (ret_code)
+ goto iecm_init_cq_free_ring;
+
+ /* success! */
+ return IECM_SUCCESS;
+
+iecm_init_cq_free_ring:
+ iecm_free_dma_mem(hw, &cq->desc_ring);
+ return ret_code;
+}
diff --git a/drivers/net/idpf/base/iecm_devids.h b/drivers/net/idpf/base/iecm_devids.h
new file mode 100644
index 0000000000..839214cb40
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_devids.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_DEVIDS_H_
+#define _IECM_DEVIDS_H_
+
+/* Vendor ID */
+#define IECM_INTEL_VENDOR_ID 0x8086
+
+/* Device IDs */
+#define IECM_DEV_ID_PF 0x1452
+
+
+
+
+#endif /* _IECM_DEVIDS_H_ */
diff --git a/drivers/net/idpf/base/iecm_lan_pf_regs.h b/drivers/net/idpf/base/iecm_lan_pf_regs.h
new file mode 100644
index 0000000000..c6c460dab0
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_pf_regs.h
@@ -0,0 +1,134 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_PF_REGS_H_
+#define _IECM_LAN_PF_REGS_H_
+
+
+/* Receive queues */
+#define PF_QRX_BASE 0x00000000
+#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000)))
+#define PF_QRX_BUFFQ_BASE 0x03000000
+#define PF_QRX_BUFFQ_TAIL(_QRX) (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
+
+/* Transmit queues */
+#define PF_QTX_BASE 0x05000000
+#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000))
+
+
+/* Control(PF Mailbox) Queue */
+#define PF_FW_BASE 0x08400000
+
+#define PF_FW_ARQBAL (PF_FW_BASE)
+#define PF_FW_ARQBAH (PF_FW_BASE + 0x4)
+#define PF_FW_ARQLEN (PF_FW_BASE + 0x8)
+#define PF_FW_ARQLEN_ARQLEN_S 0
+#define PF_FW_ARQLEN_ARQLEN_M MAKEMASK(0x1FFF, PF_FW_ARQLEN_ARQLEN_S)
+#define PF_FW_ARQLEN_ARQVFE_S 28
+#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S)
+#define PF_FW_ARQLEN_ARQOVFL_S 29
+#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S)
+#define PF_FW_ARQLEN_ARQCRIT_S 30
+#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S)
+#define PF_FW_ARQLEN_ARQENABLE_S 31
+#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S)
+#define PF_FW_ARQH (PF_FW_BASE + 0xC)
+#define PF_FW_ARQH_ARQH_S 0
+#define PF_FW_ARQH_ARQH_M MAKEMASK(0x1FFF, PF_FW_ARQH_ARQH_S)
+#define PF_FW_ARQT (PF_FW_BASE + 0x10)
+
+#define PF_FW_ATQBAL (PF_FW_BASE + 0x14)
+#define PF_FW_ATQBAH (PF_FW_BASE + 0x18)
+#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C)
+#define PF_FW_ATQLEN_ATQLEN_S 0
+#define PF_FW_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, PF_FW_ATQLEN_ATQLEN_S)
+#define PF_FW_ATQLEN_ATQVFE_S 28
+#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S)
+#define PF_FW_ATQLEN_ATQOVFL_S 29
+#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S)
+#define PF_FW_ATQLEN_ATQCRIT_S 30
+#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S)
+#define PF_FW_ATQLEN_ATQENABLE_S 31
+#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S)
+#define PF_FW_ATQH (PF_FW_BASE + 0x20)
+#define PF_FW_ATQH_ATQH_S 0
+#define PF_FW_ATQH_ATQH_M MAKEMASK(0x3FF, PF_FW_ATQH_ATQH_S)
+#define PF_FW_ATQT (PF_FW_BASE + 0x24)
+
+/* Interrupts */
+#define PF_GLINT_BASE 0x08900000
+#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000))
+#define PF_GLINT_DYN_CTL_INTENA_S 0
+#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S)
+#define PF_GLINT_DYN_CTL_CLEARPBA_S 1
+#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
+#define PF_GLINT_DYN_CTL_ITR_INDX_S 3
+#define PF_GLINT_DYN_CTL_ITR_INDX_M MAKEMASK(0x3, PF_GLINT_DYN_CTL_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_INTERVAL_S 5
+#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_S 30
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_M BIT(PF_GLINT_DYN_CTL_WB_ON_ITR_S)
+#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31
+#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
+#define PF_GLINT_ITR_V2(_i, _reg_start) (((_i) * 4) + (_reg_start))
+#define PF_GLINT_ITR(_i, _INT) (PF_GLINT_BASE + (((_i) + 1) * 4) + ((_INT) * 0x1000))
+#define PF_GLINT_ITR_MAX_INDEX 2
+#define PF_GLINT_ITR_INTERVAL_S 0
+#define PF_GLINT_ITR_INTERVAL_M MAKEMASK(0xFFF, PF_GLINT_ITR_INTERVAL_S)
+
+/* Timesync registers */
+#define PF_TIMESYNC_BASE 0x08404000
+#define PF_GLTSYN_CMD_SYNC (PF_TIMESYNC_BASE)
+#define PF_GLTSYN_CMD_SYNC_EXEC_CMD_S 0
+#define PF_GLTSYN_CMD_SYNC_EXEC_CMD_M MAKEMASK(0x3, PF_GLTSYN_CMD_SYNC_EXEC_CMD_S)
+#define PF_GLTSYN_CMD_SYNC_SHTIME_EN_S 2
+#define PF_GLTSYN_CMD_SYNC_SHTIME_EN_M BIT(PF_GLTSYN_CMD_SYNC_SHTIME_EN_S)
+#define PF_GLTSYN_SHTIME_0 (PF_TIMESYNC_BASE + 0x4)
+#define PF_GLTSYN_SHTIME_L (PF_TIMESYNC_BASE + 0x8)
+#define PF_GLTSYN_SHTIME_H (PF_TIMESYNC_BASE + 0xC)
+#define PF_GLTSYN_ART_L (PF_TIMESYNC_BASE + 0x10)
+#define PF_GLTSYN_ART_H (PF_TIMESYNC_BASE + 0x14)
+
+/* Generic registers */
+#define PF_INT_DIR_OICR_ENA 0x08406000
+#define PF_INT_DIR_OICR_ENA_S 0
+#define PF_INT_DIR_OICR_ENA_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_ENA_S)
+#define PF_INT_DIR_OICR 0x08406004
+#define PF_INT_DIR_OICR_TSYN_EVNT 0
+#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1)
+#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2)
+#define PF_INT_DIR_OICR_CAUSE 0x08406008
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_M MAKEMASK(0xFFFFFFFF, PF_INT_DIR_OICR_CAUSE_CAUSE_S)
+#define PF_INT_PBA_CLEAR 0x0840600C
+
+#define PF_FUNC_RID 0x08406010
+#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M MAKEMASK(0x7, PF_FUNC_RID_FUNCTION_NUMBER_S)
+#define PF_FUNC_RID_DEVICE_NUMBER_S 3
+#define PF_FUNC_RID_DEVICE_NUMBER_M MAKEMASK(0x1F, PF_FUNC_RID_DEVICE_NUMBER_S)
+#define PF_FUNC_RID_BUS_NUMBER_S 8
+#define PF_FUNC_RID_BUS_NUMBER_M MAKEMASK(0xFF, PF_FUNC_RID_BUS_NUMBER_S)
+
+/* Reset registers */
+#define PFGEN_RTRIG 0x08407000
+#define PFGEN_RTRIG_CORER_S 0
+#define PFGEN_RTRIG_CORER_M BIT(0)
+#define PFGEN_RTRIG_LINKR_S 1
+#define PFGEN_RTRIG_LINKR_M BIT(1)
+#define PFGEN_RTRIG_IMCR_S 2
+#define PFGEN_RTRIG_IMCR_M BIT(2)
+#define PFGEN_RSTAT 0x08407008 /* PFR Status */
+#define PFGEN_RSTAT_PFR_STATE_S 0
+#define PFGEN_RSTAT_PFR_STATE_M MAKEMASK(0x3, PFGEN_RSTAT_PFR_STATE_S)
+#define PFGEN_CTRL 0x0840700C
+#define PFGEN_CTRL_PFSWR BIT(0)
+
+#endif
diff --git a/drivers/net/idpf/base/iecm_lan_txrx.h b/drivers/net/idpf/base/iecm_lan_txrx.h
new file mode 100644
index 0000000000..3e5320975d
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_txrx.h
@@ -0,0 +1,428 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_TXRX_H_
+#define _IECM_LAN_TXRX_H_
+#ifndef __KERNEL__
+#include "iecm_osdep.h"
+#endif
+
+enum iecm_rss_hash {
+ /* Values 0 - 28 are reserved for future use */
+ IECM_HASH_INVALID = 0,
+ IECM_HASH_NONF_UNICAST_IPV4_UDP = 29,
+ IECM_HASH_NONF_MULTICAST_IPV4_UDP,
+ IECM_HASH_NONF_IPV4_UDP,
+ IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
+ IECM_HASH_NONF_IPV4_TCP,
+ IECM_HASH_NONF_IPV4_SCTP,
+ IECM_HASH_NONF_IPV4_OTHER,
+ IECM_HASH_FRAG_IPV4,
+ /* Values 37-38 are reserved */
+ IECM_HASH_NONF_UNICAST_IPV6_UDP = 39,
+ IECM_HASH_NONF_MULTICAST_IPV6_UDP,
+ IECM_HASH_NONF_IPV6_UDP,
+ IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
+ IECM_HASH_NONF_IPV6_TCP,
+ IECM_HASH_NONF_IPV6_SCTP,
+ IECM_HASH_NONF_IPV6_OTHER,
+ IECM_HASH_FRAG_IPV6,
+ IECM_HASH_NONF_RSVD47,
+ IECM_HASH_NONF_FCOE_OX,
+ IECM_HASH_NONF_FCOE_RX,
+ IECM_HASH_NONF_FCOE_OTHER,
+ /* Values 51-62 are reserved */
+ IECM_HASH_L2_PAYLOAD = 63,
+ IECM_HASH_MAX
+};
+
+/* Supported RSS offloads */
+#define IECM_DEFAULT_RSS_HASH ( \
+ BIT_ULL(IECM_HASH_NONF_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_SCTP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_TCP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_OTHER) | \
+ BIT_ULL(IECM_HASH_FRAG_IPV4) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_TCP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_SCTP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_OTHER) | \
+ BIT_ULL(IECM_HASH_FRAG_IPV6) | \
+ BIT_ULL(IECM_HASH_L2_PAYLOAD))
+
+ /* TODO: Wrap below comment under internal flag
+ * Below 6 packet types are not supported by FVL or older products
+ * They are supported by FPK and future products
+ */
+#define IECM_DEFAULT_RSS_HASH_EXPANDED (IECM_DEFAULT_RSS_HASH | \
+ BIT_ULL(IECM_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IECM_HASH_NONF_UNICAST_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV4_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IECM_HASH_NONF_UNICAST_IPV6_UDP) | \
+ BIT_ULL(IECM_HASH_NONF_MULTICAST_IPV6_UDP))
+
+/* For iecm_splitq_base_tx_compl_desc */
+#define IECM_TXD_COMPLQ_GEN_S 15
+#define IECM_TXD_COMPLQ_GEN_M BIT_ULL(IECM_TXD_COMPLQ_GEN_S)
+#define IECM_TXD_COMPLQ_COMPL_TYPE_S 11
+#define IECM_TXD_COMPLQ_COMPL_TYPE_M \
+ MAKEMASK(0x7UL, IECM_TXD_COMPLQ_COMPL_TYPE_S)
+#define IECM_TXD_COMPLQ_QID_S 0
+#define IECM_TXD_COMPLQ_QID_M MAKEMASK(0x3FFUL, IECM_TXD_COMPLQ_QID_S)
+
+/* For base mode TX descriptors */
+
+#define IECM_TXD_CTX_QW0_TUNN_L4T_CS_S 23
+#define IECM_TXD_CTX_QW0_TUNN_L4T_CS_M BIT_ULL(IECM_TXD_CTX_QW0_TUNN_L4T_CS_S)
+#define IECM_TXD_CTX_QW0_TUNN_DECTTL_S 19
+#define IECM_TXD_CTX_QW0_TUNN_DECTTL_M \
+ (0xFULL << IECM_TXD_CTX_QW0_TUNN_DECTTL_S)
+#define IECM_TXD_CTX_QW0_TUNN_NATLEN_S 12
+#define IECM_TXD_CTX_QW0_TUNN_NATLEN_M \
+ (0X7FULL << IECM_TXD_CTX_QW0_TUNN_NATLEN_S)
+#define IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_S 11
+#define IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_M \
+ BIT_ULL(IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_S)
+#define IECM_TXD_CTX_EIP_NOINC_IPID_CONST \
+ IECM_TXD_CTX_QW0_TUNN_EIP_NOINC_M
+#define IECM_TXD_CTX_QW0_TUNN_NATT_S 9
+#define IECM_TXD_CTX_QW0_TUNN_NATT_M (0x3ULL << IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_UDP_TUNNELING BIT_ULL(IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_GRE_TUNNELING (0x2ULL << IECM_TXD_CTX_QW0_TUNN_NATT_S)
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_S 2
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_M \
+ (0x3FULL << IECM_TXD_CTX_QW0_TUNN_EXT_IPLEN_S)
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IP_S 0
+#define IECM_TXD_CTX_QW0_TUNN_EXT_IP_M \
+ (0x3ULL << IECM_TXD_CTX_QW0_TUNN_EXT_IP_S)
+
+#define IECM_TXD_CTX_QW1_MSS_S 50
+#define IECM_TXD_CTX_QW1_MSS_M \
+ MAKEMASK(0x3FFFULL, IECM_TXD_CTX_QW1_MSS_S)
+#define IECM_TXD_CTX_QW1_TSO_LEN_S 30
+#define IECM_TXD_CTX_QW1_TSO_LEN_M \
+ MAKEMASK(0x3FFFFULL, IECM_TXD_CTX_QW1_TSO_LEN_S)
+#define IECM_TXD_CTX_QW1_CMD_S 4
+#define IECM_TXD_CTX_QW1_CMD_M \
+ MAKEMASK(0xFFFUL, IECM_TXD_CTX_QW1_CMD_S)
+#define IECM_TXD_CTX_QW1_DTYPE_S 0
+#define IECM_TXD_CTX_QW1_DTYPE_M \
+ MAKEMASK(0xFUL, IECM_TXD_CTX_QW1_DTYPE_S)
+#define IECM_TXD_QW1_L2TAG1_S 48
+#define IECM_TXD_QW1_L2TAG1_M \
+ MAKEMASK(0xFFFFULL, IECM_TXD_QW1_L2TAG1_S)
+#define IECM_TXD_QW1_TX_BUF_SZ_S 34
+#define IECM_TXD_QW1_TX_BUF_SZ_M \
+ MAKEMASK(0x3FFFULL, IECM_TXD_QW1_TX_BUF_SZ_S)
+#define IECM_TXD_QW1_OFFSET_S 16
+#define IECM_TXD_QW1_OFFSET_M \
+ MAKEMASK(0x3FFFFULL, IECM_TXD_QW1_OFFSET_S)
+#define IECM_TXD_QW1_CMD_S 4
+#define IECM_TXD_QW1_CMD_M MAKEMASK(0xFFFUL, IECM_TXD_QW1_CMD_S)
+#define IECM_TXD_QW1_DTYPE_S 0
+#define IECM_TXD_QW1_DTYPE_M MAKEMASK(0xFUL, IECM_TXD_QW1_DTYPE_S)
+
+/* TX Completion Descriptor Completion Types */
+#define IECM_TXD_COMPLT_ITR_FLUSH 0
+#define IECM_TXD_COMPLT_RULE_MISS 1
+#define IECM_TXD_COMPLT_RS 2
+#define IECM_TXD_COMPLT_REINJECTED 3
+#define IECM_TXD_COMPLT_RE 4
+#define IECM_TXD_COMPLT_SW_MARKER 5
+
+enum iecm_tx_desc_dtype_value {
+ IECM_TX_DESC_DTYPE_DATA = 0,
+ IECM_TX_DESC_DTYPE_CTX = 1,
+ IECM_TX_DESC_DTYPE_REINJECT_CTX = 2,
+ IECM_TX_DESC_DTYPE_FLEX_DATA = 3,
+ IECM_TX_DESC_DTYPE_FLEX_CTX = 4,
+ IECM_TX_DESC_DTYPE_FLEX_TSO_CTX = 5,
+ IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 = 6,
+ IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7,
+ IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX = 8,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_TSO_CTX = 9,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_SA_CTX = 10,
+ IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX = 11,
+ IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_TSO_CTX = 13,
+ IECM_TX_DESC_DTYPE_FLEX_HOSTSPLIT_CTX = 14,
+ /* DESC_DONE - HW has completed write-back of descriptor */
+ IECM_TX_DESC_DTYPE_DESC_DONE = 15,
+};
+
+enum iecm_tx_ctx_desc_cmd_bits {
+ IECM_TX_CTX_DESC_TSO = 0x01,
+ IECM_TX_CTX_DESC_TSYN = 0x02,
+ IECM_TX_CTX_DESC_IL2TAG2 = 0x04,
+ IECM_TX_CTX_DESC_RSVD = 0x08,
+ IECM_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
+ IECM_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
+ IECM_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
+ IECM_TX_CTX_DESC_SWTCH_VSI = 0x30,
+ IECM_TX_CTX_DESC_FILT_AU_EN = 0x40,
+ IECM_TX_CTX_DESC_FILT_AU_EVICT = 0x80,
+ IECM_TX_CTX_DESC_RSVD1 = 0xF00
+};
+
+enum iecm_tx_desc_len_fields {
+ /* Note: These are predefined bit offsets */
+ IECM_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */
+ IECM_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */
+ IECM_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */
+};
+
+#define IECM_TXD_QW1_MACLEN_M MAKEMASK(0x7FUL, IECM_TX_DESC_LEN_MACLEN_S)
+#define IECM_TXD_QW1_IPLEN_M MAKEMASK(0x7FUL, IECM_TX_DESC_LEN_IPLEN_S)
+#define IECM_TXD_QW1_L4LEN_M MAKEMASK(0xFUL, IECM_TX_DESC_LEN_L4_LEN_S)
+#define IECM_TXD_QW1_FCLEN_M MAKEMASK(0xFUL, IECM_TX_DESC_LEN_L4_LEN_S)
+
+enum iecm_tx_base_desc_cmd_bits {
+ IECM_TX_DESC_CMD_EOP = 0x0001,
+ IECM_TX_DESC_CMD_RS = 0x0002,
+ /* only on VFs else RSVD */
+ IECM_TX_DESC_CMD_ICRC = 0x0004,
+ IECM_TX_DESC_CMD_IL2TAG1 = 0x0008,
+ IECM_TX_DESC_CMD_RSVD1 = 0x0010,
+ IECM_TX_DESC_CMD_IIPT_NONIP = 0x0000, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV6 = 0x0020, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV4 = 0x0040, /* 2 BITS */
+ IECM_TX_DESC_CMD_IIPT_IPV4_CSUM = 0x0060, /* 2 BITS */
+ IECM_TX_DESC_CMD_RSVD2 = 0x0080,
+ IECM_TX_DESC_CMD_L4T_EOFT_UNK = 0x0000, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_TCP = 0x0100, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_SCTP = 0x0200, /* 2 BITS */
+ IECM_TX_DESC_CMD_L4T_EOFT_UDP = 0x0300, /* 2 BITS */
+ IECM_TX_DESC_CMD_RSVD3 = 0x0400,
+ IECM_TX_DESC_CMD_RSVD4 = 0x0800,
+};
+
+/* Transmit descriptors */
+/* splitq tx buf, singleq tx buf and singleq compl desc */
+struct iecm_base_tx_desc {
+ __le64 buf_addr; /* Address of descriptor's data buf */
+ __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
+}; /* read used with buffer queues */
+
+struct iecm_splitq_tx_compl_desc {
+ /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
+ __le16 qid_comptype_gen;
+ union {
+ __le16 q_head; /* Queue head */
+ __le16 compl_tag; /* Completion tag */
+ } q_head_compl_tag;
+ u32 rsvd;
+
+}; /* writeback used with completion queues */
+
+/* Context descriptors */
+struct iecm_base_tx_ctx_desc {
+ struct {
+ __le32 tunneling_params;
+ __le16 l2tag2;
+ __le16 rsvd1;
+ } qw0;
+ __le64 qw1; /* type_cmd_tlen_mss/rt_hint */
+};
+
+/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
+enum iecm_tx_flex_desc_cmd_bits {
+ IECM_TX_FLEX_DESC_CMD_EOP = 0x01,
+ IECM_TX_FLEX_DESC_CMD_RS = 0x02,
+ IECM_TX_FLEX_DESC_CMD_RE = 0x04,
+ IECM_TX_FLEX_DESC_CMD_IL2TAG1 = 0x08,
+ IECM_TX_FLEX_DESC_CMD_DUMMY = 0x10,
+ IECM_TX_FLEX_DESC_CMD_CS_EN = 0x20,
+ IECM_TX_FLEX_DESC_CMD_FILT_AU_EN = 0x40,
+ IECM_TX_FLEX_DESC_CMD_FILT_AU_EVICT = 0x80,
+};
+
+struct iecm_flex_tx_desc {
+ __le64 buf_addr; /* Packet buffer address */
+ struct {
+ __le16 cmd_dtype;
+#define IECM_FLEX_TXD_QW1_DTYPE_S 0
+#define IECM_FLEX_TXD_QW1_DTYPE_M \
+ MAKEMASK(0x1FUL, IECM_FLEX_TXD_QW1_DTYPE_S)
+#define IECM_FLEX_TXD_QW1_CMD_S 5
+#define IECM_FLEX_TXD_QW1_CMD_M MAKEMASK(0x7FFUL, IECM_FLEX_TXD_QW1_CMD_S)
+ union {
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_DATA (0x03) */
+ u8 raw[4];
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSYN_L2TAG1 (0x06) */
+ struct {
+ __le16 l2tag1;
+ u8 flex;
+ u8 tsync;
+ } tsync;
+
+ /* DTYPE=IECM_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
+ struct {
+ __le16 l2tag1;
+ __le16 l2tag2;
+ } l2tags;
+ } flex;
+ __le16 buf_size;
+ } qw1;
+};
+
+struct iecm_flex_tx_sched_desc {
+ __le64 buf_addr; /* Packet buffer address */
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */
+ struct {
+ u8 cmd_dtype;
+#define IECM_TXD_FLEX_FLOW_DTYPE_M 0x1F
+#define IECM_TXD_FLEX_FLOW_CMD_EOP 0x20
+#define IECM_TXD_FLEX_FLOW_CMD_CS_EN 0x40
+#define IECM_TXD_FLEX_FLOW_CMD_RE 0x80
+
+ u8 rsvd[3];
+
+ __le16 compl_tag;
+ __le16 rxr_bufsize;
+#define IECM_TXD_FLEX_FLOW_RXR 0x4000
+#define IECM_TXD_FLEX_FLOW_BUFSIZE_M 0x3FFF
+ } qw1;
+};
+
+/* Common cmd fields for all flex context descriptors
+ * Note: these defines already account for the 5 bit dtype in the cmd_dtype
+ * field
+ */
+enum iecm_tx_flex_ctx_desc_cmd_bits {
+ IECM_TX_FLEX_CTX_DESC_CMD_TSO = 0x0020,
+ IECM_TX_FLEX_CTX_DESC_CMD_TSYN_EN = 0x0040,
+ IECM_TX_FLEX_CTX_DESC_CMD_L2TAG2 = 0x0080,
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = 0x0200, /* 2 bits */
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = 0x0400, /* 2 bits */
+ IECM_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = 0x0600, /* 2 bits */
+};
+
+/* Standard flex descriptor TSO context quad word */
+struct iecm_flex_tx_tso_ctx_qw {
+ __le32 flex_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF
+#define IECM_TXD_FLEX_TSO_CTX_FLEX_S 24
+ __le16 mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF
+ u8 hdr_len;
+ u8 flex;
+};
+
+union iecm_flex_tx_ctx_desc {
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_CTX (0x04) */
+ struct {
+ u8 qw0_flex[8];
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag1;
+ u8 qw1_flex[4];
+ } qw1;
+ } gen;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
+ struct {
+ struct iecm_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ u8 flex[6];
+ } qw1;
+ } tso;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_TSO_L2TAG2_PARSTAG_CTX (0x08) */
+ struct {
+ struct iecm_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag2;
+ u8 flex0;
+ u8 ptag;
+ u8 flex1[2];
+ } qw1;
+ } tso_l2tag2_ptag;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_FLEX_L2TAG2_CTX (0x0B) */
+ struct {
+ u8 qw0_flex[8];
+ struct {
+ __le16 cmd_dtype;
+ __le16 l2tag2;
+ u8 flex[4];
+ } qw1;
+ } l2tag2;
+
+ /* DTYPE = IECM_TX_DESC_DTYPE_REINJECT_CTX (0x02) */
+ struct {
+ struct {
+ __le32 sa_domain;
+#define IECM_TXD_FLEX_CTX_SA_DOM_M 0xFFFF
+#define IECM_TXD_FLEX_CTX_SA_DOM_VAL 0x10000
+ __le32 sa_idx;
+#define IECM_TXD_FLEX_CTX_SAIDX_M 0x1FFFFF
+ } qw0;
+ struct {
+ __le16 cmd_dtype;
+ __le16 txr2comp;
+#define IECM_TXD_FLEX_CTX_TXR2COMP 0x1
+ __le16 miss_txq_comp_tag;
+ __le16 miss_txq_id;
+ } qw1;
+ } reinjection_pkt;
+};
+
+/* Host Split Context Descriptors */
+struct iecm_flex_tx_hs_ctx_desc {
+ union {
+ struct {
+ __le32 host_fnum_tlen;
+#define IECM_TXD_FLEX_CTX_TLEN_S 0
+#define IECM_TXD_FLEX_CTX_TLEN_M 0x1FFFF
+#define IECM_TXD_FLEX_CTX_FNUM_S 18
+#define IECM_TXD_FLEX_CTX_FNUM_M 0x7FF
+#define IECM_TXD_FLEX_CTX_HOST_S 29
+#define IECM_TXD_FLEX_CTX_HOST_M 0x7
+ __le16 ftype_mss_rt;
+#define IECM_TXD_FLEX_CTX_MSS_RT_0 0
+#define IECM_TXD_FLEX_CTX_MSS_RT_M 0x3FFF
+#define IECM_TXD_FLEX_CTX_FTYPE_S 14
+#define IECM_TXD_FLEX_CTX_FTYPE_VF MAKEMASK(0x0, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_VDEV MAKEMASK(0x1, IECM_TXD_FLEX_CTX_FTYPE_S)
+#define IECM_TXD_FLEX_CTX_FTYPE_PF MAKEMASK(0x2, IECM_TXD_FLEX_CTX_FTYPE_S)
+ u8 hdr_len;
+ u8 ptag;
+ } tso;
+ struct {
+ u8 flex0[2];
+ __le16 host_fnum_ftype;
+ u8 flex1[3];
+ u8 ptag;
+ } no_tso;
+ } qw0;
+
+ __le64 qw1_cmd_dtype;
+#define IECM_TXD_FLEX_CTX_QW1_PASID_S 16
+#define IECM_TXD_FLEX_CTX_QW1_PASID_M 0xFFFFF
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S 36
+#define IECM_TXD_FLEX_CTX_QW1_PASID_VALID \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_PASID_VALID_S)
+#define IECM_TXD_FLEX_CTX_QW1_TPH_S 37
+#define IECM_TXD_FLEX_CTX_QW1_TPH \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_TPH_S)
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_S 38
+#define IECM_TXD_FLEX_CTX_QW1_PFNUM_M 0xF
+/* The following are only valid for DTYPE = 0x09 and DTYPE = 0x0A */
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_S 42
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_M 0x1FFFFF
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S 63
+#define IECM_TXD_FLEX_CTX_QW1_SAIDX_VALID \
+ MAKEMASK(0x1, IECM_TXD_FLEX_CTX_QW1_SAIDX_VAL_S)
+/* The following are only valid for DTYPE = 0x0D and DTYPE = 0x0E */
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_S 48
+#define IECM_TXD_FLEX_CTX_QW1_FLEX0_M 0xFF
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_S 56
+#define IECM_TXD_FLEX_CTX_QW1_FLEX1_M 0xFF
+};
+#endif /* _IECM_LAN_TXRX_H_ */
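
A small packing sketch (illustration only) showing how the base TX descriptor qw1 word is composed from the shifts and masks above. The command flags and sizes are placeholders, and the result would still need the osdep byte-order conversion before being written to the descriptor.

static u64 example_base_txd_qw1(u16 buf_size, u16 l2tag1)
{
	u64 qw1 = IECM_TX_DESC_DTYPE_DATA;	/* dtype occupies bits [3:0] */

	qw1 |= ((u64)(IECM_TX_DESC_CMD_EOP | IECM_TX_DESC_CMD_RS) <<
		IECM_TXD_QW1_CMD_S) & IECM_TXD_QW1_CMD_M;
	/* MAC/IP/L4 header offsets omitted here for brevity */
	qw1 |= ((u64)buf_size << IECM_TXD_QW1_TX_BUF_SZ_S) &
	       IECM_TXD_QW1_TX_BUF_SZ_M;
	qw1 |= ((u64)l2tag1 << IECM_TXD_QW1_L2TAG1_S) & IECM_TXD_QW1_L2TAG1_M;

	return qw1;	/* convert to little endian before writing it out */
}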
diff --git a/drivers/net/idpf/base/iecm_lan_vf_regs.h b/drivers/net/idpf/base/iecm_lan_vf_regs.h
new file mode 100644
index 0000000000..1ba1a8dea6
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_lan_vf_regs.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_LAN_VF_REGS_H_
+#define _IECM_LAN_VF_REGS_H_
+
+
+/* Reset */
+#define VFGEN_RSTAT 0x00008800
+#define VFGEN_RSTAT_VFR_STATE_S 0
+#define VFGEN_RSTAT_VFR_STATE_M MAKEMASK(0x3, VFGEN_RSTAT_VFR_STATE_S)
+
+/* Control(VF Mailbox) Queue */
+#define VF_BASE 0x00006000
+
+#define VF_ATQBAL (VF_BASE + 0x1C00)
+#define VF_ATQBAH (VF_BASE + 0x1800)
+#define VF_ATQLEN (VF_BASE + 0x0800)
+#define VF_ATQLEN_ATQLEN_S 0
+#define VF_ATQLEN_ATQLEN_M MAKEMASK(0x3FF, VF_ATQLEN_ATQLEN_S)
+#define VF_ATQLEN_ATQVFE_S 28
+#define VF_ATQLEN_ATQVFE_M BIT(VF_ATQLEN_ATQVFE_S)
+#define VF_ATQLEN_ATQOVFL_S 29
+#define VF_ATQLEN_ATQOVFL_M BIT(VF_ATQLEN_ATQOVFL_S)
+#define VF_ATQLEN_ATQCRIT_S 30
+#define VF_ATQLEN_ATQCRIT_M BIT(VF_ATQLEN_ATQCRIT_S)
+#define VF_ATQLEN_ATQENABLE_S 31
+#define VF_ATQLEN_ATQENABLE_M BIT(VF_ATQLEN_ATQENABLE_S)
+#define VF_ATQH (VF_BASE + 0x0400)
+#define VF_ATQH_ATQH_S 0
+#define VF_ATQH_ATQH_M MAKEMASK(0x3FF, VF_ATQH_ATQH_S)
+#define VF_ATQT (VF_BASE + 0x2400)
+
+#define VF_ARQBAL (VF_BASE + 0x0C00)
+#define VF_ARQBAH (VF_BASE)
+#define VF_ARQLEN (VF_BASE + 0x2000)
+#define VF_ARQLEN_ARQLEN_S 0
+#define VF_ARQLEN_ARQLEN_M MAKEMASK(0x3FF, VF_ARQLEN_ARQLEN_S)
+#define VF_ARQLEN_ARQVFE_S 28
+#define VF_ARQLEN_ARQVFE_M BIT(VF_ARQLEN_ARQVFE_S)
+#define VF_ARQLEN_ARQOVFL_S 29
+#define VF_ARQLEN_ARQOVFL_M BIT(VF_ARQLEN_ARQOVFL_S)
+#define VF_ARQLEN_ARQCRIT_S 30
+#define VF_ARQLEN_ARQCRIT_M BIT(VF_ARQLEN_ARQCRIT_S)
+#define VF_ARQLEN_ARQENABLE_S 31
+#define VF_ARQLEN_ARQENABLE_M BIT(VF_ARQLEN_ARQENABLE_S)
+#define VF_ARQH (VF_BASE + 0x1400)
+#define VF_ARQH_ARQH_S 0
+#define VF_ARQH_ARQH_M MAKEMASK(0x1FFF, VF_ARQH_ARQH_S)
+#define VF_ARQT (VF_BASE + 0x1000)
+
+/* Transmit queues */
+#define VF_QTX_TAIL_BASE 0x00000000
+#define VF_QTX_TAIL(_QTX) (VF_QTX_TAIL_BASE + (_QTX) * 0x4)
+#define VF_QTX_TAIL_EXT_BASE 0x00040000
+#define VF_QTX_TAIL_EXT(_QTX) (VF_QTX_TAIL_EXT_BASE + ((_QTX) * 4))
+
+/* Receive queues */
+#define VF_QRX_TAIL_BASE 0x00002000
+#define VF_QRX_TAIL(_QRX) (VF_QRX_TAIL_BASE + ((_QRX) * 4))
+#define VF_QRX_TAIL_EXT_BASE 0x00050000
+#define VF_QRX_TAIL_EXT(_QRX) (VF_QRX_TAIL_EXT_BASE + ((_QRX) * 4))
+#define VF_QRXB_TAIL_BASE 0x00060000
+#define VF_QRXB_TAIL(_QRX) (VF_QRXB_TAIL_BASE + ((_QRX) * 4))
+
+/* Interrupts */
+#define VF_INT_DYN_CTL0 0x00005C00
+#define VF_INT_DYN_CTL0_INTENA_S 0
+#define VF_INT_DYN_CTL0_INTENA_M BIT(VF_INT_DYN_CTL0_INTENA_S)
+#define VF_INT_DYN_CTL0_ITR_INDX_S 3
+#define VF_INT_DYN_CTL0_ITR_INDX_M MAKEMASK(0x3, VF_INT_DYN_CTL0_ITR_INDX_S)
+#define VF_INT_DYN_CTLN(_INT) (0x00003800 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_EXT(_INT) (0x00070000 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_INTENA_S 0
+#define VF_INT_DYN_CTLN_INTENA_M BIT(VF_INT_DYN_CTLN_INTENA_S)
+#define VF_INT_DYN_CTLN_CLEARPBA_S 1
+#define VF_INT_DYN_CTLN_CLEARPBA_M BIT(VF_INT_DYN_CTLN_CLEARPBA_S)
+#define VF_INT_DYN_CTLN_SWINT_TRIG_S 2
+#define VF_INT_DYN_CTLN_SWINT_TRIG_M BIT(VF_INT_DYN_CTLN_SWINT_TRIG_S)
+#define VF_INT_DYN_CTLN_ITR_INDX_S 3
+#define VF_INT_DYN_CTLN_ITR_INDX_M MAKEMASK(0x3, VF_INT_DYN_CTLN_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_INTERVAL_S 5
+#define VF_INT_DYN_CTLN_INTERVAL_M BIT(VF_INT_DYN_CTLN_INTERVAL_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S 24
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_S 25
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_WB_ON_ITR_S 30
+#define VF_INT_DYN_CTLN_WB_ON_ITR_M BIT(VF_INT_DYN_CTLN_WB_ON_ITR_S)
+#define VF_INT_DYN_CTLN_INTENA_MSK_S 31
+#define VF_INT_DYN_CTLN_INTENA_MSK_M BIT(VF_INT_DYN_CTLN_INTENA_MSK_S)
+#define VF_INT_ITR0(_i) (0x00004C00 + ((_i) * 4))
+#define VF_INT_ITRN_V2(_i, _reg_start) ((_reg_start) + (((_i)) * 4))
+#define VF_INT_ITRN(_i, _INT) (0x00002800 + ((_i) * 4) + ((_INT) * 0x40))
+#define VF_INT_ITRN_64(_i, _INT) (0x00002C00 + ((_i) * 4) + ((_INT) * 0x100))
+#define VF_INT_ITRN_2K(_i, _INT) (0x00072000 + ((_i) * 4) + ((_INT) * 0x100))
+#define VF_INT_ITRN_MAX_INDEX 2
+#define VF_INT_ITRN_INTERVAL_S 0
+#define VF_INT_ITRN_INTERVAL_M MAKEMASK(0xFFF, VF_INT_ITRN_INTERVAL_S)
+#define VF_INT_PBA_CLEAR 0x00008900
+
+#define VF_INT_ICR0_ENA1 0x00005000
+#define VF_INT_ICR0_ENA1_ADMINQ_S 30
+#define VF_INT_ICR0_ENA1_ADMINQ_M BIT(VF_INT_ICR0_ENA1_ADMINQ_S)
+#define VF_INT_ICR0_ENA1_RSVD_S 31
+#define VF_INT_ICR01 0x00004800
+#define VF_QF_HENA(_i) (0x0000C400 + ((_i) * 4))
+#define VF_QF_HENA_MAX_INDX 1
+#define VF_QF_HKEY(_i) (0x0000CC00 + ((_i) * 4))
+#define VF_QF_HKEY_MAX_INDX 12
+#define VF_QF_HLUT(_i) (0x0000D000 + ((_i) * 4))
+#define VF_QF_HLUT_MAX_INDX 15
+#endif
diff --git a/drivers/net/idpf/base/iecm_prototype.h b/drivers/net/idpf/base/iecm_prototype.h
new file mode 100644
index 0000000000..cd3ee8dcbc
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_prototype.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_PROTOTYPE_H_
+#define _IECM_PROTOTYPE_H_
+
+/* Include generic macros and types first */
+#include "iecm_osdep.h"
+#include "iecm_controlq.h"
+#include "iecm_type.h"
+#include "iecm_alloc.h"
+#include "iecm_devids.h"
+#include "iecm_controlq_api.h"
+#include "iecm_lan_pf_regs.h"
+#include "iecm_lan_vf_regs.h"
+#include "iecm_lan_txrx.h"
+#include "virtchnl.h"
+
+#define APF
+
+int iecm_init_hw(struct iecm_hw *hw, struct iecm_ctlq_size ctlq_size);
+int iecm_deinit_hw(struct iecm_hw *hw);
+
+int iecm_clean_arq_element(struct iecm_hw *hw,
+ struct iecm_arq_event_info *e,
+ u16 *events_pending);
+bool iecm_asq_done(struct iecm_hw *hw);
+bool iecm_check_asq_alive(struct iecm_hw *hw);
+
+int iecm_get_rss_lut(struct iecm_hw *hw, u16 seid, bool pf_lut,
+ u8 *lut, u16 lut_size);
+int iecm_set_rss_lut(struct iecm_hw *hw, u16 seid, bool pf_lut,
+ u8 *lut, u16 lut_size);
+int iecm_get_rss_key(struct iecm_hw *hw, u16 seid,
+ struct iecm_get_set_rss_key_data *key);
+int iecm_set_rss_key(struct iecm_hw *hw, u16 seid,
+ struct iecm_get_set_rss_key_data *key);
+
+int iecm_set_mac_type(struct iecm_hw *hw);
+
+int iecm_reset(struct iecm_hw *hw);
+int iecm_send_msg_to_cp(struct iecm_hw *hw, enum virtchnl_ops v_opcode,
+ int v_retval, u8 *msg, u16 msglen);
+#endif /* _IECM_PROTOTYPE_H_ */
diff --git a/drivers/net/idpf/base/iecm_type.h b/drivers/net/idpf/base/iecm_type.h
new file mode 100644
index 0000000000..fdde9c6e61
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_type.h
@@ -0,0 +1,106 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_TYPE_H_
+#define _IECM_TYPE_H_
+
+#include "iecm_controlq.h"
+
+#define UNREFERENCED_XPARAMETER
+#define UNREFERENCED_1PARAMETER(_p) (_p);
+#define UNREFERENCED_2PARAMETER(_p, _q) (_p); (_q);
+#define UNREFERENCED_3PARAMETER(_p, _q, _r) (_p); (_q); (_r);
+#define UNREFERENCED_4PARAMETER(_p, _q, _r, _s) (_p); (_q); (_r); (_s);
+#define UNREFERENCED_5PARAMETER(_p, _q, _r, _s, _t) (_p); (_q); (_r); (_s); (_t);
+
+#define MAKEMASK(m, s) ((m) << (s))
+
+struct iecm_eth_stats {
+ u64 rx_bytes; /* gorc */
+ u64 rx_unicast; /* uprc */
+ u64 rx_multicast; /* mprc */
+ u64 rx_broadcast; /* bprc */
+ u64 rx_discards; /* rdpc */
+ u64 rx_unknown_protocol; /* rupp */
+ u64 tx_bytes; /* gotc */
+ u64 tx_unicast; /* uptc */
+ u64 tx_multicast; /* mptc */
+ u64 tx_broadcast; /* bptc */
+ u64 tx_discards; /* tdpc */
+ u64 tx_errors; /* tepc */
+};
+
+/* Statistics collected by the MAC */
+struct iecm_hw_port_stats {
+ /* eth stats collected by the port */
+ struct iecm_eth_stats eth;
+
+ /* additional port specific stats */
+ u64 tx_dropped_link_down; /* tdold */
+ u64 crc_errors; /* crcerrs */
+ u64 illegal_bytes; /* illerrc */
+ u64 error_bytes; /* errbc */
+ u64 mac_local_faults; /* mlfc */
+ u64 mac_remote_faults; /* mrfc */
+ u64 rx_length_errors; /* rlec */
+ u64 link_xon_rx; /* lxonrxc */
+ u64 link_xoff_rx; /* lxoffrxc */
+ u64 priority_xon_rx[8]; /* pxonrxc[8] */
+ u64 priority_xoff_rx[8]; /* pxoffrxc[8] */
+ u64 link_xon_tx; /* lxontxc */
+ u64 link_xoff_tx; /* lxofftxc */
+ u64 priority_xon_tx[8]; /* pxontxc[8] */
+ u64 priority_xoff_tx[8]; /* pxofftxc[8] */
+ u64 priority_xon_2_xoff[8]; /* pxon2offc[8] */
+ u64 rx_size_64; /* prc64 */
+ u64 rx_size_127; /* prc127 */
+ u64 rx_size_255; /* prc255 */
+ u64 rx_size_511; /* prc511 */
+ u64 rx_size_1023; /* prc1023 */
+ u64 rx_size_1522; /* prc1522 */
+ u64 rx_size_big; /* prc9522 */
+ u64 rx_undersize; /* ruc */
+ u64 rx_fragments; /* rfc */
+ u64 rx_oversize; /* roc */
+ u64 rx_jabber; /* rjc */
+ u64 tx_size_64; /* ptc64 */
+ u64 tx_size_127; /* ptc127 */
+ u64 tx_size_255; /* ptc255 */
+ u64 tx_size_511; /* ptc511 */
+ u64 tx_size_1023; /* ptc1023 */
+ u64 tx_size_1522; /* ptc1522 */
+ u64 tx_size_big; /* ptc9522 */
+ u64 mac_short_packet_dropped; /* mspdc */
+ u64 checksum_error; /* xec */
+};
+/* Static buffer size to initialize control queue */
+struct iecm_ctlq_size {
+ u16 asq_buf_size;
+ u16 asq_ring_size;
+ u16 arq_buf_size;
+ u16 arq_ring_size;
+};
+
+/* Temporary definition to compile - TBD if needed */
+struct iecm_arq_event_info {
+ struct iecm_ctlq_desc desc;
+ u16 msg_len;
+ u16 buf_len;
+ u8 *msg_buf;
+};
+
+struct iecm_get_set_rss_key_data {
+ u8 standard_rss_key[0x28];
+ u8 extended_hash_key[0xc];
+};
+
+struct iecm_aq_get_phy_abilities_resp {
+ __le32 phy_type;
+};
+
+struct iecm_filter_program_desc {
+ __le32 qid;
+};
+
+#endif /* _IECM_TYPE_H_ */
diff --git a/drivers/net/idpf/base/meson.build b/drivers/net/idpf/base/meson.build
new file mode 100644
index 0000000000..1ad9a87d9d
--- /dev/null
+++ b/drivers/net/idpf/base/meson.build
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+sources = [
+ 'iecm_common.c',
+ 'iecm_controlq.c',
+ 'iecm_controlq_setup.c',
+]
+
+error_cflags = ['-Wno-unused-value',
+ '-Wno-unused-but-set-variable',
+ '-Wno-unused-variable',
+ '-Wno-unused-parameter',
+]
+
+c_args = cflags
+
+foreach flag: error_cflags
+ if cc.has_argument(flag)
+ c_args += flag
+ endif
+endforeach
+
+base_lib = static_library('idpf_base', sources,
+ dependencies: static_rte_eal,
+ c_args: c_args)
+base_objs = base_lib.extract_all_objects()
\ No newline at end of file
diff --git a/drivers/net/idpf/base/siov_regs.h b/drivers/net/idpf/base/siov_regs.h
new file mode 100644
index 0000000000..bb7b2daac0
--- /dev/null
+++ b/drivers/net/idpf/base/siov_regs.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+#ifndef _SIOV_REGS_H_
+#define _SIOV_REGS_H_
+#define VDEV_MBX_START 0x20000 /* Begin at 128KB */
+#define VDEV_MBX_ATQBAL (VDEV_MBX_START + 0x0000)
+#define VDEV_MBX_ATQBAH (VDEV_MBX_START + 0x0004)
+#define VDEV_MBX_ATQLEN (VDEV_MBX_START + 0x0008)
+#define VDEV_MBX_ATQH (VDEV_MBX_START + 0x000C)
+#define VDEV_MBX_ATQT (VDEV_MBX_START + 0x0010)
+#define VDEV_MBX_ARQBAL (VDEV_MBX_START + 0x0014)
+#define VDEV_MBX_ARQBAH (VDEV_MBX_START + 0x0018)
+#define VDEV_MBX_ARQLEN (VDEV_MBX_START + 0x001C)
+#define VDEV_MBX_ARQH (VDEV_MBX_START + 0x0020)
+#define VDEV_MBX_ARQT (VDEV_MBX_START + 0x0024)
+#define VDEV_GET_RSTAT 0x21000 /* 132KB for RSTAT */
+
+/* Begin at offset after 1MB (after 256 4k pages) */
+#define VDEV_QRX_TAIL_START 0x100000
+#define VDEV_QRX_TAIL(_i) (VDEV_QRX_TAIL_START + ((_i) * 0x1000)) /* 2k Rx queues */
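+/* For example, VDEV_QRX_TAIL(3) evaluates to
+ * 0x100000 + (3 * 0x1000) = 0x103000
+ */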
+
+#define VDEV_QRX_BUFQ_TAIL_START 0x900000 /* Begin at offset of 9MB for Rx buffer queue tail register pages */
+#define VDEV_QRX_BUFQ_TAIL(_i) (VDEV_QRX_BUFQ_TAIL_START + ((_i) * 0x1000)) /* 2k Rx buffer queues */
+
+#define VDEV_QTX_TAIL_START 0x1100000 /* Begin at offset of 17MB for 2k Tx queues */
+#define VDEV_QTX_TAIL(_i) (VDEV_QTX_TAIL_START + ((_i) * 0x1000)) /* 2k Tx queues */
+
+#define VDEV_QTX_COMPL_TAIL_START 0x1900000 /* Begin at offset of 25MB for 2k Tx completion queues */
+#define VDEV_QTX_COMPL_TAIL(_i) (VDEV_QTX_COMPL_TAIL_START + ((_i) * 0x1000)) /* 2k Tx completion queues */
+
+#define VDEV_INT_DYN_CTL01 0x2100000 /* Begin at offset 33MB */
+
+#define VDEV_INT_DYN_START (VDEV_INT_DYN_CTL01 + 0x1000) /* Begin at offset of 33MB + 4k to accommodate CTL01 register */
+#define VDEV_INT_DYN_CTL(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000))
+#define VDEV_INT_ITR_0(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x04)
+#define VDEV_INT_ITR_1(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x08)
+#define VDEV_INT_ITR_2(_i) (VDEV_INT_DYN_START + ((_i) * 0x1000) + 0x0C)
+
+/* Next offset to begin at 42MB (0x2A00000) */
+#endif /* _SIOV_REGS_H_ */
diff --git a/drivers/net/idpf/base/virtchnl.h b/drivers/net/idpf/base/virtchnl.h
new file mode 100644
index 0000000000..b5d0d5ffd3
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl.h
@@ -0,0 +1,2743 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL_H_
+#define _VIRTCHNL_H_
+
+/* Description:
+ * This header file describes the Virtual Function (VF) - Physical Function
+ * (PF) communication protocol used by the drivers for all devices starting
+ * from our 40G product line
+ *
+ * Admin queue buffer usage:
+ * desc->opcode is always aqc_opc_send_msg_to_pf
+ * flags, retval, datalen, and data addr are all used normally.
+ * The Firmware copies the cookie fields when sending messages between the
+ * PF and VF, but uses all other fields internally. Due to this limitation,
+ * we must send all messages as "indirect", i.e. using an external buffer.
+ *
+ * All the VSI indexes are relative to the VF. Each VF can have a maximum of
+ * three VSIs. All the queue indexes are relative to the VSI. Each VF can
+ * have a maximum of sixteen queues for all of its VSIs.
+ *
+ * The PF is required to return a status code in v_retval for all messages
+ * except RESET_VF, which does not require any response. The returned value
+ * is of virtchnl_status_code type, defined here.
+ *
+ * In general, VF driver initialization should roughly follow the order of
+ * these opcodes. The VF driver must first validate the API version of the
+ * PF driver, then request a reset, then get resources, then configure
+ * queues and interrupts. After these operations are complete, the VF
+ * driver may start its queues, optionally add MAC and VLAN filters, and
+ * process traffic.
+ */
+
+/* START GENERIC DEFINES
+ * Need to ensure the following enums and defines hold the same meaning and
+ * value in current and future projects
+ */
+
+#define VIRTCHNL_ETH_LENGTH_OF_ADDRESS 6
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL_CHECK_STRUCT_LEN(n, X) enum virtchnl_static_assert_enum_##X \
+ { virtchnl_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL_CHECK_UNION_LEN(n, X) enum virtchnl_static_asset_enum_##X \
+ { virtchnl_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+
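+/* For example, the check
+ *
+ *     VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+ *
+ * used later in this file fails to compile (divide by zero) whenever
+ * struct virtchnl_msg is not exactly 20 bytes long.
+ */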
+
+/* Error Codes
+ * Note that many older versions of various iAVF drivers convert the reported
+ * status code directly into an iavf_status enumeration. For this reason, it
+ * is important that the values of these enumerations line up.
+ */
+enum virtchnl_status_code {
+ VIRTCHNL_STATUS_SUCCESS = 0,
+ VIRTCHNL_STATUS_ERR_PARAM = -5,
+ VIRTCHNL_STATUS_ERR_NO_MEMORY = -18,
+ VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH = -38,
+ VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR = -39,
+ VIRTCHNL_STATUS_ERR_INVALID_VF_ID = -40,
+ VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR = -53,
+ VIRTCHNL_STATUS_ERR_NOT_SUPPORTED = -64,
+};
+
+/* Backward compatibility */
+#define VIRTCHNL_ERR_PARAM VIRTCHNL_STATUS_ERR_PARAM
+#define VIRTCHNL_STATUS_NOT_SUPPORTED VIRTCHNL_STATUS_ERR_NOT_SUPPORTED
+
+#define VIRTCHNL_LINK_SPEED_2_5GB_SHIFT 0x0
+#define VIRTCHNL_LINK_SPEED_100MB_SHIFT 0x1
+#define VIRTCHNL_LINK_SPEED_1000MB_SHIFT 0x2
+#define VIRTCHNL_LINK_SPEED_10GB_SHIFT 0x3
+#define VIRTCHNL_LINK_SPEED_40GB_SHIFT 0x4
+#define VIRTCHNL_LINK_SPEED_20GB_SHIFT 0x5
+#define VIRTCHNL_LINK_SPEED_25GB_SHIFT 0x6
+#define VIRTCHNL_LINK_SPEED_5GB_SHIFT 0x7
+
+enum virtchnl_link_speed {
+ VIRTCHNL_LINK_SPEED_UNKNOWN = 0,
+ VIRTCHNL_LINK_SPEED_100MB = BIT(VIRTCHNL_LINK_SPEED_100MB_SHIFT),
+ VIRTCHNL_LINK_SPEED_1GB = BIT(VIRTCHNL_LINK_SPEED_1000MB_SHIFT),
+ VIRTCHNL_LINK_SPEED_10GB = BIT(VIRTCHNL_LINK_SPEED_10GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_40GB = BIT(VIRTCHNL_LINK_SPEED_40GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_20GB = BIT(VIRTCHNL_LINK_SPEED_20GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_25GB = BIT(VIRTCHNL_LINK_SPEED_25GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_2_5GB = BIT(VIRTCHNL_LINK_SPEED_2_5GB_SHIFT),
+ VIRTCHNL_LINK_SPEED_5GB = BIT(VIRTCHNL_LINK_SPEED_5GB_SHIFT),
+};
+
+/* for hsplit_0 field of Rx HMC context */
+/* deprecated with AVF 1.0 */
+enum virtchnl_rx_hsplit {
+ VIRTCHNL_RX_HSPLIT_NO_SPLIT = 0,
+ VIRTCHNL_RX_HSPLIT_SPLIT_L2 = 1,
+ VIRTCHNL_RX_HSPLIT_SPLIT_IP = 2,
+ VIRTCHNL_RX_HSPLIT_SPLIT_TCP_UDP = 4,
+ VIRTCHNL_RX_HSPLIT_SPLIT_SCTP = 8,
+};
+
+enum virtchnl_bw_limit_type {
+ VIRTCHNL_BW_SHAPER = 0,
+};
+/* END GENERIC DEFINES */
+
+/* Opcodes for VF-PF communication. These are placed in the v_opcode field
+ * of the virtchnl_msg structure.
+ */
+enum virtchnl_ops {
+/* The PF sends status change events to VFs using
+ * the VIRTCHNL_OP_EVENT opcode.
+ * VFs send requests to the PF using the other ops.
+ * Use of "advanced opcode" features must be negotiated as part of capabilities
+ * exchange and is not considered part of the base mode feature set.
+ *
+ */
+ VIRTCHNL_OP_UNKNOWN = 0,
+ VIRTCHNL_OP_VERSION = 1, /* must ALWAYS be 1 */
+ VIRTCHNL_OP_RESET_VF = 2,
+ VIRTCHNL_OP_GET_VF_RESOURCES = 3,
+ VIRTCHNL_OP_CONFIG_TX_QUEUE = 4,
+ VIRTCHNL_OP_CONFIG_RX_QUEUE = 5,
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES = 6,
+ VIRTCHNL_OP_CONFIG_IRQ_MAP = 7,
+ VIRTCHNL_OP_ENABLE_QUEUES = 8,
+ VIRTCHNL_OP_DISABLE_QUEUES = 9,
+ VIRTCHNL_OP_ADD_ETH_ADDR = 10,
+ VIRTCHNL_OP_DEL_ETH_ADDR = 11,
+ VIRTCHNL_OP_ADD_VLAN = 12,
+ VIRTCHNL_OP_DEL_VLAN = 13,
+ VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
+ VIRTCHNL_OP_GET_STATS = 15,
+ VIRTCHNL_OP_RSVD = 16,
+ VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+ /* opcode 19 is reserved */
+ /* opcodes 20, 21, and 22 are reserved */
+ VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+ VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+ VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+ VIRTCHNL_OP_SET_RSS_HENA = 26,
+ VIRTCHNL_OP_ENABLE_VLAN_STRIPPING = 27,
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING = 28,
+ VIRTCHNL_OP_REQUEST_QUEUES = 29,
+ VIRTCHNL_OP_ENABLE_CHANNELS = 30,
+ VIRTCHNL_OP_DISABLE_CHANNELS = 31,
+ VIRTCHNL_OP_ADD_CLOUD_FILTER = 32,
+ VIRTCHNL_OP_DEL_CLOUD_FILTER = 33,
+ /* opcode 34 is reserved */
+ /* opcodes 38, 39, 40, 41, 42 and 43 are reserved */
+ /* opcode 44 is reserved */
+ VIRTCHNL_OP_ADD_RSS_CFG = 45,
+ VIRTCHNL_OP_DEL_RSS_CFG = 46,
+ VIRTCHNL_OP_ADD_FDIR_FILTER = 47,
+ VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
+ VIRTCHNL_OP_GET_MAX_RSS_QREGION = 50,
+ VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS = 51,
+ VIRTCHNL_OP_ADD_VLAN_V2 = 52,
+ VIRTCHNL_OP_DEL_VLAN_V2 = 53,
+ VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 = 54,
+ VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 = 55,
+ VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 = 56,
+ VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2 = 57,
+ VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2 = 58,
+ VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 = 59,
+ VIRTCHNL_OP_1588_PTP_GET_CAPS = 60,
+ VIRTCHNL_OP_1588_PTP_GET_TIME = 61,
+ VIRTCHNL_OP_1588_PTP_SET_TIME = 62,
+ VIRTCHNL_OP_1588_PTP_ADJ_TIME = 63,
+ VIRTCHNL_OP_1588_PTP_ADJ_FREQ = 64,
+ VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP = 65,
+ VIRTCHNL_OP_GET_QOS_CAPS = 66,
+ VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP = 67,
+ VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS = 68,
+ VIRTCHNL_OP_1588_PTP_SET_PIN_CFG = 69,
+ VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP = 70,
+ VIRTCHNL_OP_ENABLE_QUEUES_V2 = 107,
+ VIRTCHNL_OP_DISABLE_QUEUES_V2 = 108,
+ VIRTCHNL_OP_MAP_QUEUE_VECTOR = 111,
+ VIRTCHNL_OP_MAX,
+};
+
+static inline const char *virtchnl_op_str(enum virtchnl_ops v_opcode)
+{
+ switch (v_opcode) {
+ case VIRTCHNL_OP_UNKNOWN:
+ return "VIRTCHNL_OP_UNKNOWN";
+ case VIRTCHNL_OP_VERSION:
+ return "VIRTCHNL_OP_VERSION";
+ case VIRTCHNL_OP_RESET_VF:
+ return "VIRTCHNL_OP_RESET_VF";
+ case VIRTCHNL_OP_GET_VF_RESOURCES:
+ return "VIRTCHNL_OP_GET_VF_RESOURCES";
+ case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+ return "VIRTCHNL_OP_CONFIG_TX_QUEUE";
+ case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+ return "VIRTCHNL_OP_CONFIG_RX_QUEUE";
+ case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+ return "VIRTCHNL_OP_CONFIG_VSI_QUEUES";
+ case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+ return "VIRTCHNL_OP_CONFIG_IRQ_MAP";
+ case VIRTCHNL_OP_ENABLE_QUEUES:
+ return "VIRTCHNL_OP_ENABLE_QUEUES";
+ case VIRTCHNL_OP_DISABLE_QUEUES:
+ return "VIRTCHNL_OP_DISABLE_QUEUES";
+ case VIRTCHNL_OP_ADD_ETH_ADDR:
+ return "VIRTCHNL_OP_ADD_ETH_ADDR";
+ case VIRTCHNL_OP_DEL_ETH_ADDR:
+ return "VIRTCHNL_OP_DEL_ETH_ADDR";
+ case VIRTCHNL_OP_ADD_VLAN:
+ return "VIRTCHNL_OP_ADD_VLAN";
+ case VIRTCHNL_OP_DEL_VLAN:
+ return "VIRTCHNL_OP_DEL_VLAN";
+ case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+ return "VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE";
+ case VIRTCHNL_OP_GET_STATS:
+ return "VIRTCHNL_OP_GET_STATS";
+ case VIRTCHNL_OP_RSVD:
+ return "VIRTCHNL_OP_RSVD";
+ case VIRTCHNL_OP_EVENT:
+ return "VIRTCHNL_OP_EVENT";
+ case VIRTCHNL_OP_CONFIG_RSS_KEY:
+ return "VIRTCHNL_OP_CONFIG_RSS_KEY";
+ case VIRTCHNL_OP_CONFIG_RSS_LUT:
+ return "VIRTCHNL_OP_CONFIG_RSS_LUT";
+ case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ return "VIRTCHNL_OP_GET_RSS_HENA_CAPS";
+ case VIRTCHNL_OP_SET_RSS_HENA:
+ return "VIRTCHNL_OP_SET_RSS_HENA";
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+ return "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING";
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ return "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING";
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ return "VIRTCHNL_OP_REQUEST_QUEUES";
+ case VIRTCHNL_OP_ENABLE_CHANNELS:
+ return "VIRTCHNL_OP_ENABLE_CHANNELS";
+ case VIRTCHNL_OP_DISABLE_CHANNELS:
+ return "VIRTCHNL_OP_DISABLE_CHANNELS";
+ case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+ return "VIRTCHNL_OP_ADD_CLOUD_FILTER";
+ case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+ return "VIRTCHNL_OP_DEL_CLOUD_FILTER";
+ case VIRTCHNL_OP_ADD_RSS_CFG:
+ return "VIRTCHNL_OP_ADD_RSS_CFG";
+ case VIRTCHNL_OP_DEL_RSS_CFG:
+ return "VIRTCHNL_OP_DEL_RSS_CFG";
+ case VIRTCHNL_OP_ADD_FDIR_FILTER:
+ return "VIRTCHNL_OP_ADD_FDIR_FILTER";
+ case VIRTCHNL_OP_DEL_FDIR_FILTER:
+ return "VIRTCHNL_OP_DEL_FDIR_FILTER";
+ case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
+ return "VIRTCHNL_OP_GET_MAX_RSS_QREGION";
+ case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+ return "VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS";
+ case VIRTCHNL_OP_ADD_VLAN_V2:
+ return "VIRTCHNL_OP_ADD_VLAN_V2";
+ case VIRTCHNL_OP_DEL_VLAN_V2:
+ return "VIRTCHNL_OP_DEL_VLAN_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2";
+ case VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2:
+ return "VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2";
+ case VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2:
+ return "VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2";
+ case VIRTCHNL_OP_1588_PTP_GET_CAPS:
+ return "VIRTCHNL_OP_1588_PTP_GET_CAPS";
+ case VIRTCHNL_OP_1588_PTP_GET_TIME:
+ return "VIRTCHNL_OP_1588_PTP_GET_TIME";
+ case VIRTCHNL_OP_1588_PTP_SET_TIME:
+ return "VIRTCHNL_OP_1588_PTP_SET_TIME";
+ case VIRTCHNL_OP_1588_PTP_ADJ_TIME:
+ return "VIRTCHNL_OP_1588_PTP_ADJ_TIME";
+ case VIRTCHNL_OP_1588_PTP_ADJ_FREQ:
+ return "VIRTCHNL_OP_1588_PTP_ADJ_FREQ";
+ case VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP:
+ return "VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP";
+ case VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS:
+ return "VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS";
+ case VIRTCHNL_OP_1588_PTP_SET_PIN_CFG:
+ return "VIRTCHNL_OP_1588_PTP_SET_PIN_CFG";
+ case VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP:
+ return "VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP";
+ case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+ return "VIRTCHNL_OP_ENABLE_QUEUES_V2";
+ case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+ return "VIRTCHNL_OP_DISABLE_QUEUES_V2";
+ case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+ return "VIRTCHNL_OP_MAP_QUEUE_VECTOR";
+ case VIRTCHNL_OP_MAX:
+ return "VIRTCHNL_OP_MAX";
+ default:
+ return "Unsupported (update virtchnl.h)";
+ }
+}
+
+static inline const char *virtchnl_stat_str(enum virtchnl_status_code v_status)
+{
+ switch (v_status) {
+ case VIRTCHNL_STATUS_SUCCESS:
+ return "VIRTCHNL_STATUS_SUCCESS";
+ case VIRTCHNL_STATUS_ERR_PARAM:
+ return "VIRTCHNL_STATUS_ERR_PARAM";
+ case VIRTCHNL_STATUS_ERR_NO_MEMORY:
+ return "VIRTCHNL_STATUS_ERR_NO_MEMORY";
+ case VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH:
+ return "VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH";
+ case VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR:
+ return "VIRTCHNL_STATUS_ERR_CQP_COMPL_ERROR";
+ case VIRTCHNL_STATUS_ERR_INVALID_VF_ID:
+ return "VIRTCHNL_STATUS_ERR_INVALID_VF_ID";
+ case VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR:
+ return "VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR";
+ case VIRTCHNL_STATUS_ERR_NOT_SUPPORTED:
+ return "VIRTCHNL_STATUS_ERR_NOT_SUPPORTED";
+ default:
+ return "Unknown status code (update virtchnl.h)";
+ }
+}
+
+/* Virtual channel message descriptor. This overlays the admin queue
+ * descriptor. All other data is passed in external buffers.
+ */
+
+struct virtchnl_msg {
+ u8 pad[8]; /* AQ flags/opcode/len/retval fields */
+
+ /* avoid confusion with desc->opcode */
+ enum virtchnl_ops v_opcode;
+
+ /* ditto for desc->retval */
+ enum virtchnl_status_code v_retval;
+ u32 vfid; /* used by PF when sending to VF */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(20, virtchnl_msg);
+
+/* Message descriptions and data structures. */
+
+/* VIRTCHNL_OP_VERSION
+ * VF posts its version number to the PF. PF responds with its version number
+ * in the same format, along with a return code.
+ * Reply from PF has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This enum element MUST always be specified as == 1, regardless of other
+ * changes in the API. The PF must always respond to this message without
+ * error regardless of version mismatch.
+ */
+#define VIRTCHNL_VERSION_MAJOR 1
+#define VIRTCHNL_VERSION_MINOR 1
+#define VIRTCHNL_VERSION_MAJOR_2 2
+#define VIRTCHNL_VERSION_MINOR_0 0
+#define VIRTCHNL_VERSION_MINOR_NO_VF_CAPS 0
+
+struct virtchnl_version_info {
+ u32 major;
+ u32 minor;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_version_info);
+
+#define VF_IS_V10(_ver) (((_ver)->major == 1) && ((_ver)->minor == 0))
+#define VF_IS_V11(_ver) (((_ver)->major == 1) && ((_ver)->minor == 1))
+#define VF_IS_V20(_ver) (((_ver)->major == 2) && ((_ver)->minor == 0))
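+
+/* Illustrative example: a VF driver negotiating VIRTCHNL API 1.1 would send a
+ * VIRTCHNL_OP_VERSION message whose payload is
+ *
+ *     struct virtchnl_version_info vvi = {
+ *             .major = VIRTCHNL_VERSION_MAJOR,
+ *             .minor = VIRTCHNL_VERSION_MINOR,
+ *     };
+ *
+ * and can then test the PF's reply with, e.g., VF_IS_V11(&reply).
+ */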
+
+/* VIRTCHNL_OP_RESET_VF
+ * VF sends this request to PF with no parameters
+ * PF does NOT respond! VF driver must delay then poll VFGEN_RSTAT register
+ * until reset completion is indicated. The admin queue must be reinitialized
+ * after this operation.
+ *
+ * When reset is complete, PF must ensure that all queues in all VSIs associated
+ * with the VF are stopped, all queue configurations in the HMC are set to 0,
+ * and all MAC and VLAN filters (except the default MAC address) on all VSIs
+ * are cleared.
+ */
+
+/* VSI types that use VIRTCHNL interface for VF-PF communication. VSI_SRIOV
+ * vsi_type should always be 6 for backward compatibility. Add other fields
+ * as needed.
+ */
+enum virtchnl_vsi_type {
+ VIRTCHNL_VSI_TYPE_INVALID = 0,
+ VIRTCHNL_VSI_SRIOV = 6,
+};
+
+/* VIRTCHNL_OP_GET_VF_RESOURCES
+ * Version 1.0 VF sends this request to PF with no parameters
+ */
+
+struct virtchnl_vsi_resource {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+
+ /* see enum virtchnl_vsi_type */
+ s32 vsi_type;
+ u16 qset_handle;
+ u8 default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
+
+/* VF capability flags
+ * VIRTCHNL_VF_OFFLOAD_L2 flag is inclusive of base mode L2 offloads including
+ * TX/RX Checksum offloading and TSO for non-tunnelled packets.
+ */
+#define VIRTCHNL_VF_OFFLOAD_L2 BIT(0)
+#define VIRTCHNL_VF_OFFLOAD_IWARP BIT(1)
+#define VIRTCHNL_VF_CAP_RDMA VIRTCHNL_VF_OFFLOAD_IWARP
+#define VIRTCHNL_VF_OFFLOAD_RSS_AQ BIT(3)
+#define VIRTCHNL_VF_OFFLOAD_RSS_REG BIT(4)
+#define VIRTCHNL_VF_OFFLOAD_WB_ON_ITR BIT(5)
+#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6)
+/* used to negotiate communicating link speeds in Mbps */
+#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7)
+ /* BIT(8) is reserved */
+#define VIRTCHNL_VF_LARGE_NUM_QPAIRS BIT(9)
+#define VIRTCHNL_VF_OFFLOAD_CRC BIT(10)
+#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15)
+#define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16)
+#define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17)
+#define VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 BIT(18)
+#define VIRTCHNL_VF_OFFLOAD_RSS_PF BIT(19)
+#define VIRTCHNL_VF_OFFLOAD_ENCAP BIT(20)
+#define VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM BIT(21)
+#define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM BIT(22)
+#define VIRTCHNL_VF_OFFLOAD_ADQ BIT(23)
+#define VIRTCHNL_VF_OFFLOAD_ADQ_V2 BIT(24)
+#define VIRTCHNL_VF_OFFLOAD_USO BIT(25)
+ /* BIT(26) is reserved */
+#define VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF BIT(27)
+#define VIRTCHNL_VF_OFFLOAD_FDIR_PF BIT(28)
+#define VIRTCHNL_VF_OFFLOAD_QOS BIT(29)
+ /* BIT(30) is reserved */
+#define VIRTCHNL_VF_CAP_PTP BIT(31)
+
+#define VF_BASE_MODE_OFFLOADS (VIRTCHNL_VF_OFFLOAD_L2 | \
+ VIRTCHNL_VF_OFFLOAD_VLAN | \
+ VIRTCHNL_VF_OFFLOAD_RSS_PF)
+
+struct virtchnl_vf_resource {
+ u16 num_vsis;
+ u16 num_queue_pairs;
+ u16 max_vectors;
+ u16 max_mtu;
+
+ u32 vf_cap_flags;
+ u32 rss_key_size;
+ u32 rss_lut_size;
+
+ struct virtchnl_vsi_resource vsi_res[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_vf_resource);
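+
+/* Illustrative check (hypothetical VF code): once the PF has answered
+ * VIRTCHNL_OP_GET_VF_RESOURCES, the VF tests the negotiated capability flags
+ * before using an optional feature, for example:
+ *
+ *     if (vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)
+ *             rss_supported = true;
+ */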
+
+/* VIRTCHNL_OP_CONFIG_TX_QUEUE
+ * VF sends this message to set up parameters for one TX queue.
+ * External data buffer contains one instance of virtchnl_txq_info.
+ * PF configures requested queue and returns a status code.
+ */
+
+/* Tx queue config info */
+struct virtchnl_txq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u16 ring_len; /* number of descriptors, multiple of 8 */
+ u16 headwb_enabled; /* deprecated with AVF 1.0 */
+ u64 dma_ring_addr;
+ u64 dma_headwb_addr; /* deprecated with AVF 1.0 */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
+
+/* RX descriptor IDs (range from 0 to 63) */
+enum virtchnl_rx_desc_ids {
+ VIRTCHNL_RXDID_0_16B_BASE = 0,
+ VIRTCHNL_RXDID_1_32B_BASE = 1,
+ VIRTCHNL_RXDID_2_FLEX_SQ_NIC = 2,
+ VIRTCHNL_RXDID_3_FLEX_SQ_SW = 3,
+ VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB = 4,
+ VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL = 5,
+ VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2 = 6,
+ VIRTCHNL_RXDID_7_HW_RSVD = 7,
+ /* 8 through 15 are reserved */
+ VIRTCHNL_RXDID_16_COMMS_GENERIC = 16,
+ VIRTCHNL_RXDID_17_COMMS_AUX_VLAN = 17,
+ VIRTCHNL_RXDID_18_COMMS_AUX_IPV4 = 18,
+ VIRTCHNL_RXDID_19_COMMS_AUX_IPV6 = 19,
+ VIRTCHNL_RXDID_20_COMMS_AUX_FLOW = 20,
+ VIRTCHNL_RXDID_21_COMMS_AUX_TCP = 21,
+ /* 22 through 63 are reserved */
+};
+
+/* RX descriptor ID bitmasks */
+enum virtchnl_rx_desc_id_bitmasks {
+ VIRTCHNL_RXDID_0_16B_BASE_M = BIT(VIRTCHNL_RXDID_0_16B_BASE),
+ VIRTCHNL_RXDID_1_32B_BASE_M = BIT(VIRTCHNL_RXDID_1_32B_BASE),
+ VIRTCHNL_RXDID_2_FLEX_SQ_NIC_M = BIT(VIRTCHNL_RXDID_2_FLEX_SQ_NIC),
+ VIRTCHNL_RXDID_3_FLEX_SQ_SW_M = BIT(VIRTCHNL_RXDID_3_FLEX_SQ_SW),
+ VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB_M = BIT(VIRTCHNL_RXDID_4_FLEX_SQ_NIC_VEB),
+ VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL_M = BIT(VIRTCHNL_RXDID_5_FLEX_SQ_NIC_ACL),
+ VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2_M = BIT(VIRTCHNL_RXDID_6_FLEX_SQ_NIC_2),
+ VIRTCHNL_RXDID_7_HW_RSVD_M = BIT(VIRTCHNL_RXDID_7_HW_RSVD),
+ /* 8 through 15 are reserved */
+ VIRTCHNL_RXDID_16_COMMS_GENERIC_M = BIT(VIRTCHNL_RXDID_16_COMMS_GENERIC),
+ VIRTCHNL_RXDID_17_COMMS_AUX_VLAN_M = BIT(VIRTCHNL_RXDID_17_COMMS_AUX_VLAN),
+ VIRTCHNL_RXDID_18_COMMS_AUX_IPV4_M = BIT(VIRTCHNL_RXDID_18_COMMS_AUX_IPV4),
+ VIRTCHNL_RXDID_19_COMMS_AUX_IPV6_M = BIT(VIRTCHNL_RXDID_19_COMMS_AUX_IPV6),
+ VIRTCHNL_RXDID_20_COMMS_AUX_FLOW_M = BIT(VIRTCHNL_RXDID_20_COMMS_AUX_FLOW),
+ VIRTCHNL_RXDID_21_COMMS_AUX_TCP_M = BIT(VIRTCHNL_RXDID_21_COMMS_AUX_TCP),
+ /* 22 through 63 are reserved */
+};
+
+/* VIRTCHNL_OP_CONFIG_RX_QUEUE
+ * VF sends this message to set up parameters for one RX queue.
+ * External data buffer contains one instance of virtchnl_rxq_info.
+ * PF configures requested queue and returns a status code. The
+ * crc_disable flag disables CRC stripping on the VF. Setting
+ * the crc_disable flag to 1 will disable CRC stripping for each
+ * queue in the VF where the flag is set. The VIRTCHNL_VF_OFFLOAD_CRC
+ * offload must have been set prior to sending this info or the PF
+ * will ignore the request. This flag should be set the same for
+ * all of the queues for a VF.
+ */
+
+/* Rx queue config info */
+struct virtchnl_rxq_info {
+ u16 vsi_id;
+ u16 queue_id;
+ u32 ring_len; /* number of descriptors, multiple of 32 */
+ u16 hdr_size;
+ u16 splithdr_enabled; /* deprecated with AVF 1.0 */
+ u32 databuffer_size;
+ u32 max_pkt_size;
+ u8 crc_disable;
+ u8 pad1[3];
+ u64 dma_ring_addr;
+
+ /* see enum virtchnl_rx_hsplit; deprecated with AVF 1.0 */
+ s32 rx_split_pos;
+ u32 pad2;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_rxq_info);
+
+/* VIRTCHNL_OP_CONFIG_VSI_QUEUES
+ * VF sends this message to set parameters for active TX and RX queues
+ * associated with the specified VSI.
+ * PF configures queues and returns status.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the VSI, an error is returned and no queues are configured.
+ * NOTE: The VF is not required to configure all queues in a single request.
+ * It may send multiple messages. PF drivers must correctly handle all VF
+ * requests.
+ */
+struct virtchnl_queue_pair_info {
+ /* NOTE: vsi_id and queue_id should be identical for both queues. */
+ struct virtchnl_txq_info txq;
+ struct virtchnl_rxq_info rxq;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(64, virtchnl_queue_pair_info);
+
+struct virtchnl_vsi_queue_config_info {
+ u16 vsi_id;
+ u16 num_queue_pairs;
+ u32 pad;
+ struct virtchnl_queue_pair_info qpair[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_vsi_queue_config_info);
+
+/* VIRTCHNL_OP_REQUEST_QUEUES
+ * VF sends this message to request the PF to allocate additional queues to
+ * this VF. Each VF gets a guaranteed number of queues on init but asking for
+ * additional queues must be negotiated. This is a best effort request as it
+ * is possible the PF does not have enough queues left to support the request.
+ * If the PF cannot support the number requested it will respond with the
+ * maximum number it is able to support. If the request is successful, PF will
+ * then reset the VF to institute required changes.
+ */
+
+/* VF resource request */
+struct virtchnl_vf_res_request {
+ u16 num_queue_pairs;
+};
+
+/* VIRTCHNL_OP_CONFIG_IRQ_MAP
+ * VF uses this message to map vectors to queues.
+ * The rxq_map and txq_map fields are bitmaps used to indicate which queues
+ * are to be associated with the specified vector.
+ * The "other" causes are always mapped to vector 0. The VF may not request
+ * that vector 0 be used for traffic.
+ * PF configures interrupt mapping and returns status.
+ * NOTE: due to hardware requirements, all active queues (both TX and RX)
+ * should be mapped to interrupts, even if the driver intends to operate
+ * only in polling mode. In this case the interrupt may be disabled, but
+ * the ITR timer will still run to trigger writebacks.
+ */
+struct virtchnl_vector_map {
+ u16 vsi_id;
+ u16 vector_id;
+ u16 rxq_map;
+ u16 txq_map;
+ u16 rxitr_idx;
+ u16 txitr_idx;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_vector_map);
+
+struct virtchnl_irq_map_info {
+ u16 num_vectors;
+ struct virtchnl_vector_map vecmap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(14, virtchnl_irq_map_info);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES
+ * VIRTCHNL_OP_DISABLE_QUEUES
+ * VF sends these messages to enable or disable TX/RX queue pairs.
+ * The queues fields are bitmaps indicating which queues to act upon.
+ * (Currently, we only support 16 queues per VF, but we make the field
+ * u32 to allow for expansion.)
+ * PF performs requested action and returns status.
+ * NOTE: The VF is not required to enable/disable all queues in a single
+ * request. It may send multiple messages.
+ * PF drivers must correctly handle all VF requests.
+ */
+struct virtchnl_queue_select {
+ u16 vsi_id;
+ u16 pad;
+ u32 rx_queues;
+ u32 tx_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_select);
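+
+/* For example, to enable Rx/Tx queue pairs 0 and 1 of a VSI, a VF would set
+ * rx_queues = tx_queues = BIT(0) | BIT(1) (i.e. 0x3) in this structure and
+ * send it with VIRTCHNL_OP_ENABLE_QUEUES.
+ */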
+
+/* VIRTCHNL_OP_GET_MAX_RSS_QREGION
+ *
+ * if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated in VIRTCHNL_OP_GET_VF_RESOURCES
+ * then this op must be supported.
+ *
+ * VF sends this message in order to query the max RSS queue region
+ * size supported by PF, when VIRTCHNL_VF_LARGE_NUM_QPAIRS is enabled.
+ * This information should be used when configuring the RSS LUT and/or
+ * configuring queue region based filters.
+ *
+ * The maximum RSS queue region is 2^qregion_width. So, a qregion_width
+ * of 6 would inform the VF that the PF supports a maximum RSS queue region
+ * of 64.
+ *
+ * A queue region represents a range of queues that can be used to configure
+ * an RSS LUT. For example, if a VF is given 64 queues, but only a max queue
+ * region size of 16 (i.e. 2^qregion_width = 16) then it will only be able
+ * to configure the RSS LUT with queue indices from 0 to 15. However, other
+ * filters can be used to direct packets to queues >15 via specifying a queue
+ * base/offset and queue region width.
+ */
+struct virtchnl_max_rss_qregion {
+ u16 vport_id;
+ u16 qregion_width;
+ u8 pad[4];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_max_rss_qregion);
+
+/* VIRTCHNL_OP_ADD_ETH_ADDR
+ * VF sends this message in order to add one or more unicast or multicast
+ * address filters for the specified VSI.
+ * PF adds the filters and returns status.
+ */
+
+/* VIRTCHNL_OP_DEL_ETH_ADDR
+ * VF sends this message in order to remove one or more unicast or multicast
+ * filters for the specified VSI.
+ * PF removes the filters and returns status.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_LEGACY
+ * Prior to adding the @type member to virtchnl_ether_addr, there were 2 pad
+ * bytes. Moving forward all VF drivers should not set type to
+ * VIRTCHNL_ETHER_ADDR_LEGACY. This is only here to not break previous/legacy
+ * behavior. The control plane function (i.e. PF) can use a best effort method
+ * of tracking the primary/device unicast in this case, but there is no
+ * guarantee and functionality depends on the implementation of the PF.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_PRIMARY
+ * All VF drivers should set @type to VIRTCHNL_ETHER_ADDR_PRIMARY for the
+ * primary/device unicast MAC address filter for VIRTCHNL_OP_ADD_ETH_ADDR and
+ * VIRTCHNL_OP_DEL_ETH_ADDR. This allows for the underlying control plane
+ * function (i.e. PF) to accurately track and use this MAC address for
+ * displaying on the host and for VM/function reset.
+ */
+
+/* VIRTCHNL_ETHER_ADDR_EXTRA
+ * All VF drivers should set @type to VIRTCHNL_ETHER_ADDR_EXTRA for any extra
+ * unicast and/or multicast filters that are being added/deleted via
+ * VIRTCHNL_OP_DEL_ETH_ADDR/VIRTCHNL_OP_ADD_ETH_ADDR respectively.
+ */
+struct virtchnl_ether_addr {
+ u8 addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ u8 type;
+#define VIRTCHNL_ETHER_ADDR_LEGACY 0
+#define VIRTCHNL_ETHER_ADDR_PRIMARY 1
+#define VIRTCHNL_ETHER_ADDR_EXTRA 2
+#define VIRTCHNL_ETHER_ADDR_TYPE_MASK 3 /* first two bits of type are valid */
+ u8 pad;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_ether_addr);
+
+struct virtchnl_ether_addr_list {
+ u16 vsi_id;
+ u16 num_elements;
+ struct virtchnl_ether_addr list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_ether_addr_list);
+
+/* VIRTCHNL_OP_ADD_VLAN
+ * VF sends this message to add one or more VLAN tag filters for receives.
+ * PF adds the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+/* VIRTCHNL_OP_DEL_VLAN
+ * VF sends this message to remove one or more VLAN tag filters for receives.
+ * PF removes the filters and returns status.
+ * If a port VLAN is configured by the PF, this operation will return an
+ * error to the VF.
+ */
+
+struct virtchnl_vlan_filter_list {
+ u16 vsi_id;
+ u16 num_elements;
+ u16 vlan_id[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_vlan_filter_list);
+
+/* This enum is used for all of the VIRTCHNL_VF_OFFLOAD_VLAN_V2_CAPS related
+ * structures and opcodes.
+ *
+ * VIRTCHNL_VLAN_UNSUPPORTED - This field is not supported and if a VF driver
+ * populates it the PF should return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 - This field supports 0x8100 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 - This field supports 0x88A8 ethertype.
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 - This field supports 0x9100 ethertype.
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_AND - Used when multiple ethertypes can be supported
+ * by the PF concurrently. For example, if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 AND VIRTCHNL_VLAN_ETHERTYPE_88A8 filters it
+ * would OR the following bits:
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * The VF would interpret this as VLAN filtering can be supported on both 0x8100
+ * and 0x88A8 VLAN ethertypes.
+ *
+ * VIRTCHNL_ETHERTYPE_XOR - Used when only a single ethertype can be supported
+ * by the PF concurrently. For example if the PF can support
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 XOR VIRTCHNL_VLAN_ETHERTYPE_88A8 stripping
+ * offload it would OR the following bits:
+ *
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * The VF would interpret this as VLAN stripping can be supported on either
+ * 0x8100 or 0x88a8 VLAN ethertypes. So when requesting VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 the specified ethertype will override
+ * the previously set value.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 - Used to tell the VF to insert and/or
+ * strip the VLAN tag using the L2TAG1 field of the Tx/Rx descriptors.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 - Used to tell the VF to insert hardware
+ * offloaded VLAN tags using the L2TAG2 field of the Tx descriptor.
+ *
+ * VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 - Used to tell the VF to strip hardware
+ * offloaded VLAN tags using the L2TAG2_2 field of the Rx descriptor.
+ *
+ * VIRTCHNL_VLAN_PRIO - This field supports VLAN priority bits. This is used for
+ * VLAN filtering if the underlying PF supports it.
+ *
+ * VIRTCHNL_VLAN_TOGGLE_ALLOWED - This field is used to say whether a
+ * certain VLAN capability can be toggled. For example if the underlying PF/CP
+ * allows the VF to toggle VLAN filtering, stripping, and/or insertion it should
+ * set this bit along with the supported ethertypes.
+ */
+enum virtchnl_vlan_support {
+ VIRTCHNL_VLAN_UNSUPPORTED = 0,
+ VIRTCHNL_VLAN_ETHERTYPE_8100 = 0x00000001,
+ VIRTCHNL_VLAN_ETHERTYPE_88A8 = 0x00000002,
+ VIRTCHNL_VLAN_ETHERTYPE_9100 = 0x00000004,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG1 = 0x00000100,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2 = 0x00000200,
+ VIRTCHNL_VLAN_TAG_LOCATION_L2TAG2_2 = 0x00000400,
+ VIRTCHNL_VLAN_PRIO = 0x01000000,
+ VIRTCHNL_VLAN_FILTER_MASK = 0x10000000,
+ VIRTCHNL_VLAN_ETHERTYPE_AND = 0x20000000,
+ VIRTCHNL_VLAN_ETHERTYPE_XOR = 0x40000000,
+ VIRTCHNL_VLAN_TOGGLE = 0x80000000
+};
+
+/* This structure is used as part of the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * for filtering, insertion, and stripping capabilities.
+ *
+ * If only outer capabilities are supported (for filtering, insertion, and/or
+ * stripping) then this refers to the outer most or single VLAN from the VF's
+ * perspective.
+ *
+ * If only inner capabilities are supported (for filtering, insertion, and/or
+ * stripping) then this refers to the outer most or single VLAN from the VF's
+ * perspective. Functionally this is the same as if only outer capabilities are
+ * supported. The VF driver is just forced to use the inner fields when
+ * adding/deleting filters and enabling/disabling offloads (if supported).
+ *
+ * If both outer and inner capabilities are supported (for filtering, insertion,
+ * and/or stripping) then outer refers to the outer most or single VLAN and
+ * inner refers to the second VLAN, if it exists, in the packet.
+ *
+ * There is no support for tunneled VLAN offloads, so outer or inner are never
+ * referring to a tunneled packet from the VF's perspective.
+ */
+struct virtchnl_vlan_supported_caps {
+ u32 outer;
+ u32 inner;
+};
+
+/* The PF populates these fields based on the supported VLAN filtering. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ADD_VLAN_V2 or VIRTCHNL_OP_DEL_VLAN_V2 messages using
+ * the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN filtering setting if the
+ * VIRTCHNL_VLAN_TOGGLE bit is set.
+ *
+ * The ethertype(s) specified in the ethertype_init field are the ethertypes
+ * enabled for VLAN filtering. VLAN filtering in this case refers to the outer
+ * most VLAN from the VF's perspective. If both inner and outer filtering are
+ * allowed then ethertype_init only refers to the outer most VLAN as the only
+ * VLAN ethertype supported for inner VLAN filtering is
+ * VIRTCHNL_VLAN_ETHERTYPE_8100. By default, inner VLAN filtering is disabled
+ * when both inner and outer filtering are allowed.
+ *
+ * The max_filters field tells the VF how many VLAN filters it's allowed to have
+ * at any one time. If it exceeds this amount and tries to add another filter,
+ * then the request will be rejected by the PF. To prevent failures, the VF
+ * should keep track of how many VLAN filters it has added and not attempt to
+ * add more than max_filters.
+ */
+struct virtchnl_vlan_filtering_caps {
+ struct virtchnl_vlan_supported_caps filtering_support;
+ u32 ethertype_init;
+ u16 max_filters;
+ u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_filtering_caps);
+
+/* This enum is used for the virtchnl_vlan_offload_caps structure to specify
+ * if the PF supports a different ethertype for stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION - The ethertype(s) specified
+ * for stripping affect the ethertype(s) specified for insertion and vice
+ * versa. If the VF tries to configure VLAN stripping via
+ * VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 with VIRTCHNL_VLAN_ETHERTYPE_8100 then
+ * that will be the ethertype for both stripping and insertion.
+ *
+ * VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED - The ethertype(s) specified for
+ * stripping do not affect the ethertype(s) specified for insertion and vice
+ * versa.
+ */
+enum virtchnl_vlan_ethertype_match {
+ VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION = 0,
+ VIRTCHNL_ETHERTYPE_MATCH_NOT_REQUIRED = 1,
+};
+
+/* The PF populates these fields based on the supported VLAN offloads. If a
+ * field is VIRTCHNL_VLAN_UNSUPPORTED then it's not supported and the PF will
+ * reject any VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 or
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2 messages using the unsupported fields.
+ *
+ * Also, a VF is only allowed to toggle its VLAN offload setting if the
+ * VIRTCHNL_VLAN_TOGGLE_ALLOWED bit is set.
+ *
+ * The VF driver needs to be aware of how the tags are stripped by hardware and
+ * inserted by the VF driver based on the level of offload support. The PF will
+ * populate these fields based on where the VLAN tags are expected to be
+ * offloaded via the VIRTCHNL_VLAN_TAG_LOCATION_* bits. The VF will need to
+ * interpret these fields. See the definition of the
+ * VIRTCHNL_VLAN_TAG_LOCATION_* bits above the virtchnl_vlan_support
+ * enumeration.
+ */
+struct virtchnl_vlan_offload_caps {
+ struct virtchnl_vlan_supported_caps stripping_support;
+ struct virtchnl_vlan_supported_caps insertion_support;
+ u32 ethertype_init;
+ u8 ethertype_match;
+ u8 pad[3];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_vlan_offload_caps);
+
+/* VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
+ * VF sends this message to determine its VLAN capabilities.
+ *
+ * PF will mark which capabilities it supports based on hardware support and
+ * current configuration. For example, if a port VLAN is configured the PF will
+ * not allow outer VLAN filtering, stripping, or insertion to be configured so
+ * it will block these features from the VF.
+ *
+ * The VF will need to cross-reference its capabilities with the PF's
+ * capabilities in the response message from the PF to determine the VLAN
+ * support.
+ */
+struct virtchnl_vlan_caps {
+ struct virtchnl_vlan_filtering_caps filtering;
+ struct virtchnl_vlan_offload_caps offloads;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_caps);
+
+struct virtchnl_vlan {
+ u16 tci; /* tci[15:13] = PCP and tci[11:0] = VID */
+ u16 tci_mask; /* only valid if VIRTCHNL_VLAN_FILTER_MASK set in
+ * filtering caps
+ */
+ u16 tpid; /* 0x8100, 0x88a8, etc. and only type(s) set in
+ * filtering caps. Note that tpid here does not refer to
+ * VIRTCHNL_VLAN_ETHERTYPE_*, but it refers to the
+ * actual 2-byte VLAN TPID
+ */
+ u8 pad[2];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_vlan);
+
+struct virtchnl_vlan_filter {
+ struct virtchnl_vlan inner;
+ struct virtchnl_vlan outer;
+ u8 pad[16];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(32, virtchnl_vlan_filter);
+
+/* VIRTCHNL_OP_ADD_VLAN_V2
+ * VIRTCHNL_OP_DEL_VLAN_V2
+ *
+ * VF sends these messages to add/del one or more VLAN tag filters for Rx
+ * traffic.
+ *
+ * The PF attempts to add the filters and returns status.
+ *
+ * The VF should only ever attempt to add/del virtchnl_vlan_filter(s) using the
+ * supported fields negotiated via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS.
+ */
+struct virtchnl_vlan_filter_list_v2 {
+ u16 vport_id;
+ u16 num_elements;
+ u8 pad[4];
+ struct virtchnl_vlan_filter filters[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_vlan_filter_list_v2);
+
+/* VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2
+ * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2
+ *
+ * VF sends this message to enable or disable VLAN stripping or insertion. It
+ * also needs to specify an ethertype. The VF knows which VLAN ethertypes are
+ * allowed and whether or not it's allowed to enable/disable the specific
+ * offload via the VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to
+ * parse the virtchnl_vlan_caps.offloads fields to determine which offload
+ * messages are allowed.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.offloads in the
+ * following manner the VF will be allowed to enable and/or disable 0x8100 inner
+ * VLAN insertion and/or stripping via the opcodes listed above. Inner in this
+ * case means the outer most or single VLAN from the VF's perspective. This is
+ * because no outer offloads are supported. See the comments above the
+ * virtchnl_vlan_supported_caps structure for more details.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.inner =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.inner =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * In order to enable inner (again note that in this case inner is the outer
+ * most or single VLAN from the VF's perspective) VLAN stripping for 0x8100
+ * VLANs, the VF would populate the virtchnl_vlan_setting structure in the
+ * following manner and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message.
+ *
+ * virtchnl_vlan_setting.inner_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * The reason that VLAN TPID(s) are not being used for the
+ * outer_ethertype_setting and inner_ethertype_setting fields is because it's
+ * possible a device could support VLAN insertion and/or stripping offload on
+ * multiple ethertypes concurrently, so this method allows a VF to request
+ * multiple ethertypes in one message using the virtchnl_vlan_support
+ * enumeration.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.offloads in the
+ * following manner the VF will be allowed to enable 0x8100 and 0x88a8 outer
+ * VLAN insertion and stripping simultaneously. The
+ * virtchnl_vlan_caps.offloads.ethertype_match field will also have to be
+ * populated based on what the PF can support.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * In order to enable outer VLAN stripping for 0x8100 and 0x88a8 VLANs, the VF
+ * would populate the virtchnl_vlan_setting structure in the following manner
+ * and send the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 message.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * There is also the case where a PF and the underlying hardware can support
+ * VLAN offloads on multiple ethertypes, but not concurrently. For example, if
+ * the PF populates the virtchnl_vlan_caps.offloads in the following manner the
+ * VF will be allowed to enable and/or disable 0x8100 XOR 0x88a8 outer VLAN
+ * offloads. The ethertypes must match for stripping and insertion.
+ *
+ * virtchnl_vlan_caps.offloads.stripping_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * virtchnl_vlan_caps.offloads.insertion_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_XOR;
+ *
+ * virtchnl_vlan_caps.offloads.ethertype_match =
+ * VIRTCHNL_ETHERTYPE_STRIPPING_MATCHES_INSERTION;
+ *
+ * In order to enable outer VLAN stripping for 0x88a8 VLANs, the VF would
+ * populate the virtchnl_vlan_setting structure in the following manner and send
+ * the VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2. Also, this will change the
+ * ethertype for VLAN insertion if it's enabled. So, for completeness, a
+ * VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2 with the same ethertype should be sent.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting = VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ * virtchnl_vlan_setting.vport_id = vport_id or vsi_id assigned to the VF on
+ * initialization.
+ *
+ * VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2
+ * VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2
+ *
+ * VF sends this message to enable or disable VLAN filtering. It also needs to
+ * specify an ethertype. The VF knows which VLAN ethertypes are allowed and
+ * whether or not it's allowed to enable/disable filtering via the
+ * VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS message. The VF needs to
+ * parse the virtchnl_vlan_caps.filtering fields to determine which, if any,
+ * filtering messages are allowed.
+ *
+ * For example, if the PF populates the virtchnl_vlan_caps.filtering in the
+ * following manner the VF will be allowed to enable/disable 0x8100 and 0x88a8
+ * outer VLAN filtering together. Note, that the VIRTCHNL_VLAN_ETHERTYPE_AND
+ * means that all filtering ethertypes will be enabled and disabled together
+ * regardless of the request from the VF. This means that the underlying
+ * hardware only supports VLAN filtering for all of the specified ethertypes
+ * or none of them.
+ *
+ * virtchnl_vlan_caps.filtering.filtering_support.outer =
+ * VIRTCHNL_VLAN_TOGGLE |
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8 |
+ * VIRTCHNL_VLAN_ETHERTYPE_9100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_AND;
+ *
+ * In order to enable outer VLAN filtering for 0x88a8 and 0x8100 VLANs (0x9100
+ * VLANs aren't supported by the VF driver), the VF would populate the
+ * virtchnl_vlan_setting structure in the following manner and send the
+ * VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2. The same message format would be used
+ * to disable outer VLAN filtering for 0x88a8 and 0x8100 VLANs, but the
+ * VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2 opcode is used.
+ *
+ * virtchnl_vlan_setting.outer_ethertype_setting =
+ * VIRTCHNL_VLAN_ETHERTYPE_8100 |
+ * VIRTCHNL_VLAN_ETHERTYPE_88A8;
+ *
+ */
+struct virtchnl_vlan_setting {
+ u32 outer_ethertype_setting;
+ u32 inner_ethertype_setting;
+ u16 vport_id;
+ u8 pad[6];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vlan_setting);
+
+/* VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE
+ * VF sends VSI id and flags.
+ * PF returns status code in retval.
+ * Note: we assume that broadcast accept mode is always enabled.
+ */
+struct virtchnl_promisc_info {
+ u16 vsi_id;
+ u16 flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(4, virtchnl_promisc_info);
+
+#define FLAG_VF_UNICAST_PROMISC 0x00000001
+#define FLAG_VF_MULTICAST_PROMISC 0x00000002
+
+/* VIRTCHNL_OP_GET_STATS
+ * VF sends this message to request stats for the selected VSI. VF uses
+ * the virtchnl_queue_select struct to specify the VSI. The queue_id
+ * field is ignored by the PF.
+ *
+ * PF replies with struct virtchnl_eth_stats in an external buffer.
+ */
+
+struct virtchnl_eth_stats {
+ u64 rx_bytes; /* received bytes */
+ u64 rx_unicast; /* received unicast pkts */
+ u64 rx_multicast; /* received multicast pkts */
+ u64 rx_broadcast; /* received broadcast pkts */
+ u64 rx_discards;
+ u64 rx_unknown_protocol;
+ u64 tx_bytes; /* transmitted bytes */
+ u64 tx_unicast; /* transmitted unicast pkts */
+ u64 tx_multicast; /* transmitted multicast pkts */
+ u64 tx_broadcast; /* transmitted broadcast pkts */
+ u64 tx_discards;
+ u64 tx_errors;
+};
+
+/* VIRTCHNL_OP_CONFIG_RSS_KEY
+ * VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct virtchnl_rss_key {
+ u16 vsi_id;
+ u16 key_len;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_key);
+
+struct virtchnl_rss_lut {
+ u16 vsi_id;
+ u16 lut_entries;
+ u8 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(6, virtchnl_rss_lut);
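+
+/* Sizing note (illustrative): because key[] and lut[] are declared with a
+ * single element, a VF typically allocates the message buffer as
+ *
+ *     sizeof(struct virtchnl_rss_key) + key_len - 1
+ *
+ * (and likewise for the LUT) so that the packed bytes fit at the end of the
+ * structure.
+ */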
+
+/* VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ */
+struct virtchnl_rss_hena {
+ u64 hena;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_rss_hena);
+
+/* Type of RSS algorithm */
+enum virtchnl_rss_algorithm {
+ VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0,
+ VIRTCHNL_RSS_ALG_R_ASYMMETRIC = 1,
+ VIRTCHNL_RSS_ALG_TOEPLITZ_SYMMETRIC = 2,
+ VIRTCHNL_RSS_ALG_XOR_SYMMETRIC = 3,
+};
+
+/* This is used by the PF driver to enforce how many channels can be supported.
+ * When the ADQ_V2 capability is negotiated, up to 16 channels are allowed;
+ * otherwise the PF driver allows a maximum of 4 channels.
+ */
+#define VIRTCHNL_MAX_ADQ_CHANNELS 4
+#define VIRTCHNL_MAX_ADQ_V2_CHANNELS 16
+
+/* VIRTCHNL_OP_ENABLE_CHANNELS
+ * VIRTCHNL_OP_DISABLE_CHANNELS
+ * VF sends these messages to enable or disable channels based on
+ * the user specified queue count and queue offset for each traffic class.
+ * This struct encompasses all the information that the PF needs from
+ * VF to create a channel.
+ */
+struct virtchnl_channel_info {
+ u16 count; /* number of queues in a channel */
+ u16 offset; /* queues in a channel start from 'offset' */
+ u32 pad;
+ u64 max_tx_rate;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_channel_info);
+
+struct virtchnl_tc_info {
+ u32 num_tc;
+ u32 pad;
+ struct virtchnl_channel_info list[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_tc_info);
+
+/* VIRTCHNL_ADD_CLOUD_FILTER
+ * VIRTCHNL_DEL_CLOUD_FILTER
+ * VF sends these messages to add or delete a cloud filter based on the
+ * user specified match and action filters. These structures encompass
+ * all the information that the PF needs from the VF to add/delete a
+ * cloud filter.
+ */
+
+struct virtchnl_l4_spec {
+ u8 src_mac[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ u8 dst_mac[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+ /* vlan_prio is part of this 16-bit field, even from the OS perspective:
+ * bits 11..0 hold the actual vlan_id and bits 14..12 hold vlan_prio.
+ * In the future, if vlan_prio offload is added, that information will be
+ * passed as part of the "vlan_id" field, in bits 14..12.
+ */
+ __be16 vlan_id;
+ __be16 pad; /* reserved for future use */
+ __be32 src_ip[4];
+ __be32 dst_ip[4];
+ __be16 src_port;
+ __be16 dst_port;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(52, virtchnl_l4_spec);
+
+union virtchnl_flow_spec {
+ struct virtchnl_l4_spec tcp_spec;
+ u8 buffer[128]; /* reserved for future use */
+};
+
+VIRTCHNL_CHECK_UNION_LEN(128, virtchnl_flow_spec);
+
+enum virtchnl_action {
+ /* action types */
+ VIRTCHNL_ACTION_DROP = 0,
+ VIRTCHNL_ACTION_TC_REDIRECT,
+ VIRTCHNL_ACTION_PASSTHRU,
+ VIRTCHNL_ACTION_QUEUE,
+ VIRTCHNL_ACTION_Q_REGION,
+ VIRTCHNL_ACTION_MARK,
+ VIRTCHNL_ACTION_COUNT,
+};
+
+enum virtchnl_flow_type {
+ /* flow types */
+ VIRTCHNL_TCP_V4_FLOW = 0,
+ VIRTCHNL_TCP_V6_FLOW,
+ VIRTCHNL_UDP_V4_FLOW,
+ VIRTCHNL_UDP_V6_FLOW,
+};
+
+struct virtchnl_filter {
+ union virtchnl_flow_spec data;
+ union virtchnl_flow_spec mask;
+
+ /* see enum virtchnl_flow_type */
+ s32 flow_type;
+
+ /* see enum virtchnl_action */
+ s32 action;
+ u32 action_meta;
+ u8 field_flags;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(272, virtchnl_filter);
+
+struct virtchnl_shaper_bw {
+ /* Unit is Kbps */
+ u32 committed;
+ u32 peak;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_shaper_bw);
+
+
+
+/* VIRTCHNL_OP_EVENT
+ * PF sends this message to inform the VF driver of events that may affect it.
+ * No direct response is expected from the VF, though it may generate other
+ * messages in response to this one.
+ */
+enum virtchnl_event_codes {
+ VIRTCHNL_EVENT_UNKNOWN = 0,
+ VIRTCHNL_EVENT_LINK_CHANGE,
+ VIRTCHNL_EVENT_RESET_IMPENDING,
+ VIRTCHNL_EVENT_PF_DRIVER_CLOSE,
+};
+
+#define PF_EVENT_SEVERITY_INFO 0
+#define PF_EVENT_SEVERITY_ATTENTION 1
+#define PF_EVENT_SEVERITY_ACTION_REQUIRED 2
+#define PF_EVENT_SEVERITY_CERTAIN_DOOM 255
+
+struct virtchnl_pf_event {
+ /* see enum virtchnl_event_codes */
+ s32 event;
+ union {
+ /* If the PF driver does not support the new speed reporting
+ * capabilities then use link_event else use link_event_adv to
+ * get the speed and link information. The ability to understand
+ * new speeds is indicated by setting the capability flag
+ * VIRTCHNL_VF_CAP_ADV_LINK_SPEED in vf_cap_flags parameter
+ * in virtchnl_vf_resource struct and can be used to determine
+ * which link event struct to use below.
+ */
+ struct {
+ enum virtchnl_link_speed link_speed;
+ bool link_status;
+ u8 pad[3];
+ } link_event;
+ struct {
+ /* link_speed provided in Mbps */
+ u32 link_speed;
+ u8 link_status;
+ u8 pad[3];
+ } link_event_adv;
+ } event_data;
+
+ s32 severity;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_pf_event);
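+
+/* Illustrative sketch only (not part of the virtchnl ABI): on receiving a
+ * VIRTCHNL_OP_EVENT message, a VF might decode a link change as below.
+ * msg is the received buffer, vf_cap_flags is assumed to be the capability
+ * word returned in virtchnl_vf_resource, and report_link() is a hypothetical
+ * driver callback.
+ *
+ *     struct virtchnl_pf_event *pfe = (struct virtchnl_pf_event *)msg;
+ *
+ *     if (pfe->event == VIRTCHNL_EVENT_LINK_CHANGE) {
+ *         if (vf_cap_flags & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
+ *             report_link(pfe->event_data.link_event_adv.link_status,
+ *                         pfe->event_data.link_event_adv.link_speed);
+ *         else
+ *             report_link(pfe->event_data.link_event.link_status,
+ *                         pfe->event_data.link_event.link_speed);
+ *     }
+ */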
+
+
+/* VF reset states - these are written into the RSTAT register:
+ * VFGEN_RSTAT on the VF
+ * When the PF initiates a reset, it writes 0
+ * When the reset is complete, it writes 1
+ * When the PF detects that the VF has recovered, it writes 2
+ * VF checks this register periodically to determine if a reset has occurred,
+ * then polls it to know when the reset is complete.
+ * If either the PF or VF reads the register while the hardware
+ * is in a reset state, it will return DEADBEEF, which, when masked,
+ * will result in 3.
+ */
+enum virtchnl_vfr_states {
+ VIRTCHNL_VFR_INPROGRESS = 0,
+ VIRTCHNL_VFR_COMPLETED,
+ VIRTCHNL_VFR_VFACTIVE,
+};
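+
+/* Illustrative sketch only (not part of the virtchnl ABI): a VF typically
+ * polls VFGEN_RSTAT along these lines. read_vfgen_rstat() and msleep() are
+ * hypothetical helpers; the 0x3 mask keeps only the state bits, so the
+ * DEADBEEF value returned during reset maps to 3, outside the enumerated
+ * states.
+ *
+ *     u32 state;
+ *
+ *     do {
+ *         msleep(10);
+ *         state = read_vfgen_rstat() & 0x3;
+ *     } while (state != VIRTCHNL_VFR_COMPLETED &&
+ *              state != VIRTCHNL_VFR_VFACTIVE);
+ */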
+
+#define VIRTCHNL_MAX_NUM_PROTO_HDRS 32
+#define PROTO_HDR_SHIFT 5
+#define PROTO_HDR_FIELD_START(proto_hdr_type) \
+ (proto_hdr_type << PROTO_HDR_SHIFT)
+#define PROTO_HDR_FIELD_MASK ((1UL << PROTO_HDR_SHIFT) - 1)
+
+/* The VF uses these macros to configure each protocol header.
+ * They specify which protocol headers and protocol header fields to use,
+ * based on virtchnl_proto_hdr_type and virtchnl_proto_hdr_field.
+ * @param hdr: a struct of virtchnl_proto_hdr
+ * @param hdr_type: ETH/IPV4/TCP, etc.
+ * @param field: SRC/DST/TEID/SPI, etc.
+ */
+#define VIRTCHNL_ADD_PROTO_HDR_FIELD(hdr, field) \
+ ((hdr)->field_selector |= BIT((field) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_DEL_PROTO_HDR_FIELD(hdr, field) \
+ ((hdr)->field_selector &= ~BIT((field) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_TEST_PROTO_HDR_FIELD(hdr, val) \
+ ((hdr)->field_selector & BIT((val) & PROTO_HDR_FIELD_MASK))
+#define VIRTCHNL_GET_PROTO_HDR_FIELD(hdr) ((hdr)->field_selector)
+
+#define VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, hdr_type, field) \
+ (VIRTCHNL_ADD_PROTO_HDR_FIELD(hdr, \
+ VIRTCHNL_PROTO_HDR_ ## hdr_type ## _ ## field))
+#define VIRTCHNL_DEL_PROTO_HDR_FIELD_BIT(hdr, hdr_type, field) \
+ (VIRTCHNL_DEL_PROTO_HDR_FIELD(hdr, \
+ VIRTCHNL_PROTO_HDR_ ## hdr_type ## _ ## field))
+
+#define VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, hdr_type) \
+ ((hdr)->type = VIRTCHNL_PROTO_HDR_ ## hdr_type)
+#define VIRTCHNL_GET_PROTO_HDR_TYPE(hdr) \
+ (((hdr)->type) >> PROTO_HDR_SHIFT)
+#define VIRTCHNL_TEST_PROTO_HDR_TYPE(hdr, val) \
+ ((hdr)->type == ((s32)((val) >> PROTO_HDR_SHIFT)))
+#define VIRTCHNL_TEST_PROTO_HDR(hdr, val) \
+ (VIRTCHNL_TEST_PROTO_HDR_TYPE(hdr, val) && \
+ VIRTCHNL_TEST_PROTO_HDR_FIELD(hdr, val))
+
+/* Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers encapsulates or is encapsulated using/by
+ * tunneling or encapsulation protocols for network virtualization.
+ */
+enum virtchnl_proto_hdr_type {
+ VIRTCHNL_PROTO_HDR_NONE,
+ VIRTCHNL_PROTO_HDR_ETH,
+ VIRTCHNL_PROTO_HDR_S_VLAN,
+ VIRTCHNL_PROTO_HDR_C_VLAN,
+ VIRTCHNL_PROTO_HDR_IPV4,
+ VIRTCHNL_PROTO_HDR_IPV6,
+ VIRTCHNL_PROTO_HDR_TCP,
+ VIRTCHNL_PROTO_HDR_UDP,
+ VIRTCHNL_PROTO_HDR_SCTP,
+ VIRTCHNL_PROTO_HDR_GTPU_IP,
+ VIRTCHNL_PROTO_HDR_GTPU_EH,
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN,
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP,
+ VIRTCHNL_PROTO_HDR_PPPOE,
+ VIRTCHNL_PROTO_HDR_L2TPV3,
+ VIRTCHNL_PROTO_HDR_ESP,
+ VIRTCHNL_PROTO_HDR_AH,
+ VIRTCHNL_PROTO_HDR_PFCP,
+ VIRTCHNL_PROTO_HDR_GTPC,
+ VIRTCHNL_PROTO_HDR_ECPRI,
+ VIRTCHNL_PROTO_HDR_L2TPV2,
+ VIRTCHNL_PROTO_HDR_PPP,
+ /* IPv4 and IPv6 Fragment header types are only associated with
+ * VIRTCHNL_PROTO_HDR_IPV4 and VIRTCHNL_PROTO_HDR_IPV6 respectively
+ * and cannot be used independently.
+ */
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG,
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG,
+ VIRTCHNL_PROTO_HDR_GRE,
+};
+
+/* Protocol header field within a protocol header. */
+enum virtchnl_proto_hdr_field {
+ /* ETHER */
+ VIRTCHNL_PROTO_HDR_ETH_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ETH),
+ VIRTCHNL_PROTO_HDR_ETH_DST,
+ VIRTCHNL_PROTO_HDR_ETH_ETHERTYPE,
+ /* S-VLAN */
+ VIRTCHNL_PROTO_HDR_S_VLAN_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_S_VLAN),
+ /* C-VLAN */
+ VIRTCHNL_PROTO_HDR_C_VLAN_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_C_VLAN),
+ /* IPV4 */
+ VIRTCHNL_PROTO_HDR_IPV4_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4),
+ VIRTCHNL_PROTO_HDR_IPV4_DST,
+ VIRTCHNL_PROTO_HDR_IPV4_DSCP,
+ VIRTCHNL_PROTO_HDR_IPV4_TTL,
+ VIRTCHNL_PROTO_HDR_IPV4_PROT,
+ VIRTCHNL_PROTO_HDR_IPV4_CHKSUM,
+ /* IPV6 */
+ VIRTCHNL_PROTO_HDR_IPV6_SRC =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6),
+ VIRTCHNL_PROTO_HDR_IPV6_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_TC,
+ VIRTCHNL_PROTO_HDR_IPV6_HOP_LIMIT,
+ VIRTCHNL_PROTO_HDR_IPV6_PROT,
+ /* IPV6 Prefix */
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX32_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX32_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX40_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX40_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX48_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX48_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX56_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX56_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX64_DST,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_SRC,
+ VIRTCHNL_PROTO_HDR_IPV6_PREFIX96_DST,
+ /* TCP */
+ VIRTCHNL_PROTO_HDR_TCP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_TCP),
+ VIRTCHNL_PROTO_HDR_TCP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_TCP_CHKSUM,
+ /* UDP */
+ VIRTCHNL_PROTO_HDR_UDP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_UDP),
+ VIRTCHNL_PROTO_HDR_UDP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_UDP_CHKSUM,
+ /* SCTP */
+ VIRTCHNL_PROTO_HDR_SCTP_SRC_PORT =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_SCTP),
+ VIRTCHNL_PROTO_HDR_SCTP_DST_PORT,
+ VIRTCHNL_PROTO_HDR_SCTP_CHKSUM,
+ /* GTPU_IP */
+ VIRTCHNL_PROTO_HDR_GTPU_IP_TEID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_IP),
+ /* GTPU_EH */
+ VIRTCHNL_PROTO_HDR_GTPU_EH_PDU =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH),
+ VIRTCHNL_PROTO_HDR_GTPU_EH_QFI,
+ /* PPPOE */
+ VIRTCHNL_PROTO_HDR_PPPOE_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_PPPOE),
+ /* L2TPV3 */
+ VIRTCHNL_PROTO_HDR_L2TPV3_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_L2TPV3),
+ /* ESP */
+ VIRTCHNL_PROTO_HDR_ESP_SPI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ESP),
+ /* AH */
+ VIRTCHNL_PROTO_HDR_AH_SPI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_AH),
+ /* PFCP */
+ VIRTCHNL_PROTO_HDR_PFCP_S_FIELD =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_PFCP),
+ VIRTCHNL_PROTO_HDR_PFCP_SEID,
+ /* GTPC */
+ VIRTCHNL_PROTO_HDR_GTPC_TEID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPC),
+ /* ECPRI */
+ VIRTCHNL_PROTO_HDR_ECPRI_MSG_TYPE =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_ECPRI),
+ VIRTCHNL_PROTO_HDR_ECPRI_PC_RTC_ID,
+ /* IPv4 Dummy Fragment */
+ VIRTCHNL_PROTO_HDR_IPV4_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV4_FRAG),
+ /* IPv6 Extension Fragment */
+ VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG_PKID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG),
+ /* GTPU_DWN/UP */
+ VIRTCHNL_PROTO_HDR_GTPU_DWN_QFI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_DWN),
+ VIRTCHNL_PROTO_HDR_GTPU_UP_QFI =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_GTPU_EH_PDU_UP),
+ /* L2TPv2 */
+ VIRTCHNL_PROTO_HDR_L2TPV2_SESS_ID =
+ PROTO_HDR_FIELD_START(VIRTCHNL_PROTO_HDR_L2TPV2),
+ VIRTCHNL_PROTO_HDR_L2TPV2_LEN_SESS_ID,
+};
+
+struct virtchnl_proto_hdr {
+ /* see enum virtchnl_proto_hdr_type */
+ s32 type;
+ u32 field_selector; /* a bit mask to select field for header type */
+ u8 buffer[64];
+ /**
+ * Binary buffer in network order for the specific header type.
+ * For example, if type = VIRTCHNL_PROTO_HDR_IPV4, an IPv4
+ * header is expected to be copied into the buffer.
+ */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_proto_hdr);
+
+struct virtchnl_proto_hdrs {
+ u8 tunnel_level;
+ /**
+ * Specifies where the protocol headers start from:
+ * 0 - from the outer layer
+ * 1 - from the first inner layer
+ * 2 - from the second inner layer
+ * ...
+ */
+ int count; /* number of proto layers, must be < VIRTCHNL_MAX_NUM_PROTO_HDRS */
+ struct virtchnl_proto_hdr proto_hdr[VIRTCHNL_MAX_NUM_PROTO_HDRS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2312, virtchnl_proto_hdrs);
+
+struct virtchnl_rss_cfg {
+ struct virtchnl_proto_hdrs proto_hdrs; /* protocol headers */
+
+ /* see enum virtchnl_rss_algorithm; RSS algorithm type */
+ s32 rss_algorithm;
+ u8 reserved[128]; /* reserved for future use */
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2444, virtchnl_rss_cfg);
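+
+/* Illustrative sketch only (not part of the virtchnl ABI): a VF could build a
+ * virtchnl_rss_cfg for VIRTCHNL_OP_ADD_RSS_CFG that hashes on the IPv4
+ * source/destination addresses and the TCP ports, using the protocol header
+ * macros defined above. The rss_algorithm value is assumed to come from the
+ * virtchnl_rss_algorithm enum defined earlier in this header.
+ *
+ *     struct virtchnl_rss_cfg cfg = {0};
+ *     struct virtchnl_proto_hdr *hdr;
+ *
+ *     cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
+ *     cfg.proto_hdrs.tunnel_level = 0;
+ *     cfg.proto_hdrs.count = 2;
+ *
+ *     hdr = &cfg.proto_hdrs.proto_hdr[0];
+ *     VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV4);
+ *     VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
+ *     VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
+ *
+ *     hdr = &cfg.proto_hdrs.proto_hdr[1];
+ *     VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, TCP);
+ *     VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, TCP, SRC_PORT);
+ *     VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, TCP, DST_PORT);
+ */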
+
+/* action configuration for FDIR */
+struct virtchnl_filter_action {
+ /* see enum virtchnl_action type */
+ s32 type;
+ union {
+ /* used for queue and qgroup action */
+ struct {
+ u16 index;
+ u8 region;
+ } queue;
+ /* used for count action */
+ struct {
+ /* share counter ID with other flow rules */
+ u8 shared;
+ u32 id; /* counter ID */
+ } count;
+ /* used for mark action */
+ u32 mark_id;
+ u8 reserve[32];
+ } act_conf;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(36, virtchnl_filter_action);
+
+#define VIRTCHNL_MAX_NUM_ACTIONS 8
+
+struct virtchnl_filter_action_set {
+ /* the number of actions must be less than VIRTCHNL_MAX_NUM_ACTIONS */
+ int count;
+ struct virtchnl_filter_action actions[VIRTCHNL_MAX_NUM_ACTIONS];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(292, virtchnl_filter_action_set);
+
+/* pattern and action for FDIR rule */
+struct virtchnl_fdir_rule {
+ struct virtchnl_proto_hdrs proto_hdrs;
+ struct virtchnl_filter_action_set action_set;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2604, virtchnl_fdir_rule);
+
+/* Status returned to VF after VF requests FDIR commands
+ * VIRTCHNL_FDIR_SUCCESS
+ * The VF FDIR related request was successfully done by the PF.
+ * The request can be OP_ADD/DEL/QUERY_FDIR_FILTER.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE
+ * The OP_ADD_FDIR_FILTER request failed due to a lack of hardware resources.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_EXIST
+ * The OP_ADD_FDIR_FILTER request failed because the rule already exists.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT
+ * The OP_ADD_FDIR_FILTER request failed due to a conflict with an existing rule.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST
+ * The OP_DEL_FDIR_FILTER request failed because the rule does not exist.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_INVALID
+ * The OP_ADD_FDIR_FILTER request failed due to parameter validation
+ * or because the hardware does not support the rule.
+ *
+ * VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT
+ * The OP_ADD/DEL_FDIR_FILTER request failed because programming timed out.
+ *
+ * VIRTCHNL_FDIR_FAILURE_QUERY_INVALID
+ * The OP_QUERY_FDIR_FILTER request failed due to parameter validation,
+ * for example, the VF queried the counter of a rule that has no counter action.
+ */
+enum virtchnl_fdir_prgm_status {
+ VIRTCHNL_FDIR_SUCCESS = 0,
+ VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE,
+ VIRTCHNL_FDIR_FAILURE_RULE_EXIST,
+ VIRTCHNL_FDIR_FAILURE_RULE_CONFLICT,
+ VIRTCHNL_FDIR_FAILURE_RULE_NONEXIST,
+ VIRTCHNL_FDIR_FAILURE_RULE_INVALID,
+ VIRTCHNL_FDIR_FAILURE_RULE_TIMEOUT,
+ VIRTCHNL_FDIR_FAILURE_QUERY_INVALID,
+};
+
+/* VIRTCHNL_OP_ADD_FDIR_FILTER
+ * VF sends this request to PF by filling out vsi_id,
+ * validate_only and rule_cfg. PF will return flow_id
+ * if the request is successfully done and return add_status to VF.
+ */
+struct virtchnl_fdir_add {
+ u16 vsi_id; /* INPUT */
+ /*
+ * 1 for validating an FDIR rule, 0 for creating an FDIR rule.
+ * Validate and create share one op: VIRTCHNL_OP_ADD_FDIR_FILTER.
+ */
+ u16 validate_only; /* INPUT */
+ u32 flow_id; /* OUTPUT */
+ struct virtchnl_fdir_rule rule_cfg; /* INPUT */
+
+ /* see enum virtchnl_fdir_prgm_status; OUTPUT */
+ s32 status;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(2616, virtchnl_fdir_add);
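+
+/* Illustrative sketch only (not part of the virtchnl ABI): to validate and
+ * then program an FDIR rule, a VF could fill a virtchnl_fdir_add roughly as
+ * below, first with validate_only = 1 and then, on success, resend it with
+ * validate_only = 0. vsi_id and my_proto_hdrs stand for values the VF driver
+ * already holds; the proto_hdrs part is built exactly as in the RSS sketch
+ * above, only the action set differs.
+ *
+ *     struct virtchnl_fdir_add add = {0};
+ *
+ *     add.vsi_id = vsi_id;
+ *     add.validate_only = 1;
+ *     add.rule_cfg.proto_hdrs = my_proto_hdrs;
+ *     add.rule_cfg.action_set.count = 1;
+ *     add.rule_cfg.action_set.actions[0].type = VIRTCHNL_ACTION_QUEUE;
+ *     add.rule_cfg.action_set.actions[0].act_conf.queue.index = 5;
+ *
+ * After the PF responds with VIRTCHNL_FDIR_SUCCESS, the returned flow_id is
+ * what the VF later places in virtchnl_fdir_del to remove the rule.
+ */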
+
+/* VIRTCHNL_OP_DEL_FDIR_FILTER
+ * VF sends this request to PF by filling out vsi_id
+ * and flow_id. PF will return del_status to VF.
+ */
+struct virtchnl_fdir_del {
+ u16 vsi_id; /* INPUT */
+ u16 pad;
+ u32 flow_id; /* INPUT */
+
+ /* see enum virtchnl_fdir_prgm_status; OUTPUT */
+ s32 status;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_fdir_del);
+
+/* VIRTCHNL_OP_GET_QOS_CAPS
+ * VF sends this message to get its QoS Caps, such as
+ * TC number, Arbiter and Bandwidth.
+ */
+struct virtchnl_qos_cap_elem {
+ u8 tc_num;
+ u8 tc_prio;
+#define VIRTCHNL_ABITER_STRICT 0
+#define VIRTCHNL_ABITER_ETS 2
+ u8 arbiter;
+#define VIRTCHNL_STRICT_WEIGHT 1
+ u8 weight;
+ enum virtchnl_bw_limit_type type;
+ union {
+ struct virtchnl_shaper_bw shaper;
+ u8 pad2[32];
+ };
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(40, virtchnl_qos_cap_elem);
+
+struct virtchnl_qos_cap_list {
+ u16 vsi_id;
+ u16 num_elem;
+ struct virtchnl_qos_cap_elem cap[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(44, virtchnl_qos_cap_list);
+
+/* VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP
+ * VF sends a virtchnl_queue_tc_mapping message to set the queue-to-TC
+ * mapping for all the Tx and Rx queues of a specified VSI, and gets back
+ * a response with the bitmap of valid user priorities associated with
+ * the queues.
+ */
+struct virtchnl_queue_tc_mapping {
+ u16 vsi_id;
+ u16 num_tc;
+ u16 num_queue_pairs;
+ u8 pad[2];
+ union {
+ struct {
+ u16 start_queue_id;
+ u16 queue_count;
+ } req;
+ struct {
+#define VIRTCHNL_USER_PRIO_TYPE_UP 0
+#define VIRTCHNL_USER_PRIO_TYPE_DSCP 1
+ u16 prio_type;
+ u16 valid_prio_bitmap;
+ } resp;
+ } tc[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_tc_mapping);
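+
+/* Illustrative sketch only (not part of the virtchnl ABI): tc[] is a
+ * variable-length array with one element already counted in the structure
+ * size, so a request covering num_tc traffic classes is sized as below,
+ * mirroring the length check in virtchnl_vc_validate_vf_msg().
+ * num_tc, vsi_id, queues_per_tc and alloc_msg_buffer() are placeholders for
+ * values and helpers the VF driver already has.
+ *
+ *     struct virtchnl_queue_tc_mapping *map;
+ *     u16 len = sizeof(*map) + (num_tc - 1) * sizeof(map->tc[0]);
+ *
+ *     map = alloc_msg_buffer(len);
+ *     map->vsi_id = vsi_id;
+ *     map->num_tc = num_tc;
+ *     map->tc[0].req.start_queue_id = 0;
+ *     map->tc[0].req.queue_count = queues_per_tc;
+ */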
+
+/* queue types */
+enum virtchnl_queue_type {
+ VIRTCHNL_QUEUE_TYPE_TX = 0,
+ VIRTCHNL_QUEUE_TYPE_RX = 1,
+};
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl_queue_chunk {
+ /* see enum virtchnl_queue_type */
+ s32 type;
+ u16 start_queue_id;
+ u16 num_queues;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(8, virtchnl_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl_queue_chunks {
+ u16 num_chunks;
+ u16 rsvd;
+ struct virtchnl_queue_chunk chunks[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(12, virtchnl_queue_chunks);
+
+/* VIRTCHNL_OP_ENABLE_QUEUES_V2
+ * VIRTCHNL_OP_DISABLE_QUEUES_V2
+ *
+ * These opcodes can be used if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated in
+ * VIRTCHNL_OP_GET_VF_RESOURCES
+ *
+ * VF sends a virtchnl_del_ena_dis_queues struct to specify the queues to be
+ * enabled/disabled in chunks. Also applicable to a single Rx or Tx queue.
+ * PF performs the requested action and returns status.
+ */
+struct virtchnl_del_ena_dis_queues {
+ u16 vport_id;
+ u16 pad;
+ struct virtchnl_queue_chunks chunks;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_del_ena_dis_queues);
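+
+/* Illustrative sketch only (not part of the virtchnl ABI): enabling Tx queues
+ * 0..15 and Rx queues 0..15 of a vport with VIRTCHNL_OP_ENABLE_QUEUES_V2
+ * takes two chunks. As with the other variable-length structures, one chunk
+ * is already part of virtchnl_queue_chunks, so only the extra chunk adds to
+ * the message length. qs is assumed to point to a mailbox buffer of len
+ * bytes and vport_id to the VF's vport identifier.
+ *
+ *     struct virtchnl_del_ena_dis_queues *qs;
+ *     u16 len = sizeof(*qs) + 1 * sizeof(struct virtchnl_queue_chunk);
+ *
+ *     qs->vport_id = vport_id;
+ *     qs->chunks.num_chunks = 2;
+ *     qs->chunks.chunks[0].type = VIRTCHNL_QUEUE_TYPE_TX;
+ *     qs->chunks.chunks[0].start_queue_id = 0;
+ *     qs->chunks.chunks[0].num_queues = 16;
+ *     qs->chunks.chunks[1].type = VIRTCHNL_QUEUE_TYPE_RX;
+ *     qs->chunks.chunks[1].start_queue_id = 0;
+ *     qs->chunks.chunks[1].num_queues = 16;
+ */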
+
+/* Virtchannel interrupt throttling rate index */
+enum virtchnl_itr_idx {
+ VIRTCHNL_ITR_IDX_0 = 0,
+ VIRTCHNL_ITR_IDX_1 = 1,
+ VIRTCHNL_ITR_IDX_NO_ITR = 3,
+};
+
+/* Queue to vector mapping */
+struct virtchnl_queue_vector {
+ u16 queue_id;
+ u16 vector_id;
+ u8 pad[4];
+
+ /* see enum virtchnl_itr_idx */
+ s32 itr_idx;
+
+ /* see enum virtchnl_queue_type */
+ s32 queue_type;
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_queue_vector);
+
+/* VIRTCHNL_OP_MAP_QUEUE_VECTOR
+ *
+ * This opcode can be used only if VIRTCHNL_VF_LARGE_NUM_QPAIRS was negotiated
+ * in VIRTCHNL_OP_GET_VF_RESOURCES
+ *
+ * VF sends this message to map queues to vectors and ITR index registers.
+ * External data buffer contains virtchnl_queue_vector_maps structure
+ * that contains num_qv_maps of virtchnl_queue_vector structures.
+ * PF maps the requested queue vector maps after validating the queue and vector
+ * ids and returns a status code.
+ */
+struct virtchnl_queue_vector_maps {
+ u16 vport_id;
+ u16 num_qv_maps;
+ u8 pad[4];
+ struct virtchnl_queue_vector qv_maps[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_queue_vector_maps);
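+
+/* Illustrative sketch only (not part of the virtchnl ABI): a single Rx queue
+ * could be attached to MSI-X vector 1 with ITR index 0 as below and then
+ * sent with VIRTCHNL_OP_MAP_QUEUE_VECTOR. vport_id stands for the VF's
+ * vport identifier.
+ *
+ *     struct virtchnl_queue_vector_maps maps = {0};
+ *
+ *     maps.vport_id = vport_id;
+ *     maps.num_qv_maps = 1;
+ *     maps.qv_maps[0].queue_id = 0;
+ *     maps.qv_maps[0].vector_id = 1;
+ *     maps.qv_maps[0].itr_idx = VIRTCHNL_ITR_IDX_0;
+ *     maps.qv_maps[0].queue_type = VIRTCHNL_QUEUE_TYPE_RX;
+ */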
+
+/* VIRTCHNL_VF_CAP_PTP
+ * VIRTCHNL_OP_1588_PTP_GET_CAPS
+ * VIRTCHNL_OP_1588_PTP_GET_TIME
+ * VIRTCHNL_OP_1588_PTP_SET_TIME
+ * VIRTCHNL_OP_1588_PTP_ADJ_TIME
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ
+ * VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP
+ * VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS
+ * VIRTCHNL_OP_1588_PTP_SET_PIN_CFG
+ * VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP
+ *
+ * Support for offloading control of the device PTP hardware clock (PHC) is enabled
+ * by VIRTCHNL_VF_CAP_PTP. This capability allows a VF to request that PF
+ * enable Tx and Rx timestamps, and request access to read and/or write the
+ * PHC on the device, as well as query if the VF has direct access to the PHC
+ * time registers.
+ *
+ * The VF must set VIRTCHNL_VF_CAP_PTP in its capabilities when requesting
+ * resources. If the capability is set in reply, the VF must then send
+ * a VIRTCHNL_OP_1588_PTP_GET_CAPS request during initialization. The VF indicates
+ * what extended capabilities it wants by setting the appropriate flags in the
+ * caps field. The PF reply will indicate what features are enabled for
+ * that VF.
+ */
+#define VIRTCHNL_1588_PTP_CAP_TX_TSTAMP BIT(0)
+#define VIRTCHNL_1588_PTP_CAP_RX_TSTAMP BIT(1)
+#define VIRTCHNL_1588_PTP_CAP_READ_PHC BIT(2)
+#define VIRTCHNL_1588_PTP_CAP_WRITE_PHC BIT(3)
+#define VIRTCHNL_1588_PTP_CAP_PHC_REGS BIT(4)
+#define VIRTCHNL_1588_PTP_CAP_PIN_CFG BIT(5)
+
+/**
+ * virtchnl_phc_regs
+ *
+ * Structure defines how the VF should access PHC related registers. The VF
+ * must request VIRTCHNL_1588_PTP_CAP_PHC_REGS. If the VF has access to PHC
+ * registers, the PF will reply with the capability flag set, and with this
+ * structure detailing what PCIe region and what offsets to use. If direct
+ * access is not available, this entire structure is reserved and the fields
+ * will be zero.
+ *
+ * If necessary in a future extension, a separate capability mutually
+ * exclusive with VIRTCHNL_1588_PTP_CAP_PHC_REGS might be used to change the
+ * entire format of this structure within virtchnl_ptp_caps.
+ *
+ * @clock_hi: Register offset of the high 32 bits of clock time
+ * @clock_lo: Register offset of the low 32 bits of clock time
+ * @pcie_region: The PCIe region the registers are located in.
+ * @rsvd: Reserved bits for future extension
+ */
+struct virtchnl_phc_regs {
+ u32 clock_hi;
+ u32 clock_lo;
+ u8 pcie_region;
+ u8 rsvd[15];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_phc_regs);
+
+/* timestamp format enumeration
+ *
+ * VIRTCHNL_1588_PTP_TSTAMP_40BIT
+ *
+ * This format indicates a timestamp that uses the 40bit format from the
+ * flexible Rx descriptors. It is also the default Tx timestamp format used
+ * today.
+ *
+ * Such a timestamp has the following 40bit format:
+ *
+ * *--------------------------------*-------------------------------*-----------*
+ * | 32 bits of time in nanoseconds | 7 bits of sub-nanosecond time | valid bit |
+ * *--------------------------------*-------------------------------*-----------*
+ *
+ * The timestamp is passed in a u64, with the upper 24bits of the field
+ * reserved as zero.
+ *
+ * With this format, in order to report a full 64bit timestamp to userspace
+ * applications, the VF is responsible for performing timestamp extension by
+ * carefully comparing the timestamp with the PHC time. This can correctly
+ * be achieved with a recent cached copy of the PHC time by doing a delta
+ * comparison between the 32 bits of nanoseconds in the timestamp and the
+ * lower 32 bits of the clock time. For this to work, the cached PHC time
+ * must be from within 2^31 nanoseconds (~2.1 seconds) of when the timestamp
+ * was captured.
+ *
+ * VIRTCHNL_1588_PTP_TSTAMP_64BIT_NS
+ *
+ * This format indicates a timestamp that is 64 bits of nanoseconds.
+ */
+enum virtchnl_ptp_tstamp_format {
+ VIRTCHNL_1588_PTP_TSTAMP_40BIT = 0,
+ VIRTCHNL_1588_PTP_TSTAMP_64BIT_NS = 1,
+};
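+
+/* Illustrative sketch only (not part of the virtchnl ABI): extending a 40bit
+ * timestamp to 64 bits with a cached PHC time, following the scheme above.
+ * Bit 0 is the valid bit and bits 7..1 are sub-nanosecond time, so the 32
+ * nanosecond bits sit at bits 39..8 of the reported value. tstamp and
+ * cached_phc_time are assumed inputs held by the VF driver.
+ *
+ *     u32 ts_ns, phc_lo;
+ *     s32 delta;
+ *     u64 ext;
+ *
+ *     ts_ns = (u32)(tstamp >> 8);
+ *     phc_lo = (u32)cached_phc_time;
+ *     delta = (s32)(ts_ns - phc_lo);
+ *     ext = cached_phc_time + delta;
+ *
+ * The delta computation is only correct while cached_phc_time is within
+ * 2^31 ns (~2.1 s) of the moment the timestamp was captured, and the valid
+ * bit (bit 0) should be checked before any of this is done.
+ */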
+
+/**
+ * virtchnl_ptp_caps
+ *
+ * Structure that defines the PTP capabilities available to the VF. The VF
+ * sends VIRTCHNL_OP_1588_PTP_GET_CAPS, and must fill in the ptp_caps field
+ * indicating what capabilities it is requesting. The PF will respond with the
+ * same message with the virtchnl_ptp_caps structure indicating what is
+ * enabled for the VF.
+ *
+ * @phc_regs: If VIRTCHNL_1588_PTP_CAP_PHC_REGS is set, contains information
+ * on the PHC related registers available to the VF.
+ * @caps: On send, VF sets what capabilities it requests. On reply, PF
+ * indicates what has been enabled for this VF. The PF shall not set
+ * bits which were not requested by the VF.
+ * @max_adj: The maximum adjustment capable of being requested by
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ, in parts per billion. Note that 1 ppb
+ * is approximately 65.5 scaled_ppm. The PF shall clamp any
+ * frequency adjustment in VIRTCHNL_OP_1588_PTP_ADJ_FREQ to +/- max_adj.
+ * Use of ppb in this field allows fitting the value into 4 bytes
+ * instead of potentially requiring 8 if scaled_ppm units were used.
+ * @tx_tstamp_idx: The Tx timestamp index to set in the transmit descriptor
+ * when requesting a timestamp for an outgoing packet.
+ * Reserved if VIRTCHNL_1588_PTP_CAP_TX_TSTAMP is not enabled.
+ * @n_ext_ts: Number of external timestamp functions available. Reserved
+ * if VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @n_per_out: Number of periodic output functions available. Reserved if
+ * VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @n_pins: Number of physical programmable pins able to be controlled.
+ * Reserved if VIRTCHNL_1588_PTP_CAP_PIN_CFG is not enabled.
+ * @tx_tstamp_format: Format of the Tx timestamps. Valid formats are defined
+ * by the virtchnl_ptp_tstamp enumeration. Note that Rx
+ * timestamps are tied to the descriptor format, and do not
+ * have a separate format field.
+ * @rsvd: Reserved bits for future extension.
+ *
+ * PTP capabilities
+ *
+ * VIRTCHNL_1588_PTP_CAP_TX_TSTAMP indicates that the VF can request transmit
+ * timestamps for packets in its transmit descriptors. If this is unset,
+ * transmit timestamp requests are ignored. Note that only one outstanding Tx
+ * timestamp request will be honored at a time. The PF shall handle receipt of
+ * the timestamp from the hardware, and will forward this to the VF by sending
+ * a VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP message.
+ *
+ * VIRTCHNL_1588_PTP_CAP_RX_TSTAMP indicates that the VF receive queues have
+ * receive timestamps enabled in the flexible descriptors. Note that this
+ * requires a VF to also negotiate to enable advanced flexible descriptors in
+ * the receive path instead of the default legacy descriptor format.
+ *
+ * For a detailed description of the current Tx and Rx timestamp format, see
+ * the section on virtchnl_phc_tx_tstamp. Future extensions may indicate
+ * timestamp format in the capability structure.
+ *
+ * VIRTCHNL_1588_PTP_CAP_READ_PHC indicates that the VF may read the PHC time
+ * via the VIRTCHNL_OP_1588_PTP_GET_TIME command, or by directly reading PHC
+ * registers if VIRTCHNL_1588_PTP_CAP_PHC_REGS is also set.
+ *
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC indicates that the VF may request updates
+ * to the PHC time via VIRTCHNL_OP_1588_PTP_SET_TIME,
+ * VIRTCHNL_OP_1588_PTP_ADJ_TIME, and VIRTCHNL_OP_1588_PTP_ADJ_FREQ.
+ *
+ * VIRTCHNL_1588_PTP_CAP_PHC_REGS indicates that the VF has direct access to
+ * certain PHC related registers, primarily for lower latency access to the
+ * PHC time. If this is set, the VF shall read the virtchnl_phc_regs section
+ * of the capabilities to determine the location of the clock registers. If
+ * this capability is not set, the entire 24 bytes of virtchnl_phc_regs is
+ * reserved as zero. Future extensions define alternative formats for this
+ * data, in which case they will be mutually exclusive with this capability.
+ *
+ * VIRTCHNL_1588_PTP_CAP_PIN_CFG indicates that the VF has the capability to
+ * control software defined pins. These pins can be assigned either as an
+ * input to timestamp external events, or as an output to cause a periodic
+ * signal output.
+ *
+ * Note that in the future, additional capability flags may be added which
+ * indicate additional extended support. All fields marked as reserved by this
+ * header will be set to zero. VF implementations should verify this to ensure
+ * that future extensions do not break compatibility.
+ */
+struct virtchnl_ptp_caps {
+ struct virtchnl_phc_regs phc_regs;
+ u32 caps;
+ s32 max_adj;
+ u8 tx_tstamp_idx;
+ u8 n_ext_ts;
+ u8 n_per_out;
+ u8 n_pins;
+ /* see enum virtchnl_ptp_tstamp_format */
+ u8 tx_tstamp_format;
+ u8 rsvd[11];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(48, virtchnl_ptp_caps);
+
+/**
+ * virtchnl_phc_time
+ * @time: PHC time in nanoseconds
+ * @rsvd: Reserved for future extension
+ *
+ * Structure sent with VIRTCHNL_OP_1588_PTP_SET_TIME and received with
+ * VIRTCHNL_OP_1588_PTP_GET_TIME. Contains the 64bits of PHC clock time in
+ * nanoseconds.
+ *
+ * VIRTCHNL_OP_1588_PTP_SET_TIME may be sent by the VF if
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC is set. This will request that the PHC time
+ * be set to the requested value. This operation is non-atomic and thus does
+ * not adjust for the delay between request and completion. It is recommended
+ * that the VF use VIRTCHNL_OP_1588_PTP_ADJ_TIME and
+ * VIRTCHNL_OP_1588_PTP_ADJ_FREQ when possible to steer the PHC clock.
+ *
+ * VIRTCHNL_OP_1588_PTP_GET_TIME may be sent to request the current time of
+ * the PHC. This op is available in case direct access via the PHC registers
+ * is not available.
+ */
+struct virtchnl_phc_time {
+ u64 time;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_time);
+
+/**
+ * virtchnl_phc_adj_time
+ * @delta: offset requested to adjust clock by
+ * @rsvd: reserved for future extension
+ *
+ * Sent with VIRTCHNL_OP_1588_PTP_ADJ_TIME. Used to request an adjustment of
+ * the clock time by the provided delta, with negative values representing
+ * subtraction. VIRTCHNL_OP_1588_PTP_ADJ_TIME may not be sent unless
+ * VIRTCHNL_1588_PTP_CAP_WRITE_PHC is set.
+ *
+ * The atomicity of this operation is not guaranteed. The PF should perform an
+ * atomic update using appropriate mechanisms if possible. However, this is
+ * not guaranteed.
+ */
+struct virtchnl_phc_adj_time {
+ s64 delta;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_adj_time);
+
+/**
+ * virtchnl_phc_adj_freq
+ * @scaled_ppm: frequency adjustment represented in scaled parts per million
+ * @rsvd: Reserved for future extension
+ *
+ * Sent with the VIRTCHNL_OP_1588_PTP_ADJ_FREQ to request an adjustment to the
+ * clock frequency. The adjustment is in scaled_ppm, which is parts per
+ * million with a 16bit binary fractional portion. 1 part per billion is
+ * approximately 65.5 scaled_ppm.
+ *
+ * ppm = scaled_ppm / 2^16
+ *
+ * ppb = scaled_ppm * 1000 / 2^16 or
+ *
+ * ppb = scaled_ppm * 125 / 2^13
+ *
+ * The PF shall clamp any adjustment request to plus or minus the specified
+ * max_adj in the PTP capabilities.
+ *
+ * Requests for adjustment are always based off of nominal clock frequency and
+ * not compounding. To reset clock frequency, send a request with a scaled_ppm
+ * of 0.
+ */
+struct virtchnl_phc_adj_freq {
+ s64 scaled_ppm;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_adj_freq);
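+
+/* Illustrative sketch only (not part of the virtchnl ABI): converting a parts
+ * per billion adjustment into the scaled_ppm value carried by this message,
+ * using the relations above (rounding behavior is up to the implementation).
+ *
+ *     s64 scaled_ppm = (ppb << 16) / 1000;
+ *
+ * For example, a request of +1000 ppb (1 ppm) gives scaled_ppm = 65536,
+ * matching the note that 1 ppb is approximately 65.5 scaled_ppm.
+ */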
+
+/**
+ * virtchnl_phc_tx_stamp
+ * @tstamp: timestamp value
+ * @rsvd: Reserved for future extension
+ *
+ * Sent along with VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP from the PF when a Tx
+ * timestamp for the index associated with this VF in the tx_tstamp_idx field
+ * is captured by hardware.
+ *
+ * If VIRTCHNL_1588_PTP_CAP_TX_TSTAMP is set, the VF may request a timestamp
+ * for a packet in its transmit context descriptor by setting the appropriate
+ * flag and setting the timestamp index provided by the PF. On transmission,
+ * the timestamp will be captured and sent to the PF. The PF will forward this
+ * timestamp to the VF via the VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP op.
+ *
+ * The timestamp format is defined by the tx_tstamp_format field of the
+ * virtchnl_ptp_caps structure.
+ */
+struct virtchnl_phc_tx_tstamp {
+ u64 tstamp;
+ u8 rsvd[8];
+};
+VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_phc_tx_tstamp);
+
+enum virtchnl_phc_pin_func {
+ VIRTCHNL_PHC_PIN_FUNC_NONE = 0, /* Not assigned to any function */
+ VIRTCHNL_PHC_PIN_FUNC_EXT_TS = 1, /* Assigned to external timestamp */
+ VIRTCHNL_PHC_PIN_FUNC_PER_OUT = 2, /* Assigned to periodic output */
+};
+
+/* Length of the pin configuration data. All pin configurations belong within
+ * the same union and *must* have this length in bytes.
+ */
+#define VIRTCHNL_PIN_CFG_LEN 64
+
+/* virtchnl_phc_ext_ts_mode
+ *
+ * Mode of the external timestamp, indicating which edges of the input signal
+ * to timestamp.
+ */
+enum virtchnl_phc_ext_ts_mode {
+ VIRTCHNL_PHC_EXT_TS_NONE = 0,
+ VIRTCHNL_PHC_EXT_TS_RISING_EDGE = 1,
+ VIRTCHNL_PHC_EXT_TS_FALLING_EDGE = 2,
+ VIRTCHNL_PHC_EXT_TS_BOTH_EDGES = 3,
+};
+
+/**
+ * virtchnl_phc_ext_ts
+ * @mode: mode of external timestamp request
+ * @rsvd: reserved for future extension
+ *
+ * External timestamp configuration. Defines the configuration for this
+ * external timestamp function.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_NONE, the function is essentially disabled,
+ * timestamping nothing.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_RISING_EDGE, the function shall timestamp
+ * the rising edge of the input when it transitions from low to high signal.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_FALLING_EDGE, the function shall timestamp
+ * the falling edge of the input when it transitions from high to low signal.
+ *
+ * If mode is VIRTCHNL_PHC_EXT_TS_BOTH_EDGES, the function shall timestamp
+ * both the rising and falling edge of the signal whenever it changes.
+ *
+ * The PF shall return an error if the requested mode cannot be implemented on
+ * the function.
+ */
+struct virtchnl_phc_ext_ts {
+ u8 mode; /* see virtchnl_phc_ext_ts_mode */
+ u8 rsvd[63];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(VIRTCHNL_PIN_CFG_LEN, virtchnl_phc_ext_ts);
+
+/* virtchnl_phc_per_out_flags
+ *
+ * Flags defining periodic output functionality.
+ */
+enum virtchnl_phc_per_out_flags {
+ VIRTCHNL_PHC_PER_OUT_PHASE_START = BIT(0),
+};
+
+/**
+ * virtchnl_phc_per_out
+ * @start: absolute start time (if VIRTCHNL_PHC_PER_OUT_PHASE_START unset)
+ * @phase: phase offset to start (if VIRTCHNL_PHC_PER_OUT_PHASE_START set)
+ * @period: time to complete a full clock cycle (low -> high -> low)
+ * @on: length of time the signal should stay high
+ * @flags: flags defining the periodic output operation.
+ * @rsvd: reserved for future extension
+ *
+ * Configuration for a periodic output signal. Used to define the signal that
+ * should be generated on a given function.
+ *
+ * The period field determines the full length of the clock cycle, in
+ * nanoseconds, including both the time the signal is held high and the time
+ * it is held low.
+ *
+ * The on field determines how long the signal should remain high. For
+ * a traditional square wave clock that is on for some duration and off for
+ * the same duration, use an on length of precisely half the period. The duty
+ * cycle of the clock is on/period.
+ *
+ * If VIRTCHNL_PHC_PER_OUT_PHASE_START is unset, then the request is to start
+ * a clock at an absolute time. This means that the clock should start precisely
+ * at the specified time in the start field. If the start time is in the past,
+ * then the periodic output should start at the next valid multiple of the
+ * period plus the start time:
+ *
+ * new_start = (n * period) + start
+ * (choose n such that new start is in the future)
+ *
+ * Note that the PF should not reject a start time in the past because it is
+ * possible that such a start time was valid when the request was made, but
+ * became invalid due to delay in programming the pin.
+ *
+ * If VIRTCHNL_PHC_PER_OUT_PHASE_START is set, then the request is to start
+ * at the next multiple of the period plus the phase offset. The phase must be
+ * less than the period. In this case, the clock should start as soon as possible
+ * at the next available multiple of the period. To calculate a start time
+ * when programming this mode, use:
+ *
+ * start = (n * period) + phase
+ * (choose n such that start is in the future)
+ *
+ * A period of zero should be treated as a request to disable the clock
+ * output.
+ */
+struct virtchnl_phc_per_out {
+ union {
+ u64 start;
+ u64 phase;
+ };
+ u64 period;
+ u64 on;
+ u32 flags;
+ u8 rsvd[36];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(VIRTCHNL_PIN_CFG_LEN, virtchnl_phc_per_out);
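+
+/* Illustrative sketch only (not part of the virtchnl ABI): when the absolute
+ * start time is already in the past, the PF is expected to roll it forward
+ * to the next multiple of the period, which can be computed as below.
+ * "now" is assumed to be the current PHC time in nanoseconds.
+ *
+ *     u64 n = (now - start + period - 1) / period;
+ *     u64 new_start = start + n * period;
+ */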
+
+/* virtchnl_phc_pin_cfg_flags
+ *
+ * Definition of bits in the flags field of the virtchnl_phc_pin_cfg
+ * structure.
+ */
+enum virtchnl_phc_pin_cfg_flags {
+ /* Valid for VIRTCHNL_OP_1588_PTP_SET_PIN_CFG. If set, indicates this
+ * is a request to verify if the function can be assigned to the
+ * provided pin. In this case, the ext_ts and per_out fields are
+ * ignored, and the PF response must be an error if the pin cannot be
+ * assigned to that function index.
+ */
+ VIRTCHNL_PHC_PIN_CFG_VERIFY = BIT(0),
+};
+
+/**
+ * virtchnl_phc_set_pin
+ * @pin_index: The pin to get or set
+ * @func: the function type the pin is assigned to
+ * @func_index: the index of the function the pin is assigned to
+ * @ext_ts: external timestamp configuration
+ * @per_out: periodic output configuration
+ * @rsvd1: Reserved for future extension
+ * @rsvd2: Reserved for future extension
+ *
+ * Sent along with the VIRTCHNL_OP_1588_PTP_SET_PIN_CFG op.
+ *
+ * The VF issues a VIRTCHNL_OP_1588_PTP_SET_PIN_CFG to assign the pin to one
+ * of the functions. It must set the pin_index field, the func field, and
+ * the func_index field. The pin_index must be less than n_pins, and the
+ * func_index must be less than the n_ext_ts or n_per_out depending on which
+ * function type is selected. If func is for an external timestamp, the
+ * ext_ts field must be filled in with the desired configuration. Similarly,
+ * if the function is for a periodic output, the per_out field must be
+ * configured.
+ *
+ * If the VIRTCHNL_PHC_PIN_CFG_VERIFY bit of the flags field is set, this is
+ * a request only to verify the configuration, not to set it. In this case,
+ * the PF should simply report an error if the requested pin cannot be
+ * assigned to the requested function. This allows the VF to determine whether
+ * or not a given function can be assigned to a specific pin. Other flag bits are
+ * currently reserved and must be verified as zero on both sides. They may be
+ * extended in the future.
+ */
+struct virtchnl_phc_set_pin {
+ u32 flags; /* see virtchnl_phc_pin_cfg_flags */
+ u8 pin_index;
+ u8 func; /* see virtchnl_phc_pin_func */
+ u8 func_index;
+ u8 rsvd1;
+ union {
+ struct virtchnl_phc_ext_ts ext_ts;
+ struct virtchnl_phc_per_out per_out;
+ };
+ u8 rsvd2[8];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_phc_set_pin);
+
+/**
+ * virtchnl_phc_pin
+ * @pin_index: The pin to get or set
+ * @func: the function type the pin is assigned to
+ * @func_index: the index of the function the pin is assigned to
+ * @rsvd: Reserved for future extension
+ * @name: human readable pin name, supplied by PF on GET_PIN_CFGS
+ *
+ * Sent by the PF as part of the VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS response.
+ *
+ * The VF issues a VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS request to the PF in
+ * order to obtain the current pin configuration for all of the pins that were
+ * assigned to this VF.
+ *
+ * This structure details the pin configuration state, including a pin name
+ * and which function is assigned to the pin currently.
+ */
+struct virtchnl_phc_pin {
+ u8 pin_index;
+ u8 func; /* see virtchnl_phc_pin_func */
+ u8 func_index;
+ u8 rsvd[5];
+ char name[64];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(72, virtchnl_phc_pin);
+
+/**
+ * virtchnl_phc_get_pins
+ * @len: length of the variable pin config array
+ * @pins: variable length pin configuration array
+ *
+ * Variable structure sent by the PF in reply to
+ * VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS. The VF does not send this structure with
+ * its request of the operation.
+ *
+ * It is possible that the PF may need to send more pin configuration data
+ * than can be sent in one virtchnl message. To handle this, the PF should
+ * issue multiple VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS responses. Each response
+ * will indicate the number of pins it covers. The VF should be ready to wait
+ * for multiple responses until it has received a total number of pins equal
+ * to the n_pins value negotiated during the extended PTP capabilities exchange.
+ */
+struct virtchnl_phc_get_pins {
+ u8 len;
+ u8 rsvd[7];
+ struct virtchnl_phc_pin pins[1];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(80, virtchnl_phc_get_pins);
+
+/**
+ * virtchnl_phc_ext_stamp
+ * @tstamp: timestamp value
+ * @tstamp_rsvd: Reserved for future extension of the timestamp value.
+ * @tstamp_format: format of the timestamp
+ * @func_index: external timestamp function this timestamp is for
+ * @rsvd2: Reserved for future extension
+ *
+ * Sent along with the VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP from the PF when an
+ * external timestamp function is triggered.
+ *
+ * This will be sent only if one of the external timestamp functions is
+ * configured by the VF, and is only valid if VIRTCHNL_1588_PTP_CAP_PIN_CFG is
+ * negotiated with the PF.
+ *
+ * The timestamp format is defined by the tstamp_format field using the
+ * virtchnl_ptp_tstamp_format enumeration. The tstamp_rsvd field is
+ * exclusively reserved for possible future variants of the timestamp format,
+ * and its access will be controlled by the tstamp_format field.
+ */
+struct virtchnl_phc_ext_tstamp {
+ u64 tstamp;
+ u8 tstamp_rsvd[8];
+ u8 tstamp_format;
+ u8 func_index;
+ u8 rsvd2[6];
+};
+
+VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_phc_ext_tstamp);
+
+/* Since VF messages are limited by u16 size, precalculate the maximum possible
+ * values of nested elements in virtchnl structures that virtual channel can
+ * possibly handle in a single message.
+ */
+enum virtchnl_vector_limits {
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vsi_queue_config_info)) /
+ sizeof(struct virtchnl_queue_pair_info),
+
+ VIRTCHNL_OP_CONFIG_IRQ_MAP_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_irq_map_info)) /
+ sizeof(struct virtchnl_vector_map),
+
+ VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_ether_addr_list)) /
+ sizeof(struct virtchnl_ether_addr),
+
+ VIRTCHNL_OP_ADD_DEL_VLAN_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list)) /
+ sizeof(u16),
+
+ VIRTCHNL_OP_ENABLE_CHANNELS_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_tc_info)) /
+ sizeof(struct virtchnl_channel_info),
+
+ VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_del_ena_dis_queues)) /
+ sizeof(struct virtchnl_queue_chunk),
+
+ VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_queue_vector_maps)) /
+ sizeof(struct virtchnl_queue_vector),
+
+ VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX =
+ ((u16)(~0) - sizeof(struct virtchnl_vlan_filter_list_v2)) /
+ sizeof(struct virtchnl_vlan_filter),
+};
+
+/**
+ * virtchnl_vc_validate_vf_msg
+ * @ver: Virtchnl version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl_vc_validate_vf_msg(struct virtchnl_version_info *ver, u32 v_opcode,
+ u8 *msg, u16 msglen)
+{
+ bool err_msg_format = false;
+ u32 valid_len = 0;
+
+ /* Validate message length. */
+ switch (v_opcode) {
+ case VIRTCHNL_OP_VERSION:
+ valid_len = sizeof(struct virtchnl_version_info);
+ break;
+ case VIRTCHNL_OP_RESET_VF:
+ break;
+ case VIRTCHNL_OP_GET_VF_RESOURCES:
+ if (VF_IS_V11(ver))
+ valid_len = sizeof(u32);
+ break;
+ case VIRTCHNL_OP_CONFIG_TX_QUEUE:
+ valid_len = sizeof(struct virtchnl_txq_info);
+ break;
+ case VIRTCHNL_OP_CONFIG_RX_QUEUE:
+ valid_len = sizeof(struct virtchnl_rxq_info);
+ break;
+ case VIRTCHNL_OP_CONFIG_VSI_QUEUES:
+ valid_len = sizeof(struct virtchnl_vsi_queue_config_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_vsi_queue_config_info *vqc =
+ (struct virtchnl_vsi_queue_config_info *)msg;
+
+ if (vqc->num_queue_pairs == 0 || vqc->num_queue_pairs >
+ VIRTCHNL_OP_CONFIG_VSI_QUEUES_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vqc->num_queue_pairs *
+ sizeof(struct
+ virtchnl_queue_pair_info));
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_IRQ_MAP:
+ valid_len = sizeof(struct virtchnl_irq_map_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_irq_map_info *vimi =
+ (struct virtchnl_irq_map_info *)msg;
+
+ if (vimi->num_vectors == 0 || vimi->num_vectors >
+ VIRTCHNL_OP_CONFIG_IRQ_MAP_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vimi->num_vectors *
+ sizeof(struct virtchnl_vector_map));
+ }
+ break;
+ case VIRTCHNL_OP_ENABLE_QUEUES:
+ case VIRTCHNL_OP_DISABLE_QUEUES:
+ valid_len = sizeof(struct virtchnl_queue_select);
+ break;
+ case VIRTCHNL_OP_GET_MAX_RSS_QREGION:
+ break;
+ case VIRTCHNL_OP_ADD_ETH_ADDR:
+ case VIRTCHNL_OP_DEL_ETH_ADDR:
+ valid_len = sizeof(struct virtchnl_ether_addr_list);
+ if (msglen >= valid_len) {
+ struct virtchnl_ether_addr_list *veal =
+ (struct virtchnl_ether_addr_list *)msg;
+
+ if (veal->num_elements == 0 || veal->num_elements >
+ VIRTCHNL_OP_ADD_DEL_ETH_ADDR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += veal->num_elements *
+ sizeof(struct virtchnl_ether_addr);
+ }
+ break;
+ case VIRTCHNL_OP_ADD_VLAN:
+ case VIRTCHNL_OP_DEL_VLAN:
+ valid_len = sizeof(struct virtchnl_vlan_filter_list);
+ if (msglen >= valid_len) {
+ struct virtchnl_vlan_filter_list *vfl =
+ (struct virtchnl_vlan_filter_list *)msg;
+
+ if (vfl->num_elements == 0 || vfl->num_elements >
+ VIRTCHNL_OP_ADD_DEL_VLAN_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += vfl->num_elements * sizeof(u16);
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE:
+ valid_len = sizeof(struct virtchnl_promisc_info);
+ break;
+ case VIRTCHNL_OP_GET_STATS:
+ valid_len = sizeof(struct virtchnl_queue_select);
+ break;
+ case VIRTCHNL_OP_CONFIG_RSS_KEY:
+ valid_len = sizeof(struct virtchnl_rss_key);
+ if (msglen >= valid_len) {
+ struct virtchnl_rss_key *vrk =
+ (struct virtchnl_rss_key *)msg;
+
+ if (vrk->key_len == 0) {
+ /* zero length is allowed as input */
+ break;
+ }
+
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case VIRTCHNL_OP_CONFIG_RSS_LUT:
+ valid_len = sizeof(struct virtchnl_rss_lut);
+ if (msglen >= valid_len) {
+ struct virtchnl_rss_lut *vrl =
+ (struct virtchnl_rss_lut *)msg;
+
+ if (vrl->lut_entries == 0) {
+ /* zero entries is allowed as input */
+ break;
+ }
+
+ valid_len += vrl->lut_entries - 1;
+ }
+ break;
+ case VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ break;
+ case VIRTCHNL_OP_SET_RSS_HENA:
+ valid_len = sizeof(struct virtchnl_rss_hena);
+ break;
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING:
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING:
+ break;
+ case VIRTCHNL_OP_REQUEST_QUEUES:
+ valid_len = sizeof(struct virtchnl_vf_res_request);
+ break;
+ case VIRTCHNL_OP_ENABLE_CHANNELS:
+ valid_len = sizeof(struct virtchnl_tc_info);
+ if (msglen >= valid_len) {
+ struct virtchnl_tc_info *vti =
+ (struct virtchnl_tc_info *)msg;
+
+ if (vti->num_tc == 0 || vti->num_tc >
+ VIRTCHNL_OP_ENABLE_CHANNELS_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vti->num_tc - 1) *
+ sizeof(struct virtchnl_channel_info);
+ }
+ break;
+ case VIRTCHNL_OP_DISABLE_CHANNELS:
+ break;
+ case VIRTCHNL_OP_ADD_CLOUD_FILTER:
+ case VIRTCHNL_OP_DEL_CLOUD_FILTER:
+ valid_len = sizeof(struct virtchnl_filter);
+ break;
+ case VIRTCHNL_OP_ADD_RSS_CFG:
+ case VIRTCHNL_OP_DEL_RSS_CFG:
+ valid_len = sizeof(struct virtchnl_rss_cfg);
+ break;
+ case VIRTCHNL_OP_ADD_FDIR_FILTER:
+ valid_len = sizeof(struct virtchnl_fdir_add);
+ break;
+ case VIRTCHNL_OP_DEL_FDIR_FILTER:
+ valid_len = sizeof(struct virtchnl_fdir_del);
+ break;
+ case VIRTCHNL_OP_GET_QOS_CAPS:
+ break;
+ case VIRTCHNL_OP_CONFIG_QUEUE_TC_MAP:
+ valid_len = sizeof(struct virtchnl_queue_tc_mapping);
+ if (msglen >= valid_len) {
+ struct virtchnl_queue_tc_mapping *q_tc =
+ (struct virtchnl_queue_tc_mapping *)msg;
+ if (q_tc->num_tc == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (q_tc->num_tc - 1) *
+ sizeof(q_tc->tc[0]);
+ }
+ break;
+ case VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS:
+ break;
+ case VIRTCHNL_OP_ADD_VLAN_V2:
+ case VIRTCHNL_OP_DEL_VLAN_V2:
+ valid_len = sizeof(struct virtchnl_vlan_filter_list_v2);
+ if (msglen >= valid_len) {
+ struct virtchnl_vlan_filter_list_v2 *vfl =
+ (struct virtchnl_vlan_filter_list_v2 *)msg;
+
+ if (vfl->num_elements == 0 || vfl->num_elements >
+ VIRTCHNL_OP_ADD_DEL_VLAN_V2_MAX) {
+ err_msg_format = true;
+ break;
+ }
+
+ valid_len += (vfl->num_elements - 1) *
+ sizeof(struct virtchnl_vlan_filter);
+ }
+ break;
+ case VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2:
+ case VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2:
+ case VIRTCHNL_OP_ENABLE_VLAN_FILTERING_V2:
+ case VIRTCHNL_OP_DISABLE_VLAN_FILTERING_V2:
+ valid_len = sizeof(struct virtchnl_vlan_setting);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_CAPS:
+ valid_len = sizeof(struct virtchnl_ptp_caps);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_TIME:
+ case VIRTCHNL_OP_1588_PTP_SET_TIME:
+ valid_len = sizeof(struct virtchnl_phc_time);
+ break;
+ case VIRTCHNL_OP_1588_PTP_ADJ_TIME:
+ valid_len = sizeof(struct virtchnl_phc_adj_time);
+ break;
+ case VIRTCHNL_OP_1588_PTP_ADJ_FREQ:
+ valid_len = sizeof(struct virtchnl_phc_adj_freq);
+ break;
+ case VIRTCHNL_OP_1588_PTP_TX_TIMESTAMP:
+ valid_len = sizeof(struct virtchnl_phc_tx_tstamp);
+ break;
+ case VIRTCHNL_OP_1588_PTP_SET_PIN_CFG:
+ valid_len = sizeof(struct virtchnl_phc_set_pin);
+ break;
+ case VIRTCHNL_OP_1588_PTP_GET_PIN_CFGS:
+ break;
+ case VIRTCHNL_OP_1588_PTP_EXT_TIMESTAMP:
+ valid_len = sizeof(struct virtchnl_phc_ext_tstamp);
+ break;
+ case VIRTCHNL_OP_ENABLE_QUEUES_V2:
+ case VIRTCHNL_OP_DISABLE_QUEUES_V2:
+ valid_len = sizeof(struct virtchnl_del_ena_dis_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl_del_ena_dis_queues *qs =
+ (struct virtchnl_del_ena_dis_queues *)msg;
+ if (qs->chunks.num_chunks == 0 ||
+ qs->chunks.num_chunks > VIRTCHNL_OP_ENABLE_DISABLE_DEL_QUEUES_V2_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (qs->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl_queue_chunk);
+ }
+ break;
+ case VIRTCHNL_OP_MAP_QUEUE_VECTOR:
+ valid_len = sizeof(struct virtchnl_queue_vector_maps);
+ if (msglen >= valid_len) {
+ struct virtchnl_queue_vector_maps *v_qp =
+ (struct virtchnl_queue_vector_maps *)msg;
+ if (v_qp->num_qv_maps == 0 ||
+ v_qp->num_qv_maps > VIRTCHNL_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_qp->num_qv_maps - 1) *
+ sizeof(struct virtchnl_queue_vector);
+ }
+ break;
+ /* These are always errors coming from the VF. */
+ case VIRTCHNL_OP_EVENT:
+ case VIRTCHNL_OP_UNKNOWN:
+ default:
+ return VIRTCHNL_STATUS_ERR_PARAM;
+ }
+ /* a few more checks */
+ if (err_msg_format || valid_len != msglen)
+ return VIRTCHNL_STATUS_ERR_OPCODE_MISMATCH;
+
+ return 0;
+}
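+
+/* Illustrative sketch only: a PF mailbox handler would typically run each
+ * received VF message through the validator before dispatching it, e.g.:
+ *
+ *     if (virtchnl_vc_validate_vf_msg(&vf->version, v_opcode, msg, msglen))
+ *         send_error_response(vf, v_opcode, VIRTCHNL_STATUS_ERR_PARAM);
+ *     else
+ *         dispatch_virtchnl_op(vf, v_opcode, msg, msglen);
+ *
+ * vf, send_error_response() and dispatch_virtchnl_op() are hypothetical
+ * names; only virtchnl_vc_validate_vf_msg() comes from this header.
+ */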
+#endif /* _VIRTCHNL_H_ */
diff --git a/drivers/net/idpf/base/virtchnl2.h b/drivers/net/idpf/base/virtchnl2.h
new file mode 100644
index 0000000000..d0af6ef7c7
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl2.h
@@ -0,0 +1,1411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL2_H_
+#define _VIRTCHNL2_H_
+
+/* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
+ * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
+ * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ */
+
+#include "virtchnl2_lan_desc.h"
+
+/* Error Codes
+ * Note that many older versions of various iAVF drivers convert the reported
+ * status code directly into an iavf_status enumeration. For this reason, it
+ * is important that the values of these enumerations line up.
+ */
+#define VIRTCHNL2_STATUS_SUCCESS 0
+#define VIRTCHNL2_STATUS_ERR_PARAM -5
+#define VIRTCHNL2_STATUS_ERR_OPCODE_MISMATCH -38
+
+/* These macros are used to generate compilation errors if a structure/union
+ * is not exactly the correct length. It gives a divide by zero error if the
+ * structure/union is not of the correct size, otherwise it creates an enum
+ * that is never used.
+ */
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
+ { virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
+ { virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+
+/* A new major set of opcodes is introduced, leaving room for old misc
+ * opcodes to be added in the future. These opcodes may only be used if
+ * both the PF and VF have successfully negotiated the VIRTCHNL version
+ * as 2.0 during the VIRTCHNL2_OP_VERSION exchange.
+ */
+#define VIRTCHNL2_OP_UNKNOWN 0
+#define VIRTCHNL2_OP_VERSION 1
+#define VIRTCHNL2_OP_GET_CAPS 500
+#define VIRTCHNL2_OP_CREATE_VPORT 501
+#define VIRTCHNL2_OP_DESTROY_VPORT 502
+#define VIRTCHNL2_OP_ENABLE_VPORT 503
+#define VIRTCHNL2_OP_DISABLE_VPORT 504
+#define VIRTCHNL2_OP_CONFIG_TX_QUEUES 505
+#define VIRTCHNL2_OP_CONFIG_RX_QUEUES 506
+#define VIRTCHNL2_OP_ENABLE_QUEUES 507
+#define VIRTCHNL2_OP_DISABLE_QUEUES 508
+#define VIRTCHNL2_OP_ADD_QUEUES 509
+#define VIRTCHNL2_OP_DEL_QUEUES 510
+#define VIRTCHNL2_OP_MAP_QUEUE_VECTOR 511
+#define VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR 512
+#define VIRTCHNL2_OP_GET_RSS_KEY 513
+#define VIRTCHNL2_OP_SET_RSS_KEY 514
+#define VIRTCHNL2_OP_GET_RSS_LUT 515
+#define VIRTCHNL2_OP_SET_RSS_LUT 516
+#define VIRTCHNL2_OP_GET_RSS_HASH 517
+#define VIRTCHNL2_OP_SET_RSS_HASH 518
+#define VIRTCHNL2_OP_SET_SRIOV_VFS 519
+#define VIRTCHNL2_OP_ALLOC_VECTORS 520
+#define VIRTCHNL2_OP_DEALLOC_VECTORS 521
+#define VIRTCHNL2_OP_EVENT 522
+#define VIRTCHNL2_OP_GET_STATS 523
+#define VIRTCHNL2_OP_RESET_VF 524
+ /* opcode 525 is reserved */
+#define VIRTCHNL2_OP_GET_PTYPE_INFO 526
+ /* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+ * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ */
+ /* opcodes 529, 530, and 531 are reserved */
+#define VIRTCHNL2_OP_CREATE_ADI 532
+#define VIRTCHNL2_OP_DESTROY_ADI 533
+
+#define VIRTCHNL2_MAX_NUM_PROTO_HDRS 32
+
+#define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX 0xFFFF
+
+/* VIRTCHNL2_VPORT_TYPE
+ * Type of virtual port
+ */
+#define VIRTCHNL2_VPORT_TYPE_DEFAULT 0
+#define VIRTCHNL2_VPORT_TYPE_SRIOV 1
+#define VIRTCHNL2_VPORT_TYPE_SIOV 2
+#define VIRTCHNL2_VPORT_TYPE_SUBDEV 3
+#define VIRTCHNL2_VPORT_TYPE_MNG 4
+
+/* VIRTCHNL2_QUEUE_MODEL
+ * Type of queue model
+ *
+ * In the single queue model, the same transmit descriptor queue is used by
+ * software to post descriptors to hardware and by hardware to post completed
+ * descriptors to software.
+ * Likewise, the same receive descriptor queue is used by hardware to post
+ * completions to software and by software to post buffers to hardware.
+ */
+#define VIRTCHNL2_QUEUE_MODEL_SINGLE 0
+/* In the split queue model, hardware uses transmit completion queues to post
+ * descriptor/buffer completions to software, while software uses transmit
+ * descriptor queues to post descriptors to hardware.
+ * Likewise, hardware posts descriptor completions to the receive descriptor
+ * queue, while software uses receive buffer queues to post buffers to hardware.
+ */
+#define VIRTCHNL2_QUEUE_MODEL_SPLIT 1
+
+/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
+ * Checksum offload capability flags
+ */
+#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 BIT(0)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP BIT(1)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP BIT(2)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP BIT(3)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP BIT(4)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP BIT(5)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP BIT(6)
+#define VIRTCHNL2_CAP_TX_CSUM_GENERIC BIT(7)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 BIT(8)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP BIT(9)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP BIT(10)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP BIT(11)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP BIT(12)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP BIT(13)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP BIT(14)
+#define VIRTCHNL2_CAP_RX_CSUM_GENERIC BIT(15)
+#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL BIT(16)
+#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL BIT(17)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL BIT(18)
+#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL BIT(19)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL BIT(20)
+#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL BIT(21)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL BIT(22)
+#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL BIT(23)
+
+/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
+ * Segmentation offload capability flags
+ */
+#define VIRTCHNL2_CAP_SEG_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_SEG_IPV4_UDP BIT(1)
+#define VIRTCHNL2_CAP_SEG_IPV4_SCTP BIT(2)
+#define VIRTCHNL2_CAP_SEG_IPV6_TCP BIT(3)
+#define VIRTCHNL2_CAP_SEG_IPV6_UDP BIT(4)
+#define VIRTCHNL2_CAP_SEG_IPV6_SCTP BIT(5)
+#define VIRTCHNL2_CAP_SEG_GENERIC BIT(6)
+#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL BIT(7)
+#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL BIT(8)
+
+/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
+ * Receive Side Scaling Flow type capability flags
+ */
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP BIT(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP BIT(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER BIT(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP BIT(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP BIT(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP BIT(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER BIT(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH BIT(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP BIT(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP BIT(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH BIT(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP BIT(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP BIT(13)
+
+/* VIRTCHNL2_HEADER_SPLIT_CAPS
+ * Header split capability flags
+ */
+/* for prepended metadata */
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 BIT(0)
+/* all VLANs go into header buffer */
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 BIT(1)
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 BIT(2)
+#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6 BIT(3)
+
+/* VIRTCHNL2_RSC_OFFLOAD_CAPS
+ * Receive Side Coalescing offload capability flags
+ */
+#define VIRTCHNL2_CAP_RSC_IPV4_TCP BIT(0)
+#define VIRTCHNL2_CAP_RSC_IPV4_SCTP BIT(1)
+#define VIRTCHNL2_CAP_RSC_IPV6_TCP BIT(2)
+#define VIRTCHNL2_CAP_RSC_IPV6_SCTP BIT(3)
+
+/* VIRTCHNL2_OTHER_CAPS
+ * Other capability flags
+ * SPLITQ_QSCHED: Queue based scheduling using split queue model
+ * TX_VLAN: VLAN tag insertion
+ * RX_VLAN: VLAN tag stripping
+ */
+#define VIRTCHNL2_CAP_RDMA BIT(0)
+#define VIRTCHNL2_CAP_SRIOV BIT(1)
+#define VIRTCHNL2_CAP_MACFILTER BIT(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR BIT(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED BIT(4)
+#define VIRTCHNL2_CAP_CRC BIT(5)
+#define VIRTCHNL2_CAP_ADQ BIT(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR BIT(7)
+#define VIRTCHNL2_CAP_PROMISC BIT(8)
+#define VIRTCHNL2_CAP_LINK_SPEED BIT(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC BIT(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES BIT(11)
+/* require additional info */
+#define VIRTCHNL2_CAP_VLAN BIT(12)
+#define VIRTCHNL2_CAP_PTP BIT(13)
+#define VIRTCHNL2_CAP_ADV_RSS BIT(15)
+#define VIRTCHNL2_CAP_FDIR BIT(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC BIT(17)
+#define VIRTCHNL2_CAP_PTYPE BIT(18)
+
+/* VIRTCHNL2_DEVICE_TYPE */
+/* underlying device type */
+#define VIRTCHNL2_MEV_DEVICE 0
+
+/* VIRTCHNL2_TXQ_SCHED_MODE
+ * Transmit Queue Scheduling Modes - Queue mode is the legacy mode, i.e. in-order
+ * completions where descriptors and buffers are completed at the same time.
+ * Flow scheduling mode allows for out-of-order packet processing, where
+ * descriptors are cleaned in order but buffers can be completed out of order.
+ */
+#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE 0
+#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW 1
+
+/* VIRTCHNL2_TXQ_FLAGS
+ * Transmit Queue feature flags
+ *
+ * Enable rule miss completion type; packet completion for a packet
+ * sent on exception path; only relevant in flow scheduling mode
+ */
+#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL BIT(0)
+
+/* VIRTCHNL2_PEER_TYPE
+ * Transmit mailbox peer type
+ */
+#define VIRTCHNL2_RDMA_CPF 0
+#define VIRTCHNL2_NVME_CPF 1
+#define VIRTCHNL2_ATE_CPF 2
+#define VIRTCHNL2_LCE_CPF 3
+
+/* VIRTCHNL2_RXQ_FLAGS
+ * Receive Queue Feature flags
+ */
+#define VIRTCHNL2_RXQ_RSC BIT(0)
+#define VIRTCHNL2_RXQ_HDR_SPLIT BIT(1)
+/* When set, packet descriptors are flushed by hardware immediately after
+ * processing each packet.
+ */
+#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK BIT(2)
+#define VIRTCHNL2_RX_DESC_SIZE_16BYTE BIT(3)
+#define VIRTCHNL2_RX_DESC_SIZE_32BYTE BIT(4)
+
+/* VIRTCHNL2_RSS_ALGORITHM
+ * Type of RSS algorithm
+ */
+#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC 0
+#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC 1
+#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC 2
+#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC 3
+
+/* VIRTCHNL2_EVENT_CODES
+ * Type of event
+ */
+#define VIRTCHNL2_EVENT_UNKNOWN 0
+#define VIRTCHNL2_EVENT_LINK_CHANGE 1
+
+/* VIRTCHNL2_QUEUE_TYPE
+ * Transmit and Receive queue types are valid in legacy as well as split queue
+ * models. With the split queue model, 2 additional types are introduced:
+ * TX_COMPLETION and RX_BUFFER. In the split queue model, receive corresponds to
+ * the queue where hardware posts completions.
+ */
+#define VIRTCHNL2_QUEUE_TYPE_TX 0
+#define VIRTCHNL2_QUEUE_TYPE_RX 1
+#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION 2
+#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER 3
+#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX 4
+#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX 5
+
+/* VIRTCHNL2_ITR_IDX
+ * Virtchannel interrupt throttling rate index
+ */
+#define VIRTCHNL2_ITR_IDX_0 0
+#define VIRTCHNL2_ITR_IDX_1 1
+#define VIRTCHNL2_ITR_IDX_2 2
+#define VIRTCHNL2_ITR_IDX_NO_ITR 3
+
+/* VIRTCHNL2_VECTOR_LIMITS
+ * Since PF/VF messages are limited by __le16 size, precalculate the maximum
+ * possible values of nested elements in virtchnl structures that virtual
+ * channel can possibly handle in a single message.
+ */
+
+#define VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX (\
+ ((__le16)(~0) - sizeof(struct virtchnl2_del_ena_dis_queues)) / \
+ sizeof(struct virtchnl2_queue_chunk))
+
+#define VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX (\
+ ((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
+ sizeof(struct virtchnl2_queue_vector))
+
+/* VIRTCHNL2_PROTO_HDR_TYPE
+ * Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers encapsulates, or is encapsulated by,
+ * tunneling or encapsulation protocols used for network virtualization.
+ */
+/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ANY 0
+#define VIRTCHNL2_PROTO_HDR_PRE_MAC 1
+/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_MAC 2
+#define VIRTCHNL2_PROTO_HDR_POST_MAC 3
+#define VIRTCHNL2_PROTO_HDR_ETHERTYPE 4
+#define VIRTCHNL2_PROTO_HDR_VLAN 5
+#define VIRTCHNL2_PROTO_HDR_SVLAN 6
+#define VIRTCHNL2_PROTO_HDR_CVLAN 7
+#define VIRTCHNL2_PROTO_HDR_MPLS 8
+#define VIRTCHNL2_PROTO_HDR_UMPLS 9
+#define VIRTCHNL2_PROTO_HDR_MMPLS 10
+#define VIRTCHNL2_PROTO_HDR_PTP 11
+#define VIRTCHNL2_PROTO_HDR_CTRL 12
+#define VIRTCHNL2_PROTO_HDR_LLDP 13
+#define VIRTCHNL2_PROTO_HDR_ARP 14
+#define VIRTCHNL2_PROTO_HDR_ECP 15
+#define VIRTCHNL2_PROTO_HDR_EAPOL 16
+#define VIRTCHNL2_PROTO_HDR_PPPOD 17
+#define VIRTCHNL2_PROTO_HDR_PPPOE 18
+/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV4 19
+/* IPv4 and IPv6 Fragment header types are only associated to
+ * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+ * cannot be used independently.
+ */
+/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG 20
+/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV6 21
+/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG 22
+#define VIRTCHNL2_PROTO_HDR_IPV6_EH 23
+/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_UDP 24
+/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_TCP 25
+/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_SCTP 26
+/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ICMP 27
+/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_ICMPV6 28
+#define VIRTCHNL2_PROTO_HDR_IGMP 29
+#define VIRTCHNL2_PROTO_HDR_AH 30
+#define VIRTCHNL2_PROTO_HDR_ESP 31
+#define VIRTCHNL2_PROTO_HDR_IKE 32
+#define VIRTCHNL2_PROTO_HDR_NATT_KEEP 33
+/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_PAY 34
+#define VIRTCHNL2_PROTO_HDR_L2TPV2 35
+#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL 36
+#define VIRTCHNL2_PROTO_HDR_L2TPV3 37
+#define VIRTCHNL2_PROTO_HDR_GTP 38
+#define VIRTCHNL2_PROTO_HDR_GTP_EH 39
+#define VIRTCHNL2_PROTO_HDR_GTPCV2 40
+#define VIRTCHNL2_PROTO_HDR_GTPC_TEID 41
+#define VIRTCHNL2_PROTO_HDR_GTPU 42
+#define VIRTCHNL2_PROTO_HDR_GTPU_UL 43
+#define VIRTCHNL2_PROTO_HDR_GTPU_DL 44
+#define VIRTCHNL2_PROTO_HDR_ECPRI 45
+#define VIRTCHNL2_PROTO_HDR_VRRP 46
+#define VIRTCHNL2_PROTO_HDR_OSPF 47
+/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_TUN 48
+#define VIRTCHNL2_PROTO_HDR_GRE 49
+#define VIRTCHNL2_PROTO_HDR_NVGRE 50
+#define VIRTCHNL2_PROTO_HDR_VXLAN 51
+#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE 52
+#define VIRTCHNL2_PROTO_HDR_GENEVE 53
+#define VIRTCHNL2_PROTO_HDR_NSH 54
+#define VIRTCHNL2_PROTO_HDR_QUIC 55
+#define VIRTCHNL2_PROTO_HDR_PFCP 56
+#define VIRTCHNL2_PROTO_HDR_PFCP_NODE 57
+#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION 58
+#define VIRTCHNL2_PROTO_HDR_RTP 59
+#define VIRTCHNL2_PROTO_HDR_ROCE 60
+#define VIRTCHNL2_PROTO_HDR_ROCEV1 61
+#define VIRTCHNL2_PROTO_HDR_ROCEV2 62
+/* protocol ids up to 32767 are reserved for AVF use */
+/* 32768 - 65534 are used for user defined protocol ids */
+/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+#define VIRTCHNL2_PROTO_HDR_NO_PROTO 65535
+
+#define VIRTCHNL2_VERSION_MAJOR_2 2
+#define VIRTCHNL2_VERSION_MINOR_0 0
+
+
+/* VIRTCHNL2_OP_VERSION
+ * VF posts its version number to the CP. CP responds with its version number
+ * in the same format, along with a return code.
+ * Reply from CP has its major/minor versions also in param0 and param1.
+ * If there is a major version mismatch, then the VF cannot operate.
+ * If there is a minor version mismatch, then the VF can operate but should
+ * add a warning to the system log.
+ *
+ * This version opcode MUST always be specified as == 1, regardless of other
+ * changes in the API. The CP must always respond to this message without
+ * error regardless of version mismatch.
+ */
+struct virtchnl2_version_info {
+ u32 major;
+ u32 minor;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
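+
+/* Illustrative sketch (not part of this patch): one way a dataplane driver
+ * might interpret the CP's VIRTCHNL2_OP_VERSION reply. The helper name is
+ * hypothetical and the __le fields are treated as plain host integers, as
+ * done elsewhere in this base code.
+ */
+static inline int
+example_check_virtchnl2_version(const struct virtchnl2_version_info *reply)
+{
+	/* A major version mismatch means the VF cannot operate at all. */
+	if (reply->major != VIRTCHNL2_VERSION_MAJOR_2)
+		return -1;
+
+	/* A minor mismatch is tolerated; the caller should log a warning. */
+	return (reply->minor == VIRTCHNL2_VERSION_MINOR_0) ? 0 : 1;
+}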
+
+/* VIRTCHNL2_OP_GET_CAPS
+ * Dataplane driver sends this message to CP to negotiate capabilities and
+ * provides a virtchnl2_get_capabilities structure with its desired
+ * capabilities, max_sriov_vfs and num_allocated_vectors.
+ * CP responds with a virtchnl2_get_capabilities structure updated
+ * with allowed capabilities and the other fields as below.
+ * If PF sets max_sriov_vfs as 0, CP will respond with max number of VFs
+ * that can be created by this PF. For any other value 'n', CP responds
+ * with max_sriov_vfs set to min(n, x) where x is the max number of VFs
+ * allowed by CP's policy. max_sriov_vfs is not applicable for VFs.
+ * If dataplane driver sets num_allocated_vectors as 0, CP will respond with 1,
+ * which is the default vector associated with the default mailbox. For any other
+ * value 'n', CP responds with a value <= n based on the CP's policy of
+ * max number of vectors for a PF.
+ * CP will respond with the vector ID of mailbox allocated to the PF in
+ * mailbox_vector_id and the number of itr index registers in itr_idx_map.
+ * It also responds with the default number of vports that the dataplane driver
+ * should come up with in default_num_vports and the maximum number of vports
+ * that can be supported in max_vports.
+ */
+struct virtchnl2_get_capabilities {
+ /* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
+ __le32 csum_caps;
+
+ /* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
+ __le32 seg_caps;
+
+ /* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
+ __le32 hsplit_caps;
+
+ /* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
+ __le32 rsc_caps;
+
+ /* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions */
+ __le64 rss_caps;
+
+
+ /* see VIRTCHNL2_OTHER_CAPS definitions */
+ __le64 other_caps;
+
+ /* DYN_CTL register offset and vector id for mailbox provided by CP */
+ __le32 mailbox_dyn_ctl;
+ __le16 mailbox_vector_id;
+ /* Maximum number of allocated vectors for the device */
+ __le16 num_allocated_vectors;
+
+ /* Maximum number of queues that can be supported */
+ __le16 max_rx_q;
+ __le16 max_tx_q;
+ __le16 max_rx_bufq;
+ __le16 max_tx_complq;
+
+ /* The PF sends the maximum VFs it is requesting. The CP responds with
+ * the maximum VFs granted.
+ */
+ __le16 max_sriov_vfs;
+
+ /* maximum number of vports that can be supported */
+ __le16 max_vports;
+ /* default number of vports driver should allocate on load */
+ __le16 default_num_vports;
+
+ /* Max header length hardware can parse/checksum, in bytes */
+ __le16 max_tx_hdr_size;
+
+ /* Max number of scatter gather buffers that can be sent per transmit
+ * packet without needing to be linearized
+ */
+ u8 max_sg_bufs_per_tx_pkt;
+
+ /* see VIRTCHNL2_ITR_IDX definition */
+ u8 itr_idx_map;
+
+ __le16 pad1;
+
+ /* version of Control Plane that is running */
+ __le16 oem_cp_ver_major;
+ __le16 oem_cp_ver_minor;
+ /* see VIRTCHNL2_DEVICE_TYPE definitions */
+ __le32 device_type;
+
+ u8 reserved[12];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
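+
+/* Illustrative sketch (not part of this patch): filling the GET_CAPS request
+ * before sending it over the mailbox. The helper name and the particular
+ * capability mix are example assumptions only; the CP clears any bits it
+ * cannot grant and fills in the remaining response fields.
+ */
+static inline void
+example_fill_virtchnl2_get_caps(struct virtchnl2_get_capabilities *caps)
+{
+	/* Caller is assumed to pass a zero-initialized structure. */
+	caps->other_caps = VIRTCHNL2_CAP_SPLITQ_QSCHED |
+			   VIRTCHNL2_CAP_MACFILTER |
+			   VIRTCHNL2_CAP_PTYPE;
+	/* 0 asks the CP to report the maximum number of VFs it allows. */
+	caps->max_sriov_vfs = 0;
+	/* 0 asks for only the default mailbox vector. */
+	caps->num_allocated_vectors = 0;
+}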
+
+struct virtchnl2_queue_reg_chunk {
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ __le32 pad;
+
+ /* Queue tail register offset and spacing provided by CP */
+ __le64 qtail_reg_start;
+ __le32 qtail_reg_spacing;
+
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl2_queue_reg_chunks {
+ __le16 num_chunks;
+ u8 reserved[6];
+ struct virtchnl2_queue_reg_chunk chunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+
+#define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS 6
+
+/* VIRTCHNL2_OP_CREATE_VPORT
+ * PF sends this message to CP to create a vport by filling in required
+ * fields of virtchnl2_create_vport structure.
+ * CP responds with the updated virtchnl2_create_vport structure containing the
+ * necessary fields followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_create_vport {
+ /* PF/VF populates the following fields on request */
+ /* see VIRTCHNL2_VPORT_TYPE definitions */
+ __le16 vport_type;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 txq_model;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 rxq_model;
+ __le16 num_tx_q;
+ /* valid only if txq_model is split queue */
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ /* valid only if rxq_model is split queue */
+ __le16 num_rx_bufq;
+ /* relative receive queue index to be used as default */
+ __le16 default_rx_q;
+ /* used to align PF and CP in case of default multiple vports; it is
+ * filled by the PF and CP returns the same value, to enable the driver
+ * to support multiple asynchronous parallel CREATE_VPORT requests and
+ * to associate a response with a specific request
+ */
+ __le16 vport_index;
+
+ /* CP populates the following fields on response */
+ __le16 max_mtu;
+ __le32 vport_id;
+ u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
+ __le16 pad;
+ /* see VIRTCHNL2_RX_DESC_IDS definitions */
+ __le64 rx_desc_ids;
+ /* see VIRTCHNL2_TX_DESC_IDS definitions */
+ __le64 tx_desc_ids;
+
+#define MAX_Q_REGIONS 16
+ __le32 max_qs_per_qregion[MAX_Q_REGIONS];
+ __le32 qregion_total_qs;
+ __le16 qregion_type;
+ __le16 pad2;
+
+ /* see VIRTCHNL2_RSS_ALGORITHM definitions */
+ __le32 rss_algorithm;
+ __le16 rss_key_size;
+ __le16 rss_lut_size;
+
+ /* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
+ __le32 rx_split_pos;
+
+ u8 reserved[20];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
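+
+/* Illustrative sketch (not part of this patch): filling the request half of
+ * virtchnl2_create_vport for a split queue model vport. The queue counts and
+ * bufq-per-rxq ratio are arbitrary example values; VIRTCHNL2_VPORT_TYPE_* and
+ * VIRTCHNL2_QUEUE_MODEL_* are the macros referenced by the field comments and
+ * defined earlier in this file.
+ */
+static inline void
+example_fill_virtchnl2_create_vport(struct virtchnl2_create_vport *vport)
+{
+	/* Caller is assumed to pass a zero-initialized structure. */
+	vport->vport_type = VIRTCHNL2_VPORT_TYPE_DEFAULT;
+	vport->txq_model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+	vport->rxq_model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+	vport->num_tx_q = 4;
+	vport->num_tx_complq = 4;	/* one completion queue per tx queue */
+	vport->num_rx_q = 4;
+	vport->num_rx_bufq = 8;		/* two buffer queues per rx queue */
+	vport->default_rx_q = 0;
+	vport->vport_index = 0;
+	/* CP fills vport_id, max_mtu, default_mac_addr, RSS and chunk info. */
+}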
+
+/* VIRTCHNL2_OP_DESTROY_VPORT
+ * VIRTCHNL2_OP_ENABLE_VPORT
+ * VIRTCHNL2_OP_DISABLE_VPORT
+ * PF sends this message to CP to destroy, enable or disable a vport by filling
+ * in the vport_id in virtchnl2_vport structure.
+ * CP responds with the status of the requested operation.
+ */
+struct virtchnl2_vport {
+ __le32 vport_id;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
+
+/* Transmit queue config info */
+struct virtchnl2_txq_info {
+ __le64 dma_ring_addr;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+
+ __le32 queue_id;
+ /* valid only if queue model is split and type is transmit queue. Used
+ * in many-to-one mapping of transmit queues to a completion queue
+ */
+ __le16 relative_queue_id;
+
+ /* see VIRTCHNL2_QUEUE_MODEL definitions */
+ __le16 model;
+
+ /* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
+ __le16 sched_mode;
+
+ /* see VIRTCHNL2_TXQ_FLAGS definitions */
+ __le16 qflags;
+ __le16 ring_len;
+
+ /* valid only if queue model is split and type is transmit queue */
+ __le16 tx_compl_queue_id;
+ /* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
+ /* see VIRTCHNL2_PEER_TYPE definitions */
+ __le16 peer_type;
+ /* valid only if queue type is CONFIG_TX and used to deliver messages
+ * for the respective CONFIG_TX queue
+ */
+ __le16 peer_rx_queue_id;
+
+ /* value ranges from 0 to 15 */
+ __le16 qregion_id;
+ u8 pad[2];
+
+ /* Egress pasid is used for SIOV use case */
+ __le32 egress_pasid;
+ __le32 egress_hdr_pasid;
+ __le32 egress_buf_pasid;
+
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
+
+/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
+ * PF sends this message to set up parameters for one or more transmit queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_txq_info
+ * structures. CP configures requested queues and returns a status code. If
+ * num_qinfo specified is greater than the number of queues associated with the
+ * vport, an error is returned and no queues are configured.
+ */
+struct virtchnl2_config_tx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+
+ u8 reserved[10];
+ struct virtchnl2_txq_info qinfo[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
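+
+/* Illustrative sketch (not part of this patch): the mailbox buffer size
+ * needed for a CONFIG_TX_QUEUES message carrying num_qinfo entries. The math
+ * mirrors virtchnl2_vc_validate_vf_msg() below, which accounts for the one
+ * qinfo element already included in the structure. num_qinfo must be
+ * non-zero or the message is rejected.
+ */
+static inline u32
+example_config_tx_queues_msg_size(u16 num_qinfo)
+{
+	return sizeof(struct virtchnl2_config_tx_queues) +
+	       (num_qinfo - 1) * sizeof(struct virtchnl2_txq_info);
+}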
+
+/* Receive queue config info */
+struct virtchnl2_rxq_info {
+ /* see VIRTCHNL2_RX_DESC_IDS definitions */
+ __le64 desc_ids;
+ __le64 dma_ring_addr;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 queue_id;
+
+ /* see QUEUE_MODEL definitions */
+ __le16 model;
+
+ __le16 hdr_buffer_size;
+ __le32 data_buffer_size;
+ __le32 max_pkt_size;
+
+ __le16 ring_len;
+ u8 buffer_notif_stride;
+ u8 pad[1];
+
+ /* Applicable only for receive buffer queues */
+ __le64 dma_head_wb_addr;
+
+ /* Applicable only for receive completion queues */
+ /* see VIRTCHNL2_RXQ_FLAGS definitions */
+ __le16 qflags;
+
+ __le16 rx_buffer_low_watermark;
+
+ /* valid only in split queue model */
+ __le16 rx_bufq1_id;
+ /* valid only in split queue model */
+ __le16 rx_bufq2_id;
+ /* indicates whether a second buffer queue is used; rx_bufq2_id is valid
+ * only if this field is set
+ */
+ u8 bufq2_ena;
+ u8 pad2;
+
+ /* value ranges from 0 to 15 */
+ __le16 qregion_id;
+
+ /* Ingress pasid is used for SIOV use case */
+ __le32 ingress_pasid;
+ __le32 ingress_hdr_pasid;
+ __le32 ingress_buf_pasid;
+
+ u8 reserved[16];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
+
+/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
+ * PF sends this message to set up parameters for one or more receive queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
+ * structures. CP configures requested queues and returns a status code.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ */
+struct virtchnl2_config_rx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+
+ u8 reserved[18];
+ struct virtchnl2_rxq_info qinfo[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
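+
+/* Illustrative sketch (not part of this patch): describing one split queue
+ * model receive queue that draws from two buffer queues. Sizes and queue ids
+ * are arbitrary example values; the descriptor ID mask comes from
+ * virtchnl2_lan_desc.h, which this header is assumed to include.
+ */
+static inline void
+example_fill_splitq_rxq_info(struct virtchnl2_rxq_info *rxq, u32 queue_id,
+			     u64 ring_phys_addr)
+{
+	/* Caller is assumed to pass a zero-initialized structure. */
+	rxq->type = VIRTCHNL2_QUEUE_TYPE_RX;
+	rxq->queue_id = queue_id;
+	rxq->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+	rxq->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+	rxq->dma_ring_addr = ring_phys_addr;
+	rxq->ring_len = 1024;
+	rxq->data_buffer_size = 2048;
+	rxq->max_pkt_size = 1518;
+	rxq->rx_bufq1_id = queue_id + 1;	/* example buffer queue ids */
+	rxq->rx_bufq2_id = queue_id + 2;
+	rxq->bufq2_ena = 1;
+}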
+
+/* VIRTCHNL2_OP_ADD_QUEUES
+ * PF sends this message to request additional transmit/receive queues beyond
+ * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
+ * structure is used to specify the number of each type of queues.
+ * CP responds with the same structure with the actual number of queues assigned
+ * followed by num_chunks of virtchnl2_queue_reg_chunk structures.
+ */
+struct virtchnl2_add_queues {
+ __le32 vport_id;
+ __le16 num_tx_q;
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ __le16 num_rx_bufq;
+ u8 reserved[4];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+
+/* Structure to specify a chunk of contiguous interrupt vectors */
+struct virtchnl2_vector_chunk {
+ __le16 start_vector_id;
+ __le16 start_evv_id;
+ __le16 num_vectors;
+ __le16 pad1;
+
+ /* Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
+ __le32 dynctl_reg_start;
+ __le32 dynctl_reg_spacing;
+
+ __le32 itrn_reg_start;
+ __le32 itrn_reg_spacing;
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
+
+/* Structure to specify several chunks of contiguous interrupt vectors */
+struct virtchnl2_vector_chunks {
+ __le16 num_vchunks;
+ u8 reserved[14];
+ struct virtchnl2_vector_chunk vchunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+
+/* VIRTCHNL2_OP_ALLOC_VECTORS
+ * PF sends this message to request additional interrupt vectors beyond the
+ * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
+ * structure is used to specify the number of vectors requested. CP responds
+ * with the same structure with the actual number of vectors assigned followed
+ * by virtchnl2_vector_chunks structure identifying the vector ids.
+ */
+struct virtchnl2_alloc_vectors {
+ __le16 num_vectors;
+ u8 reserved[14];
+ struct virtchnl2_vector_chunks vchunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+
+/* VIRTCHNL2_OP_DEALLOC_VECTORS
+ * PF sends this message to release the vectors.
+ * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
+ * away. CP performs requested action and returns status.
+ */
+
+/* VIRTCHNL2_OP_GET_RSS_LUT
+ * VIRTCHNL2_OP_SET_RSS_LUT
+ * PF sends this message to get or set RSS lookup table. Only supported if
+ * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl2_rss_lut structure
+ */
+struct virtchnl2_rss_lut {
+ __le32 vport_id;
+ __le16 lut_entries_start;
+ __le16 lut_entries;
+ u8 reserved[4];
+ __le32 lut[1]; /* RSS lookup table */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+
+struct virtchnl2_proto_hdr {
+ /* see VIRTCHNL2_PROTO_HDR_TYPE definitions */
+ __le32 type;
+ __le32 field_selector; /* a bit mask to select field for header type */
+ u8 buffer[64];
+ /*
+ * binary buffer in network order for the specific header type.
+ * For example, if type = VIRTCHNL2_PROTO_HDR_IPV4, an IPv4
+ * header is expected to be copied into the buffer.
+ */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_proto_hdr);
+
+struct virtchnl2_proto_hdrs {
+ u8 tunnel_level;
+ /*
+ * specifies where the protocol headers start from.
+ * 0 - from the outer layer
+ * 1 - from the first inner layer
+ * 2 - from the second inner layer
+ * ....
+ */
+ __le32 count; /* the number of proto layers must be < VIRTCHNL2_MAX_NUM_PROTO_HDRS */
+ struct virtchnl2_proto_hdr proto_hdr[VIRTCHNL2_MAX_NUM_PROTO_HDRS];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(2312, virtchnl2_proto_hdrs);
+
+struct virtchnl2_rss_cfg {
+ struct virtchnl2_proto_hdrs proto_hdrs;
+
+ /* see VIRTCHNL2_RSS_ALGORITHM definitions */
+ __le32 rss_algorithm;
+ u8 reserved[128];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(2444, virtchnl2_rss_cfg);
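+
+/* Illustrative sketch (not part of this patch): building an RSS hash
+ * configuration that hashes on the outer IPv4 and TCP headers using the
+ * asymmetric Toeplitz algorithm. Leaving field_selector at zero is assumed
+ * here to mean "use the default fields" for each header.
+ */
+static inline void
+example_fill_rss_cfg_ipv4_tcp(struct virtchnl2_rss_cfg *cfg)
+{
+	/* Caller is assumed to pass a zero-initialized structure. */
+	cfg->proto_hdrs.tunnel_level = 0;	/* outermost headers */
+	cfg->proto_hdrs.count = 2;
+	cfg->proto_hdrs.proto_hdr[0].type = VIRTCHNL2_PROTO_HDR_IPV4;
+	cfg->proto_hdrs.proto_hdr[1].type = VIRTCHNL2_PROTO_HDR_TCP;
+	cfg->rss_algorithm = VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC;
+}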
+
+/* VIRTCHNL2_OP_GET_RSS_KEY
+ * PF sends this message to get RSS key. Only supported if both PF and CP
+ * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
+ * the virtchnl2_rss_key structure
+ */
+
+/* VIRTCHNL2_OP_GET_RSS_HASH
+ * VIRTCHNL2_OP_SET_RSS_HASH
+ * PF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the CP sets these to all possible traffic types that the
+ * hardware supports. The PF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
+ * during configuration negotiation.
+ */
+struct virtchnl2_rss_hash {
+ /* Packet Type Groups bitmap */
+ __le64 ptype_groups;
+ __le32 vport_id;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
+
+/* VIRTCHNL2_OP_SET_SRIOV_VFS
+ * This message is used to set the number of SRIOV VFs to be created. The actual
+ * allocation of resources for the VFs in terms of vport, queues and interrupts
+ * is done by CP. When this call completes, the APF driver calls
+ * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
+ * Setting the number of VFs to 0 destroys all the VFs of this function.
+ */
+
+struct virtchnl2_sriov_vfs_info {
+ __le16 num_vfs;
+ __le16 pad;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
+
+/* VIRTCHNL2_OP_CREATE_ADI
+ * PF sends this message to HMA to create ADI by filling in required
+ * fields of virtchnl2_create_adi structure.
+ * HMA responds with the updated virtchnl2_create_adi structure containing the
+ * necessary fields followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_chunk structures.
+ */
+struct virtchnl2_create_adi {
+ /* PF sends PASID to HMA */
+ __le32 pasid;
+ /*
+ * mbx_id is set to 1 by PF when requesting HMA to provide HW mailbox
+ * id else it is set to 0 by PF
+ */
+ __le16 mbx_id;
+ /* PF sends mailbox vector id to HMA */
+ __le16 mbx_vec_id;
+ /* HMA populates ADI id */
+ __le16 adi_id;
+ u8 reserved[64];
+ u8 pad[6];
+ /* HMA populates queue chunks */
+ struct virtchnl2_queue_reg_chunks chunks;
+ /* PF sends vector chunks to HMA */
+ struct virtchnl2_vector_chunks vchunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_create_adi);
+
+/* VIRTCHNL2_OP_DESTROY_ADI
+ * PF sends this message to HMA to destroy ADI by filling
+ * in the adi_id in virtchnl2_destroy_adi structure.
+ * HMA responds with the status of the requested operation.
+ */
+struct virtchnl2_destroy_adi {
+ __le16 adi_id;
+ u8 reserved[2];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_destroy_adi);
+
+/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+ * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
+ * is set to 0xFFFF, PF should consider this ptype as a dummy one and it is the
+ * last ptype.
+ */
+struct virtchnl2_ptype {
+ __le16 ptype_id_10;
+ u8 ptype_id_8;
+ /* number of protocol ids the packet supports, maximum of 32
+ * protocol ids are supported
+ */
+ u8 proto_id_count;
+ __le16 pad;
+ /* proto_id_count decides the allocation of protocol id array */
+ /* see VIRTCHNL2_PROTO_HDR_TYPE */
+ __le16 proto_id[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+
+/* VIRTCHNL2_OP_GET_PTYPE_INFO
+ * PF sends this message to CP to get all supported packet types. It does so by
+ * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
+ * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
+ * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
+ * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
+ * don't fit into one mailbox buffer, CP splits the ptype info into multiple
+ * messages, where each message will have the start ptype id, number of ptypes
+ * sent in that message and the ptype array itself. When CP is done updating
+ * all ptype information it extracted from the package (number of ptypes
+ * extracted might be less than what PF expects), it will append a dummy ptype
+ * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
+ * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
+ * messages.
+ */
+struct virtchnl2_get_ptype_info {
+ __le16 start_ptype_id;
+ __le16 num_ptypes;
+ __le32 pad;
+ struct virtchnl2_ptype ptype[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
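+
+/* Illustrative sketch (not part of this patch): walking the variable length
+ * ptype array in a GET_PTYPE_INFO response. It assumes entries are packed
+ * back to back and sized by proto_id_count, and stops either after
+ * num_ptypes entries or at the 0xFFFF dummy terminator described above.
+ */
+static inline void
+example_walk_ptype_info(const struct virtchnl2_get_ptype_info *info)
+{
+	const u8 *pos = (const u8 *)&info->ptype[0];
+	u16 i;
+
+	for (i = 0; i < info->num_ptypes; i++) {
+		const struct virtchnl2_ptype *p =
+			(const struct virtchnl2_ptype *)pos;
+
+		if (p->ptype_id_10 == 0xFFFF)
+			break;	/* dummy entry marks the end of the list */
+
+		/* p->ptype_id_10 / p->ptype_id_8 and p->proto_id[] are valid
+		 * here; a real driver would translate them to its own ptypes.
+		 */
+		pos += sizeof(*p) +
+		       (p->proto_id_count - 1) * sizeof(p->proto_id[0]);
+	}
+}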
+
+/* VIRTCHNL2_OP_GET_STATS
+ * PF/VF sends this message to CP to get the updated stats by specifying the
+ * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ */
+struct virtchnl2_vport_stats {
+ __le32 vport_id;
+ u8 pad[4];
+
+ __le64 rx_bytes; /* received bytes */
+ __le64 rx_unicast; /* received unicast pkts */
+ __le64 rx_multicast; /* received multicast pkts */
+ __le64 rx_broadcast; /* received broadcast pkts */
+ __le64 rx_discards;
+ __le64 rx_errors;
+ __le64 rx_unknown_protocol;
+ __le64 tx_bytes; /* transmitted bytes */
+ __le64 tx_unicast; /* transmitted unicast pkts */
+ __le64 tx_multicast; /* transmitted multicast pkts */
+ __le64 tx_broadcast; /* transmitted broadcast pkts */
+ __le64 tx_discards;
+ __le64 tx_errors;
+ u8 reserved[16];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
+
+/* VIRTCHNL2_OP_EVENT
+ * CP sends this message to inform the PF/VF driver of events that may affect
+ * it. No direct response is expected from the driver, though it may generate
+ * other messages in response to this one.
+ */
+struct virtchnl2_event {
+ /* see VIRTCHNL2_EVENT_CODES definitions */
+ __le32 event;
+ /* link_speed provided in Mbps */
+ __le32 link_speed;
+ __le32 vport_id;
+ u8 link_status;
+ u8 pad[3];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
+
+/* VIRTCHNL2_OP_GET_RSS_KEY
+ * VIRTCHNL2_OP_SET_RSS_KEY
+ * PF/VF sends this message to get or set RSS key. Only supported if both
+ * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation. Uses the virtchnl2_rss_key structure
+ */
+struct virtchnl2_rss_key {
+ __le32 vport_id;
+ __le16 key_len;
+ u8 pad;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
+
+/* structure to specify a chunk of contiguous queues */
+struct virtchnl2_queue_chunk {
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ u8 reserved[4];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
+
+/* structure to specify several chunks of contiguous queues */
+struct virtchnl2_queue_chunks {
+ __le16 num_chunks;
+ u8 reserved[6];
+ struct virtchnl2_queue_chunk chunks[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+
+/* VIRTCHNL2_OP_ENABLE_QUEUES
+ * VIRTCHNL2_OP_DISABLE_QUEUES
+ * VIRTCHNL2_OP_DEL_QUEUES
+ *
+ * PF sends these messages to enable, disable or delete queues specified in
+ * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * to be enabled/disabled/deleted. Also applicable to single queue receive or
+ * transmit. CP performs requested action and returns status.
+ */
+struct virtchnl2_del_ena_dis_queues {
+ __le32 vport_id;
+ u8 reserved[4];
+ struct virtchnl2_queue_chunks chunks;
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
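+
+/* Illustrative sketch (not part of this patch): sizing an ENABLE/DISABLE/DEL
+ * queues message while honouring the VIRTCHNL2_VECTOR_LIMITS style bound
+ * defined above, which caps how many queue chunks fit in a single mailbox
+ * message. Returning 0 on an out-of-range count is an example convention.
+ */
+static inline u32
+example_del_ena_dis_queues_msg_size(u16 num_chunks)
+{
+	if (num_chunks == 0 ||
+	    num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX)
+		return 0;	/* would be rejected by the validator below */
+
+	return sizeof(struct virtchnl2_del_ena_dis_queues) +
+	       (num_chunks - 1) * sizeof(struct virtchnl2_queue_chunk);
+}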
+
+/* Queue to vector mapping */
+struct virtchnl2_queue_vector {
+ __le32 queue_id;
+ __le16 vector_id;
+ u8 pad[2];
+
+ /* see VIRTCHNL2_ITR_IDX definitions */
+ __le32 itr_idx;
+
+ /* see VIRTCHNL2_QUEUE_TYPE definitions */
+ __le32 queue_type;
+ u8 reserved[8];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
+
+/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+ *
+ * PF sends this message to map or unmap queues to vectors and interrupt
+ * throttling rate index registers. External data buffer contains
+ * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
+ * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
+ * after validating the queue and vector ids and returns a status code.
+ */
+struct virtchnl2_queue_vector_maps {
+ __le32 vport_id;
+ __le16 num_qv_maps;
+ u8 pad[10];
+ struct virtchnl2_queue_vector qv_maps[1];
+};
+
+VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+
+
+static inline const char *virtchnl2_op_str(__le32 v_opcode)
+{
+ switch (v_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ return "VIRTCHNL2_OP_VERSION";
+ case VIRTCHNL2_OP_GET_CAPS:
+ return "VIRTCHNL2_OP_GET_CAPS";
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ return "VIRTCHNL2_OP_CREATE_VPORT";
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ return "VIRTCHNL2_OP_DESTROY_VPORT";
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ return "VIRTCHNL2_OP_ENABLE_VPORT";
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ return "VIRTCHNL2_OP_DISABLE_VPORT";
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ return "VIRTCHNL2_OP_CONFIG_TX_QUEUES";
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ return "VIRTCHNL2_OP_CONFIG_RX_QUEUES";
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ return "VIRTCHNL2_OP_ENABLE_QUEUES";
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ return "VIRTCHNL2_OP_DISABLE_QUEUES";
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ return "VIRTCHNL2_OP_ADD_QUEUES";
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ return "VIRTCHNL2_OP_DEL_QUEUES";
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ return "VIRTCHNL2_OP_MAP_QUEUE_VECTOR";
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ return "VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR";
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ return "VIRTCHNL2_OP_GET_RSS_KEY";
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ return "VIRTCHNL2_OP_SET_RSS_KEY";
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ return "VIRTCHNL2_OP_GET_RSS_LUT";
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ return "VIRTCHNL2_OP_SET_RSS_LUT";
+ case VIRTCHNL2_OP_GET_RSS_HASH:
+ return "VIRTCHNL2_OP_GET_RSS_HASH";
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ return "VIRTCHNL2_OP_SET_RSS_HASH";
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ return "VIRTCHNL2_OP_SET_SRIOV_VFS";
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ return "VIRTCHNL2_OP_ALLOC_VECTORS";
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ return "VIRTCHNL2_OP_DEALLOC_VECTORS";
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ return "VIRTCHNL2_OP_GET_PTYPE_INFO";
+ case VIRTCHNL2_OP_GET_STATS:
+ return "VIRTCHNL2_OP_GET_STATS";
+ case VIRTCHNL2_OP_EVENT:
+ return "VIRTCHNL2_OP_EVENT";
+ case VIRTCHNL2_OP_RESET_VF:
+ return "VIRTCHNL2_OP_RESET_VF";
+ case VIRTCHNL2_OP_CREATE_ADI:
+ return "VIRTCHNL2_OP_CREATE_ADI";
+ case VIRTCHNL2_OP_DESTROY_ADI:
+ return "VIRTCHNL2_OP_DESTROY_ADI";
+ default:
+ return "Unsupported (update virtchnl2.h)";
+ }
+}
+
+/**
+ * virtchnl2_vc_validate_vf_msg
+ * @ver: Virtchnl2 version info
+ * @v_opcode: Opcode for the message
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * validate msg format against struct for each opcode
+ */
+static inline int
+virtchnl2_vc_validate_vf_msg(struct virtchnl2_version_info *ver, u32 v_opcode,
+ u8 *msg, __le16 msglen)
+{
+ bool err_msg_format = false;
+ __le32 valid_len = 0;
+
+ /* Validate message length. */
+ switch (v_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ valid_len = sizeof(struct virtchnl2_version_info);
+ break;
+ case VIRTCHNL2_OP_GET_CAPS:
+ valid_len = sizeof(struct virtchnl2_get_capabilities);
+ break;
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ valid_len = sizeof(struct virtchnl2_create_vport);
+ if (msglen >= valid_len) {
+ struct virtchnl2_create_vport *cvport =
+ (struct virtchnl2_create_vport *)msg;
+
+ if (cvport->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (cvport->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_CREATE_ADI:
+ valid_len = sizeof(struct virtchnl2_create_adi);
+ if (msglen >= valid_len) {
+ struct virtchnl2_create_adi *cadi =
+ (struct virtchnl2_create_adi *)msg;
+
+ if (cadi->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ if (cadi->vchunks.num_vchunks == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (cadi->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ valid_len += (cadi->vchunks.num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_DESTROY_ADI:
+ valid_len = sizeof(struct virtchnl2_destroy_adi);
+ break;
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ valid_len = sizeof(struct virtchnl2_vport);
+ break;
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ valid_len = sizeof(struct virtchnl2_config_tx_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_config_tx_queues *ctq =
+ (struct virtchnl2_config_tx_queues *)msg;
+ if (ctq->num_qinfo == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (ctq->num_qinfo - 1) *
+ sizeof(struct virtchnl2_txq_info);
+ }
+ break;
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ valid_len = sizeof(struct virtchnl2_config_rx_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_config_rx_queues *crq =
+ (struct virtchnl2_config_rx_queues *)msg;
+ if (crq->num_qinfo == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (crq->num_qinfo - 1) *
+ sizeof(struct virtchnl2_rxq_info);
+ }
+ break;
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ valid_len = sizeof(struct virtchnl2_add_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_add_queues *add_q =
+ (struct virtchnl2_add_queues *)msg;
+
+ if (add_q->chunks.num_chunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (add_q->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_reg_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ if (msglen >= valid_len) {
+ struct virtchnl2_del_ena_dis_queues *qs =
+ (struct virtchnl2_del_ena_dis_queues *)msg;
+ if (qs->chunks.num_chunks == 0 ||
+ qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (qs->chunks.num_chunks - 1) *
+ sizeof(struct virtchnl2_queue_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ valid_len = sizeof(struct virtchnl2_queue_vector_maps);
+ if (msglen >= valid_len) {
+ struct virtchnl2_queue_vector_maps *v_qp =
+ (struct virtchnl2_queue_vector_maps *)msg;
+ if (v_qp->num_qv_maps == 0 ||
+ v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_qp->num_qv_maps - 1) *
+ sizeof(struct virtchnl2_queue_vector);
+ }
+ break;
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ valid_len = sizeof(struct virtchnl2_alloc_vectors);
+ if (msglen >= valid_len) {
+ struct virtchnl2_alloc_vectors *v_av =
+ (struct virtchnl2_alloc_vectors *)msg;
+
+ if (v_av->vchunks.num_vchunks == 0) {
+ /* zero chunks is allowed as input */
+ break;
+ }
+
+ valid_len += (v_av->vchunks.num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ valid_len = sizeof(struct virtchnl2_vector_chunks);
+ if (msglen >= valid_len) {
+ struct virtchnl2_vector_chunks *v_chunks =
+ (struct virtchnl2_vector_chunks *)msg;
+ if (v_chunks->num_vchunks == 0) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += (v_chunks->num_vchunks - 1) *
+ sizeof(struct virtchnl2_vector_chunk);
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ valid_len = sizeof(struct virtchnl2_rss_key);
+ if (msglen >= valid_len) {
+ struct virtchnl2_rss_key *vrk =
+ (struct virtchnl2_rss_key *)msg;
+
+ if (vrk->key_len == 0) {
+ /* zero length is allowed as input */
+ break;
+ }
+
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ valid_len = sizeof(struct virtchnl2_rss_lut);
+ if (msglen >= valid_len) {
+ struct virtchnl2_rss_lut *vrl =
+ (struct virtchnl2_rss_lut *)msg;
+
+ if (vrl->lut_entries == 0) {
+ /* zero entries is allowed as input */
+ break;
+ }
+
+ valid_len += (vrl->lut_entries - 1) * sizeof(__le16);
+ }
+ break;
+ case VIRTCHNL2_OP_GET_RSS_HASH:
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ valid_len = sizeof(struct virtchnl2_rss_hash);
+ break;
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
+ break;
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ valid_len = sizeof(struct virtchnl2_get_ptype_info);
+ break;
+ case VIRTCHNL2_OP_GET_STATS:
+ valid_len = sizeof(struct virtchnl2_vport_stats);
+ break;
+ case VIRTCHNL2_OP_RESET_VF:
+ break;
+ /* These are always errors coming from the VF. */
+ case VIRTCHNL2_OP_EVENT:
+ case VIRTCHNL2_OP_UNKNOWN:
+ default:
+ return VIRTCHNL2_STATUS_ERR_PARAM;
+ }
+ /* few more checks */
+ if (err_msg_format || valid_len != msglen)
+ return VIRTCHNL2_STATUS_ERR_OPCODE_MISMATCH;
+
+ return 0;
+}
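+
+/* Illustrative sketch (not part of this patch): how a mailbox receive path
+ * might combine the two helpers above. The helper name is hypothetical and
+ * any logging of op_name is left to the OS dependent layer.
+ */
+static inline int
+example_validate_and_name(struct virtchnl2_version_info *ver, u32 v_opcode,
+			  u8 *msg, u16 msglen, const char **op_name)
+{
+	int err = virtchnl2_vc_validate_vf_msg(ver, v_opcode, msg, msglen);
+
+	/* The opcode name is handy for tracing both good and bad messages. */
+	*op_name = virtchnl2_op_str(v_opcode);
+
+	return err;	/* 0, or a VIRTCHNL2_STATUS_ERR_* value */
+}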
+
+#endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/net/idpf/base/virtchnl2_lan_desc.h b/drivers/net/idpf/base/virtchnl2_lan_desc.h
new file mode 100644
index 0000000000..2243b17673
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl2_lan_desc.h
@@ -0,0 +1,603 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+/*
+ * Copyright (C) 2019 Intel Corporation
+ *
+ * For licensing information, see the file 'LICENSE' in the root folder
+ */
+#ifndef _VIRTCHNL2_LAN_DESC_H_
+#define _VIRTCHNL2_LAN_DESC_H_
+
+/* VIRTCHNL2_TX_DESC_IDS
+ * Transmit descriptor ID flags
+ */
+#define VIRTCHNL2_TXDID_DATA BIT(0)
+#define VIRTCHNL2_TXDID_CTX BIT(1)
+#define VIRTCHNL2_TXDID_REINJECT_CTX BIT(2)
+#define VIRTCHNL2_TXDID_FLEX_DATA BIT(3)
+#define VIRTCHNL2_TXDID_FLEX_CTX BIT(4)
+#define VIRTCHNL2_TXDID_FLEX_TSO_CTX BIT(5)
+#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1 BIT(6)
+#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2 BIT(7)
+#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX BIT(8)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX BIT(9)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX BIT(10)
+#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX BIT(11)
+#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED BIT(12)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX BIT(13)
+#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX BIT(14)
+#define VIRTCHNL2_TXDID_DESC_DONE BIT(15)
+
+/* VIRTCHNL2_RX_DESC_IDS
+ * Receive descriptor IDs (range from 0 to 63)
+ */
+#define VIRTCHNL2_RXDID_0_16B_BASE 0
+#define VIRTCHNL2_RXDID_1_32B_BASE 1
+/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+ * differentiated based on queue model; e.g. single queue model can
+ * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+ * for DID 2.
+ */
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ 2
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC 2
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW 3
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB 4
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL 5
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2 6
+#define VIRTCHNL2_RXDID_7_HW_RSVD 7
+/* 9 through 15 are reserved */
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC 16
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN 17
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4 18
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6 19
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW 20
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP 21
+/* 22 through 63 are reserved */
+
+/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+ * Receive descriptor ID bitmasks
+ */
+#define VIRTCHNL2_RXDID_0_16B_BASE_M BIT(VIRTCHNL2_RXDID_0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M BIT(VIRTCHNL2_RXDID_1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+/* 9 through 15 are reserved */
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+/* 22 through 63 are reserved */
+
+/* Rx */
+/* For splitq virtchnl2_rx_flex_desc_adv desc members */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M \
+ MAKEMASK(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M \
+ MAKEMASK(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M \
+ MAKEMASK(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M \
+ MAKEMASK(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M \
+ MAKEMASK(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S 7
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST 8 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * for splitq virtchnl2_rx_flex_desc_adv
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S 0 /* 2 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST 8 /* this entry must be last!!! */
+
+/* for singleq (flex) virtchnl2_rx_flex_desc fields */
+/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M \
+ MAKEMASK(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+
+/* for virtchnl2_rx_flex_desc.pkt_length member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M \
+ MAKEMASK(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+
+/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * for singleq (flex) virtchnl2_rx_flex_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S 0
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S 1
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S 2
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S 3
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S 5
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S 6
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S 7
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S 8
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S 9
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST 16 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * for singleq (flex) virtchnl2_rx_flex_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S 0 /* 4 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S 4
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S 5
+/* [10:6] reserved */
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S 13
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST 16 /* this entry must be last!!! */
+
+/* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S 63
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S 52
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M \
+ MAKEMASK(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S 38
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M \
+ MAKEMASK(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S 30
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M \
+ MAKEMASK(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S 19
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M \
+ MAKEMASK(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S 0
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M \
+ MAKEMASK(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+
+/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S 0
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S 1
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S 2
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S 3
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S 4
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S 5 /* 3 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S 8
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S 9 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S 11
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S 12 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S 14
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S 15
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S 16 /* 2 bits */
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S 18
+#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST 19 /* this entry must be last!!! */
+
+/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S 0
+
+/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S 0
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S 1
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S 2
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S 3 /* 3 bits */
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S 3
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S 4
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S 5
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S 6
+#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S 7
+
+/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * for singleq (base) virtchnl2_rx_base_desc
+ * Note: These are predefined bit offsets
+ */
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA 0
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID 1
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV 2
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH 3
+
+/* Receive Descriptors */
+/* splitq buf
+ | 16| 0|
+ ----------------------------------------------------------------
+ | RSV | Buffer ID |
+ ----------------------------------------------------------------
+ | Rx packet buffer address |
+ ----------------------------------------------------------------
+ | Rx header buffer address |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | 0|
+ */
+struct virtchnl2_splitq_rx_buf_desc {
+ struct {
+ __le16 buf_id; /* Buffer Identifier */
+ __le16 rsvd0;
+ __le32 rsvd1;
+ } qword0;
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 rsvd2;
+}; /* read used with buffer queues*/
+
+/* singleq buf
+ | 0|
+ ----------------------------------------------------------------
+ | Rx packet buffer address |
+ ----------------------------------------------------------------
+ | Rx header buffer address |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | RSV |
+ ----------------------------------------------------------------
+ | 0|
+ */
+struct virtchnl2_singleq_rx_buf_desc {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ __le64 rsvd1;
+ __le64 rsvd2;
+}; /* read used with buffer queues*/
+
+union virtchnl2_rx_buf_desc {
+ struct virtchnl2_singleq_rx_buf_desc read;
+ struct virtchnl2_splitq_rx_buf_desc split_rd;
+};
+
+/* (0x00) singleq wb(compl) */
+struct virtchnl2_singleq_base_rx_desc {
+ struct {
+ struct {
+ __le16 mirroring_status;
+ __le16 l2tag1;
+ } lo_dword;
+ union {
+ __le32 rss; /* RSS Hash */
+ __le32 fd_id; /* Flow Director filter id */
+ } hi_dword;
+ } qword0;
+ struct {
+ /* status/error/PTYPE/length */
+ __le64 status_error_ptype_len;
+ } qword1;
+ struct {
+ __le16 ext_status; /* extended status */
+ __le16 rsvd;
+ __le16 l2tag2_1;
+ __le16 l2tag2_2;
+ } qword2;
+ struct {
+ __le32 reserved;
+ __le32 fd_id;
+ } qword3;
+}; /* writeback */
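+
+/* Illustrative sketch (not part of this patch): pulling the status, error,
+ * ptype and packet length fields out of qword1 of the base descriptor using
+ * the VIRTCHNL2_RX_BASE_DESC_QW1_* masks defined above. Endianness handling
+ * is omitted; the __le64 is treated as a host value as elsewhere in this
+ * base code.
+ */
+static inline void
+example_parse_base_rx_qword1(const struct virtchnl2_singleq_base_rx_desc *desc,
+			     u32 *status, u32 *error, u32 *ptype, u32 *pkt_len)
+{
+	u64 qw1 = desc->qword1.status_error_ptype_len;
+
+	*status = (qw1 & VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M) >>
+		  VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S;
+	*error = (qw1 & VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M) >>
+		 VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S;
+	*ptype = (qw1 & VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M) >>
+		 VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S;
+	*pkt_len = (qw1 & VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M) >>
+		   VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S;
+}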
+
+/* (0x01) singleq flex compl */
+struct virtchnl2_rx_flex_desc {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile id */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* (0x02) */
+struct virtchnl2_rx_flex_desc_nic {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le32 flow_id;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* Rx Flex Descriptor Switch Profile
+ * RxDID Profile Id 3
+ * Flex-field 0: Source Vsi
+ */
+struct virtchnl2_rx_flex_desc_sw {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 src_vsi; /* [10:15] are reserved */
+ __le16 flex_md1_rsvd;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le32 rsvd; /* flex words 2-3 are reserved */
+ __le32 ts_high;
+};
+
+
+/* Rx Flex Descriptor NIC Profile
+ * RxDID Profile Id 6
+ * Flex-field 0: RSS hash lower 16-bits
+ * Flex-field 1: RSS hash upper 16-bits
+ * Flex-field 2: Flow Id lower 16-bits
+ * Flex-field 3: Source Vsi
+ * Flex-field 4: reserved, Vlan id taken from L2Tag
+ */
+struct virtchnl2_rx_flex_desc_nic_2 {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flow_id;
+ __le16 src_vsi;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/* Rx Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile Id 7
+ */
+struct virtchnl2_rx_flex_desc_adv {
+ /* Qword 0 */
+ u8 rxdid_ucast; /* profile_id=[3:0] */
+ /* rsvd=[5:4] */
+ /* ucast=[7:6] */
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0; /* ptype=[9:0] */
+ /* ip_hdr_err=[10:10] */
+ /* udp_len_err=[11:11] */
+ /* ff0=[15:12] */
+ __le16 pktlen_gen_bufq_id; /* plen=[13:0] */
+ /* gen=[14:14] only in splitq */
+ /* bufq_id=[15:15] only in splitq */
+ __le16 hdrlen_flags; /* header=[9:0] */
+ /* rsc=[10:10] only in splitq */
+ /* sph=[11:11] only in splitq */
+ /* ext_udp_0=[12:12] */
+ /* int_udp_0=[13:13] */
+ /* trunc_mirr=[14:14] */
+ /* miss_prepend=[15:15] */
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 fmd0;
+ __le16 fmd1;
+ /* Qword 2 */
+ __le16 fmd2;
+ u8 fflags2;
+ u8 hash3;
+ __le16 fmd3;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 fmd5;
+ __le16 fmd6;
+ __le16 fmd7_0;
+ __le16 fmd7_1;
+}; /* writeback */
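+
+/* Illustrative sketch (not part of this patch): how a split queue receive
+ * path can test the generation bit and extract the packet buffer length from
+ * pktlen_gen_bufq_id using the VIRTCHNL2_RX_FLEX_DESC_ADV_* masks above.
+ * expected_gen is assumed to come from the queue's ring wrap state.
+ */
+static inline int
+example_splitq_desc_ready(const struct virtchnl2_rx_flex_desc_adv *desc,
+			  u16 expected_gen, u16 *pkt_len)
+{
+	u16 qword = desc->pktlen_gen_bufq_id;
+	u16 gen = (qword & VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+		  VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+
+	if (gen != expected_gen)
+		return 0;	/* descriptor not yet written back */
+
+	*pkt_len = (qword & VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+		   VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+	return 1;
+}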
+
+/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
+ * RxDID Profile Id 8
+ * Flex-field 0: BufferID
+ * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
+ * Flex-field 2: Hash[15:0]
+ * Flex-flags 2: Hash[23:16]
+ * Flex-field 3: L2TAG2
+ * Flex-field 5: L2TAG1
+ * Flex-field 7: Timestamp (upper 32 bits)
+ */
+struct virtchnl2_rx_flex_desc_adv_nic_3 {
+ /* Qword 0 */
+ u8 rxdid_ucast; /* profile_id=[3:0] */
+ /* rsvd=[5:4] */
+ /* ucast=[7:6] */
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0; /* ptype=[9:0] */
+ /* ip_hdr_err=[10:10] */
+ /* udp_len_err=[11:11] */
+ /* ff0=[15:12] */
+ __le16 pktlen_gen_bufq_id; /* plen=[13:0] */
+ /* gen=[14:14] only in splitq */
+ /* bufq_id=[15:15] only in splitq */
+ __le16 hdrlen_flags; /* header=[9:0] */
+ /* rsc=[10:10] only in splitq */
+ /* sph=[11:11] only in splitq */
+ /* ext_udp_0=[12:12] */
+ /* int_udp_0=[13:13] */
+ /* trunc_mirr=[14:14] */
+ /* miss_prepend=[15:15] */
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 buf_id; /* only in splitq */
+ union {
+ __le16 raw_cs;
+ __le16 l2tag1;
+ __le16 rscseglen;
+ } misc;
+ /* Qword 2 */
+ __le16 hash1;
+ union {
+ u8 fflags2;
+ u8 mirrorid;
+ u8 hash2;
+ } ff2_mirrid_hash2;
+ u8 hash3;
+ __le16 l2tag2;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 l2tag1;
+ __le16 fmd6;
+ __le32 ts_high;
+}; /* writeback */
+
+union virtchnl2_rx_desc {
+ struct virtchnl2_singleq_rx_buf_desc read;
+ struct virtchnl2_singleq_base_rx_desc base_wb;
+ struct virtchnl2_rx_flex_desc flex_wb;
+ struct virtchnl2_rx_flex_desc_nic flex_nic_wb;
+ struct virtchnl2_rx_flex_desc_sw flex_sw_wb;
+ struct virtchnl2_rx_flex_desc_nic_2 flex_nic_2_wb;
+ struct virtchnl2_rx_flex_desc_adv flex_adv_wb;
+ struct virtchnl2_rx_flex_desc_adv_nic_3 flex_adv_nic_3_wb;
+};
+
+#endif /* _VIRTCHNL_LAN_DESC_H_ */
diff --git a/drivers/net/idpf/base/virtchnl_inline_ipsec.h b/drivers/net/idpf/base/virtchnl_inline_ipsec.h
new file mode 100644
index 0000000000..902f63bd51
--- /dev/null
+++ b/drivers/net/idpf/base/virtchnl_inline_ipsec.h
@@ -0,0 +1,567 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _VIRTCHNL_INLINE_IPSEC_H_
+#define _VIRTCHNL_INLINE_IPSEC_H_
+
+#define VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM 3
+#define VIRTCHNL_IPSEC_MAX_ALGO_CAP_NUM 16
+#define VIRTCHNL_IPSEC_MAX_TX_DESC_NUM 128
+#define VIRTCHNL_IPSEC_MAX_CRYPTO_ITEM_NUMBER 2
+#define VIRTCHNL_IPSEC_MAX_KEY_LEN 128
+#define VIRTCHNL_IPSEC_MAX_SA_DESTROY_NUM 8
+#define VIRTCHNL_IPSEC_SA_DESTROY 0
+#define VIRTCHNL_IPSEC_BROADCAST_VFID 0xFFFFFFFF
+#define VIRTCHNL_IPSEC_INVALID_REQ_ID 0xFFFF
+#define VIRTCHNL_IPSEC_INVALID_SA_CFG_RESP 0xFFFFFFFF
+#define VIRTCHNL_IPSEC_INVALID_SP_CFG_RESP 0xFFFFFFFF
+
+/* crypto type */
+#define VIRTCHNL_AUTH 1
+#define VIRTCHNL_CIPHER 2
+#define VIRTCHNL_AEAD 3
+
+/* caps enabled */
+#define VIRTCHNL_IPSEC_ESN_ENA BIT(0)
+#define VIRTCHNL_IPSEC_UDP_ENCAP_ENA BIT(1)
+#define VIRTCHNL_IPSEC_SA_INDEX_SW_ENA BIT(2)
+#define VIRTCHNL_IPSEC_AUDIT_ENA BIT(3)
+#define VIRTCHNL_IPSEC_BYTE_LIMIT_ENA BIT(4)
+#define VIRTCHNL_IPSEC_DROP_ON_AUTH_FAIL_ENA BIT(5)
+#define VIRTCHNL_IPSEC_ARW_CHECK_ENA BIT(6)
+#define VIRTCHNL_IPSEC_24BIT_SPI_ENA BIT(7)
+
+/* algorithm type */
+/* Hash Algorithm */
+#define VIRTCHNL_HASH_NO_ALG 0 /* NULL algorithm */
+#define VIRTCHNL_AES_CBC_MAC 1 /* AES-CBC-MAC algorithm */
+#define VIRTCHNL_AES_CMAC 2 /* AES CMAC algorithm */
+#define VIRTCHNL_AES_GMAC 3 /* AES GMAC algorithm */
+#define VIRTCHNL_AES_XCBC_MAC 4 /* AES XCBC algorithm */
+#define VIRTCHNL_MD5_HMAC 5 /* HMAC using MD5 algorithm */
+#define VIRTCHNL_SHA1_HMAC 6 /* HMAC using 128 bit SHA algorithm */
+#define VIRTCHNL_SHA224_HMAC 7 /* HMAC using 224 bit SHA algorithm */
+#define VIRTCHNL_SHA256_HMAC 8 /* HMAC using 256 bit SHA algorithm */
+#define VIRTCHNL_SHA384_HMAC 9 /* HMAC using 384 bit SHA algorithm */
+#define VIRTCHNL_SHA512_HMAC 10 /* HMAC using 512 bit SHA algorithm */
+#define VIRTCHNL_SHA3_224_HMAC 11 /* HMAC using 224 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_256_HMAC 12 /* HMAC using 256 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_384_HMAC 13 /* HMAC using 384 bit SHA3 algorithm */
+#define VIRTCHNL_SHA3_512_HMAC 14 /* HMAC using 512 bit SHA3 algorithm */
+/* Cipher Algorithm */
+#define VIRTCHNL_CIPHER_NO_ALG 15 /* NULL algorithm */
+#define VIRTCHNL_3DES_CBC 16 /* Triple DES algorithm in CBC mode */
+#define VIRTCHNL_AES_CBC 17 /* AES algorithm in CBC mode */
+#define VIRTCHNL_AES_CTR 18 /* AES algorithm in Counter mode */
+/* AEAD Algorithm */
+#define VIRTCHNL_AES_CCM 19 /* AES algorithm in CCM mode */
+#define VIRTCHNL_AES_GCM 20 /* AES algorithm in GCM mode */
+#define VIRTCHNL_CHACHA20_POLY1305 21 /* algorithm of ChaCha20-Poly1305 */
+
+/* protocol type */
+#define VIRTCHNL_PROTO_ESP 1
+#define VIRTCHNL_PROTO_AH 2
+#define VIRTCHNL_PROTO_RSVD1 3
+
+/* sa mode */
+#define VIRTCHNL_SA_MODE_TRANSPORT 1
+#define VIRTCHNL_SA_MODE_TUNNEL 2
+#define VIRTCHNL_SA_MODE_TRAN_TUN 3
+#define VIRTCHNL_SA_MODE_UNKNOWN 4
+
+/* sa direction */
+#define VIRTCHNL_DIR_INGRESS 1
+#define VIRTCHNL_DIR_EGRESS 2
+#define VIRTCHNL_DIR_INGRESS_EGRESS 3
+
+/* sa termination */
+#define VIRTCHNL_TERM_SOFTWARE 1
+#define VIRTCHNL_TERM_HARDWARE 2
+
+/* sa ip type */
+#define VIRTCHNL_IPV4 1
+#define VIRTCHNL_IPV6 2
+
+/* for virtchnl_ipsec_resp */
+enum inline_ipsec_resp {
+ INLINE_IPSEC_SUCCESS = 0,
+ INLINE_IPSEC_FAIL = -1,
+ INLINE_IPSEC_ERR_FIFO_FULL = -2,
+ INLINE_IPSEC_ERR_NOT_READY = -3,
+ INLINE_IPSEC_ERR_VF_DOWN = -4,
+ INLINE_IPSEC_ERR_INVALID_PARAMS = -5,
+ INLINE_IPSEC_ERR_NO_MEM = -6,
+};
+
+/* Detailed opcodes for DPDK and IPsec use */
+enum inline_ipsec_ops {
+ INLINE_IPSEC_OP_GET_CAP = 0,
+ INLINE_IPSEC_OP_GET_STATUS = 1,
+ INLINE_IPSEC_OP_SA_CREATE = 2,
+ INLINE_IPSEC_OP_SA_UPDATE = 3,
+ INLINE_IPSEC_OP_SA_DESTROY = 4,
+ INLINE_IPSEC_OP_SP_CREATE = 5,
+ INLINE_IPSEC_OP_SP_DESTROY = 6,
+ INLINE_IPSEC_OP_SA_READ = 7,
+ INLINE_IPSEC_OP_EVENT = 8,
+ INLINE_IPSEC_OP_RESP = 9,
+};
+
+#pragma pack(1)
+/* Not all fields are valid; if a field is invalid, all its bits are set to 1 */
+struct virtchnl_algo_cap {
+ u32 algo_type;
+
+ u16 block_size;
+
+ u16 min_key_size;
+ u16 max_key_size;
+ u16 inc_key_size;
+
+ u16 min_iv_size;
+ u16 max_iv_size;
+ u16 inc_iv_size;
+
+ u16 min_digest_size;
+ u16 max_digest_size;
+ u16 inc_digest_size;
+
+ u16 min_aad_size;
+ u16 max_aad_size;
+ u16 inc_aad_size;
+};
+#pragma pack()
+
+/* VF records the crypto capabilities received from the PF over virtchnl */
+struct virtchnl_sym_crypto_cap {
+ u8 crypto_type;
+ u8 algo_cap_num;
+ struct virtchnl_algo_cap algo_cap_list[VIRTCHNL_IPSEC_MAX_ALGO_CAP_NUM];
+};
+
+/* VIRTCHNL_OP_GET_IPSEC_CAP
+ * VF passes virtchnl_ipsec_cap to the PF
+ * and the PF returns the IPsec capabilities over virtchnl.
+ */
+#pragma pack(1)
+struct virtchnl_ipsec_cap {
+ /* max number of SA per VF */
+ u16 max_sa_num;
+
+ /* IPsec SA Protocol - value ref VIRTCHNL_PROTO_XXX */
+ u8 virtchnl_protocol_type;
+
+ /* IPsec SA Mode - value ref VIRTCHNL_SA_MODE_XXX */
+ u8 virtchnl_sa_mode;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 termination_mode;
+
+ /* number of supported crypto capability */
+ u8 crypto_cap_num;
+
+ /* descriptor ID */
+ u16 desc_id;
+
+ /* capabilities enabled - value ref VIRTCHNL_IPSEC_XXX_ENA */
+ u32 caps_enabled;
+
+ /* crypto capabilities */
+ struct virtchnl_sym_crypto_cap cap[VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM];
+};
+
+/* configuration of crypto function */
+struct virtchnl_ipsec_crypto_cfg_item {
+ u8 crypto_type;
+
+ u32 algo_type;
+
+ /* Length of valid IV data. */
+ u16 iv_len;
+
+ /* Length of digest */
+ u16 digest_len;
+
+ /* SA salt */
+ u32 salt;
+
+ /* The length of the symmetric key */
+ u16 key_len;
+
+ /* key data buffer */
+ u8 key_data[VIRTCHNL_IPSEC_MAX_KEY_LEN];
+};
+#pragma pack()
+
+struct virtchnl_ipsec_sym_crypto_cfg {
+ struct virtchnl_ipsec_crypto_cfg_item
+ items[VIRTCHNL_IPSEC_MAX_CRYPTO_ITEM_NUMBER];
+};
+
+#pragma pack(1)
+/* VIRTCHNL_OP_IPSEC_SA_CREATE
+ * VF sends this SA configuration to the PF using virtchnl;
+ * the PF creates the SA from this configuration and the PF driver
+ * returns a unique index (sa_idx) for the created SA.
+ */
+struct virtchnl_ipsec_sa_cfg {
+ /* IPsec SA Protocol - AH/ESP */
+ u8 virtchnl_protocol_type;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 virtchnl_termination;
+
+ /* type of outer IP - IPv4/IPv6 */
+ u8 virtchnl_ip_type;
+
+ /* type of esn - !0:enable/0:disable */
+ u8 esn_enabled;
+
+ /* udp encap - !0:enable/0:disable */
+ u8 udp_encap_enabled;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* reserved */
+ u8 reserved1;
+
+ /* SA security parameter index */
+ u32 spi;
+
+ /* outer src ip address */
+ u8 src_addr[16];
+
+ /* outer dst ip address */
+ u8 dst_addr[16];
+
+ /* SPD reference. Used to link an SA with its policy.
+ * PF drivers may ignore this field.
+ */
+ u16 spd_ref;
+
+ /* high 32 bits of esn */
+ u32 esn_hi;
+
+ /* low 32 bits of esn */
+ u32 esn_low;
+
+ /* When enabled, sa_index must be valid */
+ u8 sa_index_en;
+
+ /* SA index when sa_index_en is true */
+ u32 sa_index;
+
+ /* auditing mode - enable/disable */
+ u8 audit_en;
+
+ /* lifetime byte limit - enable/disable
+ * When enabled, byte_limit_hard and byte_limit_soft
+ * must be valid.
+ */
+ u8 byte_limit_en;
+
+ /* hard byte limit count */
+ u64 byte_limit_hard;
+
+ /* soft byte limit count */
+ u64 byte_limit_soft;
+
+ /* drop on authentication failure - enable/disable */
+ u8 drop_on_auth_fail_en;
+
+ /* anti-replay window check - enable/disable
+ * When enabled, arw_size must be valid.
+ */
+ u8 arw_check_en;
+
+ /* size of arw window, offset by 1. Setting to 0
+ * represents ARW window size of 1. Setting to 127
+ * represents ARW window size of 128
+ */
+ u8 arw_size;
+
+ /* no ip offload mode - enable/disable
+ * When enabled, ip type and address must not be valid.
+ */
+ u8 no_ip_offload_en;
+
+ /* SA Domain. Used to logically separate an SADB into groups.
+ * PF drivers supporting a single group ignore this field.
+ */
+ u16 sa_domain;
+
+ /* crypto configuration */
+ struct virtchnl_ipsec_sym_crypto_cfg crypto_cfg;
+};
+#pragma pack()
+
+/* VIRTCHNL_OP_IPSEC_SA_UPDATE
+ * VF sends the SA index and new configuration to the PF;
+ * the PF updates the SA accordingly.
+ */
+struct virtchnl_ipsec_sa_update {
+ u32 sa_index; /* SA to update */
+ u32 esn_hi; /* high 32 bits of esn */
+ u32 esn_low; /* low 32 bits of esn */
+};
+
+#pragma pack(1)
+/* VIRTCHNL_OP_IPSEC_SA_DESTROY
+ * VF sends the SA index configuration to the PF;
+ * the PF destroys the SA(s) accordingly.
+ * The flag bitmap indicates whether all SAs or only the
+ * selected SAs will be destroyed.
+ */
+struct virtchnl_ipsec_sa_destroy {
+ /* An all-zero bitmap indicates that all SAs will be destroyed.
+ * A non-zero bitmap indicates that the selected SAs in
+ * array sa_index will be destroyed.
+ */
+ u8 flag;
+
+ /* selected SA index */
+ u32 sa_index[VIRTCHNL_IPSEC_MAX_SA_DESTROY_NUM];
+};
+
+/* VIRTCHNL_OP_IPSEC_SA_READ
+ * VF sends this request to the PF using virtchnl;
+ * the PF reads the SA and returns the configuration of the created SA.
+ */
+struct virtchnl_ipsec_sa_read {
+ /* SA valid - invalid/valid */
+ u8 valid;
+
+ /* SA active - inactive/active */
+ u8 active;
+
+ /* SA SN rollover - not_rollover/rollover */
+ u8 sn_rollover;
+
+ /* IPsec SA Protocol - AH/ESP */
+ u8 virtchnl_protocol_type;
+
+ /* termination mode - value ref VIRTCHNL_TERM_XXX */
+ u8 virtchnl_termination;
+
+ /* auditing mode - enable/disable */
+ u8 audit_en;
+
+ /* lifetime byte limit - enable/disable
+ * When enabled, byte_limit_hard and byte_limit_soft
+ * must be valid.
+ */
+ u8 byte_limit_en;
+
+ /* hard byte limit count */
+ u64 byte_limit_hard;
+
+ /* soft byte limit count */
+ u64 byte_limit_soft;
+
+ /* drop on authentication failure - enable/disable */
+ u8 drop_on_auth_fail_en;
+
+ /* anti-replay window check - enable/disable
+ * When enabled, arw_size, arw_top, and arw must be valid
+ */
+ u8 arw_check_en;
+
+ /* size of arw window, offset by 1. Setting to 0
+ * represents ARW window size of 1. Setting to 127
+ * represents ARW window size of 128
+ */
+ u8 arw_size;
+
+ /* reserved */
+ u8 reserved1;
+
+ /* top of anti-replay-window */
+ u64 arw_top;
+
+ /* anti-replay-window */
+ u8 arw[16];
+
+ /* packets processed */
+ u64 packets_processed;
+
+ /* bytes processed */
+ u64 bytes_processed;
+
+ /* packets dropped */
+ u32 packets_dropped;
+
+ /* authentication failures */
+ u32 auth_fails;
+
+ /* ARW check failures */
+ u32 arw_fails;
+
+ /* type of esn - enable/disable */
+ u8 esn;
+
+ /* IPSec SA Direction - value ref VIRTCHNL_DIR_XXX */
+ u8 virtchnl_direction;
+
+ /* SA security parameter index */
+ u32 spi;
+
+ /* SA salt */
+ u32 salt;
+
+ /* high 32 bits of esn */
+ u32 esn_hi;
+
+ /* low 32 bits of esn */
+ u32 esn_low;
+
+ /* SA Domain. Used to logically separate an SADB into groups.
+ * PF drivers supporting a single group ignore this field.
+ */
+ u16 sa_domain;
+
+ /* SPD reference. Used to link an SA with its policy.
+ * PF drivers may ignore this field.
+ */
+ u16 spd_ref;
+
+ /* crypto configuration. Salt and keys are set to 0 */
+ struct virtchnl_ipsec_sym_crypto_cfg crypto_cfg;
+};
+#pragma pack()
+
+/* Add allowlist entry in IES */
+struct virtchnl_ipsec_sp_cfg {
+ u32 spi;
+ u32 dip[4];
+
+ /* Drop frame if true or redirect to QAT if false. */
+ u8 drop;
+
+ /* Congestion domain. For future use. */
+ u8 cgd;
+
+ /* 0 for IPv4 table, 1 for IPv6 table. */
+ u8 table_id;
+
+ /* Set TC (congestion domain) if true. For future use. */
+ u8 set_tc;
+
+ /* 0 for NAT-T unsupported, 1 for NAT-T supported */
+ u8 is_udp;
+
+ /* reserved */
+ u8 reserved;
+
+ /* NAT-T UDP port number. Only valid in case NAT-T supported */
+ u16 udp_port;
+};
+
+#pragma pack(1)
+/* Delete allowlist entry in IES */
+struct virtchnl_ipsec_sp_destroy {
+ /* 0 for IPv4 table, 1 for IPv6 table. */
+ u8 table_id;
+ u32 rule_id;
+};
+#pragma pack()
+
+/* Response from IES to allowlist operations */
+struct virtchnl_ipsec_sp_cfg_resp {
+ u32 rule_id;
+};
+
+struct virtchnl_ipsec_sa_cfg_resp {
+ u32 sa_handle;
+};
+
+#define INLINE_IPSEC_EVENT_RESET 0x1
+#define INLINE_IPSEC_EVENT_CRYPTO_ON 0x2
+#define INLINE_IPSEC_EVENT_CRYPTO_OFF 0x4
+
+struct virtchnl_ipsec_event {
+ u32 ipsec_event_data;
+};
+
+#define INLINE_IPSEC_STATUS_AVAILABLE 0x1
+#define INLINE_IPSEC_STATUS_UNAVAILABLE 0x2
+
+struct virtchnl_ipsec_status {
+ u32 status;
+};
+
+struct virtchnl_ipsec_resp {
+ u32 resp;
+};
+
+/* Internal message descriptor for VF <-> IPsec communication */
+struct inline_ipsec_msg {
+ u16 ipsec_opcode;
+ u16 req_id;
+
+ union {
+ /* IPsec request */
+ struct virtchnl_ipsec_sa_cfg sa_cfg[0];
+ struct virtchnl_ipsec_sp_cfg sp_cfg[0];
+ struct virtchnl_ipsec_sa_update sa_update[0];
+ struct virtchnl_ipsec_sa_destroy sa_destroy[0];
+ struct virtchnl_ipsec_sp_destroy sp_destroy[0];
+
+ /* IPsec response */
+ struct virtchnl_ipsec_sa_cfg_resp sa_cfg_resp[0];
+ struct virtchnl_ipsec_sp_cfg_resp sp_cfg_resp[0];
+ struct virtchnl_ipsec_cap ipsec_cap[0];
+ struct virtchnl_ipsec_status ipsec_status[0];
+ /* response to del_sa, del_sp, update_sa */
+ struct virtchnl_ipsec_resp ipsec_resp[0];
+
+ /* IPsec event (no req_id is required) */
+ struct virtchnl_ipsec_event event[0];
+
+ /* Reserved */
+ struct virtchnl_ipsec_sa_read sa_read[0];
+ } ipsec_data;
+};
+
+static inline u16 virtchnl_inline_ipsec_val_msg_len(u16 opcode)
+{
+ u16 valid_len = sizeof(struct inline_ipsec_msg);
+
+ switch (opcode) {
+ case INLINE_IPSEC_OP_GET_CAP:
+ case INLINE_IPSEC_OP_GET_STATUS:
+ break;
+ case INLINE_IPSEC_OP_SA_CREATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_cfg);
+ break;
+ case INLINE_IPSEC_OP_SP_CREATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sp_cfg);
+ break;
+ case INLINE_IPSEC_OP_SA_UPDATE:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_update);
+ break;
+ case INLINE_IPSEC_OP_SA_DESTROY:
+ valid_len += sizeof(struct virtchnl_ipsec_sa_destroy);
+ break;
+ case INLINE_IPSEC_OP_SP_DESTROY:
+ valid_len += sizeof(struct virtchnl_ipsec_sp_destroy);
+ break;
+ /* Only for msg length calculation of the response to the VF in case of
+ * inline ipsec failure.
+ */
+ case INLINE_IPSEC_OP_RESP:
+ valid_len += sizeof(struct virtchnl_ipsec_resp);
+ break;
+ default:
+ valid_len = 0;
+ break;
+ }
+
+ return valid_len;
+}
+
+#endif /* _VIRTCHNL_INLINE_IPSEC_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
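For readers of the header above: virtchnl_inline_ipsec_val_msg_len() returns the
minimum message length expected for a given opcode, so a receiver can validate an
incoming inline_ipsec_msg before touching the payload union. Below is a minimal
sketch of such a check, assuming only the definitions from virtchnl_inline_ipsec.h;
the function name and calling context are illustrative and not part of the patch.

/* Illustrative only: validate an SA-create request before using its payload.
 * Assumes virtchnl_inline_ipsec.h (above) has been included.
 */
static int example_check_sa_create(const struct inline_ipsec_msg *msg,
                                   u16 msg_len)
{
        u16 min_len = virtchnl_inline_ipsec_val_msg_len(INLINE_IPSEC_OP_SA_CREATE);

        if (msg->ipsec_opcode != INLINE_IPSEC_OP_SA_CREATE)
                return INLINE_IPSEC_ERR_INVALID_PARAMS;

        /* a zero return means the opcode is unknown to the helper */
        if (min_len == 0 || msg_len < min_len)
                return INLINE_IPSEC_ERR_INVALID_PARAMS;

        /* msg->ipsec_data.sa_cfg[0] is now known to be fully present */
        return INLINE_IPSEC_SUCCESS;
}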
* [RFC v3 02/11] net/idpf/base: add OS specific implementation
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
2022-05-18 8:25 ` [RFC v3 01/11] net/idpf/base: introduce base code Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 03/11] net/idpf: support device initialization Junfeng Guo
` (8 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add some macro definitions and small helper functions which are
specific to DPDK.
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/base/iecm_osdep.h | 365 +++++++++++++++++++++++++++++
1 file changed, 365 insertions(+)
create mode 100644 drivers/net/idpf/base/iecm_osdep.h
diff --git a/drivers/net/idpf/base/iecm_osdep.h b/drivers/net/idpf/base/iecm_osdep.h
new file mode 100644
index 0000000000..60e21fbc1b
--- /dev/null
+++ b/drivers/net/idpf/base/iecm_osdep.h
@@ -0,0 +1,365 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2001-2022 Intel Corporation
+ */
+
+#ifndef _IECM_OSDEP_H_
+#define _IECM_OSDEP_H_
+
+#include <string.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_memcpy.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_byteorder.h>
+#include <rte_cycles.h>
+#include <rte_spinlock.h>
+#include <rte_log.h>
+#include <rte_random.h>
+#include <rte_io.h>
+
+#include "../idpf_logs.h"
+
+#define INLINE inline
+#define STATIC static
+
+typedef uint8_t u8;
+typedef int8_t s8;
+typedef uint16_t u16;
+typedef int16_t s16;
+typedef uint32_t u32;
+typedef int32_t s32;
+typedef uint64_t u64;
+typedef int64_t s64;
+
+typedef enum iecm_status iecm_status;
+typedef struct iecm_lock iecm_lock;
+
+#define __iomem
+#define hw_dbg(hw, S, A...) do {} while (0)
+#define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
+#define lower_32_bits(n) ((u32)(n))
+#define low_16_bits(x) ((x) & 0xFFFF)
+#define high_16_bits(x) (((x) & 0xFFFF0000) >> 16)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN 6
+#endif
+
+#ifndef __le16
+#define __le16 uint16_t
+#endif
+#ifndef __le32
+#define __le32 uint32_t
+#endif
+#ifndef __le64
+#define __le64 uint64_t
+#endif
+#ifndef __be16
+#define __be16 uint16_t
+#endif
+#ifndef __be32
+#define __be32 uint32_t
+#endif
+#ifndef __be64
+#define __be64 uint64_t
+#endif
+
+#ifndef __always_unused
+#define __always_unused __attribute__((__unused__))
+#endif
+#ifndef __maybe_unused
+#define __maybe_unused __attribute__((__unused__))
+#endif
+#ifndef __packed
+#define __packed __attribute__((packed))
+#endif
+
+#ifndef BIT_ULL
+#define BIT_ULL(a) (1ULL << (a))
+#endif
+
+#ifndef BIT
+#define BIT(a) (1ULL << (a))
+#endif
+
+#define FALSE 0
+#define TRUE 1
+#define false 0
+#define true 1
+
+#define min(a, b) RTE_MIN(a, b)
+#define max(a, b) RTE_MAX(a, b)
+
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof(arr[0]))
+#define FIELD_SIZEOF(t, f) (sizeof(((t *)0)->f))
+#define MAKEMASK(m, s) ((m) << (s))
+
+#define DEBUGOUT(S) PMD_DRV_LOG_RAW(DEBUG, S)
+#define DEBUGOUT2(S, A...) PMD_DRV_LOG_RAW(DEBUG, S, ##A)
+#define DEBUGFUNC(F) PMD_DRV_LOG_RAW(DEBUG, F)
+
+#define iecm_debug(h, m, s, ...) \
+ do { \
+ if (((m) & (h)->debug_mask)) \
+ PMD_DRV_LOG_RAW(DEBUG, "iecm %02x.%x " s, \
+ (h)->bus.device, (h)->bus.func, \
+ ##__VA_ARGS__); \
+ } while (0)
+
+#define iecm_info(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_warn(hw, fmt, args...) iecm_debug(hw, IECM_DBG_ALL, fmt, ##args)
+#define iecm_debug_array(hw, type, rowsize, groupsize, buf, len) \
+ do { \
+ struct iecm_hw *hw_l = hw; \
+ u16 len_l = len; \
+ u8 *buf_l = buf; \
+ int i; \
+ for (i = 0; i < len_l; i += 8) \
+ iecm_debug(hw_l, type, \
+ "0x%04X 0x%016"PRIx64"\n", \
+ i, *((u64 *)((buf_l) + i))); \
+ } while (0)
+#define iecm_snprintf snprintf
+#ifndef SNPRINTF
+#define SNPRINTF iecm_snprintf
+#endif
+
+#define IECM_PCI_REG(reg) rte_read32(reg)
+#define IECM_PCI_REG_ADDR(a, reg) \
+ ((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define IECM_PCI_REG64(reg) rte_read64(reg)
+#define IECM_PCI_REG_ADDR64(a, reg) \
+ ((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
+
+#define iecm_wmb() rte_io_wmb()
+#define iecm_rmb() rte_io_rmb()
+#define iecm_mb() rte_io_mb()
+
+static inline uint32_t iecm_read_addr(volatile void *addr)
+{
+ return rte_le_to_cpu_32(IECM_PCI_REG(addr));
+}
+
+static inline uint64_t iecm_read_addr64(volatile void *addr)
+{
+ return rte_le_to_cpu_64(IECM_PCI_REG64(addr));
+}
+
+#define IECM_PCI_REG_WRITE(reg, value) \
+ rte_write32((rte_cpu_to_le_32(value)), reg)
+
+#define IECM_PCI_REG_WRITE64(reg, value) \
+ rte_write64((rte_cpu_to_le_64(value)), reg)
+
+#define IECM_READ_REG(hw, reg) iecm_read_addr(IECM_PCI_REG_ADDR((hw), (reg)))
+#define IECM_WRITE_REG(hw, reg, value) \
+ IECM_PCI_REG_WRITE(IECM_PCI_REG_ADDR((hw), (reg)), (value))
+
+#define rd32(a, reg) iecm_read_addr(IECM_PCI_REG_ADDR((a), (reg)))
+#define wr32(a, reg, value) \
+ IECM_PCI_REG_WRITE(IECM_PCI_REG_ADDR((a), (reg)), (value))
+#define div64_long(n, d) ((n) / (d))
+#define rd64(a, reg) iecm_read_addr64(IECM_PCI_REG_ADDR64((a), (reg)))
+
+#define BITS_PER_BYTE 8
+
+/* memory allocation tracking */
+struct iecm_dma_mem {
+ void *va;
+ u64 pa;
+ u32 size;
+ const void *zone;
+} __attribute__((packed));
+
+struct iecm_virt_mem {
+ void *va;
+ u32 size;
+} __attribute__((packed));
+
+#define iecm_malloc(h, s) rte_zmalloc(NULL, s, 0)
+#define iecm_calloc(h, c, s) rte_zmalloc(NULL, (c) * (s), 0)
+#define iecm_free(h, m) rte_free(m)
+
+#define iecm_memset(a, b, c, d) memset((a), (b), (c))
+#define iecm_memcpy(a, b, c, d) rte_memcpy((a), (b), (c))
+#define iecm_memdup(a, b, c, d) rte_memcpy(iecm_malloc(a, c), b, c)
+
+#define CPU_TO_BE16(o) rte_cpu_to_be_16(o)
+#define CPU_TO_BE32(o) rte_cpu_to_be_32(o)
+#define CPU_TO_BE64(o) rte_cpu_to_be_64(o)
+#define CPU_TO_LE16(o) rte_cpu_to_le_16(o)
+#define CPU_TO_LE32(s) rte_cpu_to_le_32(s)
+#define CPU_TO_LE64(h) rte_cpu_to_le_64(h)
+#define LE16_TO_CPU(a) rte_le_to_cpu_16(a)
+#define LE32_TO_CPU(c) rte_le_to_cpu_32(c)
+#define LE64_TO_CPU(k) rte_le_to_cpu_64(k)
+
+#define NTOHS(a) rte_be_to_cpu_16(a)
+#define NTOHL(a) rte_be_to_cpu_32(a)
+#define HTONS(a) rte_cpu_to_be_16(a)
+#define HTONL(a) rte_cpu_to_be_32(a)
+
+/* SW spinlock */
+struct iecm_lock {
+ rte_spinlock_t spinlock;
+};
+
+static inline void
+iecm_init_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_init(&sp->spinlock);
+}
+
+static inline void
+iecm_acquire_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_lock(&sp->spinlock);
+}
+
+static inline void
+iecm_release_lock(struct iecm_lock *sp)
+{
+ rte_spinlock_unlock(&sp->spinlock);
+}
+
+static inline void
+iecm_destroy_lock(__attribute__((unused)) struct iecm_lock *sp)
+{
+}
+
+struct iecm_hw;
+
+static inline void *
+iecm_alloc_dma_mem(__attribute__((unused)) struct iecm_hw *hw,
+ struct iecm_dma_mem *mem, u64 size)
+{
+ const struct rte_memzone *mz = NULL;
+ char z_name[RTE_MEMZONE_NAMESIZE];
+
+ if (!mem)
+ return NULL;
+
+ snprintf(z_name, sizeof(z_name), "iecm_dma_%"PRIu64, rte_rand());
+ mz = rte_memzone_reserve_aligned(z_name, size, SOCKET_ID_ANY,
+ RTE_MEMZONE_IOVA_CONTIG, RTE_PGSIZE_4K);
+ if (!mz)
+ return NULL;
+
+ mem->size = size;
+ mem->va = mz->addr;
+ mem->pa = mz->iova;
+ mem->zone = (const void *)mz;
+ memset(mem->va, 0, size);
+
+ return mem->va;
+}
+
+static inline void
+iecm_free_dma_mem(__attribute__((unused)) struct iecm_hw *hw,
+ struct iecm_dma_mem *mem)
+{
+ rte_memzone_free((const struct rte_memzone *)mem->zone);
+ mem->size = 0;
+ mem->va = NULL;
+ mem->pa = 0;
+}
+
+static inline u8
+iecm_hweight8(u32 num)
+{
+ u8 bits = 0;
+ u32 i;
+
+ for (i = 0; i < 8; i++) {
+ bits += (u8)(num & 0x1);
+ num >>= 1;
+ }
+
+ return bits;
+}
+
+static inline u8
+iecm_hweight32(u32 num)
+{
+ u8 bits = 0;
+ u32 i;
+
+ for (i = 0; i < 32; i++) {
+ bits += (u8)(num & 0x1);
+ num >>= 1;
+ }
+
+ return bits;
+}
+
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+#define DELAY(x) rte_delay_us(x)
+#define iecm_usec_delay(x) rte_delay_us(x)
+#define iecm_msec_delay(x, y) rte_delay_us(1000 * (x))
+#define udelay(x) DELAY(x)
+#define msleep(x) DELAY(1000 * (x))
+#define usleep_range(min, max) msleep(DIV_ROUND_UP(min, 1000))
+
+#ifndef IECM_DBG_TRACE
+#define IECM_DBG_TRACE BIT_ULL(0)
+#endif
+
+#ifndef DIVIDE_AND_ROUND_UP
+#define DIVIDE_AND_ROUND_UP(a, b) (((a) + (b) - 1) / (b))
+#endif
+
+#ifndef IECM_INTEL_VENDOR_ID
+#define IECM_INTEL_VENDOR_ID 0x8086
+#endif
+
+#ifndef IS_UNICAST_ETHER_ADDR
+#define IS_UNICAST_ETHER_ADDR(addr) \
+ ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 0))
+#endif
+
+#ifndef IS_MULTICAST_ETHER_ADDR
+#define IS_MULTICAST_ETHER_ADDR(addr) \
+ ((bool)((((u8 *)(addr))[0] % ((u8)0x2)) == 1))
+#endif
+
+#ifndef IS_BROADCAST_ETHER_ADDR
+/* Check whether an address is broadcast. */
+#define IS_BROADCAST_ETHER_ADDR(addr) \
+ ((bool)((((u16 *)(addr))[0] == ((u16)0xffff))))
+#endif
+
+#ifndef IS_ZERO_ETHER_ADDR
+#define IS_ZERO_ETHER_ADDR(addr) \
+ (((bool)((((u16 *)(addr))[0] == ((u16)0x0)))) && \
+ ((bool)((((u16 *)(addr))[1] == ((u16)0x0)))) && \
+ ((bool)((((u16 *)(addr))[2] == ((u16)0x0)))))
+#endif
+
+#ifndef LIST_HEAD_TYPE
+#define LIST_HEAD_TYPE(list_name, type) LIST_HEAD(list_name, type)
+#endif
+
+#ifndef LIST_ENTRY_TYPE
+#define LIST_ENTRY_TYPE(type) LIST_ENTRY(type)
+#endif
+
+#ifndef LIST_FOR_EACH_ENTRY_SAFE
+#define LIST_FOR_EACH_ENTRY_SAFE(pos, temp, head, entry_type, list) \
+ LIST_FOREACH(pos, head, list)
+
+#endif
+
+#ifndef LIST_FOR_EACH_ENTRY
+#define LIST_FOR_EACH_ENTRY(pos, head, entry_type, list) \
+ LIST_FOREACH(pos, head, list)
+
+#endif
+
+#endif /* _IECM_OSDEP_H_ */
--
2.25.1
^ permalink raw reply [flat|nested] 33+ messages in thread
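A usage illustration for the shim above, not taken from the driver code:
iecm_alloc_dma_mem() reserves a zeroed, IOVA-contiguous memzone and records its
virtual address, bus address and size in struct iecm_dma_mem, while
iecm_free_dma_mem() releases the zone. A minimal sketch follows; the 4 KiB size
and the surrounding function are assumptions.

/* Illustrative only: allocate and release one DMA-able buffer with the
 * osdep helpers above; assumes iecm_osdep.h is included and hw is valid.
 */
static int example_dma_roundtrip(struct iecm_hw *hw)
{
        struct iecm_dma_mem mem = {0};

        /* 4 KiB, zeroed; va/pa/size/zone are filled in on success */
        if (!iecm_alloc_dma_mem(hw, &mem, 4096))
                return -1;

        /* hardware would be programmed with mem.pa, software uses mem.va */

        iecm_free_dma_mem(hw, &mem);
        return 0;
}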
* [RFC v3 03/11] net/idpf: support device initialization
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
2022-05-18 8:25 ` [RFC v3 01/11] net/idpf/base: introduce base code Junfeng Guo
2022-05-18 8:25 ` [RFC v3 02/11] net/idpf/base: add OS specific implementation Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 04/11] net/idpf: support queue ops Junfeng Guo
` (7 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing
Cc: dev, junfeng.guo, Xiaoyun Li, Xiao Wang
Support dev init and add dev ops for IDPF PMD:
dev_configure
dev_start
dev_stop
dev_close
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
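One usage note for this patch (the PCI address below is a placeholder, not taken
from the patch): the probe path creates one ethdev per vport, and the number of
vports is taken from the vport_num devarg, defaulting to 1. A test run might
therefore look like:

  dpdk-testpmd -a 0000:af:00.0,vport_num=2 -- -i

which probes the device once and registers two idpf vport ethdevs
(idpf_vport_0 and idpf_vport_1).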
drivers/net/idpf/idpf_ethdev.c | 655 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 200 ++++++++++
drivers/net/idpf/idpf_logs.h | 38 ++
drivers/net/idpf/idpf_vchnl.c | 475 ++++++++++++++++++++++++
drivers/net/idpf/meson.build | 18 +
drivers/net/idpf/version.map | 3 +
drivers/net/meson.build | 1 +
7 files changed, 1390 insertions(+)
create mode 100644 drivers/net/idpf/idpf_ethdev.c
create mode 100644 drivers/net/idpf/idpf_ethdev.h
create mode 100644 drivers/net/idpf/idpf_logs.h
create mode 100644 drivers/net/idpf/idpf_vchnl.c
create mode 100644 drivers/net/idpf/meson.build
create mode 100644 drivers/net/idpf/version.map
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
new file mode 100644
index 0000000000..c28f0c45f5
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -0,0 +1,655 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+
+#include "idpf_ethdev.h"
+
+#define VPORT_NUM "vport_num"
+
+struct idpf_adapter *adapter;
+uint16_t vport_num = 1;
+
+static const char * const idpf_valid_args[] = {
+ VPORT_NUM,
+ NULL
+};
+
+static int idpf_dev_configure(struct rte_eth_dev *dev);
+static int idpf_dev_start(struct rte_eth_dev *dev);
+static int idpf_dev_stop(struct rte_eth_dev *dev);
+static int idpf_dev_close(struct rte_eth_dev *dev);
+
+static const struct eth_dev_ops idpf_eth_dev_ops = {
+ .dev_configure = idpf_dev_configure,
+ .dev_start = idpf_dev_start,
+ .dev_stop = idpf_dev_stop,
+ .dev_close = idpf_dev_close,
+};
+
+
+static int
+idpf_init_vport_req_info(struct rte_eth_dev *dev)
+{
+ struct virtchnl2_create_vport *vport_info;
+ uint16_t idx = adapter->next_vport_idx;
+
+ if (!adapter->vport_req_info[idx]) {
+ adapter->vport_req_info[idx] = rte_zmalloc(NULL,
+ sizeof(struct virtchnl2_create_vport), 0);
+ if (!adapter->vport_req_info[idx]) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info");
+ return -1;
+ }
+ }
+
+ vport_info =
+ (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+
+ vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+
+ return 0;
+}
+
+static uint16_t
+idpf_get_next_vport_idx(struct idpf_vport **vports, uint16_t max_vport_nb,
+ uint16_t cur_vport_idx)
+{
+ uint16_t vport_idx;
+ uint16_t i;
+
+ if (cur_vport_idx + 1 < max_vport_nb && !vports[cur_vport_idx + 1]) {
+ vport_idx = cur_vport_idx + 1;
+ return vport_idx;
+ }
+
+ for (i = 0; i < max_vport_nb; i++) {
+ if (!vports[i])
+ break;
+ }
+
+ if (i == max_vport_nb)
+ vport_idx = IDPF_INVALID_VPORT_IDX;
+ else
+ vport_idx = i;
+
+ return vport_idx;
+}
+
+#ifndef IDPF_RSS_KEY_LEN
+#define IDPF_RSS_KEY_LEN 52
+#endif
+
+static int
+idpf_init_vport(struct rte_eth_dev *dev)
+{
+ uint16_t idx = adapter->next_vport_idx;
+ struct virtchnl2_create_vport *vport_info =
+ (struct virtchnl2_create_vport *)adapter->vport_recv_info[idx];
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int i;
+
+ vport->adapter = adapter;
+ vport->vport_id = vport_info->vport_id;
+ vport->txq_model = vport_info->txq_model;
+ vport->rxq_model = vport_info->rxq_model;
+ vport->num_tx_q = vport_info->num_tx_q;
+ vport->num_tx_complq = vport_info->num_tx_complq;
+ vport->num_rx_q = vport_info->num_rx_q;
+ vport->num_rx_bufq = vport_info->num_rx_bufq;
+ vport->max_mtu = vport_info->max_mtu;
+ rte_memcpy(vport->default_mac_addr,
+ vport_info->default_mac_addr, ETH_ALEN);
+ vport->rss_algorithm = vport_info->rss_algorithm;
+ vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+ vport_info->rss_key_size);
+ vport->rss_lut_size = vport_info->rss_lut_size;
+ vport->sw_idx = idx;
+
+ for (i = 0; i < vport_info->chunks.num_chunks; i++) {
+ if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_TX) {
+ vport->chunks_info.tx_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.tx_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.tx_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_RX) {
+ vport->chunks_info.rx_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.rx_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.rx_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) {
+ vport->chunks_info.tx_compl_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.tx_compl_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.tx_compl_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ } else if (vport_info->chunks.chunks[i].type ==
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) {
+ vport->chunks_info.rx_buf_start_qid =
+ vport_info->chunks.chunks[i].start_queue_id;
+ vport->chunks_info.rx_buf_qtail_start =
+ vport_info->chunks.chunks[i].qtail_reg_start;
+ vport->chunks_info.rx_buf_qtail_spacing =
+ vport_info->chunks.chunks[i].qtail_reg_spacing;
+ }
+ }
+
+ adapter->vports[idx] = vport;
+ adapter->cur_vport_nb++;
+ adapter->next_vport_idx = idpf_get_next_vport_idx(adapter->vports,
+ adapter->max_vport_nb, idx);
+ if (adapter->next_vport_idx == IDPF_INVALID_VPORT_IDX) {
+ PMD_INIT_LOG(ERR, "Failed to get next vport id");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+idpf_dev_configure(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int ret = 0;
+
+ if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+ dev->data->dev_conf.rxmode.offloads |=
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ ret = idpf_init_vport_req_info(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+ return ret;
+ }
+
+ ret = idpf_create_vport(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to create vport.");
+ return ret;
+ }
+
+ ret = idpf_init_vport(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init vports.");
+ return ret;
+ }
+
+ rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+ &dev->data->mac_addrs[0]);
+
+ return ret;
+}
+
+static int
+idpf_dev_start(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ vport->stopped = 0;
+
+ if (idpf_ena_dis_vport(vport, true)) {
+ PMD_DRV_LOG(ERR, "Failed to enable vport");
+ goto err_vport;
+ }
+
+ return 0;
+
+err_vport:
+ return -1;
+}
+
+static int
+idpf_dev_stop(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (vport->stopped == 1)
+ return 0;
+
+ if (idpf_ena_dis_vport(vport, false))
+ PMD_DRV_LOG(ERR, "disable vport failed");
+
+ vport->stopped = 1;
+ dev->data->dev_started = 0;
+
+ return 0;
+}
+
+static int
+idpf_dev_close(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ idpf_dev_stop(dev);
+ idpf_destroy_vport(vport);
+
+ return 0;
+}
+
+static void
+idpf_reset_pf(struct iecm_hw *hw)
+{
+ uint32_t reg;
+
+ reg = IECM_READ_REG(hw, PFGEN_CTRL);
+ IECM_WRITE_REG(hw, PFGEN_CTRL, (reg | PFGEN_CTRL_PFSWR));
+}
+
+#define IDPF_RESET_WAIT_CNT 100
+static int
+idpf_check_pf_reset_done(struct iecm_hw *hw)
+{
+ uint32_t reg;
+ int i;
+
+ for (i = 0; i < IDPF_RESET_WAIT_CNT; i++) {
+ reg = IECM_READ_REG(hw, PFGEN_RSTAT);
+ if (reg != 0xFFFFFFFF && (reg & PFGEN_RSTAT_PFR_STATE_M))
+ return 0;
+ rte_delay_ms(1000);
+ }
+
+ PMD_INIT_LOG(ERR, "IDPF reset timeout");
+ return -EBUSY;
+}
+
+#define CTLQ_NUM 2
+static int
+idpf_init_mbx(struct iecm_hw *hw)
+{
+ struct iecm_ctlq_create_info ctlq_info[CTLQ_NUM] = {
+ {
+ .type = IECM_CTLQ_TYPE_MAILBOX_TX,
+ .id = IDPF_CTLQ_ID,
+ .len = IDPF_CTLQ_LEN,
+ .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+ .reg = {
+ .head = PF_FW_ATQH,
+ .tail = PF_FW_ATQT,
+ .len = PF_FW_ATQLEN,
+ .bah = PF_FW_ATQBAH,
+ .bal = PF_FW_ATQBAL,
+ .len_mask = PF_FW_ATQLEN_ATQLEN_M,
+ .len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M,
+ .head_mask = PF_FW_ATQH_ATQH_M,
+ }
+ },
+ {
+ .type = IECM_CTLQ_TYPE_MAILBOX_RX,
+ .id = IDPF_CTLQ_ID,
+ .len = IDPF_CTLQ_LEN,
+ .buf_size = IDPF_DFLT_MBX_BUF_SIZE,
+ .reg = {
+ .head = PF_FW_ARQH,
+ .tail = PF_FW_ARQT,
+ .len = PF_FW_ARQLEN,
+ .bah = PF_FW_ARQBAH,
+ .bal = PF_FW_ARQBAL,
+ .len_mask = PF_FW_ARQLEN_ARQLEN_M,
+ .len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M,
+ .head_mask = PF_FW_ARQH_ARQH_M,
+ }
+ }
+ };
+ struct iecm_ctlq_info *ctlq;
+ int ret = 0;
+
+ ret = iecm_ctlq_init(hw, CTLQ_NUM, ctlq_info);
+ if (ret)
+ return ret;
+
+ LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
+ struct iecm_ctlq_info, cq_list) {
+ if (ctlq->q_id == IDPF_CTLQ_ID && ctlq->cq_type == IECM_CTLQ_TYPE_MAILBOX_TX)
+ hw->asq = ctlq;
+ if (ctlq->q_id == IDPF_CTLQ_ID && ctlq->cq_type == IECM_CTLQ_TYPE_MAILBOX_RX)
+ hw->arq = ctlq;
+ }
+
+ if (!hw->asq || !hw->arq) {
+ iecm_ctlq_deinit(hw);
+ ret = -ENOENT;
+ }
+
+ return ret;
+}
+
+static int
+idpf_adapter_init(struct rte_eth_dev *dev)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ struct rte_pci_device *pci_dev = IDPF_DEV_TO_PCI(dev);
+ int ret = 0;
+
+ if (adapter->initialized)
+ return 0;
+
+ hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+ hw->hw_addr_len = pci_dev->mem_resource[0].len;
+ hw->back = adapter;
+ hw->vendor_id = pci_dev->id.vendor_id;
+ hw->device_id = pci_dev->id.device_id;
+ hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+ idpf_reset_pf(hw);
+ ret = idpf_check_pf_reset_done(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "IDPF is still resetting");
+ goto err;
+ }
+
+ ret = idpf_init_mbx(hw);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init mailbox");
+ goto err;
+ }
+
+ adapter->mbx_resp = rte_zmalloc("idpf_adapter_mbx_resp", IDPF_DFLT_MBX_BUF_SIZE, 0);
+ if (!adapter->mbx_resp) {
+ PMD_INIT_LOG(ERR, "Failed to allocate idpf_adapter_mbx_resp memory");
+ goto err_mbx;
+ }
+
+ if (idpf_check_api_version(adapter)) {
+ PMD_INIT_LOG(ERR, "Failed to check api version");
+ goto err_api;
+ }
+
+ adapter->caps = rte_zmalloc("idpf_caps",
+ sizeof(struct virtchnl2_get_capabilities), 0);
+ if (!adapter->caps) {
+ PMD_INIT_LOG(ERR, "Failed to allocate idpf_caps memory");
+ goto err_api;
+ }
+
+ if (idpf_get_caps(adapter)) {
+ PMD_INIT_LOG(ERR, "Failed to get capabilities");
+ goto err_caps;
+ }
+
+ adapter->max_vport_nb = adapter->caps->max_vports;
+
+ adapter->vport_req_info = rte_zmalloc("vport_req_info",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vport_req_info),
+ 0);
+ if (!adapter->vport_req_info) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_req_info memory");
+ goto err_caps;
+ }
+
+ adapter->vport_recv_info = rte_zmalloc("vport_recv_info",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vport_recv_info),
+ 0);
+ if (!adapter->vport_recv_info) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vport_recv_info memory");
+ goto err_vport_recv_info;
+ }
+
+ adapter->vports = rte_zmalloc("vports",
+ adapter->max_vport_nb *
+ sizeof(*adapter->vports),
+ 0);
+ if (!adapter->vports) {
+ PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+ goto err_vports;
+ }
+
+ adapter->max_rxq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
+ sizeof(struct virtchnl2_config_rx_queues)) /
+ sizeof(struct virtchnl2_rxq_info);
+ adapter->max_txq_per_msg = (IDPF_DFLT_MBX_BUF_SIZE -
+ sizeof(struct virtchnl2_config_tx_queues)) /
+ sizeof(struct virtchnl2_txq_info);
+
+ adapter->cur_vport_nb = 0;
+ adapter->next_vport_idx = 0;
+ adapter->initialized = true;
+
+ return ret;
+
+err_vports:
+ rte_free(adapter->vports);
+ adapter->vports = NULL;
+err_vport_recv_info:
+ rte_free(adapter->vport_req_info);
+ adapter->vport_req_info = NULL;
+err_caps:
+ rte_free(adapter->caps);
+ adapter->caps = NULL;
+err_api:
+ rte_free(adapter->mbx_resp);
+ adapter->mbx_resp = NULL;
+err_mbx:
+ iecm_ctlq_deinit(hw);
+err:
+ return -1;
+}
+
+
+static int
+idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void *init_params)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int ret = 0;
+
+ PMD_INIT_FUNC_TRACE();
+
+ dev->dev_ops = &idpf_eth_dev_ops;
+
+ ret = idpf_adapter_init(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init adapter.");
+ return ret;
+ }
+
+ dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
+
+ vport->dev_data = dev->data;
+
+ dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+ if (dev->data->mac_addrs == NULL) {
+ PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+ ret = -ENOMEM;
+ goto err;
+ }
+
+err:
+ return ret;
+}
+
+static int
+idpf_dev_uninit(struct rte_eth_dev *dev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return -EPERM;
+
+ idpf_dev_close(dev);
+
+ return 0;
+}
+
+static const struct rte_pci_id pci_id_idpf_map[] = {
+ { RTE_PCI_DEVICE(IECM_INTEL_VENDOR_ID, IECM_DEV_ID_PF) },
+ { .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+idpf_handle_vport_num(const char *key, const char *value, void *args)
+{
+ int *i = (int *)args;
+ char *end;
+ int num;
+
+ num = strtoul(value, &end, 10);
+
+ if (num <= 0) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", value must be greater than 0",
+ value, key);
+ return -1;
+ }
+
+ *i = num;
+ return 0;
+}
+
+static int
+idpf_parse_vport_num(struct rte_devargs *devargs)
+{
+ struct rte_kvargs *kvlist;
+ const char *key = "vport_num";
+ int ret = 0;
+
+ if (devargs == NULL)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+ if (kvlist == NULL)
+ return 0;
+
+ ret = rte_kvargs_process(kvlist, key, &idpf_handle_vport_num,
+ &vport_num);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static int
+idpf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+ struct rte_pci_device *pci_dev)
+{
+ char name[RTE_ETH_NAME_MAX_LEN];
+ int i, retval;
+
+ retval = idpf_parse_vport_num(pci_dev->device.devargs);
+ if (retval)
+ return retval;
+
+ if (!adapter) {
+ adapter = (struct idpf_adapter *)rte_zmalloc("idpf_adapter",
+ sizeof(struct idpf_adapter), 0);
+ if (!adapter) {
+ PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+ return -1;
+ }
+ }
+
+ for (i = 0; i < vport_num; i++) {
+ snprintf(name, sizeof(name), "idpf_vport_%d", i);
+ retval = rte_eth_dev_create(&pci_dev->device, name,
+ sizeof(struct idpf_vport),
+ NULL, NULL, idpf_dev_init,
+ NULL);
+ if (retval)
+ PMD_DRV_LOG(ERR, "failed to creat vport %d", i);
+ }
+
+ return 0;
+}
+
+static void
+idpf_adapter_rel(struct idpf_adapter *adapter)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ int i;
+
+ iecm_ctlq_deinit(hw);
+
+ if (adapter->caps) {
+ rte_free(adapter->caps);
+ adapter->caps = NULL;
+ }
+
+ if (adapter->mbx_resp) {
+ rte_free(adapter->mbx_resp);
+ adapter->mbx_resp = NULL;
+ }
+
+ if (adapter->vport_req_info) {
+ for (i = 0; i < adapter->max_vport_nb; i++) {
+ if (adapter->vport_req_info[i]) {
+ rte_free(adapter->vport_req_info[i]);
+ adapter->vport_req_info[i] = NULL;
+ }
+ }
+ rte_free(adapter->vport_req_info);
+ adapter->vport_req_info = NULL;
+ }
+
+ if (adapter->vport_recv_info) {
+ for (i = 0; i < adapter->max_vport_nb; i++) {
+ if (adapter->vport_recv_info[i]) {
+ rte_free(adapter->vport_recv_info[i]);
+ adapter->vport_recv_info[i] = NULL;
+ }
+ }
+ }
+
+ if (adapter->vports) {
+ /* No need to free adapter->vports[i]; it is ethdev private data */
+ rte_free(adapter->vports);
+ adapter->vports = NULL;
+ }
+}
+
+static int
+idpf_pci_remove(struct rte_pci_device *pci_dev)
+{
+ if (adapter) {
+ idpf_adapter_rel(adapter);
+ rte_free(adapter);
+ adapter = NULL;
+ }
+
+ return rte_eth_dev_pci_generic_remove(pci_dev, idpf_dev_uninit);
+}
+
+static struct rte_pci_driver rte_idpf_pmd = {
+ .id_table = pci_id_idpf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
+ .probe = idpf_pci_probe,
+ .remove = idpf_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_idpf, rte_idpf_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_idpf, pci_id_idpf_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_idpf, "* igb_uio | uio_pci_generic | vfio-pci");
+
+RTE_LOG_REGISTER_SUFFIX(idpf_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(idpf_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
new file mode 100644
index 0000000000..762d5ff66a
--- /dev/null
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -0,0 +1,200 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_ETHDEV_H_
+#define _IDPF_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_mbuf.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_bus_pci.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+
+#include "base/iecm_osdep.h"
+#include "base/iecm_type.h"
+#include "base/iecm_devids.h"
+#include "base/iecm_lan_txrx.h"
+#include "base/iecm_lan_pf_regs.h"
+#include "base/virtchnl.h"
+#include "base/virtchnl2.h"
+
+#define IDPF_INVALID_VPORT_IDX 0xffff
+#define IDPF_TX_COMPLQ_PER_GRP 1
+#define IDPF_RX_BUFQ_PER_GRP 2
+
+#define IDPF_CTLQ_ID -1
+#define IDPF_CTLQ_LEN 64
+#define IDPF_DFLT_MBX_BUF_SIZE 4096
+
+#define IDPF_MAX_NUM_QUEUES 256
+#define IDPF_MIN_BUF_SIZE 1024
+#define IDPF_MAX_FRAME_SIZE 9728
+
+#define IDPF_NUM_MACADDR_MAX 64
+
+#define IDPF_MAX_PKT_TYPE 1024
+
+#define IDPF_VLAN_TAG_SIZE 4
+#define IDPF_ETH_OVERHEAD \
+ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
+
+#ifndef ETH_ADDR_LEN
+#define ETH_ADDR_LEN 6
+#endif
+
+/* Message type read in virtual channel from PF */
+enum idpf_vc_result {
+ IDPF_MSG_ERR = -1, /* Meet error when accessing admin queue */
+ IDPF_MSG_NON, /* Read nothing from admin queue */
+ IDPF_MSG_SYS, /* Read system msg from admin queue */
+ IDPF_MSG_CMD, /* Read async command result */
+};
+
+struct idpf_chunks_info {
+ uint32_t tx_start_qid;
+ uint32_t rx_start_qid;
+ /* Valid only if split queue model */
+ uint32_t tx_compl_start_qid;
+ uint32_t rx_buf_start_qid;
+
+ uint64_t tx_qtail_start;
+ uint32_t tx_qtail_spacing;
+ uint64_t rx_qtail_start;
+ uint32_t rx_qtail_spacing;
+ uint64_t tx_compl_qtail_start;
+ uint32_t tx_compl_qtail_spacing;
+ uint64_t rx_buf_qtail_start;
+ uint32_t rx_buf_qtail_spacing;
+};
+
+struct idpf_vport {
+ struct idpf_adapter *adapter; /* Backreference to associated adapter */
+ uint16_t vport_id;
+ uint32_t txq_model;
+ uint32_t rxq_model;
+ uint16_t num_tx_q;
+ /* valid only if txq_model is split Q */
+ uint16_t num_tx_complq;
+ uint16_t num_rx_q;
+ /* valid only if rxq_model is split Q */
+ uint16_t num_rx_bufq;
+
+ uint16_t max_mtu;
+ uint8_t default_mac_addr[VIRTCHNL_ETH_LENGTH_OF_ADDRESS];
+
+ enum virtchnl_rss_algorithm rss_algorithm;
+ uint16_t rss_key_size;
+ uint16_t rss_lut_size;
+
+ uint16_t sw_idx; /* SW idx */
+
+ struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+
+ /* RSS info */
+ uint32_t *rss_lut;
+ uint8_t *rss_key;
+ uint64_t rss_hf;
+
+ /* Chunk info */
+ struct idpf_chunks_info chunks_info;
+
+ /* Event from ipf */
+ bool link_up;
+ uint32_t link_speed;
+
+ bool stopped;
+};
+
+struct idpf_adapter {
+ struct iecm_hw hw;
+
+ struct virtchnl_version_info virtchnl_version;
+ struct virtchnl2_get_capabilities *caps;
+
+ volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
+ uint32_t cmd_retval; /* return value of the cmd response from ipf */
+ uint8_t *mbx_resp; /* buffer to store the mailbox response from ipf */
+
+ uint32_t txq_model;
+ uint32_t rxq_model;
+
+ /* Vport info */
+ uint8_t **vport_req_info;
+ uint8_t **vport_recv_info;
+ struct idpf_vport **vports;
+ uint16_t max_vport_nb;
+ uint16_t cur_vport_nb;
+ uint16_t next_vport_idx;
+
+ /* Max config queue number per VC message */
+ uint32_t max_rxq_per_msg;
+ uint32_t max_txq_per_msg;
+
+ uint32_t ptype_tbl[IDPF_MAX_PKT_TYPE] __rte_cache_min_aligned;
+
+ bool initialized;
+ bool stopped;
+};
+
+extern struct idpf_adapter *adapter;
+
+#define IDPF_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+/* structure used for sending and checking response of virtchnl ops */
+struct idpf_cmd_info {
+ uint32_t ops;
+ uint8_t *in_args; /* buffer for sending */
+ uint32_t in_args_size; /* buffer size for sending */
+ uint8_t *out_buffer; /* buffer for response */
+ uint32_t out_size; /* buffer size for response */
+};
+
+/* Notify that the current command is done. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_notify_cmd(struct idpf_adapter *adapter, int msg_ret)
+{
+ adapter->cmd_retval = msg_ret;
+ rte_wmb();
+ adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+}
+
+/* Clear the current command. Only call this after
+ * _atomic_set_cmd() has succeeded.
+ */
+static inline void
+_clear_cmd(struct idpf_adapter *adapter)
+{
+ rte_wmb();
+ adapter->pend_cmd = VIRTCHNL_OP_UNKNOWN;
+ adapter->cmd_retval = VIRTCHNL_STATUS_SUCCESS;
+}
+
+/* Check whether a cmd is pending in execution. If none, record the new command. */
+static inline int
+_atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
+{
+ int ret = rte_atomic32_cmpset(&adapter->pend_cmd, VIRTCHNL_OP_UNKNOWN, ops);
+
+ if (!ret)
+ PMD_DRV_LOG(ERR, "There is incomplete cmd %d", adapter->pend_cmd);
+
+ return !ret;
+}
+
+void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
+int idpf_check_api_version(struct idpf_adapter *adapter);
+int idpf_get_caps(struct idpf_adapter *adapter);
+int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
+int idpf_destroy_vport(struct idpf_vport *vport);
+
+int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
+
+#endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_logs.h b/drivers/net/idpf/idpf_logs.h
new file mode 100644
index 0000000000..906aae8463
--- /dev/null
+++ b/drivers/net/idpf/idpf_logs.h
@@ -0,0 +1,38 @@
+#ifndef _IDPF_LOGS_H_
+#define _IDPF_LOGS_H_
+
+#include <rte_log.h>
+
+extern int idpf_logtype_init;
+extern int idpf_logtype_driver;
+
+#define PMD_INIT_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, idpf_logtype_init, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#define PMD_INIT_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#define PMD_DRV_LOG_RAW(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, idpf_logtype_driver, \
+ "%s(): " fmt "\n", __func__, ##args)
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, " >>")
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_IDPF_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _IDPF_LOGS_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
new file mode 100644
index 0000000000..6dcb62148a
--- /dev/null
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -0,0 +1,475 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <stdio.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+
+#include <rte_debug.h>
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+#include <rte_dev.h>
+
+#include "idpf_ethdev.h"
+
+#include "base/iecm_prototype.h"
+
+#define IDPF_CTLQ_LEN 64
+
+static int
+idpf_vc_clean(struct idpf_adapter *adapter)
+{
+ struct iecm_ctlq_msg *q_msg[IDPF_CTLQ_LEN];
+ uint16_t num_q_msg = IDPF_CTLQ_LEN;
+ struct iecm_dma_mem *dma_mem;
+ int err = 0;
+ uint32_t i;
+
+ for (i = 0; i < 10; i++) {
+ err = iecm_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+ msleep(20);
+ if (num_q_msg)
+ break;
+ }
+ if (err)
+ goto error;
+
+ /* Empty queue is not an error */
+ for (i = 0; i < num_q_msg; i++) {
+ dma_mem = q_msg[i]->ctx.indirect.payload;
+ if (dma_mem) {
+ iecm_free_dma_mem(&adapter->hw, dma_mem);
+ rte_free(dma_mem);
+ }
+ rte_free(q_msg[i]);
+ }
+
+error:
+ return err;
+}
+
+static int
+idpf_send_vc_msg(struct idpf_adapter *adapter, enum virtchnl_ops op,
+ uint16_t msg_size, uint8_t *msg)
+{
+ struct iecm_ctlq_msg *ctlq_msg;
+ struct iecm_dma_mem *dma_mem;
+ int err = 0;
+
+ err = idpf_vc_clean(adapter);
+ if (err)
+ goto err;
+
+ ctlq_msg = (struct iecm_ctlq_msg *)rte_zmalloc(NULL,
+ sizeof(struct iecm_ctlq_msg), 0);
+ if (!ctlq_msg) {
+ err = -ENOMEM;
+ goto err;
+ }
+
+ dma_mem = (struct iecm_dma_mem *)rte_zmalloc(NULL,
+ sizeof(struct iecm_dma_mem), 0);
+ if (!dma_mem) {
+ err = -ENOMEM;
+ goto dma_mem_error;
+ }
+
+ dma_mem->size = IDPF_DFLT_MBX_BUF_SIZE;
+ iecm_alloc_dma_mem(&adapter->hw, dma_mem, dma_mem->size);
+ if (!dma_mem->va) {
+ err = -ENOMEM;
+ goto dma_alloc_error;
+ }
+
+ memcpy(dma_mem->va, msg, msg_size);
+
+ ctlq_msg->opcode = iecm_mbq_opc_send_msg_to_pf;
+ ctlq_msg->func_id = 0;
+ ctlq_msg->data_len = msg_size;
+ ctlq_msg->cookie.mbx.chnl_opcode = op;
+ ctlq_msg->cookie.mbx.chnl_retval = VIRTCHNL_STATUS_SUCCESS;
+ ctlq_msg->ctx.indirect.payload = dma_mem;
+
+ err = iecm_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+ if (err)
+ goto send_error;
+
+ return err;
+
+send_error:
+ iecm_free_dma_mem(&adapter->hw, dma_mem);
+dma_alloc_error:
+ rte_free(dma_mem);
+dma_mem_error:
+ rte_free(ctlq_msg);
+err:
+ return err;
+}
+
+static enum idpf_vc_result
+idpf_read_msg_from_ipf(struct idpf_adapter *adapter, uint16_t buf_len,
+ uint8_t *buf)
+{
+ struct iecm_hw *hw = &adapter->hw;
+ struct iecm_ctlq_msg ctlq_msg;
+ struct iecm_dma_mem *dma_mem = NULL;
+ enum idpf_vc_result result = IDPF_MSG_NON;
+ enum virtchnl_ops opcode;
+ uint16_t pending = 1;
+ int ret;
+
+ ret = iecm_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+ if (ret) {
+ PMD_DRV_LOG(DEBUG, "Can't read msg from AQ");
+ if (ret != IECM_ERR_CTLQ_NO_WORK)
+ result = IDPF_MSG_ERR;
+ return result;
+ }
+
+ rte_memcpy(buf, ctlq_msg.ctx.indirect.payload->va, buf_len);
+
+ opcode = (enum virtchnl_ops)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+ adapter->cmd_retval =
+ (enum virtchnl_status_code)rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+ PMD_DRV_LOG(DEBUG, "CQ from ipf carries opcode %u, retval %d",
+ opcode, adapter->cmd_retval);
+
+ if (opcode == VIRTCHNL2_OP_EVENT) {
+ struct virtchnl2_event *ve =
+ (struct virtchnl2_event *)ctlq_msg.ctx.indirect.payload->va;
+
+ result = IDPF_MSG_SYS;
+ switch (ve->event) {
+ case VIRTCHNL2_EVENT_LINK_CHANGE:
+ /* TBD */
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "%s: Unknown event %d from ipf",
+ __func__, ve->event);
+ break;
+ }
+ } else {
+ /* async reply msg on command issued by pf previously */
+ result = IDPF_MSG_CMD;
+ if (opcode != adapter->pend_cmd) {
+ PMD_DRV_LOG(WARNING, "command mismatch, expect %u, get %u",
+ adapter->pend_cmd, opcode);
+ result = IDPF_MSG_ERR;
+ }
+ }
+
+ if (ctlq_msg.data_len)
+ dma_mem = ctlq_msg.ctx.indirect.payload;
+ else
+ pending = 0;
+
+ ret = iecm_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+ if (ret && dma_mem)
+ iecm_free_dma_mem(hw, dma_mem);
+
+ return result;
+}
+
+#define MAX_TRY_TIMES 200
+#define ASQ_DELAY_MS 10
+
+static int
+idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
+{
+ enum idpf_vc_result result;
+ int err = 0;
+ int i = 0;
+ int ret;
+
+ if (_atomic_set_cmd(adapter, args->ops))
+ return -1;
+
+ ret = idpf_send_vc_msg(adapter, args->ops,
+ args->in_args_size,
+ args->in_args);
+ if (ret) {
+ PMD_DRV_LOG(ERR, "fail to send cmd %d", args->ops);
+ _clear_cmd(adapter);
+ return ret;
+ }
+
+ switch (args->ops) {
+ case VIRTCHNL_OP_VERSION:
+ case VIRTCHNL2_OP_GET_CAPS:
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_HASH:
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ /* for init virtchnl ops, need to poll the response */
+ do {
+ result = idpf_read_msg_from_ipf(adapter,
+ args->out_size,
+ args->out_buffer);
+ if (result == IDPF_MSG_CMD)
+ break;
+ rte_delay_ms(ASQ_DELAY_MS);
+ } while (i++ < MAX_TRY_TIMES);
+ if (i >= MAX_TRY_TIMES ||
+ adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+ adapter->cmd_retval, args->ops);
+ }
+ _clear_cmd(adapter);
+ break;
+ default:
+ /* For other virtchnl ops at runtime,
+ * wait for the cmd done flag.
+ */
+ do {
+ if (adapter->pend_cmd == VIRTCHNL_OP_UNKNOWN)
+ break;
+ rte_delay_ms(ASQ_DELAY_MS);
+ /* If no msg is read, or a system event is read, continue polling */
+ } while (i++ < MAX_TRY_TIMES);
+ /* If no response is received, clear the command */
+ if (i >= MAX_TRY_TIMES ||
+ adapter->cmd_retval != VIRTCHNL_STATUS_SUCCESS) {
+ err = -1;
+ PMD_DRV_LOG(ERR, "No response or return failure (%d) for cmd %d",
+ adapter->cmd_retval, args->ops);
+ _clear_cmd(adapter);
+ }
+ break;
+ }
+
+ return err;
+}
+
+int
+idpf_check_api_version(struct idpf_adapter *adapter)
+{
+ struct virtchnl_version_info version;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&version, 0, sizeof(struct virtchnl_version_info));
+ version.major = VIRTCHNL_VERSION_MAJOR_2;
+ version.minor = VIRTCHNL_VERSION_MINOR_0;
+
+ args.ops = VIRTCHNL_OP_VERSION;
+ args.in_args = (uint8_t *)&version;
+ args.in_args_size = sizeof(version);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL_OP_VERSION");
+ return err;
+ }
+
+ return err;
+}
+
+int
+idpf_get_caps(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_get_capabilities caps_msg;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&caps_msg, 0, sizeof(struct virtchnl2_get_capabilities));
+ caps_msg.csum_caps =
+ VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_GENERIC |
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_GENERIC;
+
+ caps_msg.seg_caps =
+ VIRTCHNL2_CAP_SEG_IPV4_TCP |
+ VIRTCHNL2_CAP_SEG_IPV4_UDP |
+ VIRTCHNL2_CAP_SEG_IPV4_SCTP |
+ VIRTCHNL2_CAP_SEG_IPV6_TCP |
+ VIRTCHNL2_CAP_SEG_IPV6_UDP |
+ VIRTCHNL2_CAP_SEG_IPV6_SCTP |
+ VIRTCHNL2_CAP_SEG_GENERIC;
+
+ caps_msg.rss_caps =
+ VIRTCHNL2_CAP_RSS_IPV4_TCP |
+ VIRTCHNL2_CAP_RSS_IPV4_UDP |
+ VIRTCHNL2_CAP_RSS_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV4_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV6_TCP |
+ VIRTCHNL2_CAP_RSS_IPV6_UDP |
+ VIRTCHNL2_CAP_RSS_IPV6_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV6_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV4_AH |
+ VIRTCHNL2_CAP_RSS_IPV4_ESP |
+ VIRTCHNL2_CAP_RSS_IPV4_AH_ESP |
+ VIRTCHNL2_CAP_RSS_IPV6_AH |
+ VIRTCHNL2_CAP_RSS_IPV6_ESP |
+ VIRTCHNL2_CAP_RSS_IPV6_AH_ESP;
+
+ caps_msg.hsplit_caps =
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6;
+
+ caps_msg.rsc_caps =
+ VIRTCHNL2_CAP_RSC_IPV4_TCP |
+ VIRTCHNL2_CAP_RSC_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSC_IPV6_TCP |
+ VIRTCHNL2_CAP_RSC_IPV6_SCTP;
+
+ caps_msg.other_caps =
+ VIRTCHNL2_CAP_RDMA |
+ VIRTCHNL2_CAP_SRIOV |
+ VIRTCHNL2_CAP_MACFILTER |
+ VIRTCHNL2_CAP_FLOW_DIRECTOR |
+ VIRTCHNL2_CAP_SPLITQ_QSCHED |
+ VIRTCHNL2_CAP_CRC |
+ VIRTCHNL2_CAP_WB_ON_ITR |
+ VIRTCHNL2_CAP_PROMISC |
+ VIRTCHNL2_CAP_LINK_SPEED |
+ VIRTCHNL2_CAP_VLAN;
+
+ args.ops = VIRTCHNL2_OP_GET_CAPS;
+ args.in_args = (uint8_t *)&caps_msg;
+ args.in_args_size = sizeof(caps_msg);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL2_OP_GET_CAPS");
+ return err;
+ }
+
+ rte_memcpy(adapter->caps, args.out_buffer, sizeof(caps_msg));
+
+ return err;
+}
+
+int
+idpf_create_vport(__rte_unused struct rte_eth_dev *dev)
+{
+ uint16_t idx = adapter->next_vport_idx;
+ struct virtchnl2_create_vport *vport_req_info =
+ (struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
+ struct virtchnl2_create_vport vport_msg;
+ struct idpf_cmd_info args;
+ int err = -1;
+
+ memset(&vport_msg, 0, sizeof(struct virtchnl2_create_vport));
+ vport_msg.vport_type = vport_req_info->vport_type;
+ vport_msg.txq_model = vport_req_info->txq_model;
+ vport_msg.rxq_model = vport_req_info->rxq_model;
+ vport_msg.num_tx_q = vport_req_info->num_tx_q;
+ vport_msg.num_tx_complq = vport_req_info->num_tx_complq;
+ vport_msg.num_rx_q = vport_req_info->num_rx_q;
+ vport_msg.num_rx_bufq = vport_req_info->num_rx_bufq;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CREATE_VPORT;
+ args.in_args = (uint8_t *)&vport_msg;
+ args.in_args_size = sizeof(vport_msg);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR,
+ "Failed to execute command of VIRTCHNL2_OP_CREATE_VPORT");
+ return err;
+ }
+
+ if (!adapter->vport_recv_info[idx]) {
+ adapter->vport_recv_info[idx] = rte_zmalloc(NULL,
+ IDPF_DFLT_MBX_BUF_SIZE, 0);
+ if (!adapter->vport_recv_info[idx]) {
+ PMD_INIT_LOG(ERR, "Failed to alloc vport_recv_info.");
+ return -ENOMEM;
+ }
+ }
+ rte_memcpy(adapter->vport_recv_info[idx], args.out_buffer,
+ IDPF_DFLT_MBX_BUF_SIZE);
+ return err;
+}
+
+int
+idpf_destroy_vport(struct idpf_vport *vport)
+{
+ struct virtchnl2_vport vc_vport;
+ struct idpf_cmd_info args;
+ int err;
+
+ vc_vport.vport_id = vport->vport_id;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_DESTROY_VPORT;
+ args.in_args = (uint8_t *)&vc_vport;
+ args.in_args_size = sizeof(vc_vport);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_DESTROY_VPORT");
+ return err;
+ }
+
+ return err;
+}
+
+int
+idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
+{
+ struct virtchnl2_vport vc_vport;
+ struct idpf_cmd_info args;
+ int err;
+
+ vc_vport.vport_id = vport->vport_id;
+ args.ops = enable ? VIRTCHNL2_OP_ENABLE_VPORT :
+ VIRTCHNL2_OP_DISABLE_VPORT;
+ args.in_args = (u8 *)&vc_vport;
+ args.in_args_size = sizeof(vc_vport);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_VPORT",
+ enable ? "ENABLE" : "DISABLE");
+ }
+
+ return err;
+}
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
new file mode 100644
index 0000000000..262a7aa8c7
--- /dev/null
+++ b/drivers/net/idpf/meson.build
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+ build = false
+ reason = 'not supported on Windows'
+ subdir_done()
+endif
+
+subdir('base')
+objs = [base_objs]
+
+sources = files(
+ 'idpf_ethdev.c',
+ 'idpf_vchnl.c',
+)
+
+includes += include_directories('base')
diff --git a/drivers/net/idpf/version.map b/drivers/net/idpf/version.map
new file mode 100644
index 0000000000..b7da224860
--- /dev/null
+++ b/drivers/net/idpf/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+ local: *;
+};
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index e35652fe63..8910154544 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -28,6 +28,7 @@ drivers = [
'i40e',
'iavf',
'ice',
+ 'idpf',
'igc',
'ionic',
'ipn3ke',
--
2.25.1
* [RFC v3 04/11] net/idpf: support queue ops
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (2 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 03/11] net/idpf: support device initialization Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 05/11] net/idpf: support getting device information Junfeng Guo
` (6 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Xiaoyun Li
Add queue ops for the IDPF PMD (a brief usage sketch follows the ops list):
rx_queue_start
rx_queue_stop
tx_queue_start
tx_queue_stop
rx_queue_setup
rx_queue_release
tx_queue_setup
tx_queue_release
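For reference, the sketch below shows how an application reaches these ops
through the generic ethdev API. The helper name, the mbuf pool parameters and
the ring size are illustrative assumptions and are not taken from this patch:

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Assumes the port was already configured with rte_eth_dev_configure(). */
    static int
    setup_one_queue_pair(uint16_t port_id, uint16_t q, unsigned int socket_id)
    {
        uint16_t nb_desc = 512;  /* assumed ring size */
        struct rte_mempool *mb_pool;
        int ret;

        mb_pool = rte_pktmbuf_pool_create("idpf_mb_pool", 4096, 256, 0,
                                          RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        if (mb_pool == NULL)
            return -ENOMEM;

        /* These calls end up in idpf_rx_queue_setup()/idpf_tx_queue_setup(). */
        ret = rte_eth_rx_queue_setup(port_id, q, nb_desc, socket_id, NULL, mb_pool);
        if (ret != 0)
            return ret;
        ret = rte_eth_tx_queue_setup(port_id, q, nb_desc, socket_id, NULL);
        if (ret != 0)
            return ret;

        /* rte_eth_dev_start() initializes and starts all non-deferred queues;
         * queues configured with deferred start are switched on later with
         * rte_eth_dev_rx_queue_start()/rte_eth_dev_tx_queue_start().
         */
        return rte_eth_dev_start(port_id);
    }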
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 92 +++
drivers/net/idpf/idpf_ethdev.h | 6 +
drivers/net/idpf/idpf_rxtx.c | 1252 ++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 167 +++++
drivers/net/idpf/idpf_vchnl.c | 341 +++++++++
drivers/net/idpf/meson.build | 1 +
6 files changed, 1859 insertions(+)
create mode 100644 drivers/net/idpf/idpf_rxtx.c
create mode 100644 drivers/net/idpf/idpf_rxtx.h
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c28f0c45f5..b48a722b80 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -12,6 +12,7 @@
#include <rte_dev.h>
#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
#define VPORT_NUM "vport_num"
@@ -33,6 +34,14 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_start = idpf_dev_start,
.dev_stop = idpf_dev_stop,
.dev_close = idpf_dev_close,
+ .rx_queue_start = idpf_rx_queue_start,
+ .rx_queue_stop = idpf_rx_queue_stop,
+ .tx_queue_start = idpf_tx_queue_start,
+ .tx_queue_stop = idpf_tx_queue_stop,
+ .rx_queue_setup = idpf_rx_queue_setup,
+ .rx_queue_release = idpf_dev_rx_queue_release,
+ .tx_queue_setup = idpf_tx_queue_setup,
+ .tx_queue_release = idpf_dev_tx_queue_release,
};
@@ -196,6 +205,65 @@ idpf_dev_configure(struct rte_eth_dev *dev)
return ret;
}
+static int
+idpf_config_queues(struct idpf_vport *vport)
+{
+ int err;
+
+ err = idpf_config_rxqs(vport);
+ if (err)
+ return err;
+
+ err = idpf_config_txqs(vport);
+
+ return err;
+}
+
+static int
+idpf_start_queues(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_tx_queue *txq;
+ int i, err = 0;
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (txq->tx_deferred_start)
+ continue;
+ if (idpf_tx_queue_init(dev, i) != 0) {
+ PMD_DRV_LOG(ERR, "Fail to init tx queue %u", i);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (rxq->rx_deferred_start)
+ continue;
+ if (idpf_rx_queue_init(dev, i) != 0) {
+ PMD_DRV_LOG(ERR, "Fail to init rx queue %u", i);
+ return -1;
+ }
+ }
+
+ err = idpf_ena_dis_queues(vport, true);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Fail to start queues");
+ return err;
+ }
+
+ for (i = 0; i < dev->data->nb_tx_queues; i++)
+ dev->data->tx_queue_state[i] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
+ dev->data->rx_queue_state[i] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
static int
idpf_dev_start(struct rte_eth_dev *dev)
{
@@ -206,6 +274,26 @@ idpf_dev_start(struct rte_eth_dev *dev)
vport->stopped = 0;
+ if (dev->data->mtu > vport->max_mtu) {
+ PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+ goto err_mtu;
+ }
+
+ vport->max_pkt_len = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+ if (idpf_config_queues(vport)) {
+ PMD_DRV_LOG(ERR, "Failed to configure queues");
+ goto err_mtu;
+ }
+
+ idpf_set_rx_function(dev);
+ idpf_set_tx_function(dev);
+
+ if (idpf_start_queues(dev)) {
+ PMD_DRV_LOG(ERR, "Failed to start queues");
+ goto err_mtu;
+ }
+
if (idpf_ena_dis_vport(vport, true)) {
PMD_DRV_LOG(ERR, "Failed to enable vport");
goto err_vport;
@@ -214,6 +302,8 @@ idpf_dev_start(struct rte_eth_dev *dev)
return 0;
err_vport:
+ idpf_stop_queues(dev);
+err_mtu:
return -1;
}
@@ -231,6 +321,8 @@ idpf_dev_stop(struct rte_eth_dev *dev)
if (idpf_ena_dis_vport(vport, false))
PMD_DRV_LOG(ERR, "disable vport failed");
+ idpf_stop_queues(dev);
+
vport->stopped = 1;
dev->data->dev_started = 0;
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 762d5ff66a..80eba09184 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -94,6 +94,7 @@ struct idpf_vport {
uint16_t sw_idx; /* SW idx */
struct rte_eth_dev_data *dev_data; /* Pointer to the device data */
+ uint16_t max_pkt_len; /* Maximum packet length */
/* RSS info */
uint32_t *rss_lut;
@@ -195,6 +196,11 @@ int idpf_get_caps(struct idpf_adapter *adapter);
int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
int idpf_destroy_vport(struct idpf_vport *vport);
+int idpf_config_rxqs(struct idpf_vport *vport);
+int idpf_config_txqs(struct idpf_vport *vport);
+int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on);
+int idpf_ena_dis_queues(struct idpf_vport *vport, bool enable);
int idpf_ena_dis_vport(struct idpf_vport *vport, bool enable);
#endif /* _IDPF_ETHDEV_H_ */
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
new file mode 100644
index 0000000000..770ed52281
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -0,0 +1,1252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+
+#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
+
+static inline int
+check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
+{
+ /* The following constraints must be satisfied:
+ * thresh < rxq->nb_rx_desc
+ */
+ if (thresh >= nb_desc) {
+ PMD_INIT_LOG(ERR, "rx_free_thresh (%u) must be less than %u",
+ thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int
+check_tx_thresh(uint16_t nb_desc, uint16_t tx_rs_thresh,
+ uint16_t tx_free_thresh)
+{
+ /* TX descriptors will have their RS bit set after tx_rs_thresh
+ * descriptors have been used. The TX descriptor ring will be cleaned
+ * after tx_free_thresh descriptors are used or if the number of
+ * descriptors required to transmit a packet is greater than the
+ * number of free TX descriptors.
+ *
+ * The following constraints must be satisfied:
+ * - tx_rs_thresh must be less than the size of the ring minus 2.
+ * - tx_free_thresh must be less than the size of the ring minus 3.
+ * - tx_rs_thresh must be less than or equal to tx_free_thresh.
+ * - tx_rs_thresh must be a divisor of the ring size.
+ *
+ * One descriptor in the TX ring is used as a sentinel to avoid a H/W
+ * race condition, hence the maximum threshold constraints. When set
+ * to zero use default values.
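+ *
+ * For example, with nb_desc = 512, tx_rs_thresh = 32 and
+ * tx_free_thresh = 64 satisfy all of the above:
+ * 32 < 510, 64 < 509, 32 <= 64, and 512 % 32 == 0.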
+ */
+ if (tx_rs_thresh >= (nb_desc - 2)) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 2",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_free_thresh >= (nb_desc - 3)) {
+ PMD_INIT_LOG(ERR, "tx_free_thresh (%u) must be less than the "
+ "number of TX descriptors (%u) minus 3.",
+ tx_free_thresh, nb_desc);
+ return -EINVAL;
+ }
+ if (tx_rs_thresh > tx_free_thresh) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be less than or "
+ "equal to tx_free_thresh (%u).",
+ tx_rs_thresh, tx_free_thresh);
+ return -EINVAL;
+ }
+ if ((nb_desc % tx_rs_thresh) != 0) {
+ PMD_INIT_LOG(ERR, "tx_rs_thresh (%u) must be a divisor of the "
+ "number of TX descriptors (%u).",
+ tx_rs_thresh, nb_desc);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline void
+release_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ uint16_t i;
+
+ if (!rxq->sw_ring)
+ return;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ if (rxq->sw_ring[i]) {
+ rte_pktmbuf_free_seg(rxq->sw_ring[i]);
+ rxq->sw_ring[i] = NULL;
+ }
+ }
+}
+
+static inline void
+release_txq_mbufs(struct idpf_tx_queue *txq)
+{
+ uint16_t nb_desc, i;
+
+ if (!txq || !txq->sw_ring) {
+ PMD_DRV_LOG(DEBUG, "Pointer to rxq or sw_ring is NULL");
+ return;
+ }
+
+ if (txq->sw_nb_desc) {
+ /* For split queue model, descriptor ring */
+ nb_desc = txq->sw_nb_desc;
+ } else {
+ /* For single queue model */
+ nb_desc = txq->nb_tx_desc;
+ }
+ for (i = 0; i < nb_desc; i++) {
+ if (txq->sw_ring[i].mbuf) {
+ rte_pktmbuf_free_seg(txq->sw_ring[i].mbuf);
+ txq->sw_ring[i].mbuf = NULL;
+ }
+ }
+}
+
+static const struct idpf_rxq_ops def_rxq_ops = {
+ .release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+ .release_mbufs = release_txq_mbufs,
+};
+
+static void
+idpf_rx_queue_release(void *rxq)
+{
+ struct idpf_rx_queue *q = (struct idpf_rx_queue *)rxq;
+
+ if (!q)
+ return;
+
+ /* Split queue */
+ if (q->bufq1 && q->bufq2) {
+ q->bufq1->ops->release_mbufs(q->bufq1);
+ rte_free(q->bufq1->sw_ring);
+ rte_memzone_free(q->bufq1->mz);
+ rte_free(q->bufq1);
+ q->bufq2->ops->release_mbufs(q->bufq2);
+ rte_free(q->bufq2->sw_ring);
+ rte_memzone_free(q->bufq2->mz);
+ rte_free(q->bufq2);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+ return;
+ }
+
+ /* Single queue */
+ q->ops->release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+}
+
+static void
+idpf_tx_queue_release(void *txq)
+{
+ struct idpf_tx_queue *q = (struct idpf_tx_queue *)txq;
+
+ if (!q)
+ return;
+
+ if (q->complq)
+ rte_free(q->complq);
+ q->ops->release_mbufs(q);
+ rte_free(q->sw_ring);
+ rte_memzone_free(q->mz);
+ rte_free(q);
+}
+
+static inline void
+reset_split_rx_descq(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ rxq->rx_tail = 0;
+ rxq->expected_gen_id = 1;
+}
+
+static inline void
+reset_split_rx_bufq(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_splitq_rx_buf_desc);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+ for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+ rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+ /* The next descriptor id which can be received. */
+ rxq->rx_next_avail = 0;
+
+ /* The next descriptor id which can be refilled. */
+ rxq->rx_tail = 0;
+ /* The number of descriptors which can be refilled. */
+ rxq->nb_rx_hold = rxq->nb_rx_desc - 1;
+
+ rxq->bufq1 = NULL;
+ rxq->bufq2 = NULL;
+}
+
+static inline void
+reset_split_rx_queue(struct idpf_rx_queue *rxq)
+{
+ reset_split_rx_descq(rxq);
+ reset_split_rx_bufq(rxq->bufq1);
+ reset_split_rx_bufq(rxq->bufq2);
+}
+
+static inline void
+reset_single_rx_queue(struct idpf_rx_queue *rxq)
+{
+ uint16_t len;
+ uint32_t i;
+
+ if (!rxq)
+ return;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+
+ for (i = 0; i < len * sizeof(struct virtchnl2_singleq_rx_buf_desc);
+ i++)
+ ((volatile char *)rxq->rx_ring)[i] = 0;
+
+ memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+ for (i = 0; i < IDPF_RX_MAX_BURST; i++)
+ rxq->sw_ring[rxq->nb_rx_desc + i] = &rxq->fake_mbuf;
+
+ rxq->rx_tail = 0;
+ rxq->nb_rx_hold = 0;
+
+ if (rxq->pkt_first_seg != NULL)
+ rte_pktmbuf_free(rxq->pkt_first_seg);
+
+ rxq->pkt_first_seg = NULL;
+ rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_split_tx_descq(struct idpf_tx_queue *txq)
+{
+ struct idpf_tx_entry *txe;
+ uint32_t i, size;
+ uint16_t prev;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ size = sizeof(struct iecm_flex_tx_sched_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->desc_ring)[i] = 0;
+
+ txe = txq->sw_ring;
+ prev = (uint16_t)(txq->sw_nb_desc - 1);
+ for (i = 0; i < txq->sw_nb_desc; i++) {
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ /* Use this as next to clean for split desc queue */
+ txq->last_desc_cleaned = 0;
+ txq->sw_tail = 0;
+ txq->nb_free = txq->nb_tx_desc - 1;
+}
+
+static inline void
+reset_split_tx_complq(struct idpf_tx_queue *cq)
+{
+ uint32_t i, size;
+
+ if (!cq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to complq is NULL");
+ return;
+ }
+
+ size = sizeof(struct iecm_splitq_tx_compl_desc) * cq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)cq->compl_ring)[i] = 0;
+
+ cq->tx_tail = 0;
+ cq->expected_gen_id = 1;
+}
+
+static inline void
+reset_single_tx_queue(struct idpf_tx_queue *txq)
+{
+ struct idpf_tx_entry *txe;
+ uint32_t i, size;
+ uint16_t prev;
+
+ if (!txq) {
+ PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+ return;
+ }
+
+ txe = txq->sw_ring;
+ size = sizeof(struct iecm_base_tx_desc) * txq->nb_tx_desc;
+ for (i = 0; i < size; i++)
+ ((volatile char *)txq->tx_ring)[i] = 0;
+
+ prev = (uint16_t)(txq->nb_tx_desc - 1);
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ txq->tx_ring[i].qw1 =
+ rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DESC_DONE);
+ txe[i].mbuf = NULL;
+ txe[i].last_id = i;
+ txe[prev].next_id = i;
+ prev = i;
+ }
+
+ txq->tx_tail = 0;
+ txq->nb_used = 0;
+
+ txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+ txq->nb_free = txq->nb_tx_desc - 1;
+
+ txq->next_dd = txq->rs_thresh - 1;
+ txq->next_rs = txq->rs_thresh - 1;
+}
+
+static int
+idpf_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *bufq,
+ uint16_t queue_idx, uint16_t rx_free_thresh,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t len;
+
+ bufq->mp = mp;
+ bufq->nb_rx_desc = nb_desc;
+ bufq->rx_free_thresh = rx_free_thresh;
+ bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+ bufq->port_id = dev->data->port_id;
+ bufq->rx_deferred_start = rx_conf->rx_deferred_start;
+ bufq->rx_hdr_len = 0;
+ bufq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ bufq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ bufq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+ bufq->rx_buf_len = len;
+
+ /* Allocate the software ring. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ bufq->sw_ring =
+ rte_zmalloc_socket("idpf rx bufq sw ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ return -ENOMEM;
+ }
+
+ /* Allocate a little more to support bulk allocation. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_splitq_rx_buf_desc),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx_buf_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
+ rte_free(bufq->sw_ring);
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ bufq->rx_ring_phys_addr = mz->iova;
+ bufq->rx_ring = mz->addr;
+
+ bufq->mz = mz;
+ reset_split_rx_bufq(bufq);
+ bufq->q_set = true;
+ bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+ queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+ bufq->ops = &def_rxq_ops;
+
+ /* TODO: allow bulk or vec */
+
+ return 0;
+}
+
+static int
+idpf_rx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_rx_queue *bufq1, *bufq2;
+ const struct rte_memzone *mz;
+ uint16_t rx_free_thresh;
+ uint32_t ring_size;
+ uint16_t qid;
+ uint16_t len;
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid", nb_desc);
+ return -EINVAL;
+ }
+
+ /* Check free threshold */
+ rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+ IDPF_DEFAULT_RX_FREE_THRESH :
+ rx_conf->rx_free_thresh;
+ if (check_rx_thresh(nb_desc, rx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Set up the Rx descriptor queue */
+ rxq = rte_zmalloc_socket("idpf rxq",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+ return -ENOMEM;
+ }
+
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->rx_hdr_len = 0;
+ rxq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = len;
+
+ len = rxq->nb_rx_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx_cpmpl_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX");
+ ret = -ENOMEM;
+ goto free_rxq;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ rxq->rx_ring_phys_addr = mz->iova;
+ rxq->rx_ring = mz->addr;
+
+ rxq->mz = mz;
+ reset_split_rx_descq(rxq);
+ rxq->q_set = true;
+ dev->data->rx_queues[queue_idx] = rxq;
+
+ /* TODO: allow bulk or vec */
+
+ /* setup Rx buffer queue */
+ bufq1 = rte_zmalloc_socket("idpf bufq1",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq1) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 1.");
+ ret = -ENOMEM;
+ goto free_mz;
+ }
+ qid = 2 * queue_idx;
+ ret = idpf_rx_split_bufq_setup(dev, bufq1, qid, rx_free_thresh,
+ nb_desc, socket_id, rx_conf, mp);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+ ret = -EINVAL;
+ goto free_bufq1;
+ }
+ rxq->bufq1 = bufq1;
+
+ bufq2 = rte_zmalloc_socket("idpf bufq2",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!bufq2) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue 2.");
+ rte_free(bufq1->sw_ring);
+ rte_memzone_free(bufq1->mz);
+ ret = -ENOMEM;
+ goto free_bufq1;
+ }
+ qid = 2 * queue_idx + 1;
+ ret = idpf_rx_split_bufq_setup(dev, bufq2, qid, rx_free_thresh,
+ nb_desc, socket_id, rx_conf, mp);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+ rte_free(bufq1->sw_ring);
+ rte_memzone_free(bufq1->mz);
+ ret = -EINVAL;
+ goto free_bufq2;
+ }
+ rxq->bufq2 = bufq2;
+
+ return 0;
+
+free_bufq2:
+ rte_free(bufq2);
+free_bufq1:
+ rte_free(bufq1);
+free_mz:
+ rte_memzone_free(mz);
+free_rxq:
+ rte_free(rxq);
+
+ return ret;
+}
+
+static int
+idpf_rx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_rx_queue *rxq;
+ const struct rte_memzone *mz;
+ uint16_t rx_free_thresh;
+ uint32_t ring_size;
+ uint16_t len;
+
+ PMD_INIT_FUNC_TRACE();
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of receive descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ /* Check free threshold */
+ rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+ IDPF_DEFAULT_RX_FREE_THRESH :
+ rx_conf->rx_free_thresh;
+ if (check_rx_thresh(nb_desc, rx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed */
+ if (dev->data->rx_queues[queue_idx]) {
+ idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+ dev->data->rx_queues[queue_idx] = NULL;
+ }
+
+ /* Set up the Rx descriptor queue */
+ rxq = rte_zmalloc_socket("idpf rxq",
+ sizeof(struct idpf_rx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+ return -ENOMEM;
+ }
+
+ rxq->mp = mp;
+ rxq->nb_rx_desc = nb_desc;
+ rxq->rx_free_thresh = rx_free_thresh;
+ rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+ rxq->port_id = dev->data->port_id;
+ rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+ rxq->rx_hdr_len = 0;
+ rxq->adapter = adapter;
+
+ if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
+ rxq->crc_len = RTE_ETHER_CRC_LEN;
+ else
+ rxq->crc_len = 0;
+
+ len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+ rxq->rx_buf_len = len;
+
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ rxq->sw_ring =
+ rte_zmalloc_socket("idpf rxq sw ring",
+ sizeof(struct rte_mbuf *) * len,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!rxq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+ rte_free(rxq);
+ return -ENOMEM;
+ }
+
+ /* Allocate a little more to support bulk allocation. */
+ len = nb_desc + IDPF_RX_MAX_BURST;
+ ring_size = RTE_ALIGN(len *
+ sizeof(struct virtchnl2_singleq_rx_buf_desc),
+ IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "rx ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for RX buffer queue.");
+ rte_free(rxq->sw_ring);
+ rte_free(rxq);
+ return -ENOMEM;
+ }
+
+ /* Zero all the descriptors in the ring. */
+ memset(mz->addr, 0, ring_size);
+ rxq->rx_ring_phys_addr = mz->iova;
+ rxq->rx_ring = mz->addr;
+
+ rxq->mz = mz;
+ reset_single_rx_queue(rxq);
+ rxq->q_set = true;
+ dev->data->rx_queues[queue_idx] = rxq;
+ rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+ queue_idx * vport->chunks_info.rx_qtail_spacing);
+ rxq->ops = &def_rxq_ops;
+
+ return 0;
+}
+
+int
+idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+ return idpf_rx_single_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, rx_conf, mp);
+ else
+ return idpf_rx_split_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, rx_conf, mp);
+}
+
+static int
+idpf_tx_split_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_tx_queue *txq, *cq;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ PMD_INIT_FUNC_TRACE();
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH;
+ tx_free_thresh = IDPF_DEFAULT_TX_FREE_THRESH;
+ if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("idpf split txq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ txq->nb_tx_desc = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ /* Allocate software ring */
+ txq->sw_nb_desc = 2 * nb_desc;
+ txq->sw_ring =
+ rte_zmalloc_socket("idpf split tx sw ring",
+ sizeof(struct idpf_tx_entry) *
+ txq->sw_nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct iecm_flex_tx_sched_desc) * txq->nb_tx_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "split_tx_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->desc_ring = (struct iecm_flex_tx_sched_desc *)mz->addr;
+
+ txq->mz = mz;
+ reset_split_tx_descq(txq);
+ txq->q_set = true;
+ dev->data->tx_queues[queue_idx] = txq;
+ txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+ queue_idx * vport->chunks_info.tx_qtail_spacing);
+ txq->ops = &def_txq_ops;
+
+ /* Allocate the TX completion queue data structure. */
+ txq->complq = rte_zmalloc_socket("idpf splitq cq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ cq = txq->complq;
+ if (!cq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+ cq->nb_tx_desc = 2 * nb_desc;
+ cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+ cq->port_id = dev->data->port_id;
+ cq->txqs = dev->data->tx_queues;
+ cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+ ring_size = sizeof(struct iecm_splitq_tx_compl_desc) * cq->nb_tx_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "tx_split_compl_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX completion queue");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+ cq->tx_ring_phys_addr = mz->iova;
+ cq->compl_ring = (struct iecm_splitq_tx_compl_desc *)mz->addr;
+ cq->mz = mz;
+ reset_split_tx_complq(cq);
+
+ return 0;
+}
+
+static int
+idpf_tx_single_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct iecm_hw *hw = &adapter->hw;
+ struct idpf_tx_queue *txq;
+ const struct rte_memzone *mz;
+ uint32_t ring_size;
+ uint16_t tx_rs_thresh, tx_free_thresh;
+ uint64_t offloads;
+
+ PMD_INIT_FUNC_TRACE();
+
+ offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+ if (nb_desc % IDPF_ALIGN_RING_DESC != 0 ||
+ nb_desc > IDPF_MAX_RING_DESC ||
+ nb_desc < IDPF_MIN_RING_DESC) {
+ PMD_INIT_LOG(ERR, "Number (%u) of transmit descriptors is invalid",
+ nb_desc);
+ return -EINVAL;
+ }
+
+ tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh) ?
+ tx_conf->tx_rs_thresh : IDPF_DEFAULT_TX_RS_THRESH);
+ tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?
+ tx_conf->tx_free_thresh : IDPF_DEFAULT_TX_FREE_THRESH);
+ if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh))
+ return -EINVAL;
+
+ /* Free memory if needed. */
+ if (dev->data->tx_queues[queue_idx]) {
+ idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+ dev->data->tx_queues[queue_idx] = NULL;
+ }
+
+ /* Allocate the TX queue data structure. */
+ txq = rte_zmalloc_socket("idpf txq",
+ sizeof(struct idpf_tx_queue),
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+ return -ENOMEM;
+ }
+
+ /* TODO: vlan offload */
+
+ txq->nb_tx_desc = nb_desc;
+ txq->rs_thresh = tx_rs_thresh;
+ txq->free_thresh = tx_free_thresh;
+ txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+ txq->port_id = dev->data->port_id;
+ txq->offloads = offloads;
+ txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+ /* Allocate software ring */
+ txq->sw_ring =
+ rte_zmalloc_socket("idpf tx sw ring",
+ sizeof(struct idpf_tx_entry) * nb_desc,
+ RTE_CACHE_LINE_SIZE,
+ socket_id);
+ if (!txq->sw_ring) {
+ PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ /* Allocate TX hardware ring descriptors. */
+ ring_size = sizeof(struct iecm_base_tx_desc) * nb_desc;
+ ring_size = RTE_ALIGN(ring_size, IDPF_DMA_MEM_ALIGN);
+ mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx,
+ ring_size, IDPF_RING_BASE_ALIGN,
+ socket_id);
+ if (!mz) {
+ PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for TX");
+ rte_free(txq->sw_ring);
+ rte_free(txq);
+ return -ENOMEM;
+ }
+
+ txq->tx_ring_phys_addr = mz->iova;
+ txq->tx_ring = (struct iecm_base_tx_desc *)mz->addr;
+
+ txq->mz = mz;
+ reset_single_tx_queue(txq);
+ txq->q_set = true;
+ dev->data->tx_queues[queue_idx] = txq;
+ txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+ queue_idx * vport->chunks_info.tx_qtail_spacing);
+ txq->ops = &def_txq_ops;
+
+ return 0;
+}
+
+int
+idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE)
+ return idpf_tx_single_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, tx_conf);
+ else
+ return idpf_tx_split_queue_setup(dev, queue_idx, nb_desc,
+ socket_id, tx_conf);
+}
+
+static int
+idpf_alloc_single_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ volatile struct virtchnl2_singleq_rx_buf_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &((volatile struct virtchnl2_singleq_rx_buf_desc *)(rxq->rx_ring))[i];
+ rxd->pkt_addr = dma_addr;
+ rxd->hdr_addr = 0;
+#ifndef RTE_LIBRTE_IDPF_16BYTE_RX_DESC
+ rxd->rsvd1 = 0;
+ rxd->rsvd2 = 0;
+#endif
+
+ rxq->sw_ring[i] = mbuf;
+ }
+
+ return 0;
+}
+
+static int
+idpf_alloc_split_rxq_mbufs(struct idpf_rx_queue *rxq)
+{
+ volatile struct virtchnl2_splitq_rx_buf_desc *rxd;
+ struct rte_mbuf *mbuf = NULL;
+ uint64_t dma_addr;
+ uint16_t i;
+
+ for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+ mbuf = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!mbuf)) {
+ PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+ return -ENOMEM;
+ }
+
+ rte_mbuf_refcnt_set(mbuf, 1);
+ mbuf->next = NULL;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->port = rxq->port_id;
+
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+ rxd = &((volatile struct virtchnl2_splitq_rx_buf_desc *)(rxq->rx_ring))[i];
+ rxd->qword0.buf_id = i;
+ rxd->qword0.rsvd0 = 0;
+ rxd->qword0.rsvd1 = 0;
+ rxd->pkt_addr = dma_addr;
+ rxd->hdr_addr = 0;
+ rxd->rsvd2 = 0;
+
+ rxq->sw_ring[i] = mbuf;
+ }
+
+ rxq->nb_rx_hold = 0;
+ rxq->rx_tail = rxq->nb_rx_desc - 1;
+
+ return 0;
+}
+
+int
+idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_rx_queue *rxq;
+ int err;
+
+ if (rx_queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+
+ if (!rxq->bufq1) {
+ /* Single queue */
+ err = idpf_alloc_single_rxq_mbufs(rxq);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ IECM_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+ } else {
+ /* Split queue */
+ err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+ return err;
+ }
+ err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+ return err;
+ }
+
+ rte_wmb();
+
+ /* Init the RX tail register. */
+ IECM_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->nb_rx_desc - 1);
+ IECM_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->nb_rx_desc - 1);
+ }
+
+ return err;
+}
+
+int
+idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int err = 0;
+
+ PMD_DRV_FUNC_TRACE();
+
+ err = idpf_rx_queue_init(dev, rx_queue_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+ rx_queue_id);
+ return err;
+ }
+
+ /* Ready to switch the queue on */
+ err = idpf_switch_queue(vport, rx_queue_id, true, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+ rx_queue_id);
+ else
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
+int
+idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_tx_queue *txq;
+
+ if (tx_queue_id >= dev->data->nb_tx_queues)
+ return -EINVAL;
+
+ txq = dev->data->tx_queues[tx_queue_id];
+
+ /* Init the TX tail register. */
+ IECM_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+ return 0;
+}
+
+int
+idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ int err = 0;
+
+ PMD_DRV_FUNC_TRACE();
+
+ err = idpf_tx_queue_init(dev, tx_queue_id);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+ tx_queue_id);
+ return err;
+ }
+
+ /* Ready to switch the queue on */
+ err = idpf_switch_queue(vport, tx_queue_id, false, true);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+ tx_queue_id);
+ else
+ dev->data->tx_queue_state[tx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
+
+ return err;
+}
+
+int
+idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ int err;
+
+ PMD_DRV_FUNC_TRACE();
+
+ if (rx_queue_id >= dev->data->nb_rx_queues)
+ return -EINVAL;
+
+ err = idpf_switch_queue(vport, rx_queue_id, true, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+ rx_queue_id);
+ return err;
+ }
+
+ rxq = dev->data->rx_queues[rx_queue_id];
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ rxq->ops->release_mbufs(rxq);
+ reset_single_rx_queue(rxq);
+ } else {
+ rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+ rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+ reset_split_rx_queue(rxq);
+ }
+ dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+int
+idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_tx_queue *txq;
+ int err;
+
+ PMD_DRV_FUNC_TRACE();
+
+ if (tx_queue_id >= dev->data->nb_tx_queues)
+ return -EINVAL;
+
+ err = idpf_switch_queue(vport, tx_queue_id, false, false);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+ tx_queue_id);
+ return err;
+ }
+
+ txq = dev->data->tx_queues[tx_queue_id];
+ txq->ops->release_mbufs(txq);
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ reset_single_tx_queue(txq);
+ } else {
+ reset_split_tx_descq(txq);
+ reset_split_tx_complq(txq->complq);
+ }
+ dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+ return 0;
+}
+
+void
+idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+ idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
+void
+idpf_stop_queues(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct idpf_rx_queue *rxq;
+ struct idpf_tx_queue *txq;
+ int ret, i;
+
+ /* Stop All queues */
+ ret = idpf_ena_dis_queues(vport, false);
+ if (ret)
+ PMD_DRV_LOG(WARNING, "Failed to stop queues");
+
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ if (!rxq)
+ continue;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ rxq->ops->release_mbufs(rxq);
+ reset_single_rx_queue(rxq);
+ } else {
+ rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+ rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+ reset_split_rx_queue(rxq);
+ }
+ dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ if (!txq)
+ continue;
+ txq->ops->release_mbufs(txq);
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ reset_split_tx_descq(txq);
+ reset_split_tx_complq(txq->complq);
+ } else {
+ reset_single_tx_queue(txq);
+ }
+ dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+ }
+}
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
new file mode 100644
index 0000000000..705f706890
--- /dev/null
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _IDPF_RXTX_H_
+#define _IDPF_RXTX_H_
+
+#include "base/iecm_osdep.h"
+#include "base/iecm_type.h"
+#include "base/iecm_devids.h"
+#include "base/iecm_lan_txrx.h"
+#include "base/iecm_lan_pf_regs.h"
+#include "base/virtchnl.h"
+#include "base/virtchnl2.h"
+
+/* The queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define IDPF_ALIGN_RING_DESC 32
+#define IDPF_MIN_RING_DESC 32
+#define IDPF_MAX_RING_DESC 4096
+#define IDPF_DMA_MEM_ALIGN 4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define IDPF_RING_BASE_ALIGN 128
+
+/* used for Rx Bulk Allocate */
+#define IDPF_RX_MAX_BURST 32
+
+#define IDPF_DEFAULT_RX_FREE_THRESH 32
+
+
+#define IDPF_DEFAULT_TX_RS_THRESH 128
+#define IDPF_DEFAULT_TX_FREE_THRESH 128
+
+#define IDPF_MIN_TSO_MSS 256
+#define IDPF_MAX_TSO_MSS 9668
+#define IDPF_TSO_MAX_SEG UINT8_MAX
+#define IDPF_TX_MAX_MTU_SEG 8
+
+struct idpf_rx_queue {
+ struct idpf_adapter *adapter; /* the adapter this queue belongs to */
+ struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
+ const struct rte_memzone *mz; /* memzone for Rx ring */
+ volatile void *rx_ring;
+ struct rte_mbuf **sw_ring; /* address of SW ring */
+ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
+
+ uint16_t nb_rx_desc; /* ring length */
+ uint16_t rx_tail; /* current value of tail */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
+
+ /* for rx bulk */
+ uint16_t rx_nb_avail; /* number of staged packets ready */
+ uint16_t rx_next_avail; /* index of next staged packets */
+ uint16_t rx_free_trigger; /* triggers rx buffer allocation */
+ struct rte_mbuf *rx_stage[IDPF_RX_MAX_BURST * 2]; /* store mbuf */
+
+ uint16_t port_id; /* device port ID */
+ uint16_t queue_id; /* Rx queue index */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint8_t rxdid;
+
+ bool q_set; /* if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+ const struct idpf_rxq_ops *ops;
+
+ /* only valid for split queue mode */
+ uint8_t expected_gen_id;
+ struct idpf_rx_queue *bufq1;
+ struct idpf_rx_queue *bufq2;
+};
+
+struct idpf_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct idpf_tx_queue {
+ const struct rte_memzone *mz; /* memzone for Tx ring */
+ volatile struct iecm_base_tx_desc *tx_ring; /* Tx ring virtual address */
+ volatile union {
+ struct iecm_flex_tx_sched_desc *desc_ring;
+ struct iecm_splitq_tx_compl_desc *compl_ring;
+ };
+ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ struct idpf_tx_entry *sw_ring; /* address array of SW ring */
+
+ uint16_t nb_tx_desc; /* ring length */
+ uint16_t tx_tail; /* current value of tail */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ /* number of used desc since RS bit set */
+ uint16_t nb_used;
+ uint16_t nb_free;
+ uint16_t last_desc_cleaned; /* last descriptor that has been cleaned */
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint64_t offloads;
+ uint16_t next_dd; /* next desc to check the DD bit, for VPMD */
+ uint16_t next_rs; /* next desc to set the RS bit, for VPMD */
+
+ bool q_set; /* if tx queue has been configured */
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ const struct idpf_txq_ops *ops;
+#define IDPF_TX_FLAGS_VLAN_TAG_LOC_L2TAG1 BIT(0)
+#define IDPF_TX_FLAGS_VLAN_TAG_LOC_L2TAG2 BIT(1)
+ uint8_t vlan_flag;
+
+ /* only valid for split queue mode */
+ uint16_t sw_nb_desc;
+ uint16_t sw_tail;
+ void **txqs;
+ uint32_t tx_start_qid;
+ uint8_t expected_gen_id;
+ struct idpf_tx_queue *complq;
+};
+
+/* Offload features */
+union idpf_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ /* uint64_t unused : 24; */
+ };
+};
+
+struct idpf_rxq_ops {
+ void (*release_mbufs)(struct idpf_rx_queue *rxq);
+};
+
+struct idpf_txq_ops {
+ void (*release_mbufs)(struct idpf_tx_queue *txq);
+};
+
+int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp);
+int idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void idpf_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+
+int idpf_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+ uint16_t nb_desc, unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf);
+int idpf_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+
+void idpf_stop_queues(struct rte_eth_dev *dev);
+
+#endif /* _IDPF_RXTX_H_ */
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 6dcb62148a..31b40af270 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -21,6 +21,7 @@
#include <rte_dev.h>
#include "idpf_ethdev.h"
+#include "idpf_rxtx.h"
#include "base/iecm_prototype.h"
@@ -450,6 +451,346 @@ idpf_destroy_vport(struct idpf_vport *vport)
return err;
}
+#define IDPF_RX_BUF_STRIDE 64
+int
+idpf_config_rxqs(struct idpf_vport *vport)
+{
+ struct idpf_rx_queue **rxq =
+ (struct idpf_rx_queue **)vport->dev_data->rx_queues;
+ struct virtchnl2_config_rx_queues *vc_rxqs = NULL;
+ struct virtchnl2_rxq_info *rxq_info;
+ struct idpf_cmd_info args;
+ uint16_t total_qs, num_qs;
+ int size, err, i, j;
+ int k = 0;
+
+ total_qs = vport->num_rx_q + vport->num_rx_bufq;
+ while (total_qs) {
+ if (total_qs > adapter->max_rxq_per_msg) {
+ num_qs = adapter->max_rxq_per_msg;
+ total_qs -= adapter->max_rxq_per_msg;
+ } else {
+ num_qs = total_qs;
+ total_qs = 0;
+ }
+
+ size = sizeof(*vc_rxqs) + (num_qs - 1) *
+ sizeof(struct virtchnl2_rxq_info);
+ vc_rxqs = rte_zmalloc("cfg_rxqs", size, 0);
+ if (vc_rxqs == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_rx_queues");
+ err = -ENOMEM;
+ break;
+ }
+ vc_rxqs->vport_id = vport->vport_id;
+ vc_rxqs->num_qinfo = num_qs;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ for (i = 0; i < num_qs; i++, k++) {
+ rxq_info = &vc_rxqs->qinfo[i];
+ rxq_info->dma_ring_addr = rxq[k]->rx_ring_phys_addr;
+ rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+ rxq_info->queue_id = rxq[k]->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+ rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
+ rxq_info->max_pkt_size = vport->max_pkt_len;
+
+ rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+ rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+ rxq_info->ring_len = rxq[k]->nb_rx_desc;
+ }
+ } else {
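+ /* Split queue model: each Rx queue is described together with
+ * its two buffer queues, three qinfo entries per group.
+ */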
+ for (i = 0; i < num_qs / 3; i++, k++) {
+ /* Rx queue */
+ rxq_info = &vc_rxqs->qinfo[i * 3];
+ rxq_info->dma_ring_addr =
+ rxq[k]->rx_ring_phys_addr;
+ rxq_info->type = VIRTCHNL2_QUEUE_TYPE_RX;
+ rxq_info->queue_id = rxq[k]->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ rxq_info->data_buffer_size = rxq[k]->rx_buf_len;
+ rxq_info->max_pkt_size = vport->max_pkt_len;
+
+ rxq_info->desc_ids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ rxq_info->qflags |= VIRTCHNL2_RX_DESC_SIZE_32BYTE;
+
+ rxq_info->ring_len = rxq[k]->nb_rx_desc;
+ rxq_info->rx_bufq1_id = rxq[k]->bufq1->queue_id;
+ rxq_info->rx_bufq2_id = rxq[k]->bufq2->queue_id;
+ rxq_info->rx_buffer_low_watermark = 64;
+
+ /* Buffer queue */
+ for (j = 1; j <= IDPF_RX_BUFQ_PER_GRP; j++) {
+ struct idpf_rx_queue *bufq = j == 1 ?
+ rxq[k]->bufq1 : rxq[k]->bufq2;
+ rxq_info = &vc_rxqs->qinfo[i * 3 + j];
+ rxq_info->dma_ring_addr =
+ bufq->rx_ring_phys_addr;
+ rxq_info->type =
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ rxq_info->queue_id = bufq->queue_id;
+ rxq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ rxq_info->data_buffer_size = bufq->rx_buf_len;
+ rxq_info->desc_ids =
+ VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ rxq_info->ring_len = bufq->nb_rx_desc;
+
+ rxq_info->buffer_notif_stride =
+ IDPF_RX_BUF_STRIDE;
+ rxq_info->rx_buffer_low_watermark = 64;
+ }
+ }
+ }
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CONFIG_RX_QUEUES;
+ args.in_args = (uint8_t *)vc_rxqs;
+ args.in_args_size = size;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ rte_free(vc_rxqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_RX_QUEUES");
+ break;
+ }
+ }
+
+ return err;
+}
+
+int
+idpf_config_txqs(struct idpf_vport *vport)
+{
+ struct idpf_tx_queue **txq =
+ (struct idpf_tx_queue **)vport->dev_data->tx_queues;
+ struct virtchnl2_config_tx_queues *vc_txqs = NULL;
+ struct virtchnl2_txq_info *txq_info;
+ struct idpf_cmd_info args;
+ uint16_t total_qs, num_qs;
+ int size, err, i;
+ int k = 0;
+
+ total_qs = vport->num_tx_q + vport->num_tx_complq;
+ while (total_qs) {
+ if (total_qs > adapter->max_txq_per_msg) {
+ num_qs = adapter->max_txq_per_msg;
+ total_qs -= adapter->max_txq_per_msg;
+ } else {
+ num_qs = total_qs;
+ total_qs = 0;
+ }
+ size = sizeof(*vc_txqs) + (num_qs - 1) *
+ sizeof(struct virtchnl2_txq_info);
+ vc_txqs = rte_zmalloc("cfg_txqs", size, 0);
+ if (vc_txqs == NULL) {
+ PMD_DRV_LOG(ERR, "Failed to allocate virtchnl2_config_tx_queues");
+ err = -ENOMEM;
+ break;
+ }
+ vc_txqs->vport_id = vport->vport_id;
+ vc_txqs->num_qinfo = num_qs;
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ for (i = 0; i < num_qs; i++, k++) {
+ txq_info = &vc_txqs->qinfo[i];
+ txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+ txq_info->queue_id = txq[k]->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SINGLE;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+ txq_info->ring_len = txq[k]->nb_tx_desc;
+ }
+ } else {
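+ /* Split queue model: each Tx queue is described together with
+ * its completion queue, two qinfo entries per group.
+ */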
+ for (i = 0; i < num_qs / 2; i++, k++) {
+ /* txq info */
+ txq_info = &vc_txqs->qinfo[2 * i];
+ txq_info->dma_ring_addr = txq[k]->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX;
+ txq_info->queue_id = txq[k]->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ txq_info->ring_len = txq[k]->nb_tx_desc;
+ txq_info->tx_compl_queue_id =
+ txq[k]->complq->queue_id;
+ txq_info->relative_queue_id = txq_info->queue_id;
+
+ /* tx completion queue info */
+ txq_info = &vc_txqs->qinfo[2 * i + 1];
+ txq_info->dma_ring_addr =
+ txq[k]->complq->tx_ring_phys_addr;
+ txq_info->type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ txq_info->queue_id = txq[k]->complq->queue_id;
+ txq_info->model = VIRTCHNL2_QUEUE_MODEL_SPLIT;
+ txq_info->sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ txq_info->ring_len = txq[k]->complq->nb_tx_desc;
+ }
+ }
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_CONFIG_TX_QUEUES;
+ args.in_args = (uint8_t *)vc_txqs;
+ args.in_args_size = size;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ rte_free(vc_txqs);
+ if (err) {
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_CONFIG_TX_QUEUES");
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int
+idpf_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
+ uint32_t type, bool on)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ struct idpf_cmd_info args;
+ int err, len;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (!queue_select)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = 1;
+ queue_select->vport_id = vport->vport_id;
+
+ queue_chunk->type = type;
+ queue_chunk->start_queue_id = qid;
+ queue_chunk->num_queues = 1;
+
+ args.ops = on ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (u8 *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ on ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
+int
+idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
+ bool rx, bool on)
+{
+ uint32_t type;
+ int err, queue_id;
+
+ /* switch txq/rxq */
+ type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
+
+ if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+ queue_id = vport->chunks_info.rx_start_qid + qid;
+ else
+ queue_id = vport->chunks_info.tx_start_qid + qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+
+ /* switch tx completion queue */
+ if (!rx && vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ queue_id = vport->chunks_info.tx_compl_start_qid + qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ }
+
+ /* switch rx buffer queue */
+ if (rx && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ queue_id = vport->chunks_info.rx_buf_start_qid + 2 * qid;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ queue_id++;
+ err = idpf_ena_dis_one_queue(vport, queue_id, type, on);
+ if (err)
+ return err;
+ }
+
+ return err;
+}
+
+#define IDPF_RXTX_QUEUE_CHUNKS_NUM 2
+int idpf_ena_dis_queues(struct idpf_vport *vport, bool enable)
+{
+ struct virtchnl2_del_ena_dis_queues *queue_select;
+ struct virtchnl2_queue_chunk *queue_chunk;
+ uint32_t type;
+ struct idpf_cmd_info args;
+ uint16_t num_chunks;
+ int err, len;
+
+ num_chunks = IDPF_RXTX_QUEUE_CHUNKS_NUM;
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ num_chunks++;
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ num_chunks++;
+
+ len = sizeof(struct virtchnl2_del_ena_dis_queues) +
+ sizeof(struct virtchnl2_queue_chunk) * (num_chunks - 1);
+ queue_select = rte_zmalloc("queue_select", len, 0);
+ if (queue_select == NULL)
+ return -ENOMEM;
+
+ queue_chunk = queue_select->chunks.chunks;
+ queue_select->chunks.num_chunks = num_chunks;
+ queue_select->vport_id = vport->vport_id;
+
+ type = VIRTCHNL2_QUEUE_TYPE_RX;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id = vport->chunks_info.rx_start_qid;
+ queue_chunk[type].num_queues = vport->num_rx_q;
+
+ type = VIRTCHNL2_QUEUE_TYPE_TX;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id = vport->chunks_info.tx_start_qid;
+ queue_chunk[type].num_queues = vport->num_tx_q;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id =
+ vport->chunks_info.rx_buf_start_qid;
+ queue_chunk[type].num_queues = vport->num_rx_bufq;
+ }
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ queue_chunk[type].type = type;
+ queue_chunk[type].start_queue_id =
+ vport->chunks_info.tx_compl_start_qid;
+ queue_chunk[type].num_queues = vport->num_tx_complq;
+ }
+
+ args.ops = enable ? VIRTCHNL2_OP_ENABLE_QUEUES :
+ VIRTCHNL2_OP_DISABLE_QUEUES;
+ args.in_args = (u8 *)queue_select;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_%s_QUEUES",
+ enable ? "ENABLE" : "DISABLE");
+
+ rte_free(queue_select);
+ return err;
+}
+
int
idpf_ena_dis_vport(struct idpf_vport *vport, bool enable)
{
diff --git a/drivers/net/idpf/meson.build b/drivers/net/idpf/meson.build
index 262a7aa8c7..9bda251ead 100644
--- a/drivers/net/idpf/meson.build
+++ b/drivers/net/idpf/meson.build
@@ -12,6 +12,7 @@ objs = [base_objs]
sources = files(
'idpf_ethdev.c',
+ 'idpf_rxtx.c',
'idpf_vchnl.c',
)
--
2.25.1
* [RFC v3 05/11] net/idpf: support getting device information
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (3 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 04/11] net/idpf: support queue ops Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 06/11] net/idpf: support packet type getting Junfeng Guo
` (5 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops dev_infos_get.
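For reference, a minimal application-side sketch of how these limits are queried through the generic ethdev API once this ops is hooked up; the port id and the printed fields are illustrative assumptions only:

#include <stdio.h>
#include <rte_ethdev.h>

static void
show_port_limits(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	int ret;

	/* Dispatches to the PMD's dev_infos_get callback. */
	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0) {
		printf("port %u: rte_eth_dev_info_get failed: %d\n",
		       port_id, ret);
		return;
	}

	printf("port %u: max_rx_queues=%u max_tx_queues=%u max_rx_pktlen=%u\n",
	       port_id, dev_info.max_rx_queues, dev_info.max_tx_queues,
	       dev_info.max_rx_pktlen);
}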
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 69 ++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index b48a722b80..02e0760fc4 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -28,6 +28,8 @@ static int idpf_dev_configure(struct rte_eth_dev *dev);
static int idpf_dev_start(struct rte_eth_dev *dev);
static int idpf_dev_stop(struct rte_eth_dev *dev);
static int idpf_dev_close(struct rte_eth_dev *dev);
+static int idpf_dev_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);
static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_configure = idpf_dev_configure,
@@ -42,8 +44,75 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.rx_queue_release = idpf_dev_rx_queue_release,
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release = idpf_dev_tx_queue_release,
+ .dev_infos_get = idpf_dev_info_get,
};
+static int
+idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_queues = adapter->caps->max_rx_q;
+ dev_info->max_tx_queues = adapter->caps->max_tx_q;
+ dev_info->min_rx_bufsize = IDPF_MIN_BUF_SIZE;
+ dev_info->max_rx_pktlen = IDPF_MAX_FRAME_SIZE;
+
+ dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
+ dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+ dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
+ dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
+ dev_info->rx_offload_capa =
+ RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+ RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+ RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_RX_OFFLOAD_SCATTER |
+ RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+ RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+ dev_info->tx_offload_capa =
+ RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+ RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+ RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+ RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+ RTE_ETH_TX_OFFLOAD_TCP_TSO |
+ RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+ RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ dev_info->default_rxconf = (struct rte_eth_rxconf) {
+ .rx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+ .rx_drop_en = 0,
+ .offloads = 0,
+ };
+
+ dev_info->default_txconf = (struct rte_eth_txconf) {
+ .tx_free_thresh = IDPF_DEFAULT_RX_FREE_THRESH,
+ .tx_rs_thresh = IDPF_DEFAULT_TX_RS_THRESH,
+ .offloads = 0,
+ };
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = IDPF_MAX_RING_DESC,
+ .nb_min = IDPF_MIN_RING_DESC,
+ .nb_align = IDPF_ALIGN_RING_DESC,
+ };
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = IDPF_MAX_RING_DESC,
+ .nb_min = IDPF_MIN_RING_DESC,
+ .nb_align = IDPF_ALIGN_RING_DESC,
+ };
+
+ return 0;
+}
static int
idpf_init_vport_req_info(struct rte_eth_dev *dev)
--
2.25.1
* [RFC v3 06/11] net/idpf: support packet type getting
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (4 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 05/11] net/idpf: support getting device information Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 07/11] net/idpf: support link update Junfeng Guo
` (4 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops dev_supported_ptypes_get.
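As an illustration only (not part of the patch), an application reads this list through rte_eth_dev_get_supported_ptypes(); the port id and buffer size below are assumptions:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

static void
show_ptypes(uint16_t port_id)
{
	uint32_t ptypes[32];
	int i, num;

	/* Dispatches to dev_supported_ptypes_get in the PMD. */
	num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
					       ptypes, RTE_DIM(ptypes));
	if (num > (int)RTE_DIM(ptypes))
		num = RTE_DIM(ptypes);
	for (i = 0; i < num; i++)
		printf("port %u supports ptype 0x%08x\n", port_id, ptypes[i]);
}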
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 3 ++
drivers/net/idpf/idpf_rxtx.c | 51 ++++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 3 ++
3 files changed, 57 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 02e0760fc4..95d2fd3968 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -32,6 +32,7 @@ static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
static const struct eth_dev_ops idpf_eth_dev_ops = {
+ .dev_supported_ptypes_get = idpf_dev_supported_ptypes_get,
.dev_configure = idpf_dev_configure,
.dev_start = idpf_dev_start,
.dev_stop = idpf_dev_stop,
@@ -511,6 +512,8 @@ idpf_adapter_init(struct rte_eth_dev *dev)
if (adapter->initialized)
return 0;
+ idpf_set_default_ptype_table(dev);
+
hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
hw->hw_addr_len = pci_dev->mem_resource[0].len;
hw->back = adapter;
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 770ed52281..6b436141c8 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -8,6 +8,57 @@
#include "idpf_ethdev.h"
#include "idpf_rxtx.h"
+const uint32_t *
+idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+ static const uint32_t ptypes[] = {
+ RTE_PTYPE_L2_ETHER,
+ RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+ RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+ RTE_PTYPE_L4_FRAG,
+ RTE_PTYPE_L4_NONFRAG,
+ RTE_PTYPE_L4_UDP,
+ RTE_PTYPE_L4_TCP,
+ RTE_PTYPE_L4_SCTP,
+ RTE_PTYPE_L4_ICMP,
+ RTE_PTYPE_UNKNOWN
+ };
+
+ return ptypes;
+}
+
+static inline uint32_t
+idpf_get_default_pkt_type(uint16_t ptype)
+{
+ static const uint32_t type_table[IDPF_MAX_PKT_TYPE]
+ __rte_cache_aligned = {
+ [1] = RTE_PTYPE_L2_ETHER,
+ [22] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_FRAG,
+ [23] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4,
+ [24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_UDP,
+ [26] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_TCP,
+ [27] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_SCTP,
+ [28] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L4_ICMP,
+ [88] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_FRAG,
+ [89] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6,
+ [90] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_UDP,
+ [92] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_TCP,
+ [93] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_SCTP,
+ [94] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6 | RTE_PTYPE_L4_ICMP,
+ };
+
+ return type_table[ptype];
+}
+
+void __rte_cold
+idpf_set_default_ptype_table(struct rte_eth_dev *dev __rte_unused)
+{
+ int i;
+
+ for (i = 0; i < IDPF_MAX_PKT_TYPE; i++)
+ adapter->ptype_tbl[i] = idpf_get_default_pkt_type(i);
+}
+
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
{
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 705f706890..21b6d8cb84 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -164,4 +164,7 @@ void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
void idpf_stop_queues(struct rte_eth_dev *dev);
+void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
+const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
+
#endif /* _IDPF_RXTX_H_ */
--
2.25.1
* [RFC v3 07/11] net/idpf: support link update
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (5 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 06/11] net/idpf: support packet type getting Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 08/11] net/idpf: support basic Rx/Tx Junfeng Guo
` (3 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add ops link_update.
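For context, a hedged usage sketch: the state filled in by this callback is read by applications via rte_eth_link_get_nowait(); the port id is an assumption:

#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void
show_link(uint16_t port_id)
{
	struct rte_eth_link link;

	memset(&link, 0, sizeof(link));
	/* Non-blocking variant; ends up in the PMD's link_update ops. */
	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u link is %s, speed %u Mbps\n", port_id,
		       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
		       link.link_speed);
}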
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 22 ++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 2 ++
2 files changed, 24 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 95d2fd3968..97540f382a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -31,6 +31,27 @@ static int idpf_dev_close(struct rte_eth_dev *dev);
static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+int
+idpf_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+ struct rte_eth_link new_link;
+
+ memset(&new_link, 0, sizeof(new_link));
+
+ new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+
+ new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+ RTE_ETH_LINK_DOWN;
+ new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+ RTE_ETH_LINK_SPEED_FIXED);
+
+ return rte_eth_linkstatus_set(dev, &new_link);
+}
+
static const struct eth_dev_ops idpf_eth_dev_ops = {
.dev_supported_ptypes_get = idpf_dev_supported_ptypes_get,
.dev_configure = idpf_dev_configure,
@@ -46,6 +67,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_setup = idpf_tx_queue_setup,
.tx_queue_release = idpf_dev_tx_queue_release,
.dev_infos_get = idpf_dev_info_get,
+ .link_update = idpf_dev_link_update,
};
static int
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 80eba09184..38c5486ca6 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -190,6 +190,8 @@ _atomic_set_cmd(struct idpf_adapter *adapter, enum virtchnl_ops ops)
return !ret;
}
+int idpf_dev_link_update(struct rte_eth_dev *dev,
+ __rte_unused int wait_to_complete);
void idpf_handle_virtchnl_msg(struct rte_eth_dev *dev);
int idpf_check_api_version(struct idpf_adapter *adapter);
int idpf_get_caps(struct idpf_adapter *adapter);
--
2.25.1
* [RFC v3 08/11] net/idpf: support basic Rx/Tx
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (6 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 07/11] net/idpf: support link update Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 09/11] net/idpf: support RSS Junfeng Guo
` (2 subsequent siblings)
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Xiaoyun Li
Add basic Rx and Tx support in both split queue mode and single queue mode.
Split queue mode is used by default.
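For context only, the sketch below shows a minimal echo loop on top of the generic burst API that ends up in these new Rx/Tx paths; the port and queue ids are assumptions, and single queue mode would instead be selected with the rx_single=1/tx_single=1 devargs added by this patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Receive a burst on queue 0 of the port and transmit it back out. */
static void
echo_burst(uint16_t port_id)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
	if (nb_rx == 0)
		return;

	nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
	/* Free any packets the driver could not queue for transmission. */
	for (i = nb_tx; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}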
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 93 ++++
drivers/net/idpf/idpf_rxtx.c | 877 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_rxtx.h | 33 ++
3 files changed, 1003 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 97540f382a..ab67f8c2fd 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -14,12 +14,16 @@
#include "idpf_ethdev.h"
#include "idpf_rxtx.h"
+#define IDPF_TX_SINGLE_Q "tx_single"
+#define IDPF_RX_SINGLE_Q "rx_single"
#define VPORT_NUM "vport_num"
struct idpf_adapter *adapter;
uint16_t vport_num = 1;
static const char * const idpf_valid_args[] = {
+ IDPF_TX_SINGLE_Q,
+ IDPF_RX_SINGLE_Q,
VPORT_NUM,
NULL
};
@@ -156,6 +160,30 @@ idpf_init_vport_req_info(struct rte_eth_dev *dev)
(struct virtchnl2_create_vport *)adapter->vport_req_info[idx];
vport_info->vport_type = rte_cpu_to_le_16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+ if (!adapter->txq_model) {
+ vport_info->txq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ vport_info->num_tx_q = dev->data->nb_tx_queues;
+ vport_info->num_tx_complq =
+ dev->data->nb_tx_queues * IDPF_TX_COMPLQ_PER_GRP;
+ } else {
+ vport_info->txq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ vport_info->num_tx_q = dev->data->nb_tx_queues;
+ vport_info->num_tx_complq = 0;
+ }
+ if (!adapter->rxq_model) {
+ vport_info->rxq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ vport_info->num_rx_q = dev->data->nb_rx_queues;
+ vport_info->num_rx_bufq =
+ dev->data->nb_rx_queues * IDPF_RX_BUFQ_PER_GRP;
+ } else {
+ vport_info->rxq_model =
+ rte_cpu_to_le_16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+ vport_info->num_rx_q = dev->data->nb_rx_queues;
+ vport_info->num_rx_bufq = 0;
+ }
return 0;
}
@@ -436,6 +464,56 @@ idpf_dev_close(struct rte_eth_dev *dev)
return 0;
}
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+ int *i = (int *)args;
+ char *end;
+ int num;
+
+ num = strtoul(value, &end, 10);
+
+ if (num != 0 && num != 1) {
+ PMD_DRV_LOG(WARNING, "invalid value:\"%s\" for key:\"%s\", "
+ "value must be 0 or 1",
+ value, key);
+ return -1;
+ }
+
+ *i = num;
+ return 0;
+}
+
+static int idpf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, idpf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ret = rte_kvargs_process(kvlist, IDPF_TX_SINGLE_Q, &parse_bool,
+ &adapter->txq_model);
+ if (ret)
+ goto bail;
+
+ ret = rte_kvargs_process(kvlist, IDPF_RX_SINGLE_Q, &parse_bool,
+ &adapter->rxq_model);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
static void
idpf_reset_pf(struct iecm_hw *hw)
{
@@ -543,6 +621,12 @@ idpf_adapter_init(struct rte_eth_dev *dev)
hw->device_id = pci_dev->id.device_id;
hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+ ret = idpf_parse_devargs(dev);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
idpf_reset_pf(hw);
ret = idpf_check_pf_reset_done(hw);
if (ret) {
@@ -651,6 +735,15 @@ idpf_dev_init(struct rte_eth_dev *dev, __rte_unused void *init_params)
dev->dev_ops = &idpf_eth_dev_ops;
+ /* for secondary processes, we don't initialise any further as primary
+ * has already done this work.
+ */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ idpf_set_rx_function(dev);
+ idpf_set_tx_function(dev);
+ return ret;
+ }
+
ret = idpf_adapter_init(dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to init adapter.");
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 6b436141c8..d5613d63d6 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -1301,3 +1301,880 @@ idpf_stop_queues(struct rte_eth_dev *dev)
dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
}
}
+
+#define IDPF_RX_ERR0_QW1 \
+ (BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S) | \
+ BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S))
+
+static inline uint64_t
+idpf_splitq_rx_csum_offload(uint8_t err)
+{
+ uint64_t flags = 0;
+
+ if (unlikely(!(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S))))
+ return flags;
+
+ if (likely((err & IDPF_RX_ERR0_QW1) == 0)) {
+ flags |= (RTE_MBUF_F_RX_IP_CKSUM_GOOD |
+ RTE_MBUF_F_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S)))
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S)))
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S)))
+ flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+
+ if (unlikely(err & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S)))
+ flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+ else
+ flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+
+ return flags;
+}
+
+#define IDPF_RX_FLEX_DESC_HASH1_S 0
+#define IDPF_RX_FLEX_DESC_HASH2_S 16
+#define IDPF_RX_FLEX_DESC_HASH3_S 24
+
+static inline uint64_t
+idpf_splitq_rx_rss_offload(struct rte_mbuf *mb,
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+ uint8_t status_err0_qw0;
+ uint64_t flags = 0;
+
+ status_err0_qw0 = rx_desc->status_err0_qw0;
+
+ if (status_err0_qw0 & BIT(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S)) {
+ flags |= RTE_MBUF_F_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_16(rx_desc->hash1) |
+ ((uint32_t)(rx_desc->ff2_mirrid_hash2.hash2) <<
+ IDPF_RX_FLEX_DESC_HASH2_S) |
+ ((uint32_t)(rx_desc->hash3) <<
+ IDPF_RX_FLEX_DESC_HASH3_S);
+ }
+
+ return flags;
+}
+
+static void
+idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
+{
+ volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_ring;
+ volatile struct virtchnl2_splitq_rx_buf_desc *rx_buf_desc;
+ uint16_t nb_refill = rx_bufq->nb_rx_hold;
+ uint16_t nb_desc = rx_bufq->nb_rx_desc;
+ uint16_t next_avail = rx_bufq->rx_tail;
+ struct rte_mbuf *nmb[nb_refill];
+ struct rte_eth_dev *dev;
+ uint64_t dma_addr;
+ uint16_t delta;
+
+ if (nb_refill <= rx_bufq->rx_free_thresh)
+ return;
+
+ if (nb_refill >= nb_desc)
+ nb_refill = nb_desc - 1;
+
+ rx_buf_ring =
+ (volatile struct virtchnl2_splitq_rx_buf_desc *)rx_bufq->rx_ring;
+ delta = nb_desc - next_avail;
+ if (delta < nb_refill) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, delta))) {
+ for (int i = 0; i < delta; i++) {
+ rx_buf_desc = &rx_buf_ring[next_avail + i];
+ rx_bufq->sw_ring[next_avail + i] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->hdr_addr = 0;
+ rx_buf_desc->pkt_addr = dma_addr;
+ }
+ nb_refill -= delta;
+ next_avail = 0;
+ rx_bufq->nb_rx_hold -= delta;
+ } else {
+ dev = &rte_eth_devices[rx_bufq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rx_bufq->port_id, rx_bufq->queue_id);
+ return;
+ }
+ }
+
+ if (nb_desc - next_avail >= nb_refill) {
+ if (likely(!rte_pktmbuf_alloc_bulk(rx_bufq->mp, nmb, nb_refill))) {
+ for (int i = 0; i < nb_refill; i++) {
+ rx_buf_desc = &rx_buf_ring[next_avail + i];
+ rx_bufq->sw_ring[next_avail + i] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->hdr_addr = 0;
+ rx_buf_desc->pkt_addr = dma_addr;
+ }
+ next_avail += nb_refill;
+ rx_bufq->nb_rx_hold -= nb_refill;
+ } else {
+ dev = &rte_eth_devices[rx_bufq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rx_bufq->port_id, rx_bufq->queue_id);
+ }
+ }
+
+ IECM_PCI_REG_WRITE(rx_bufq->qrx_tail, next_avail);
+
+ rx_bufq->rx_tail = next_avail;
+}
+
+uint16_t
+idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc_ring;
+ volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+ uint16_t pktlen_gen_bufq_id;
+ struct idpf_rx_queue *rxq;
+ const uint32_t *ptype_tbl;
+ uint8_t status_err0_qw1;
+ struct rte_mbuf *rxm;
+ uint16_t rx_id_bufq1;
+ uint16_t rx_id_bufq2;
+ uint64_t pkt_flags;
+ uint16_t pkt_len;
+ uint16_t bufq_id;
+ uint16_t gen_id;
+ uint16_t rx_id;
+ uint16_t nb_rx;
+
+ nb_rx = 0;
+ rxq = (struct idpf_rx_queue *)rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_id_bufq1 = rxq->bufq1->rx_next_avail;
+ rx_id_bufq2 = rxq->bufq2->rx_next_avail;
+ rx_desc_ring =
+ (volatile struct virtchnl2_rx_flex_desc_adv_nic_3 *)rxq->rx_ring;
+ ptype_tbl = rxq->adapter->ptype_tbl;
+
+ while (nb_rx < nb_pkts) {
+ rx_desc = &rx_desc_ring[rx_id];
+
+ pktlen_gen_bufq_id =
+ rte_le_to_cpu_16(rx_desc->pktlen_gen_bufq_id);
+ gen_id = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S;
+ if (gen_id != rxq->expected_gen_id)
+ break;
+
+ pkt_len = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S;
+ if (!pkt_len)
+ PMD_RX_LOG(ERR, "Packet length is 0");
+
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc)) {
+ rx_id = 0;
+ rxq->expected_gen_id ^= 1;
+ }
+
+ bufq_id = (pktlen_gen_bufq_id &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S;
+ if (!bufq_id) {
+ rxm = rxq->bufq1->sw_ring[rx_id_bufq1];
+ rx_id_bufq1++;
+ if (unlikely(rx_id_bufq1 == rxq->bufq1->nb_rx_desc))
+ rx_id_bufq1 = 0;
+ rxq->bufq1->nb_rx_hold++;
+ } else {
+ rxm = rxq->bufq2->sw_ring[rx_id_bufq2];
+ rx_id_bufq2++;
+ if (unlikely(rx_id_bufq2 == rxq->bufq2->nb_rx_desc))
+ rx_id_bufq2 = 0;
+ rxq->bufq2->nb_rx_hold++;
+ }
+
+ pkt_len -= rxq->crc_len;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rxm->next = NULL;
+ rxm->nb_segs = 1;
+ rxm->port = rxq->port_id;
+ rxm->ol_flags = 0;
+ rxm->packet_type =
+ ptype_tbl[(rte_le_to_cpu_16(rx_desc->ptype_err_fflags0) &
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M) >>
+ VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S];
+
+ status_err0_qw1 = rx_desc->status_err0_qw1;
+ pkt_flags = idpf_splitq_rx_csum_offload(status_err0_qw1);
+ pkt_flags |= idpf_splitq_rx_rss_offload(rxm, rx_desc);
+ rxm->ol_flags |= pkt_flags;
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+
+ if (nb_rx) {
+ rxq->rx_tail = rx_id;
+ if (rx_id_bufq1 != rxq->bufq1->rx_next_avail)
+ rxq->bufq1->rx_next_avail = rx_id_bufq1;
+ if (rx_id_bufq2 != rxq->bufq2->rx_next_avail)
+ rxq->bufq2->rx_next_avail = rx_id_bufq2;
+
+ idpf_split_rx_bufq_refill(rxq->bufq1);
+ idpf_split_rx_bufq_refill(rxq->bufq2);
+ }
+
+ return nb_rx;
+}
+
+static inline void
+idpf_split_tx_free(struct idpf_tx_queue *cq)
+{
+ volatile struct iecm_splitq_tx_compl_desc *compl_ring = cq->compl_ring;
+ volatile struct iecm_splitq_tx_compl_desc *txd;
+ uint16_t next = cq->tx_tail;
+ struct idpf_tx_entry *txe;
+ struct idpf_tx_queue *txq;
+ uint16_t gen, qid, q_head;
+ uint8_t ctype;
+
+ txd = &compl_ring[next];
+ gen = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_GEN_M) >> IECM_TXD_COMPLQ_GEN_S;
+ if (gen != cq->expected_gen_id)
+ return;
+
+ ctype = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_COMPL_TYPE_M) >> IECM_TXD_COMPLQ_COMPL_TYPE_S;
+ qid = (rte_le_to_cpu_16(txd->qid_comptype_gen) &
+ IECM_TXD_COMPLQ_QID_M) >> IECM_TXD_COMPLQ_QID_S;
+ q_head = rte_le_to_cpu_16(txd->q_head_compl_tag.compl_tag);
+ txq = cq->txqs[qid - cq->tx_start_qid];
+
+ switch (ctype) {
+ case IECM_TXD_COMPLT_RE:
+ if (q_head == 0)
+ txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+ else
+ txq->last_desc_cleaned = q_head - 1;
+ if (unlikely(!(txq->last_desc_cleaned % 32))) {
+ PMD_DRV_LOG(ERR, "unexpected desc (head = %u) completion.",
+ q_head);
+ return;
+ }
+
+ break;
+ case IECM_TXD_COMPLT_RS:
+ txq->nb_free++;
+ txq->nb_used--;
+ txe = &txq->sw_ring[q_head];
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "unknown completion type.");
+ return;
+ }
+
+ if (++next == cq->nb_tx_desc) {
+ next = 0;
+ cq->expected_gen_id ^= 1;
+ }
+
+ cq->tx_tail = next;
+}
+
+/* Check if the context descriptor is needed for TX offloading */
+static inline uint16_t
+idpf_calc_context_desc(uint64_t flags)
+{
+ if (flags & RTE_MBUF_F_TX_TCP_SEG)
+ return 1;
+
+ return 0;
+}
+
+/* set TSO context descriptor
+ */
+static inline void
+idpf_set_splitq_tso_ctx(struct rte_mbuf *mbuf,
+ union idpf_tx_offload tx_offload,
+ volatile union iecm_flex_tx_ctx_desc *ctx_desc)
+{
+ uint16_t cmd_dtype;
+ uint32_t tso_len;
+ uint8_t hdr_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return;
+ }
+
+ hdr_len = tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+ cmd_dtype = IECM_TX_DESC_DTYPE_FLEX_TSO_CTX |
+ IECM_TX_FLEX_CTX_DESC_CMD_TSO;
+ tso_len = mbuf->pkt_len - hdr_len;
+
+ ctx_desc->tso.qw1.cmd_dtype = rte_cpu_to_le_16(cmd_dtype);
+ ctx_desc->tso.qw0.hdr_len = hdr_len;
+ ctx_desc->tso.qw0.mss_rt =
+ rte_cpu_to_le_16((uint16_t)mbuf->tso_segsz &
+ IECM_TXD_FLEX_CTX_MSS_RT_M);
+ ctx_desc->tso.qw0.flex_tlen =
+ rte_cpu_to_le_32(tso_len &
+ IECM_TXD_FLEX_CTX_MSS_RT_M);
+}
+
+uint16_t
+idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ struct idpf_tx_queue *txq = (struct idpf_tx_queue *)tx_queue;
+ volatile struct iecm_flex_tx_sched_desc *txr = txq->desc_ring;
+ volatile struct iecm_flex_tx_sched_desc *txd;
+ struct idpf_tx_entry *sw_ring = txq->sw_ring;
+ union idpf_tx_offload tx_offload = {0};
+ struct idpf_tx_entry *txe, *txn;
+ uint16_t nb_used, tx_id, sw_id;
+ struct rte_mbuf *tx_pkt;
+ uint16_t nb_to_clean;
+ uint16_t nb_tx = 0;
+ uint64_t ol_flags;
+ uint16_t nb_ctx;
+
+ tx_id = txq->tx_tail;
+ sw_id = txq->sw_tail;
+ txe = &sw_ring[sw_id];
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ tx_pkt = tx_pkts[nb_tx];
+
+ if (txq->nb_free <= txq->free_thresh) {
+ /* TODO: Need to refine
+ * 1. free and clean: better to decide a clean destination instead of
+ * loop counts, and don't free the mbuf immediately when RS is got;
+ * free it at transmit time or according to the clean destination.
+ * For now, just ignore the RE write-back and free the mbuf when RS
+ * is got.
+ * 2. out-of-order write-back is not supported yet; the SW head and
+ * HW head need to be separated.
+ */
+ nb_to_clean = 2 * txq->rs_thresh;
+ while (nb_to_clean--)
+ idpf_split_tx_free(txq->complq);
+ }
+
+ if (txq->nb_free < tx_pkt->nb_segs)
+ break;
+
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = idpf_calc_context_desc(ol_flags);
+ nb_used = tx_pkt->nb_segs + nb_ctx;
+
+ /* context descriptor */
+ if (nb_ctx) {
+ volatile union iecm_flex_tx_ctx_desc *ctx_desc =
+ (volatile union iecm_flex_tx_ctx_desc *)&txr[tx_id];
+
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+ idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+ ctx_desc);
+
+ tx_id++;
+ if (tx_id == txq->nb_tx_desc)
+ tx_id = 0;
+ }
+
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+ txe->mbuf = tx_pkt;
+
+ /* Setup TX descriptor */
+ txd->buf_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
+ txd->qw1.cmd_dtype =
+ rte_cpu_to_le_16(IECM_TX_DESC_DTYPE_FLEX_FLOW_SCHE);
+ txd->qw1.rxr_bufsize = tx_pkt->data_len;
+ txd->qw1.compl_tag = sw_id;
+ tx_id++;
+ if (tx_id == txq->nb_tx_desc)
+ tx_id = 0;
+ sw_id = txe->next_id;
+ txe = txn;
+ tx_pkt = tx_pkt->next;
+ } while (tx_pkt);
+
+ /* fill the last descriptor with End of Packet (EOP) bit */
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_EOP;
+
+ if (unlikely(!(tx_id % 32)))
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_RE;
+ if (ol_flags & IDPF_TX_CKSUM_OFFLOAD_MASK)
+ txd->qw1.cmd_dtype |= IECM_TXD_FLEX_FLOW_CMD_CS_EN;
+ txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+ txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+ }
+
+ /* update the tail pointer if any packets were processed */
+ if (likely(nb_tx)) {
+ IECM_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+ txq->sw_tail = sw_id;
+ }
+
+ return nb_tx;
+}
+
+static inline void
+idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
+ uint16_t rx_id)
+{
+ nb_hold = (uint16_t)(nb_hold + rxq->nb_rx_hold);
+
+ if (nb_hold > rxq->rx_free_thresh) {
+ PMD_RX_LOG(DEBUG,
+ "port_id=%u queue_id=%u rx_tail=%u nb_hold=%u",
+ rxq->port_id, rxq->queue_id, rx_id, nb_hold);
+ rx_id = (uint16_t)((rx_id == 0) ?
+ (rxq->nb_rx_desc - 1) : (rx_id - 1));
+ IECM_PCI_REG_WRITE(rxq->qrx_tail, rx_id);
+ nb_hold = 0;
+ }
+ rxq->nb_rx_hold = nb_hold;
+}
+
+uint16_t
+idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile union virtchnl2_rx_desc *rx_ring;
+ volatile union virtchnl2_rx_desc *rxdp;
+ struct idpf_rx_queue *rxq;
+ const uint32_t *ptype_tbl;
+ uint16_t rx_id, nb_hold;
+ struct rte_eth_dev *dev;
+ uint16_t rx_packet_len;
+ struct rte_mbuf *rxe;
+ struct rte_mbuf *rxm;
+ struct rte_mbuf *nmb;
+ uint16_t rx_status0;
+ uint64_t dma_addr;
+ uint16_t nb_rx;
+
+ nb_rx = 0;
+ nb_hold = 0;
+ rxq = rx_queue;
+ rx_id = rxq->rx_tail;
+ rx_ring = rxq->rx_ring;
+ ptype_tbl = rxq->adapter->ptype_tbl;
+
+ while (nb_rx < nb_pkts) {
+ rxdp = &rx_ring[rx_id];
+ rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+ /* Check the DD bit first */
+ if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+ break;
+
+ rx_packet_len = (rte_cpu_to_le_16(rxdp->flex_nic_wb.pkt_len)) -
+ rxq->crc_len;
+
+ nmb = rte_mbuf_raw_alloc(rxq->mp);
+ if (unlikely(!nmb)) {
+ dev = &rte_eth_devices[rxq->port_id];
+ dev->data->rx_mbuf_alloc_failed++;
+ PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+ "queue_id=%u", rxq->port_id, rxq->queue_id);
+ break;
+ }
+
+ nb_hold++;
+ rxe = rxq->sw_ring[rx_id];
+ rx_id++;
+ if (unlikely(rx_id == rxq->nb_rx_desc))
+ rx_id = 0;
+
+ /* Prefetch next mbuf */
+ rte_prefetch0(rxq->sw_ring[rx_id]);
+
+ /* When next RX descriptor is on a cache line boundary,
+ * prefetch the next 4 RX descriptors and next 8 pointers
+ * to mbufs.
+ */
+ if ((rx_id & 0x3) == 0) {
+ rte_prefetch0(&rx_ring[rx_id]);
+ rte_prefetch0(rxq->sw_ring[rx_id]);
+ }
+ rxm = rxe;
+ dma_addr =
+ rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+ rxdp->read.hdr_addr = 0;
+ rxdp->read.pkt_addr = dma_addr;
+
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = rx_packet_len;
+ rxm->data_len = rx_packet_len;
+ rxm->port = rxq->port_id;
+ rxm->ol_flags = 0;
+ rxm->packet_type =
+ ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxdp->flex_nic_wb.ptype_flex_flags0) &
+ VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+ rx_pkts[nb_rx++] = rxm;
+ }
+ rxq->rx_tail = rx_id;
+
+ idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+ return nb_rx;
+}
+
+static inline int
+idpf_xmit_cleanup(struct idpf_tx_queue *txq)
+{
+ uint16_t last_desc_cleaned = txq->last_desc_cleaned;
+ struct idpf_tx_entry *sw_ring = txq->sw_ring;
+ uint16_t nb_tx_desc = txq->nb_tx_desc;
+ uint16_t desc_to_clean_to;
+ uint16_t nb_tx_to_clean;
+
+ volatile struct iecm_base_tx_desc *txd = txq->tx_ring;
+
+ desc_to_clean_to = (uint16_t)(last_desc_cleaned + txq->rs_thresh);
+ if (desc_to_clean_to >= nb_tx_desc)
+ desc_to_clean_to = (uint16_t)(desc_to_clean_to - nb_tx_desc);
+
+ desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
+ if ((txd[desc_to_clean_to].qw1 &
+ rte_cpu_to_le_64(IECM_TXD_QW1_DTYPE_M)) !=
+ rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DESC_DONE)) {
+ PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
+ "(port=%d queue=%d)", desc_to_clean_to,
+ txq->port_id, txq->queue_id);
+ return -1;
+ }
+
+ if (last_desc_cleaned > desc_to_clean_to)
+ nb_tx_to_clean = (uint16_t)((nb_tx_desc - last_desc_cleaned) +
+ desc_to_clean_to);
+ else
+ nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
+ last_desc_cleaned);
+
+ txd[desc_to_clean_to].qw1 = 0;
+
+ txq->last_desc_cleaned = desc_to_clean_to;
+ txq->nb_free = (uint16_t)(txq->nb_free + nb_tx_to_clean);
+
+ return 0;
+}
+
+/* set TSO context descriptor
+ * support IP -> L4 and IP -> IP -> L4
+ */
+static inline uint64_t
+idpf_set_tso_ctx(struct rte_mbuf *mbuf, union idpf_tx_offload tx_offload)
+{
+ uint64_t ctx_desc = 0;
+ uint32_t cd_cmd, hdr_len, cd_tso_len;
+
+ if (!tx_offload.l4_len) {
+ PMD_TX_LOG(DEBUG, "L4 length set to 0");
+ return ctx_desc;
+ }
+
+ hdr_len = tx_offload.l2_len +
+ tx_offload.l3_len +
+ tx_offload.l4_len;
+
+ cd_cmd = IECM_TX_CTX_DESC_TSO;
+ cd_tso_len = mbuf->pkt_len - hdr_len;
+ ctx_desc |= ((uint64_t)cd_cmd << IECM_TXD_CTX_QW1_CMD_S) |
+ ((uint64_t)cd_tso_len << IECM_TXD_CTX_QW1_TSO_LEN_S) |
+ ((uint64_t)mbuf->tso_segsz << IECM_TXD_CTX_QW1_MSS_S);
+
+ return ctx_desc;
+}
+
+/* Construct the tx flags */
+static inline uint64_t
+idpf_build_ctob(uint32_t td_cmd, uint32_t td_offset, unsigned int size)
+{
+ return rte_cpu_to_le_64(IECM_TX_DESC_DTYPE_DATA |
+ ((uint64_t)td_cmd << IECM_TXD_QW1_CMD_S) |
+ ((uint64_t)td_offset <<
+ IECM_TXD_QW1_OFFSET_S) |
+ ((uint64_t)size <<
+ IECM_TXD_QW1_TX_BUF_SZ_S));
+}
+
+/* TX function */
+uint16_t
+idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ volatile struct iecm_base_tx_desc *txd;
+ volatile struct iecm_base_tx_desc *txr;
+ union idpf_tx_offload tx_offload = {0};
+ struct idpf_tx_entry *txe, *txn;
+ struct idpf_tx_entry *sw_ring;
+ struct idpf_tx_queue *txq;
+ struct rte_mbuf *tx_pkt;
+ struct rte_mbuf *m_seg;
+ uint64_t buf_dma_addr;
+ uint32_t td_offset;
+ uint64_t ol_flags;
+ uint16_t tx_last;
+ uint16_t nb_used;
+ uint16_t nb_ctx;
+ uint32_t td_cmd;
+ uint16_t tx_id;
+ uint16_t nb_tx;
+ uint16_t slen;
+
+ txq = tx_queue;
+ sw_ring = txq->sw_ring;
+ txr = txq->tx_ring;
+ tx_id = txq->tx_tail;
+ txe = &sw_ring[tx_id];
+
+ /* Check if the descriptor ring needs to be cleaned. */
+ if (txq->nb_free < txq->free_thresh)
+ (void)idpf_xmit_cleanup(txq);
+
+ for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+ td_cmd = 0;
+ td_offset = 0;
+
+ tx_pkt = *tx_pkts++;
+ RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
+
+ ol_flags = tx_pkt->ol_flags;
+ tx_offload.l2_len = tx_pkt->l2_len;
+ tx_offload.l3_len = tx_pkt->l3_len;
+ tx_offload.l4_len = tx_pkt->l4_len;
+ tx_offload.tso_segsz = tx_pkt->tso_segsz;
+ /* Calculate the number of context descriptors needed. */
+ nb_ctx = idpf_calc_context_desc(ol_flags);
+
+ /* The number of descriptors that must be allocated for
+ * a packet equals to the number of the segments of that
+ * packet plus 1 context descriptor if needed.
+ */
+ nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+ tx_last = (uint16_t)(tx_id + nb_used - 1);
+
+ /* Circular ring */
+ if (tx_last >= txq->nb_tx_desc)
+ tx_last = (uint16_t)(tx_last - txq->nb_tx_desc);
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u"
+ " tx_first=%u tx_last=%u",
+ txq->port_id, txq->queue_id, tx_id, tx_last);
+
+ if (nb_used > txq->nb_free) {
+ if (idpf_xmit_cleanup(txq)) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ if (unlikely(nb_used > txq->rs_thresh)) {
+ while (nb_used > txq->nb_free) {
+ if (idpf_xmit_cleanup(txq)) {
+ if (nb_tx == 0)
+ return 0;
+ goto end_of_tx;
+ }
+ }
+ }
+ }
+
+ /* According to the datasheet, bit 2 is reserved and must be
+ * set to 1.
+ */
+ td_cmd |= 0x04;
+
+ if (nb_ctx) {
+ /* Setup TX context descriptor if required */
+ volatile union iecm_flex_tx_ctx_desc *ctx_txd =
+ (volatile union iecm_flex_tx_ctx_desc *)
+ &txr[tx_id];
+
+ txn = &sw_ring[txe->next_id];
+ RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
+ if (txe->mbuf) {
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = NULL;
+ }
+
+ /* TSO enabled */
+ if (ol_flags & RTE_MBUF_F_TX_TCP_SEG)
+ idpf_set_splitq_tso_ctx(tx_pkt, tx_offload,
+ ctx_txd);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ }
+
+ m_seg = tx_pkt;
+ do {
+ txd = &txr[tx_id];
+ txn = &sw_ring[txe->next_id];
+
+ if (txe->mbuf)
+ rte_pktmbuf_free_seg(txe->mbuf);
+ txe->mbuf = m_seg;
+
+ /* Setup TX Descriptor */
+ slen = m_seg->data_len;
+ buf_dma_addr = rte_mbuf_data_iova(m_seg);
+ txd->buf_addr = rte_cpu_to_le_64(buf_dma_addr);
+ txd->qw1 = idpf_build_ctob(td_cmd, td_offset, slen);
+
+ txe->last_id = tx_last;
+ tx_id = txe->next_id;
+ txe = txn;
+ m_seg = m_seg->next;
+ } while (m_seg);
+
+ /* The last packet data descriptor needs End Of Packet (EOP) */
+ td_cmd |= IECM_TX_DESC_CMD_EOP;
+ txq->nb_used = (uint16_t)(txq->nb_used + nb_used);
+ txq->nb_free = (uint16_t)(txq->nb_free - nb_used);
+
+ if (txq->nb_used >= txq->rs_thresh) {
+ PMD_TX_LOG(DEBUG, "Setting RS bit on TXD id="
+ "%4u (port=%d queue=%d)",
+ tx_last, txq->port_id, txq->queue_id);
+
+ td_cmd |= IECM_TX_DESC_CMD_RS;
+
+ /* Update txq RS bit counters */
+ txq->nb_used = 0;
+ }
+
+ txd->qw1 |=
+ rte_cpu_to_le_64(((uint64_t)td_cmd) <<
+ IECM_TXD_QW1_CMD_S);
+ }
+
+end_of_tx:
+ rte_wmb();
+
+ PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
+ txq->port_id, txq->queue_id, tx_id, nb_tx);
+
+ IECM_PCI_REG_WRITE(txq->qtx_tail, tx_id);
+ txq->tx_tail = tx_id;
+
+ return nb_tx;
+}
+
+/* TX prep functions */
+uint16_t
+idpf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ int i, ret;
+ uint64_t ol_flags;
+ struct rte_mbuf *m;
+
+ for (i = 0; i < nb_pkts; i++) {
+ m = tx_pkts[i];
+ ol_flags = m->ol_flags;
+
+ /* Check condition for nb_segs > IDPF_TX_MAX_MTU_SEG. */
+ if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+ if (m->nb_segs > IDPF_TX_MAX_MTU_SEG) {
+ rte_errno = EINVAL;
+ return i;
+ }
+ } else if ((m->tso_segsz < IDPF_MIN_TSO_MSS) ||
+ (m->tso_segsz > IDPF_MAX_TSO_MSS)) {
+ /* An MSS outside this range is considered malicious */
+ rte_errno = EINVAL;
+ return i;
+ }
+
+ if (ol_flags & IDPF_TX_OFFLOAD_NOTSUP_MASK) {
+ rte_errno = ENOTSUP;
+ return i;
+ }
+
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(m);
+ if (ret != 0) {
+ rte_errno = -ret;
+ return i;
+ }
+ }
+
+ return i;
+}
+
+void
+idpf_set_rx_function(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+ return;
+ }
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+ return;
+ }
+}
+
+void
+idpf_set_tx_function(struct rte_eth_dev *dev)
+{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_prep_pkts;
+ return;
+ }
+
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+ dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_prep_pkts;
+ return;
+ }
+}
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 21b6d8cb84..d9451d2e2d 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -35,6 +35,25 @@
#define IDPF_TSO_MAX_SEG UINT8_MAX
#define IDPF_TX_MAX_MTU_SEG 8
+#define IDPF_TX_CKSUM_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG)
+
+#define IDPF_TX_OFFLOAD_MASK ( \
+ RTE_MBUF_F_TX_OUTER_IPV6 | \
+ RTE_MBUF_F_TX_OUTER_IPV4 | \
+ RTE_MBUF_F_TX_IPV6 | \
+ RTE_MBUF_F_TX_IPV4 | \
+ RTE_MBUF_F_TX_VLAN | \
+ RTE_MBUF_F_TX_IP_CKSUM | \
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG | \
+ RTE_ETH_TX_OFFLOAD_SECURITY)
+
+#define IDPF_TX_OFFLOAD_NOTSUP_MASK \
+ (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IDPF_TX_OFFLOAD_MASK)
+
struct idpf_rx_queue {
struct idpf_adapter *adapter; /* the adapter this queue belongs to */
struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
@@ -162,8 +181,22 @@ int idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
int idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
void idpf_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+uint16_t idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_singleq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+uint16_t idpf_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
+
void idpf_stop_queues(struct rte_eth_dev *dev);
+void idpf_set_rx_function(struct rte_eth_dev *dev);
+void idpf_set_tx_function(struct rte_eth_dev *dev);
+
void idpf_set_default_ptype_table(struct rte_eth_dev *dev);
const uint32_t *idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev);
--
2.25.1
* [RFC v3 09/11] net/idpf: support RSS
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (7 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 08/11] net/idpf: support basic Rx/Tx Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 10/11] net/idpf: support MTU configuration Junfeng Guo
2022-05-18 8:25 ` [RFC v3 11/11] net/idpf: add CPF device ID for idpf map table Junfeng Guo
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Add RSS support.
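For illustration (not part of the patch), RSS is requested from the application side through the standard ethdev configuration; the chosen hash types below are an assumption:

#include <string.h>
#include <rte_ethdev.h>

/* Configure a port with RSS spreading over nb_rxq queues. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
	/* NULL key: the PMD generates a random key (see idpf_init_rss). */
	port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
	port_conf.rx_adv_conf.rss_conf.rss_hf =
		RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}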
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 106 +++++++++++++++++++++++++++++++++
drivers/net/idpf/idpf_ethdev.h | 18 +++++-
drivers/net/idpf/idpf_vchnl.c | 93 +++++++++++++++++++++++++++++
3 files changed, 216 insertions(+), 1 deletion(-)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index ab67f8c2fd..e46cadf83a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -85,6 +85,7 @@ idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info
dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+ dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
dev_info->max_mac_addrs = IDPF_NUM_MACADDR_MAX;
dev_info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
dev_info->rx_offload_capa =
@@ -292,9 +293,96 @@ idpf_init_vport(struct rte_eth_dev *dev)
return 0;
}
+static int
+idpf_config_rss(struct idpf_vport *vport)
+{
+ int ret;
+
+ ret = idpf_set_rss_key(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+ return ret;
+ }
+
+ ret = idpf_set_rss_lut(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+ return ret;
+ }
+
+ ret = idpf_set_rss_hash(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+ return ret;
+ }
+
+ return ret;
+}
+
+static int
+idpf_init_rss(struct idpf_vport *vport)
+{
+ struct rte_eth_rss_conf *rss_conf;
+ uint16_t i, nb_q, lut_size;
+ int ret = 0;
+
+ rss_conf = &vport->dev_data->dev_conf.rx_adv_conf.rss_conf;
+ nb_q = vport->num_rx_q;
+
+ vport->rss_key = (uint8_t *)rte_zmalloc("rss_key",
+ vport->rss_key_size, 0);
+ if (!vport->rss_key) {
+ PMD_INIT_LOG(ERR, "Failed to allocate RSS key");
+ ret = -ENOMEM;
+ goto err_key;
+ }
+
+ lut_size = vport->rss_lut_size;
+ vport->rss_lut = (uint32_t *)rte_zmalloc("rss_lut",
+ sizeof(uint32_t) * lut_size, 0);
+ if (!vport->rss_lut) {
+ PMD_INIT_LOG(ERR, "Failed to allocate RSS lut");
+ ret = -ENOMEM;
+ goto err_lut;
+ }
+
+ if (!rss_conf->rss_key) {
+ for (i = 0; i < vport->rss_key_size; i++)
+ vport->rss_key[i] = (uint8_t)rte_rand();
+ } else {
+ rte_memcpy(vport->rss_key, rss_conf->rss_key,
+ RTE_MIN(rss_conf->rss_key_len,
+ vport->rss_key_size));
+ }
+
+ for (i = 0; i < lut_size; i++)
+ vport->rss_lut[i] = i % nb_q;
+
+ vport->rss_hf = IECM_DEFAULT_RSS_HASH_EXPANDED;
+
+ ret = idpf_config_rss(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to configure RSS");
+ goto err_cfg;
+ }
+
+ return ret;
+
+err_cfg:
+ rte_free(vport->rss_lut);
+ vport->rss_lut = NULL;
+err_lut:
+ rte_free(vport->rss_key);
+ vport->rss_key = NULL;
+err_key:
+ return ret;
+}
+
static int
idpf_dev_configure(struct rte_eth_dev *dev)
{
+ struct idpf_vport *vport =
+ (struct idpf_vport *)dev->data->dev_private;
int ret = 0;
if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
@@ -322,6 +410,14 @@ idpf_dev_configure(struct rte_eth_dev *dev)
rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
&dev->data->mac_addrs[0]);
+ if (adapter->caps->rss_caps) {
+ ret = idpf_init_rss(vport);
+ if (ret) {
+ PMD_INIT_LOG(ERR, "Failed to init rss");
+ return ret;
+ }
+ }
+
return ret;
}
@@ -461,6 +557,16 @@ idpf_dev_close(struct rte_eth_dev *dev)
idpf_dev_stop(dev);
idpf_destroy_vport(vport);
+ if (vport->rss_lut) {
+ rte_free(vport->rss_lut);
+ vport->rss_lut = NULL;
+ }
+
+ if (vport->rss_key) {
+ rte_free(vport->rss_key);
+ vport->rss_key = NULL;
+ }
+
return 0;
}
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 38c5486ca6..90c0230538 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -43,6 +43,20 @@
#define IDPF_ETH_OVERHEAD \
(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + IDPF_VLAN_TAG_SIZE * 2)
+#define IDPF_RSS_OFFLOAD_ALL ( \
+ RTE_ETH_RSS_IPV4 | \
+ RTE_ETH_RSS_FRAG_IPV4 | \
+ RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+ RTE_ETH_RSS_IPV6 | \
+ RTE_ETH_RSS_FRAG_IPV6 | \
+ RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+ RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+
#ifndef ETH_ADDR_LEN
#define ETH_ADDR_LEN 6
#endif
@@ -197,7 +211,9 @@ int idpf_check_api_version(struct idpf_adapter *adapter);
int idpf_get_caps(struct idpf_adapter *adapter);
int idpf_create_vport(__rte_unused struct rte_eth_dev *dev);
int idpf_destroy_vport(struct idpf_vport *vport);
-
+int idpf_set_rss_key(struct idpf_vport *vport);
+int idpf_set_rss_lut(struct idpf_vport *vport);
+int idpf_set_rss_hash(struct idpf_vport *vport);
int idpf_config_rxqs(struct idpf_vport *vport);
int idpf_config_txqs(struct idpf_vport *vport);
int idpf_switch_queue(struct idpf_vport *vport, uint16_t qid,
diff --git a/drivers/net/idpf/idpf_vchnl.c b/drivers/net/idpf/idpf_vchnl.c
index 31b40af270..7e52d54ccb 100644
--- a/drivers/net/idpf/idpf_vchnl.c
+++ b/drivers/net/idpf/idpf_vchnl.c
@@ -451,6 +451,99 @@ idpf_destroy_vport(struct idpf_vport *vport)
return err;
}
+int
+idpf_set_rss_key(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_key *rss_key;
+ struct idpf_cmd_info args;
+ int len, err;
+
+ len = sizeof(*rss_key) + sizeof(rss_key->key[0]) *
+ (vport->rss_key_size - 1);
+ rss_key = rte_zmalloc("rss_key", len, 0);
+ if (!rss_key)
+ return -ENOMEM;
+
+ rss_key->vport_id = vport->vport_id;
+ rss_key->key_len = vport->rss_key_size;
+ rte_memcpy(rss_key->key, vport->rss_key,
+ sizeof(rss_key->key[0]) * vport->rss_key_size);
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_KEY;
+ args.in_args = (uint8_t *)rss_key;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_KEY");
+
+ rte_free(rss_key);
+ return err;
+}
+
+int
+idpf_set_rss_lut(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_lut *rss_lut;
+ struct idpf_cmd_info args;
+ int len, err;
+
+ len = sizeof(*rss_lut) + sizeof(rss_lut->lut[0]) *
+ (vport->rss_lut_size - 1);
+ rss_lut = rte_zmalloc("rss_lut", len, 0);
+ if (!rss_lut)
+ return -ENOMEM;
+
+ rss_lut->vport_id = vport->vport_id;
+ rss_lut->lut_entries = vport->rss_lut_size;
+ rte_memcpy(rss_lut->lut, vport->rss_lut,
+ sizeof(rss_lut->lut[0]) * vport->rss_lut_size);
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_LUT;
+ args.in_args = (uint8_t *)rss_lut;
+ args.in_args_size = len;
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_SET_RSS_LUT");
+
+ rte_free(rss_lut);
+ return err;
+}
+
+int
+idpf_set_rss_hash(struct idpf_vport *vport)
+{
+ struct virtchnl2_rss_hash rss_hash;
+ struct idpf_cmd_info args;
+ int err;
+
+ memset(&rss_hash, 0, sizeof(rss_hash));
+ rss_hash.ptype_groups = vport->rss_hf;
+ rss_hash.vport_id = vport->vport_id;
+
+ memset(&args, 0, sizeof(args));
+ args.ops = VIRTCHNL2_OP_SET_RSS_HASH;
+ args.in_args = (uint8_t *)&rss_hash;
+ args.in_args_size = sizeof(rss_hash);
+ args.out_buffer = adapter->mbx_resp;
+ args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+ err = idpf_execute_vc_cmd(adapter, &args);
+ if (err)
+ PMD_DRV_LOG(ERR, "Failed to execute command of OP_SET_RSS_HASH");
+
+ return err;
+}
+
#define IDPF_RX_BUF_STRIDE 64
int
idpf_config_rxqs(struct idpf_vport *vport)
--
2.25.1
* [RFC v3 10/11] net/idpf: support MTU configuration
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (8 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 09/11] net/idpf: support RSS Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
2022-05-18 8:25 ` [RFC v3 11/11] net/idpf: add CPF device ID for idpf map table Junfeng Guo
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo
Support ops mtu_set.
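A minimal usage sketch for reference (the port id and MTU value are assumptions); the call has to be made while the port is stopped, which is what the new callback enforces:

#include <stdio.h>
#include <rte_ethdev.h>

static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
	/* Fails with -EBUSY if the port has already been started. */
	int ret = rte_eth_dev_set_mtu(port_id, mtu);

	if (ret != 0)
		printf("port %u: setting MTU %u failed: %d\n",
		       port_id, mtu, ret);
	return ret;
}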
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/idpf_ethdev.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e46cadf83a..9b0bbca4a5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -34,6 +34,7 @@ static int idpf_dev_stop(struct rte_eth_dev *dev);
static int idpf_dev_close(struct rte_eth_dev *dev);
static int idpf_dev_info_get(struct rte_eth_dev *dev,
struct rte_eth_dev_info *dev_info);
+static int idpf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
int
idpf_dev_link_update(struct rte_eth_dev *dev,
@@ -72,6 +73,7 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
.tx_queue_release = idpf_dev_tx_queue_release,
.dev_infos_get = idpf_dev_info_get,
.link_update = idpf_dev_link_update,
+ .mtu_set = idpf_dev_mtu_set,
};
static int
@@ -142,6 +144,18 @@ idpf_dev_info_get(__rte_unused struct rte_eth_dev *dev, struct rte_eth_dev_info
return 0;
}
+static int
+idpf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+ /* MTU setting is forbidden if the port has been started */
+ if (dev->data->dev_started) {
+ PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
static int
idpf_init_vport_req_info(struct rte_eth_dev *dev)
{
--
2.25.1
* [RFC v3 11/11] net/idpf: add CPF device ID for idpf map table
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
` (9 preceding siblings ...)
2022-05-18 8:25 ` [RFC v3 10/11] net/idpf: support MTU configuration Junfeng Guo
@ 2022-05-18 8:25 ` Junfeng Guo
10 siblings, 0 replies; 33+ messages in thread
From: Junfeng Guo @ 2022-05-18 8:25 UTC (permalink / raw)
To: qi.z.zhang, jingjing.wu, beilei.xing; +Cc: dev, junfeng.guo, Wenjun Wu
This patch adds the CPF device ID. The CPF data path is now supported
by the idpf PMD.
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
---
drivers/net/idpf/base/iecm_devids.h | 1 +
drivers/net/idpf/idpf_ethdev.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/drivers/net/idpf/base/iecm_devids.h b/drivers/net/idpf/base/iecm_devids.h
index 839214cb40..aa58982db2 100644
--- a/drivers/net/idpf/base/iecm_devids.h
+++ b/drivers/net/idpf/base/iecm_devids.h
@@ -10,6 +10,7 @@
/* Device IDs */
#define IECM_DEV_ID_PF 0x1452
+#define IECM_DEV_ID_CPF 0x1453
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 9b0bbca4a5..6e205a7b13 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -898,6 +898,7 @@ idpf_dev_uninit(struct rte_eth_dev *dev)
static const struct rte_pci_id pci_id_idpf_map[] = {
{ RTE_PCI_DEVICE(IECM_INTEL_VENDOR_ID, IECM_DEV_ID_PF) },
+ { RTE_PCI_DEVICE(IECM_INTEL_VENDOR_ID, IECM_DEV_ID_CPF) },
{ .vendor_id = 0, /* sentinel */ },
};
--
2.25.1
* Re: [RFC v3 01/11] net/idpf/base: introduce base code
2022-05-18 8:25 ` [RFC v3 01/11] net/idpf/base: introduce base code Junfeng Guo
@ 2022-05-18 15:26 ` Stephen Hemminger
0 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2022-05-18 15:26 UTC (permalink / raw)
To: Junfeng Guo; +Cc: qi.z.zhang, jingjing.wu, beilei.xing, dev
On Wed, 18 May 2022 16:25:21 +0800
Junfeng Guo <junfeng.guo@intel.com> wrote:
> + /* TODO hardcode a mac addr for now */
> + hw->mac.addr[0] = 0x00;
> + hw->mac.addr[1] = 0x00;
> + hw->mac.addr[2] = 0x00;
> + hw->mac.addr[3] = 0x00;
> + hw->mac.addr[4] = 0x03;
> + hw->mac.addr[5] = 0x14;
DPDK has the ability to assign a random Ethernet address; please use that.
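For illustration, the helper being referred to is rte_eth_random_addr() from rte_ether.h; the standalone sketch below is illustrative only and not tied to the idpf base code:

#include <stdio.h>
#include <rte_ether.h>

int
main(void)
{
	struct rte_ether_addr addr;
	char buf[RTE_ETHER_ADDR_FMT_SIZE];

	/* Generates a random, locally administered, unicast MAC address. */
	rte_eth_random_addr(addr.addr_bytes);
	rte_ether_format_addr(buf, sizeof(buf), &addr);
	printf("generated MAC: %s\n", buf);
	return 0;
}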
Thread overview: 33+ messages
2022-05-07 7:07 [RFC 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-07 7:07 ` [RFC 1/9] net/idpf/base: introduce base code Junfeng Guo
2022-05-09 9:11 ` [RFC v2 0/9] add support for idpf PMD in DPDK Junfeng Guo
2022-05-09 9:11 ` [RFC v2 1/9] net/idpf/base: introduce base code Junfeng Guo
2022-05-09 9:11 ` [RFC v2 2/9] net/idpf/base: add OS specific implementation Junfeng Guo
2022-05-09 9:11 ` [RFC v2 3/9] net/idpf: support device initialization Junfeng Guo
2022-05-09 9:11 ` [RFC v2 4/9] net/idpf: support queue ops Junfeng Guo
2022-05-09 9:11 ` [RFC v2 5/9] net/idpf: support getting device information Junfeng Guo
2022-05-09 9:11 ` [RFC v2 6/9] net/idpf: support packet type getting Junfeng Guo
2022-05-09 9:11 ` [RFC v2 7/9] net/idpf: support link update Junfeng Guo
2022-05-09 9:11 ` [RFC v2 8/9] net/idpf: support basic Rx/Tx Junfeng Guo
2022-05-09 9:11 ` [RFC v2 9/9] net/idpf: support RSS Junfeng Guo
2022-05-18 8:25 ` [RFC v3 00/11] add support for idpf PMD in DPDK Junfeng Guo
2022-05-18 8:25 ` [RFC v3 01/11] net/idpf/base: introduce base code Junfeng Guo
2022-05-18 15:26 ` Stephen Hemminger
2022-05-18 8:25 ` [RFC v3 02/11] net/idpf/base: add OS specific implementation Junfeng Guo
2022-05-18 8:25 ` [RFC v3 03/11] net/idpf: support device initialization Junfeng Guo
2022-05-18 8:25 ` [RFC v3 04/11] net/idpf: support queue ops Junfeng Guo
2022-05-18 8:25 ` [RFC v3 05/11] net/idpf: support getting device information Junfeng Guo
2022-05-18 8:25 ` [RFC v3 06/11] net/idpf: support packet type getting Junfeng Guo
2022-05-18 8:25 ` [RFC v3 07/11] net/idpf: support link update Junfeng Guo
2022-05-18 8:25 ` [RFC v3 08/11] net/idpf: support basic Rx/Tx Junfeng Guo
2022-05-18 8:25 ` [RFC v3 09/11] net/idpf: support RSS Junfeng Guo
2022-05-18 8:25 ` [RFC v3 10/11] net/idpf: support MTU configuration Junfeng Guo
2022-05-18 8:25 ` [RFC v3 11/11] net/idpf: add CPF device ID for idpf map table Junfeng Guo
2022-05-07 7:07 ` [RFC 2/9] net/idpf/base: add OS specific implementation Junfeng Guo
2022-05-07 7:07 ` [RFC 3/9] net/idpf: support device initialization Junfeng Guo
2022-05-07 7:07 ` [RFC 4/9] net/idpf: support queue ops Junfeng Guo
2022-05-07 7:07 ` [RFC 5/9] net/idpf: support getting device information Junfeng Guo
2022-05-07 7:07 ` [RFC 6/9] net/idpf: support packet type getting Junfeng Guo
2022-05-07 7:07 ` [RFC 7/9] net/idpf: support link update Junfeng Guo
2022-05-07 7:07 ` [RFC 8/9] net/idpf: support basic Rx/Tx Junfeng Guo
2022-05-07 7:07 ` [RFC 9/9] net/idpf: support RSS Junfeng Guo