DPDK patches and discussions
* [PATCH 00/25] Update IDPF Base Driver
@ 2024-05-28  7:28 Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 01/25] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
                   ` (9 more replies)
  0 siblings, 10 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

This patchset updates the IDPF base driver to the latest shared code snapshot.

Soumyadeep Hore (25):
  common/idpf: added NVME CPF specific code with defines
  common/idpf: updated IDPF VF device ID
  common/idpf: update ADD QUEUE GROUPS offset
  common/idpf: update in PTP message validation
  common/idpf: added FLOW STEER capability and a vport flag
  common/idpf: moved the IDPF HW into API header file
  common/idpf: avoid defensive programming
  common/idpf: move related defines into enums
  common/idpf: add flex array support to virtchnl2 structures
  common/idpf: avoid variable 0-init
  common/idpf: support added for xn transactions
  common/idpf: rename of VIRTCHNL2 CAP INLINE FLOW STEER
  common/idpf: update compiler padding
  common/idpf: avoid compiler padding
  common/idpf: add wmb before tail
  common/idpf: add a new Tx context descriptor structure
  common/idpf: removing redundant implementation
  common/idpf: removing redundant functionality of virtchnl2
  common/idpf: updating common code of latest base driver
  net/cpfl: updating cpfl based on latest base driver
  common/idpf: defining ethernet address length macro
  common/idpf: increasing size of xn index
  common/idpf: redefining idpf vc queue switch
  net/idpf: updating idpf vc queue switch in idpf
  net/cpfl: updating idpf vc queue switch in cpfl

 drivers/common/idpf/base/idpf_common.c        |  382 ---
 drivers/common/idpf/base/idpf_controlq.c      |   94 +-
 drivers/common/idpf/base/idpf_controlq.h      |  110 +-
 drivers/common/idpf/base/idpf_controlq_api.h  |   41 +-
 .../common/idpf/base/idpf_controlq_setup.c    |   16 +-
 drivers/common/idpf/base/idpf_devids.h        |   12 +-
 drivers/common/idpf/base/idpf_lan_txrx.h      |   20 +-
 drivers/common/idpf/base/idpf_osdep.c         |   71 +
 drivers/common/idpf/base/idpf_osdep.h         |   80 +-
 drivers/common/idpf/base/idpf_prototype.h     |   23 -
 drivers/common/idpf/base/idpf_type.h          |   10 +-
 drivers/common/idpf/base/idpf_xn.c            |  439 +++
 drivers/common/idpf/base/idpf_xn.h            |   90 +
 drivers/common/idpf/base/meson.build          |    3 +-
 drivers/common/idpf/base/virtchnl2.h          | 2496 +++++++++--------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  859 ++++--
 drivers/common/idpf/idpf_common_device.h      |    2 +
 drivers/common/idpf/idpf_common_virtchnl.c    |   10 +-
 drivers/common/idpf/idpf_common_virtchnl.h    |    2 +-
 drivers/net/cpfl/cpfl_ethdev.c                |   36 +-
 drivers/net/cpfl/cpfl_rxtx.c                  |    8 +-
 drivers/net/idpf/idpf_rxtx.c                  |    8 +-
 22 files changed, 2746 insertions(+), 2066 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c
 create mode 100644 drivers/common/idpf/base/idpf_osdep.c
 create mode 100644 drivers/common/idpf/base/idpf_xn.c
 create mode 100644 drivers/common/idpf/base/idpf_xn.h

-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH 01/25] common/idpf: added NVME CPF specific code with defines
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-29 12:32   ` Bruce Richardson
  2024-05-28  7:28 ` [PATCH 02/25] common/idpf: updated IDPF VF device ID Soumyadeep Hore
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

The aim of these changes is to remove the NVME dependency on runtime
memory allocations and to use a prepared buffer instead.

The changes do not affect other components.
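As a rough sketch of the pattern this patch introduces (the helper name and struct layout below are illustrative, not the actual driver API): under the NVME CPF build the queue structure is taken from a buffer the caller prepared up front, instead of being allocated by the driver.

```c
#include <stdlib.h>

struct ctlq_info {
	int ring_size;
};

/* Sketch only: with the prepared-buffer path (the NVME_CPF case in the
 * patch), the caller supplies a pre-allocated queue structure and the
 * driver performs no allocation; otherwise the driver calloc()s it. */
struct ctlq_info *ctlq_get(struct ctlq_info **prepared, int use_prepared)
{
	if (use_prepared)
		return *prepared;	/* caller-owned buffer, no allocation */
	return calloc(1, sizeof(struct ctlq_info));	/* driver-owned */
}
```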

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c     | 27 +++++++++++++++++---
 drivers/common/idpf/base/idpf_controlq_api.h |  9 +++++--
 2 files changed, 31 insertions(+), 5 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index a82ca628de..0ba7281a45 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #include "idpf_controlq.h"
@@ -145,8 +145,12 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	    qinfo->buf_size > IDPF_CTLQ_MAX_BUF_LEN)
 		return -EINVAL;
 
+#ifndef NVME_CPF
 	cq = (struct idpf_ctlq_info *)
 	     idpf_calloc(hw, 1, sizeof(struct idpf_ctlq_info));
+#else
+	cq = *cq_out;
+#endif
 	if (!cq)
 		return -ENOMEM;
 
@@ -172,10 +176,15 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	}
 
 	if (status)
+#ifdef NVME_CPF
+		return status;
+#else
 		goto init_free_q;
+#endif
 
 	if (is_rxq) {
 		idpf_ctlq_init_rxq_bufs(cq);
+#ifndef NVME_CPF
 	} else {
 		/* Allocate the array of msg pointers for TX queues */
 		cq->bi.tx_msg = (struct idpf_ctlq_msg **)
@@ -185,6 +194,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			status = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
+#endif
 	}
 
 	idpf_ctlq_setup_regs(cq, qinfo);
@@ -195,6 +205,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 
 	LIST_INSERT_HEAD(&hw->cq_list_head, cq, cq_list);
 
+#ifndef NVME_CPF
 	*cq_out = cq;
 	return status;
 
@@ -205,6 +216,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	idpf_free(hw, cq);
 	cq = NULL;
 
+#endif
 	return status;
 }
 
@@ -232,8 +244,13 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
  * destroyed. This must be called prior to using the individual add/remove
  * APIs.
  */
+#ifdef NVME_CPF
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+                   struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq)
+#else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info)
+#endif
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
 	int ret_code = 0;
@@ -244,6 +261,10 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 	for (i = 0; i < num_q; i++) {
 		struct idpf_ctlq_create_info *qinfo = q_info + i;
 
+#ifdef NVME_CPF
+		cq = *(ctlq + i);
+
+#endif	
 		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
 		if (ret_code)
 			goto init_destroy_qs;
@@ -398,7 +419,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
  * ctlq_msgs and free or reuse the DMA buffers.
  */
 static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
-				struct idpf_ctlq_msg *msg_status[], bool force)
+		                struct idpf_ctlq_msg *msg_status[], bool force)
 {
 	struct idpf_ctlq_desc *desc;
 	u16 i = 0, num_to_clean;
@@ -469,7 +490,7 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
  * ctlq_msgs and free or reuse the DMA buffers.
  */
 int idpf_ctlq_clean_sq_force(struct idpf_ctlq_info *cq, u16 *clean_count,
-			     struct idpf_ctlq_msg *msg_status[])
+		             struct idpf_ctlq_msg *msg_status[])
 {
 	return __idpf_ctlq_clean_sq(cq, clean_count, msg_status, true);
 }
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 38f5d2df3c..bce5187981 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_API_H_
@@ -158,8 +158,13 @@ enum idpf_mbx_opc {
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
  */
+#ifdef NVME_CPF
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+                   struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq);
+#else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
+#endif
 
 /* Allocate and initialize a single control queue, which will be added to the
  * control queue list; returns a handle to the created control queue
@@ -186,7 +191,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 
 /* Reclaims all descriptors on HW write back */
 int idpf_ctlq_clean_sq_force(struct idpf_ctlq_info *cq, u16 *clean_count,
-			     struct idpf_ctlq_msg *msg_status[]);
+		             struct idpf_ctlq_msg *msg_status[]);
 
 /* Reclaims send descriptors on HW write back */
 int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
-- 
2.43.0



* [PATCH 02/25] common/idpf: updated IDPF VF device ID
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 01/25] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 03/25] common/idpf: update ADD QUEUE GROUPS offset Soumyadeep Hore
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

Update the IDPF VF device ID to 0x145C, removing support for the legacy
AVF ID 0x1889.

In accordance with DCR-3788, add a device ID for the S-IOV device.
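A minimal sketch of how these IDs are typically consulted at probe time; the helper function is made up for illustration, but the ID values are the ones this patch defines.

```c
/* IDs taken from this patch's idpf_devids.h changes. */
#define IDPF_INTEL_VENDOR_ID	0x8086
#define IDPF_DEV_ID_PF		0x1452
#define IDPF_DEV_ID_VF		0x145C	/* was 0x1889 for legacy AVF */
#define IDPF_DEV_ID_VF_SIOV	0x0DD5	/* S-IOV, per DCR-3788 */

/* Illustrative helper, not driver code: accept only the supported
 * vendor/device ID pairs. */
static int idpf_id_supported(unsigned int vendor, unsigned int device)
{
	if (vendor != IDPF_INTEL_VENDOR_ID)
		return 0;
	return device == IDPF_DEV_ID_PF ||
	       device == IDPF_DEV_ID_VF ||
	       device == IDPF_DEV_ID_VF_SIOV;
}
```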

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_devids.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_devids.h b/drivers/common/idpf/base/idpf_devids.h
index c47762d5b7..acd235c540 100644
--- a/drivers/common/idpf/base/idpf_devids.h
+++ b/drivers/common/idpf/base/idpf_devids.h
@@ -1,18 +1,20 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_DEVIDS_H_
 #define _IDPF_DEVIDS_H_
 
+#ifndef LINUX_SUPPORT
 /* Vendor ID */
 #define IDPF_INTEL_VENDOR_ID		0x8086
+#endif /* LINUX_SUPPORT */
 
 /* Device IDs */
 #define IDPF_DEV_ID_PF			0x1452
-#define IDPF_DEV_ID_VF			0x1889
-
-
-
+#define IDPF_DEV_ID_VF			0x145C
+#ifdef SIOV_SUPPORT
+#define IDPF_DEV_ID_VF_SIOV		0x0DD5
+#endif /* SIOV_SUPPORT */
 
 #endif /* _IDPF_DEVIDS_H_ */
-- 
2.43.0



* [PATCH 03/25] common/idpf: update ADD QUEUE GROUPS offset
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 01/25] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 02/25] common/idpf: updated IDPF VF device ID Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-29 12:38   ` Bruce Richardson
  2024-05-28  7:28 ` [PATCH 04/25] common/idpf: update in PTP message validation Soumyadeep Hore
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

Some compilers use 64-bit addressing, in which case the compiler detects
the resulting loss of data:

virtchnl2.h(1890,40): warning C4244: '=': conversion from '__int64' to
'__le32', possible loss of data

on line 1890
offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;

Also remove an unnecessary zero initialization.
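The warning arises because a pointer difference is 64 bits wide on 64-bit targets but was being stored in a 32-bit variable. A minimal sketch of the fix (the struct layout here is illustrative, not the real virtchnl2 structure): keep the difference in a 64-bit local so no narrowing conversion occurs.

```c
#include <stdint.h>
#include <stddef.h>

struct queue_groups {
	uint32_t num_queue_groups;	/* illustrative layout only */
	uint32_t pad;
	uint64_t groups[1];
};

/* Sketch of the fix: (u8 *)a - (u8 *)b is a 64-bit ptrdiff_t on LP64 and
 * LLP64 targets, so assign it to a 64-bit local rather than a 32-bit one
 * and warning C4244 (possible loss of data) disappears. */
static uint64_t first_group_offset(const struct queue_groups *g)
{
	return (uint64_t)((const uint8_t *)&g->groups[0] -
			  (const uint8_t *)g);
}
```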

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 3900b784d0..f44c0965b4 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _VIRTCHNL2_H_
@@ -47,9 +47,9 @@
  * that is never used.
  */
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
+        { virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
 #define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+        { virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
 
 /* New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
@@ -471,8 +471,8 @@
  * error regardless of version mismatch.
  */
 struct virtchnl2_version_info {
-	u32 major;
-	u32 minor;
+        u32 major;
+        u32 minor;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
@@ -1414,9 +1414,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
  * and returns the status.
  */
 struct virtchnl2_promisc_info {
-	__le32 vport_id;
+        __le32 vport_id;
 	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
-	__le16 flags;
+        __le16 flags;
 	u8 pad[2];
 };
 
@@ -1733,7 +1733,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
 		valid_len = sizeof(struct virtchnl2_add_queue_groups);
 		if (msglen != valid_len) {
-			__le32 i = 0, offset = 0;
+			__le64 offset;
+			__le32 i;
 			struct virtchnl2_add_queue_groups *add_queue_grp =
 				(struct virtchnl2_add_queue_groups *)msg;
 			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
@@ -1904,8 +1905,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	/* These are always errors coming from the VF. */
 	case VIRTCHNL2_OP_EVENT:
 	case VIRTCHNL2_OP_UNKNOWN:
-	default:
-		return VIRTCHNL2_STATUS_ERR_ESRCH;
+        default:
+                return VIRTCHNL2_STATUS_ERR_ESRCH;
 	}
 	/* few more checks */
 	if (err_msg_format || valid_len != msglen)
-- 
2.43.0



* [PATCH 04/25] common/idpf: update in PTP message validation
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (2 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 03/25] common/idpf: update ADD QUEUE GROUPS offset Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-29 13:03   ` Bruce Richardson
  2024-05-28  7:28 ` [PATCH 05/25] common/idpf: added FLOW STEER capability and a vport flag Soumyadeep Hore
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

When the driver sends the message for getting timestamp latches, the
number of latches is 0. The current implementation of the message
validation function incorrectly reports the length of such a message as
invalid.
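One way to read the `>=` to `>` change: a zero-latch request is exactly the base structure size, so the per-latch length only needs to be recomputed when the message is strictly longer than the base. A hedged sketch of that semantics (struct layout is illustrative, not the real virtchnl2 definition):

```c
#include <stdint.h>
#include <stddef.h>

struct tstamp_latches {
	uint16_t num_latches;		/* illustrative layout only */
	uint16_t pad[3];
	uint64_t latch[];		/* flexible array of latches */
};

/* Sketch: a message carrying zero latches is exactly sizeof(struct) and
 * is valid; only strictly longer messages need the expected length
 * extended by the advertised latch count. */
static int msg_len_valid(size_t msglen, uint16_t num_latches)
{
	size_t valid_len = sizeof(struct tstamp_latches);

	if (msglen > valid_len)
		valid_len += (size_t)num_latches * sizeof(uint64_t);
	return msglen == valid_len;
}
```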

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index f44c0965b4..9a1310ca24 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -1873,7 +1873,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_CAPS:
 		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_get_ptp_caps *ptp_caps =
 			(struct virtchnl2_get_ptp_caps *)msg;
 
@@ -1889,7 +1889,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
 		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
 			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
 
-- 
2.43.0



* [PATCH 05/25] common/idpf: added FLOW STEER capability and a vport flag
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (3 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 04/25] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 06/25] common/idpf: moved the IDPF HW into API header file Soumyadeep Hore
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

Remove the unused VIRTCHNL2_CAP_ADQ capability and reuse its bit for the
VIRTCHNL2_CAP_INLINE_FLOW_STEER capability.

Add the VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA vport flag to allow
enabling/disabling the feature per vport.
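A small sketch of how the device capability and the per-vport flag combine (the helper is illustrative, not driver code; the bit values are the ones this patch defines):

```c
#include <stdint.h>

#define BIT(n)	(1u << (n))

/* Values from this patch. */
#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)	/* reuses old CAP_ADQ bit */
#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)

/* Illustrative helper: inline flow steering is usable on a vport only
 * when the device advertises the capability AND the vport flag is set. */
static int inline_fs_enabled(uint64_t caps, uint16_t vport_flags)
{
	return (caps & VIRTCHNL2_CAP_INLINE_FLOW_STEER) &&
	       (vport_flags & VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA);
}
```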

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 9a1310ca24..51d982b500 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -220,7 +220,7 @@
 #define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
 #define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
 #define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_ADQ			BIT(6)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
 #define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
 #define VIRTCHNL2_CAP_PROMISC			BIT(8)
 #define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
@@ -593,7 +593,8 @@ struct virtchnl2_queue_reg_chunks {
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
 /* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT	BIT(0)
+#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
+#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-- 
2.43.0



* [PATCH 06/25] common/idpf: moved the IDPF HW into API header file
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (4 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 05/25] common/idpf: added FLOW STEER capability and a vport flag Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 07/25] common/idpf: avoid defensive programming Soumyadeep Hore
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

Accessing the idpf_hw structure causes a recursive header include:
controlq.h contains the structure definition, which the osdep header
file needs, while controlq.h in turn needs the contents of the osdep
header, so the two depend on each other.

Until now the CP resolved this by carrying its own idpf_hw definition,
but that workaround is not available to other components that want to
use idpf_hw directly from the shared code. Move the structure into the
API header instead.
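The cycle-breaking idea can be sketched as follows (these are not the literal headers, just an illustration of the layering): the struct both sides need lives in the API header, and a forward declaration suffices for the members it only points to, so neither header has to include the other.

```c
/* Sketch of the layering after the move. */
struct idpf_ctlq_info;			/* opaque forward declaration */

struct idpf_hw_sketch {			/* illustrative stand-in for idpf_hw */
	unsigned char *hw_addr;
	struct idpf_ctlq_info *asq;	/* pointers need no full type */
	struct idpf_ctlq_info *arq;
};

/* Code that only needs the hw struct can now be compiled without
 * pulling in the full control queue definitions. */
static unsigned char *hw_base(const struct idpf_hw_sketch *hw)
{
	return hw->hw_addr;
}
```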

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.h     | 110 +------------------
 drivers/common/idpf/base/idpf_controlq_api.h |  34 +++++-
 drivers/common/idpf/base/idpf_type.h         |  10 +-
 3 files changed, 37 insertions(+), 117 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.h b/drivers/common/idpf/base/idpf_controlq.h
index 80ca06e632..86ed3b7bcb 100644
--- a/drivers/common/idpf/base/idpf_controlq.h
+++ b/drivers/common/idpf/base/idpf_controlq.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_H_
@@ -18,7 +18,7 @@
 
 #define IDPF_CTLQ_DESC_UNUSED(R)					\
 	((u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
-	       (R)->next_to_clean - (R)->next_to_use - 1))
+	      (R)->next_to_clean - (R)->next_to_use - 1))
 
 /* Data type manipulation macros. */
 #define IDPF_HI_DWORD(x)	((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF))
@@ -96,111 +96,6 @@ struct idpf_mbxq_desc {
 	u32 pf_vf_id;		/* used by CP when sending to PF */
 };
 
-enum idpf_mac_type {
-	IDPF_MAC_UNKNOWN = 0,
-	IDPF_MAC_PF,
-	IDPF_MAC_VF,
-	IDPF_MAC_GENERIC
-};
-
-#define ETH_ALEN 6
-
-struct idpf_mac_info {
-	enum idpf_mac_type type;
-	u8 addr[ETH_ALEN];
-	u8 perm_addr[ETH_ALEN];
-};
-
-#define IDPF_AQ_LINK_UP 0x1
-
-/* PCI bus types */
-enum idpf_bus_type {
-	idpf_bus_type_unknown = 0,
-	idpf_bus_type_pci,
-	idpf_bus_type_pcix,
-	idpf_bus_type_pci_express,
-	idpf_bus_type_reserved
-};
-
-/* PCI bus speeds */
-enum idpf_bus_speed {
-	idpf_bus_speed_unknown	= 0,
-	idpf_bus_speed_33	= 33,
-	idpf_bus_speed_66	= 66,
-	idpf_bus_speed_100	= 100,
-	idpf_bus_speed_120	= 120,
-	idpf_bus_speed_133	= 133,
-	idpf_bus_speed_2500	= 2500,
-	idpf_bus_speed_5000	= 5000,
-	idpf_bus_speed_8000	= 8000,
-	idpf_bus_speed_reserved
-};
-
-/* PCI bus widths */
-enum idpf_bus_width {
-	idpf_bus_width_unknown	= 0,
-	idpf_bus_width_pcie_x1	= 1,
-	idpf_bus_width_pcie_x2	= 2,
-	idpf_bus_width_pcie_x4	= 4,
-	idpf_bus_width_pcie_x8	= 8,
-	idpf_bus_width_32	= 32,
-	idpf_bus_width_64	= 64,
-	idpf_bus_width_reserved
-};
-
-/* Bus parameters */
-struct idpf_bus_info {
-	enum idpf_bus_speed speed;
-	enum idpf_bus_width width;
-	enum idpf_bus_type type;
-
-	u16 func;
-	u16 device;
-	u16 lan_id;
-	u16 bus_id;
-};
-
-/* Function specific capabilities */
-struct idpf_hw_func_caps {
-	u32 num_alloc_vfs;
-	u32 vf_base_id;
-};
-
-/* Define the APF hardware struct to replace other control structs as needed
- * Align to ctlq_hw_info
- */
-struct idpf_hw {
-	/* Some part of BAR0 address space is not mapped by the LAN driver.
-	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
-	 * will have its own base hardware address when mapped.
-	 */
-	u8 *hw_addr;
-	u8 *hw_addr_region2;
-	u64 hw_addr_len;
-	u64 hw_addr_region2_len;
-
-	void *back;
-
-	/* control queue - send and receive */
-	struct idpf_ctlq_info *asq;
-	struct idpf_ctlq_info *arq;
-
-	/* subsystem structs */
-	struct idpf_mac_info mac;
-	struct idpf_bus_info bus;
-	struct idpf_hw_func_caps func_caps;
-
-	/* pci info */
-	u16 device_id;
-	u16 vendor_id;
-	u16 subsystem_device_id;
-	u16 subsystem_vendor_id;
-	u8 revision_id;
-	bool adapter_stopped;
-
-	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
-};
-
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
 			     struct idpf_ctlq_info *cq);
 
@@ -210,4 +105,5 @@ void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq);
 void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem,
 			 u64 size);
 void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem);
+
 #endif /* _IDPF_CONTROLQ_H_ */
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index bce5187981..3ad2da5b2e 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -154,6 +154,36 @@ enum idpf_mbx_opc {
 	idpf_mbq_opc_send_msg_to_peer_drv	= 0x0804,
 };
 
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct idpf_hw {
+	/* Some part of BAR0 address space is not mapped by the LAN driver.
+	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
+	 * will have its own base hardware address when mapped.
+	 */
+	u8 *hw_addr;
+	u8 *hw_addr_region2;
+	u64 hw_addr_len;
+	u64 hw_addr_region2_len;
+
+	void *back;
+
+	/* control queue - send and receive */
+	struct idpf_ctlq_info *asq;
+	struct idpf_ctlq_info *arq;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	bool adapter_stopped;
+
+	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
+};
+
 /* API supported for control queue management */
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
@@ -161,10 +191,12 @@ enum idpf_mbx_opc {
 #ifdef NVME_CPF
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
                    struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq);
+
 #else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
-#endif
+
+#endif /* NVME_CPF */
 
 /* Allocate and initialize a single control queue, which will be added to the
  * control queue list; returns a handle to the created control queue
diff --git a/drivers/common/idpf/base/idpf_type.h b/drivers/common/idpf/base/idpf_type.h
index a22d28f448..26ae9df147 100644
--- a/drivers/common/idpf/base/idpf_type.h
+++ b/drivers/common/idpf/base/idpf_type.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_TYPE_H_
@@ -80,14 +80,6 @@ struct idpf_ctlq_size {
 	u16 arq_ring_size;
 };
 
-/* Temporary definition to compile - TBD if needed */
-struct idpf_arq_event_info {
-	struct idpf_ctlq_desc desc;
-	u16 msg_len;
-	u16 buf_len;
-	u8 *msg_buf;
-};
-
 struct idpf_get_set_rss_key_data {
 	u8 standard_rss_key[0x28];
 	u8 extended_hash_key[0xc];
-- 
2.43.0



* [PATCH 07/25] common/idpf: avoid defensive programming
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (5 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 06/25] common/idpf: moved the IDPF HW into API header file Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 08/25] common/idpf: move related defines into enums Soumyadeep Hore
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

Based on upstream feedback, the driver should not defensively check for
NULL pointers and other conditions unnecessarily in the code flow in
order to fall back; instead it should fail, and the bug should be fixed
properly.

Some of these checks are identified and removed or wrapped in this
patch:
- As the control queue is freed and deleted from the list after the
idpf_ctlq_shutdown call, there is no need to have the ring_size
check in idpf_ctlq_shutdown.
- From the upstream perspective, the shared code is part of the Linux
driver, so it makes no sense to check for zero 'len' and 'buf_size' in
idpf_ctlq_add: the driver provides valid sizes to start with, and if
not, that is a bug.
- Remove NULL cq and zero ring_size checks wherever possible, as the
IDPF driver code flow never passes a NULL cq pointer to the control
queue callbacks. If it does, that is a bug to be fixed, not a condition
to detect and fall back from.

Note: most of the checks are wrapped with the __KERNEL__ flag and will
not impact shared code consumers other than the IDPF Linux driver, as I
am not confident the same reasoning holds for other components.
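The wrapping policy described above can be sketched like this (struct, function, and return values are illustrative, not the driver's actual code): the defensive check survives for non-kernel consumers but compiles out for the Linux driver, where a bad cq is treated as a bug to fix.

```c
struct ctlq {
	unsigned short ring_size;	/* illustrative layout only */
};

/* Sketch: keep the defensive NULL/size check only when not building as
 * the Linux kernel driver; the kernel flow never passes a NULL cq. */
static int ctlq_recv_sketch(struct ctlq *cq, unsigned short *num_q_msg)
{
#ifndef __KERNEL__
	if (!cq || !cq->ring_size)
		return -105;		/* -ENOBUFS */
#endif
	if (*num_q_msg == 0)
		return 0;
	if (*num_q_msg > cq->ring_size)
		return -90;		/* -EMSGSIZE-style rejection */
	return 0;			/* ... process messages ... */
}
```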

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index 0ba7281a45..4d31c6e6d8 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -98,9 +98,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
 	idpf_acquire_lock(&cq->cq_lock);
 
-	if (!cq->ring_size)
-		goto shutdown_sq_out;
-
 #ifdef SIMICS_BUILD
 	wr32(hw, cq->reg.head, 0);
 	wr32(hw, cq->reg.tail, 0);
@@ -115,7 +112,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 	/* Set ring_size to 0 to indicate uninitialized queue */
 	cq->ring_size = 0;
 
-shutdown_sq_out:
 	idpf_release_lock(&cq->cq_lock);
 	idpf_destroy_lock(&cq->cq_lock);
 }
@@ -661,9 +657,6 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 	int ret_code = 0;
 	u16 i = 0;
 
-	if (!cq || !cq->ring_size)
-		return -ENOBUFS;
-
 	if (*num_q_msg == 0)
 		return 0;
 	else if (*num_q_msg > cq->ring_size)
-- 
2.43.0



* [PATCH 08/25] common/idpf: move related defines into enums
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (6 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 07/25] common/idpf: avoid defensive programming Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-05-28  7:28 ` [PATCH 09/25] common/idpf: add flex array support to virtchnl2 structures Soumyadeep Hore
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
  9 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

Kernel coding style prefers the use of enums, so we must change
all groups of related defines to enums. The names of the enums
are chosen to follow the common part of the naming pattern
as much as possible.

Replaced the common labels from the comments with the enum names.

While at it, modify header description based on upstream feedback.
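The conversion pattern can be illustrated with the status codes from this patch's own diff: a group of related defines becomes a named enum with explicit values, so the wire-format values stay frozen even if entries are reordered.

```c
/* Before (sketch of the old style):
 * #define VIRTCHNL2_STATUS_SUCCESS   0
 * #define VIRTCHNL2_STATUS_ERR_EPERM 1
 * #define VIRTCHNL2_STATUS_ERR_ESRCH 3
 */

/* After: one enum per group of related values, every enumerator given an
 * explicit value (values taken from the patch). */
enum virtchnl2_status {
	VIRTCHNL2_STATUS_SUCCESS	= 0,
	VIRTCHNL2_STATUS_ERR_EPERM	= 1,
	VIRTCHNL2_STATUS_ERR_ESRCH	= 3,
	VIRTCHNL2_STATUS_ERR_EIO	= 5,
	VIRTCHNL2_STATUS_ERR_ENXIO	= 6,
};
```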

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h          | 2042 ++++++++++-------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  859 ++++---
 2 files changed, 1783 insertions(+), 1118 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 51d982b500..45e77bbb94 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -8,320 +8,404 @@
 /* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
  * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
  * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ *
+ * PF/VF uses the virtchnl interface defined in this header file to communicate
+ * with device Control Plane (CP). Driver and the CP may run on different
+ * platforms with different endianness. To avoid byte order discrepancies,
+ * all the structures in this header follow little-endian format.
+ *
+ * This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
  */
 
 #include "virtchnl2_lan_desc.h"
 
-/* VIRTCHNL2_ERROR_CODES */
-/* success */
-#define	VIRTCHNL2_STATUS_SUCCESS	0
-/* Operation not permitted, used in case of command not permitted for sender */
-#define	VIRTCHNL2_STATUS_ERR_EPERM	1
-/* Bad opcode - virtchnl interface problem */
-#define	VIRTCHNL2_STATUS_ERR_ESRCH	3
-/* I/O error - HW access error */
-#define	VIRTCHNL2_STATUS_ERR_EIO	5
-/* No such resource - Referenced resource is not allacated */
-#define	VIRTCHNL2_STATUS_ERR_ENXIO	6
-/* Permission denied - Resource is not permitted to caller */
-#define	VIRTCHNL2_STATUS_ERR_EACCES	13
-/* Device or resource busy - In case shared resource is in use by others */
-#define	VIRTCHNL2_STATUS_ERR_EBUSY	16
-/* Object already exists and not free */
-#define	VIRTCHNL2_STATUS_ERR_EEXIST	17
-/* Invalid input argument in command */
-#define	VIRTCHNL2_STATUS_ERR_EINVAL	22
-/* No space left or allocation failure */
-#define	VIRTCHNL2_STATUS_ERR_ENOSPC	28
-/* Parameter out of range */
-#define	VIRTCHNL2_STATUS_ERR_ERANGE	34
-
-/* Op not allowed in current dev mode */
-#define	VIRTCHNL2_STATUS_ERR_EMODE	200
-/* State Machine error - Command sequence problem */
-#define	VIRTCHNL2_STATUS_ERR_ESM	201
-
-/* These macros are used to generate compilation errors if a structure/union
- * is not exactly the correct length. It gives a divide by zero error if the
- * structure/union is not of the correct size, otherwise it creates an enum
- * that is never used.
- */
-#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
-        { virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
-#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
-        { virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
-
-/* New major set of opcodes introduced and so leaving room for
+/**
+ * enum virtchnl2_status - Error codes.
+ * @VIRTCHNL2_STATUS_SUCCESS: Success
+ * @VIRTCHNL2_STATUS_ERR_EPERM: Operation not permitted, used in case of command
+ *				not permitted for sender
+ * @VIRTCHNL2_STATUS_ERR_ESRCH: Bad opcode - virtchnl interface problem
+ * @VIRTCHNL2_STATUS_ERR_EIO: I/O error - HW access error
+ * @VIRTCHNL2_STATUS_ERR_ENXIO: No such resource - Referenced resource is not
+ *				allocated
+ * @VIRTCHNL2_STATUS_ERR_EACCES: Permission denied - Resource is not permitted
+ *				 to caller
+ * @VIRTCHNL2_STATUS_ERR_EBUSY: Device or resource busy - In case shared
+ *				resource is in use by others
+ * @VIRTCHNL2_STATUS_ERR_EEXIST: Object already exists and not free
+ * @VIRTCHNL2_STATUS_ERR_EINVAL: Invalid input argument in command
+ * @VIRTCHNL2_STATUS_ERR_ENOSPC: No space left or allocation failure
+ * @VIRTCHNL2_STATUS_ERR_ERANGE: Parameter out of range
+ * @VIRTCHNL2_STATUS_ERR_EMODE: Operation not allowed in current dev mode
+ * @VIRTCHNL2_STATUS_ERR_ESM: State Machine error - Command sequence problem
+ */
+enum virtchnl2_status {
+	VIRTCHNL2_STATUS_SUCCESS	= 0,
+	VIRTCHNL2_STATUS_ERR_EPERM	= 1,
+	VIRTCHNL2_STATUS_ERR_ESRCH	= 3,
+	VIRTCHNL2_STATUS_ERR_EIO	= 5,
+	VIRTCHNL2_STATUS_ERR_ENXIO	= 6,
+	VIRTCHNL2_STATUS_ERR_EACCES	= 13,
+	VIRTCHNL2_STATUS_ERR_EBUSY	= 16,
+	VIRTCHNL2_STATUS_ERR_EEXIST	= 17,
+	VIRTCHNL2_STATUS_ERR_EINVAL	= 22,
+	VIRTCHNL2_STATUS_ERR_ENOSPC	= 28,
+	VIRTCHNL2_STATUS_ERR_ERANGE	= 34,
+	VIRTCHNL2_STATUS_ERR_EMODE	= 200,
+	VIRTCHNL2_STATUS_ERR_ESM	= 201,
+};
+
+/**
+ * This macro is used to generate compilation errors if a structure
+ * is not exactly the correct length.
+ */
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
+	static_assert((n) == sizeof(struct X),	\
+		      "Structure length does not match with the expected value")
+
+/**
+ * New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
  * be used if both the PF and VF have successfully negotiated the
- * VIRTCHNL version as 2.0 during VIRTCHNL22_OP_VERSION exchange.
- */
-#define		VIRTCHNL2_OP_UNKNOWN			0
-#define		VIRTCHNL2_OP_VERSION			1
-#define		VIRTCHNL2_OP_GET_CAPS			500
-#define		VIRTCHNL2_OP_CREATE_VPORT		501
-#define		VIRTCHNL2_OP_DESTROY_VPORT		502
-#define		VIRTCHNL2_OP_ENABLE_VPORT		503
-#define		VIRTCHNL2_OP_DISABLE_VPORT		504
-#define		VIRTCHNL2_OP_CONFIG_TX_QUEUES		505
-#define		VIRTCHNL2_OP_CONFIG_RX_QUEUES		506
-#define		VIRTCHNL2_OP_ENABLE_QUEUES		507
-#define		VIRTCHNL2_OP_DISABLE_QUEUES		508
-#define		VIRTCHNL2_OP_ADD_QUEUES			509
-#define		VIRTCHNL2_OP_DEL_QUEUES			510
-#define		VIRTCHNL2_OP_MAP_QUEUE_VECTOR		511
-#define		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		512
-#define		VIRTCHNL2_OP_GET_RSS_KEY		513
-#define		VIRTCHNL2_OP_SET_RSS_KEY		514
-#define		VIRTCHNL2_OP_GET_RSS_LUT		515
-#define		VIRTCHNL2_OP_SET_RSS_LUT		516
-#define		VIRTCHNL2_OP_GET_RSS_HASH		517
-#define		VIRTCHNL2_OP_SET_RSS_HASH		518
-#define		VIRTCHNL2_OP_SET_SRIOV_VFS		519
-#define		VIRTCHNL2_OP_ALLOC_VECTORS		520
-#define		VIRTCHNL2_OP_DEALLOC_VECTORS		521
-#define		VIRTCHNL2_OP_EVENT			522
-#define		VIRTCHNL2_OP_GET_STATS			523
-#define		VIRTCHNL2_OP_RESET_VF			524
-	/* opcode 525 is reserved */
-#define		VIRTCHNL2_OP_GET_PTYPE_INFO		526
-	/* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
-	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ * VIRTCHNL version as 2.0 during VIRTCHNL2_OP_VERSION exchange.
+ */
+enum virtchnl2_op {
+	VIRTCHNL2_OP_UNKNOWN			= 0,
+	VIRTCHNL2_OP_VERSION			= 1,
+	VIRTCHNL2_OP_GET_CAPS			= 500,
+	VIRTCHNL2_OP_CREATE_VPORT		= 501,
+	VIRTCHNL2_OP_DESTROY_VPORT		= 502,
+	VIRTCHNL2_OP_ENABLE_VPORT		= 503,
+	VIRTCHNL2_OP_DISABLE_VPORT		= 504,
+	VIRTCHNL2_OP_CONFIG_TX_QUEUES		= 505,
+	VIRTCHNL2_OP_CONFIG_RX_QUEUES		= 506,
+	VIRTCHNL2_OP_ENABLE_QUEUES		= 507,
+	VIRTCHNL2_OP_DISABLE_QUEUES		= 508,
+	VIRTCHNL2_OP_ADD_QUEUES			= 509,
+	VIRTCHNL2_OP_DEL_QUEUES			= 510,
+	VIRTCHNL2_OP_MAP_QUEUE_VECTOR		= 511,
+	VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		= 512,
+	VIRTCHNL2_OP_GET_RSS_KEY		= 513,
+	VIRTCHNL2_OP_SET_RSS_KEY		= 514,
+	VIRTCHNL2_OP_GET_RSS_LUT		= 515,
+	VIRTCHNL2_OP_SET_RSS_LUT		= 516,
+	VIRTCHNL2_OP_GET_RSS_HASH		= 517,
+	VIRTCHNL2_OP_SET_RSS_HASH		= 518,
+	VIRTCHNL2_OP_SET_SRIOV_VFS		= 519,
+	VIRTCHNL2_OP_ALLOC_VECTORS		= 520,
+	VIRTCHNL2_OP_DEALLOC_VECTORS		= 521,
+	VIRTCHNL2_OP_EVENT			= 522,
+	VIRTCHNL2_OP_GET_STATS			= 523,
+	VIRTCHNL2_OP_RESET_VF			= 524,
+	/* Opcode 525 is reserved */
+	VIRTCHNL2_OP_GET_PTYPE_INFO		= 526,
+	/* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
 	 */
-	/* opcodes 529, 530, and 531 are reserved */
-#define		VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	532
-#define		VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	533
-#define		VIRTCHNL2_OP_LOOPBACK			534
-#define		VIRTCHNL2_OP_ADD_MAC_ADDR		535
-#define		VIRTCHNL2_OP_DEL_MAC_ADDR		536
-#define		VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	537
-#define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
-#define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
-#define		VIRTCHNL2_OP_GET_PORT_STATS		540
+	/* Opcodes 529, 530, and 531 are reserved */
+
+	VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	= 532,
+	VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	= 533,
+	VIRTCHNL2_OP_LOOPBACK			= 534,
+	VIRTCHNL2_OP_ADD_MAC_ADDR		= 535,
+	VIRTCHNL2_OP_DEL_MAC_ADDR		= 536,
+	VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	= 537,
+	VIRTCHNL2_OP_ADD_QUEUE_GROUPS		= 538,
+	VIRTCHNL2_OP_DEL_QUEUE_GROUPS		= 539,
+	VIRTCHNL2_OP_GET_PORT_STATS		= 540,
 	/* TimeSync opcodes */
-#define		VIRTCHNL2_OP_GET_PTP_CAPS		541
-#define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
+	VIRTCHNL2_OP_GET_PTP_CAPS		= 541,
+	VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	= 542,
+};
 
 #define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX	0xFFFF
 
-/* VIRTCHNL2_VPORT_TYPE
- * Type of virtual port
+/**
+ * enum virtchnl2_vport_type - Type of virtual port
+ * @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SRIOV: SRIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SIOV: SIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SUBDEV: Subdevice virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_MNG: Management virtual port type
  */
-#define VIRTCHNL2_VPORT_TYPE_DEFAULT		0
-#define VIRTCHNL2_VPORT_TYPE_SRIOV		1
-#define VIRTCHNL2_VPORT_TYPE_SIOV		2
-#define VIRTCHNL2_VPORT_TYPE_SUBDEV		3
-#define VIRTCHNL2_VPORT_TYPE_MNG		4
+enum virtchnl2_vport_type {
+	VIRTCHNL2_VPORT_TYPE_DEFAULT		= 0,
+	VIRTCHNL2_VPORT_TYPE_SRIOV		= 1,
+	VIRTCHNL2_VPORT_TYPE_SIOV		= 2,
+	VIRTCHNL2_VPORT_TYPE_SUBDEV		= 3,
+	VIRTCHNL2_VPORT_TYPE_MNG		= 4,
+};
 
-/* VIRTCHNL2_QUEUE_MODEL
- * Type of queue model
+/**
+ * enum virtchnl2_queue_model - Type of queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model
  *
  * In the single queue model, the same transmit descriptor queue is used by
  * software to post descriptors to hardware and by hardware to post completed
  * descriptors to software.
  * Likewise, the same receive descriptor queue is used by hardware to post
  * completions to software and by software to post buffers to hardware.
- */
-#define VIRTCHNL2_QUEUE_MODEL_SINGLE		0
-/* In the split queue model, hardware uses transmit completion queues to post
+ *
+ * In the split queue model, hardware uses transmit completion queues to post
  * descriptor/buffer completions to software, while software uses transmit
  * descriptor queues to post descriptors to hardware.
  * Likewise, hardware posts descriptor completions to the receive descriptor
  * queue, while software uses receive buffer queues to post buffers to hardware.
  */
-#define VIRTCHNL2_QUEUE_MODEL_SPLIT		1
-
-/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
- * Checksum offload capability flags
- */
-#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		BIT(0)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	BIT(1)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	BIT(2)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	BIT(3)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	BIT(4)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	BIT(5)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	BIT(6)
-#define VIRTCHNL2_CAP_TX_CSUM_GENERIC		BIT(7)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		BIT(8)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	BIT(9)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	BIT(10)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	BIT(11)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	BIT(12)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	BIT(13)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	BIT(14)
-#define VIRTCHNL2_CAP_RX_CSUM_GENERIC		BIT(15)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	BIT(16)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	BIT(17)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	BIT(18)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	BIT(19)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	BIT(20)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	BIT(21)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	BIT(22)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	BIT(23)
-
-/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
- * Segmentation offload capability flags
- */
-#define VIRTCHNL2_CAP_SEG_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_SEG_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_SEG_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_SEG_IPV6_TCP		BIT(3)
-#define VIRTCHNL2_CAP_SEG_IPV6_UDP		BIT(4)
-#define VIRTCHNL2_CAP_SEG_IPV6_SCTP		BIT(5)
-#define VIRTCHNL2_CAP_SEG_GENERIC		BIT(6)
-#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	BIT(7)
-#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	BIT(8)
-
-/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
- * Receive Side Scaling Flow type capability flags
- */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT(13)
-
-/* VIRTCHNL2_HEADER_SPLIT_CAPS
- * Header split capability flags
- */
-/* for prepended metadata  */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		BIT(0)
-/* all VLANs go into header buffer */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		BIT(1)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		BIT(2)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		BIT(3)
-
-/* VIRTCHNL2_RSC_OFFLOAD_CAPS
- * Receive Side Coalescing offload capability flags
- */
-#define VIRTCHNL2_CAP_RSC_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSC_IPV4_SCTP		BIT(1)
-#define VIRTCHNL2_CAP_RSC_IPV6_TCP		BIT(2)
-#define VIRTCHNL2_CAP_RSC_IPV6_SCTP		BIT(3)
-
-/* VIRTCHNL2_OTHER_CAPS
- * Other capability flags
- * SPLITQ_QSCHED: Queue based scheduling using split queue model
- * TX_VLAN: VLAN tag insertion
- * RX_VLAN: VLAN tag stripping
- */
-#define VIRTCHNL2_CAP_RDMA			BIT(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
-#define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT(11)
-/* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT(12)
-#define VIRTCHNL2_CAP_PTP			BIT(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT(15)
-#define VIRTCHNL2_CAP_FDIR			BIT(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT(19)
-/* Enable miss completion types plus ability to detect a miss completion if a
- * reserved bit is set in a standared completion's tag.
- */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT(20)
-/* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT(63)
-
-/* VIRTCHNL2_TXQ_SCHED_MODE
- * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
- * completions where descriptors and buffers are completed at the same time.
- * Flow scheduling mode allows for out of order packet processing where
- * descriptors are cleaned in order, but buffers can be completed out of order.
- */
-#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		0
-#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW		1
-
-/* VIRTCHNL2_TXQ_FLAGS
- * Transmit Queue feature flags
- *
- * Enable rule miss completion type; packet completion for a packet
- * sent on exception path; only relevant in flow scheduling mode
- */
-#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		BIT(0)
-
-/* VIRTCHNL2_PEER_TYPE
- * Transmit mailbox peer type
- */
-#define VIRTCHNL2_RDMA_CPF			0
-#define VIRTCHNL2_NVME_CPF			1
-#define VIRTCHNL2_ATE_CPF			2
-#define VIRTCHNL2_LCE_CPF			3
-
-/* VIRTCHNL2_RXQ_FLAGS
- * Receive Queue Feature flags
- */
-#define VIRTCHNL2_RXQ_RSC			BIT(0)
-#define VIRTCHNL2_RXQ_HDR_SPLIT			BIT(1)
-/* When set, packet descriptors are flushed by hardware immediately after
- * processing each packet.
- */
-#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	BIT(2)
-#define VIRTCHNL2_RX_DESC_SIZE_16BYTE		BIT(3)
-#define VIRTCHNL2_RX_DESC_SIZE_32BYTE		BIT(4)
-
-/* VIRTCHNL2_RSS_ALGORITHM
- * Type of RSS algorithm
- */
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC		0
-#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC			1
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC		2
-#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC			3
-
-/* VIRTCHNL2_EVENT_CODES
- * Type of event
- */
-#define VIRTCHNL2_EVENT_UNKNOWN			0
-#define VIRTCHNL2_EVENT_LINK_CHANGE		1
-/* These messages are only sent to PF from CP */
-#define VIRTCHNL2_EVENT_START_RESET_ADI		2
-#define VIRTCHNL2_EVENT_FINISH_RESET_ADI	3
-#define VIRTCHNL2_EVENT_ADI_ACTIVE		4
-
-/* VIRTCHNL2_QUEUE_TYPE
- * Transmit and Receive queue types are valid in legacy as well as split queue
- * models. With Split Queue model, 2 additional types are introduced -
- * TX_COMPLETION and RX_BUFFER. In split queue model, receive  corresponds to
+enum virtchnl2_queue_model {
+	VIRTCHNL2_QUEUE_MODEL_SINGLE		= 0,
+	VIRTCHNL2_QUEUE_MODEL_SPLIT		= 1,
+};
+
+/* Checksum offload capability flags */
+enum virtchnl2_cap_txrx_csum {
+	VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		= BIT(0),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	= BIT(1),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	= BIT(2),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	= BIT(3),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	= BIT(4),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	= BIT(5),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_CAP_TX_CSUM_GENERIC		= BIT(7),
+	VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		= BIT(8),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	= BIT(9),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	= BIT(10),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	= BIT(11),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	= BIT(12),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	= BIT(13),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	= BIT(14),
+	VIRTCHNL2_CAP_RX_CSUM_GENERIC		= BIT(15),
+	VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	= BIT(16),
+	VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	= BIT(17),
+	VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	= BIT(18),
+	VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	= BIT(19),
+	VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	= BIT(20),
+	VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	= BIT(21),
+	VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	= BIT(22),
+	VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	= BIT(23),
+};
+
+/* Segmentation offload capability flags */
+enum virtchnl2_cap_seg {
+	VIRTCHNL2_CAP_SEG_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_SEG_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_SEG_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_SEG_IPV6_TCP		= BIT(3),
+	VIRTCHNL2_CAP_SEG_IPV6_UDP		= BIT(4),
+	VIRTCHNL2_CAP_SEG_IPV6_SCTP		= BIT(5),
+	VIRTCHNL2_CAP_SEG_GENERIC		= BIT(6),
+	VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	= BIT(7),
+	VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	= BIT(8),
+};
+
+/* Receive Side Scaling Flow type capability flags */
+enum virtchnl2_cap_rss {
+	VIRTCHNL2_CAP_RSS_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSS_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_RSS_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_RSS_IPV4_OTHER		= BIT(3),
+	VIRTCHNL2_CAP_RSS_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_CAP_RSS_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_CAP_RSS_IPV6_SCTP		= BIT(6),
+	VIRTCHNL2_CAP_RSS_IPV6_OTHER		= BIT(7),
+	VIRTCHNL2_CAP_RSS_IPV4_AH		= BIT(8),
+	VIRTCHNL2_CAP_RSS_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		= BIT(10),
+	VIRTCHNL2_CAP_RSS_IPV6_AH		= BIT(11),
+	VIRTCHNL2_CAP_RSS_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		= BIT(13),
+};
+
+/* Header split capability flags */
+enum virtchnl2_cap_rx_hsplit_at {
+	/* For prepended metadata */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		= BIT(0),
+	/* All VLANs go into header buffer */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		= BIT(1),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		= BIT(2),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		= BIT(3),
+};
+
+/* Receive Side Coalescing offload capability flags */
+enum virtchnl2_cap_rsc {
+	VIRTCHNL2_CAP_RSC_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSC_IPV4_SCTP		= BIT(1),
+	VIRTCHNL2_CAP_RSC_IPV6_TCP		= BIT(2),
+	VIRTCHNL2_CAP_RSC_IPV6_SCTP		= BIT(3),
+};
+
+/* Other capability flags */
+enum virtchnl2_cap_other {
+	VIRTCHNL2_CAP_RDMA			= BIT_ULL(0),
+	VIRTCHNL2_CAP_SRIOV			= BIT_ULL(1),
+	VIRTCHNL2_CAP_MACFILTER			= BIT_ULL(2),
+	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
+	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
+	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
+	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
+	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
+	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
+	VIRTCHNL2_CAP_INLINE_IPSEC		= BIT_ULL(10),
+	VIRTCHNL2_CAP_LARGE_NUM_QUEUES		= BIT_ULL(11),
+	/* Require additional info */
+	VIRTCHNL2_CAP_VLAN			= BIT_ULL(12),
+	VIRTCHNL2_CAP_PTP			= BIT_ULL(13),
+	VIRTCHNL2_CAP_ADV_RSS			= BIT_ULL(15),
+	VIRTCHNL2_CAP_FDIR			= BIT_ULL(16),
+	VIRTCHNL2_CAP_RX_FLEX_DESC		= BIT_ULL(17),
+	VIRTCHNL2_CAP_PTYPE			= BIT_ULL(18),
+	VIRTCHNL2_CAP_LOOPBACK			= BIT_ULL(19),
+	/* Enable miss completion types plus ability to detect a miss completion
+	 * if a reserved bit is set in a standard completion's tag.
+	 */
+	VIRTCHNL2_CAP_MISS_COMPL_TAG		= BIT_ULL(20),
+	/* This must be the last capability */
+	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
+};
+
+/**
+ * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
+ * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode, i.e. in-order
+ *				    completions where descriptors and buffers
+ *				    are completed at the same time.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
+ *				   packet processing where descriptors are
+ *				   cleaned in order, but buffers can be
+ *				   completed out of order.
+ */
+enum virtchnl2_txq_sched_mode {
+	VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		= 0,
+	VIRTCHNL2_TXQ_SCHED_MODE_FLOW		= 1,
+};
+
+/**
+ * enum virtchnl2_txq_flags - Transmit Queue feature flags
+ * @VIRTCHNL2_TXQ_ENABLE_MISS_COMPL: Enable rule miss completion type. Packet
+ *				     completion for a packet sent on exception
+ *				     path and only relevant in flow scheduling
+ *				     mode.
+ */
+enum virtchnl2_txq_flags {
+	VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		= BIT(0),
+};
+
+/**
+ * enum virtchnl2_peer_type - Transmit mailbox peer type
+ * @VIRTCHNL2_RDMA_CPF: RDMA peer type
+ * @VIRTCHNL2_NVME_CPF: NVME peer type
+ * @VIRTCHNL2_ATE_CPF: ATE peer type
+ * @VIRTCHNL2_LCE_CPF: LCE peer type
+ */
+enum virtchnl2_peer_type {
+	VIRTCHNL2_RDMA_CPF			= 0,
+	VIRTCHNL2_NVME_CPF			= 1,
+	VIRTCHNL2_ATE_CPF			= 2,
+	VIRTCHNL2_LCE_CPF			= 3,
+};
+
+/**
+ * enum virtchnl2_rxq_flags - Receive Queue Feature flags
+ * @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag
+ * @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag
+ * @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
+ *					by hardware immediately after processing
+ *					each packet.
+ * @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size
+ * @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size
+ */
+enum virtchnl2_rxq_flags {
+	VIRTCHNL2_RXQ_RSC			= BIT(0),
+	VIRTCHNL2_RXQ_HDR_SPLIT			= BIT(1),
+	VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	= BIT(2),
+	VIRTCHNL2_RX_DESC_SIZE_16BYTE		= BIT(3),
+	VIRTCHNL2_RX_DESC_SIZE_32BYTE		= BIT(4),
+};
+
+/**
+ * enum virtchnl2_rss_alg - Type of RSS algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC: TOEPLITZ_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_R_ASYMMETRIC: R_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC: TOEPLITZ_SYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC: XOR_SYMMETRIC algorithm
+ */
+enum virtchnl2_rss_alg {
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC	= 0,
+	VIRTCHNL2_RSS_ALG_R_ASYMMETRIC		= 1,
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC	= 2,
+	VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC		= 3,
+};
+
+/**
+ * enum virtchnl2_event_codes - Type of event
+ * @VIRTCHNL2_EVENT_UNKNOWN: Unknown event type
+ * @VIRTCHNL2_EVENT_LINK_CHANGE: Link change event type
+ * @VIRTCHNL2_EVENT_START_RESET_ADI: Start reset ADI event type
+ * @VIRTCHNL2_EVENT_FINISH_RESET_ADI: Finish reset ADI event type
+ * @VIRTCHNL2_EVENT_ADI_ACTIVE: Event type to indicate 'function active' state
+ *				of ADI.
+ */
+enum virtchnl2_event_codes {
+	VIRTCHNL2_EVENT_UNKNOWN			= 0,
+	VIRTCHNL2_EVENT_LINK_CHANGE		= 1,
+	/* These messages are only sent to PF from CP */
+	VIRTCHNL2_EVENT_START_RESET_ADI		= 2,
+	VIRTCHNL2_EVENT_FINISH_RESET_ADI	= 3,
+	VIRTCHNL2_EVENT_ADI_ACTIVE		= 4,
+};
+
+/**
+ * enum virtchnl2_queue_type - Various queue types
+ * @VIRTCHNL2_QUEUE_TYPE_TX: TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX: RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: TX completion queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: RX buffer queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_TX: Config TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_RX: Config RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_TX: TX mailbox queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_RX: RX mailbox queue type
+ *
+ * Transmit and Receive queue types are valid in single as well as split queue
+ * models. With Split Queue model, 2 additional types are introduced which are
+ * TX_COMPLETION and RX_BUFFER. In split queue model, receive corresponds to
  * the queue where hardware posts completions.
  */
-#define VIRTCHNL2_QUEUE_TYPE_TX			0
-#define VIRTCHNL2_QUEUE_TYPE_RX			1
-#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	2
-#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		3
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		4
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		5
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX		6
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX		7
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	8
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	9
-#define VIRTCHNL2_QUEUE_TYPE_MBX_TX		10
-#define VIRTCHNL2_QUEUE_TYPE_MBX_RX		11
-
-/* VIRTCHNL2_ITR_IDX
- * Virtchannel interrupt throttling rate index
- */
-#define VIRTCHNL2_ITR_IDX_0			0
-#define VIRTCHNL2_ITR_IDX_1			1
-
-/* VIRTCHNL2_VECTOR_LIMITS
+enum virtchnl2_queue_type {
+	VIRTCHNL2_QUEUE_TYPE_TX			= 0,
+	VIRTCHNL2_QUEUE_TYPE_RX			= 1,
+	VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	= 2,
+	VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		= 3,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		= 4,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		= 5,
+	/* Queue types 6, 7, 8, 9 are reserved */
+	VIRTCHNL2_QUEUE_TYPE_MBX_TX		= 10,
+	VIRTCHNL2_QUEUE_TYPE_MBX_RX		= 11,
+};
+
+/**
+ * enum virtchnl2_itr_idx - Interrupt throttling rate index
+ * @VIRTCHNL2_ITR_IDX_0: ITR index 0
+ * @VIRTCHNL2_ITR_IDX_1: ITR index 1
+ */
+enum virtchnl2_itr_idx {
+	VIRTCHNL2_ITR_IDX_0			= 0,
+	VIRTCHNL2_ITR_IDX_1			= 1,
+};
+
+/**
+ * VIRTCHNL2_VECTOR_LIMITS
  * Since PF/VF messages are limited by __le16 size, precalculate the maximum
  * possible values of nested elements in virtchnl structures that virtual
  * channel can possibly handle in a single message.
@@ -335,131 +419,155 @@
 		((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
 		sizeof(struct virtchnl2_queue_vector))
 
-/* VIRTCHNL2_MAC_TYPE
- * VIRTCHNL2_MAC_ADDR_PRIMARY
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_PRIMARY for the
- * primary/device unicast MAC address filter for VIRTCHNL2_OP_ADD_MAC_ADDR and
- * VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the underlying control plane
- * function to accurately track the MAC address and for VM/function reset.
- *
- * VIRTCHNL2_MAC_ADDR_EXTRA
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_EXTRA for any extra
- * unicast and/or multicast filters that are being added/deleted via
- * VIRTCHNL2_OP_ADD_MAC_ADDR/VIRTCHNL2_OP_DEL_MAC_ADDR respectively.
+/**
+ * enum virtchnl2_mac_addr_type - MAC address types
+ * @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
+ *				primary/device unicast MAC address filter for
+ *				VIRTCHNL2_OP_ADD_MAC_ADDR and
+ *				VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
+ *				underlying control plane function to accurately
+ *				track the MAC address and for VM/function reset.
+ * @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
+ *			      unicast and/or multicast filters that are being
+ *			      added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
+ *			      VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
-#define VIRTCHNL2_MAC_ADDR_PRIMARY		1
-#define VIRTCHNL2_MAC_ADDR_EXTRA		2
+enum virtchnl2_mac_addr_type {
+	VIRTCHNL2_MAC_ADDR_PRIMARY		= 1,
+	VIRTCHNL2_MAC_ADDR_EXTRA		= 2,
+};
 
-/* VIRTCHNL2_PROMISC_FLAGS
- * Flags used for promiscuous mode
+/**
+ * enum virtchnl2_promisc_flags - Flags used for promiscuous mode
+ * @VIRTCHNL2_UNICAST_PROMISC: Unicast promiscuous mode
+ * @VIRTCHNL2_MULTICAST_PROMISC: Multicast promiscuous mode
  */
-#define VIRTCHNL2_UNICAST_PROMISC		BIT(0)
-#define VIRTCHNL2_MULTICAST_PROMISC		BIT(1)
+enum virtchnl2_promisc_flags {
+	VIRTCHNL2_UNICAST_PROMISC		= BIT(0),
+	VIRTCHNL2_MULTICAST_PROMISC		= BIT(1),
+};
 
-/* VIRTCHNL2_QUEUE_GROUP_TYPE
- * Type of queue groups
+/**
+ * enum virtchnl2_queue_group_type - Type of queue groups
+ * @VIRTCHNL2_QUEUE_GROUP_DATA: Data queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_MBX: Mailbox queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_CONFIG: Config queue group type
+ *
+ * 0 through 0xFF is for general use
  */
-#define VIRTCHNL2_QUEUE_GROUP_DATA		1
-#define VIRTCHNL2_QUEUE_GROUP_MBX		2
-#define VIRTCHNL2_QUEUE_GROUP_CONFIG		3
+enum virtchnl2_queue_group_type {
+	VIRTCHNL2_QUEUE_GROUP_DATA		= 1,
+	VIRTCHNL2_QUEUE_GROUP_MBX		= 2,
+	VIRTCHNL2_QUEUE_GROUP_CONFIG		= 3,
+};
+
+#ifdef NOT_FOR_UPSTREAM
+/* 0x100 and on is for OEM */
+#define VIRTCHNL2_QUEUE_GROUP_P2P		0x100
+#endif /* NOT_FOR_UPSTREAM */
 
-/* VIRTCHNL2_PROTO_HDR_TYPE
- * Protocol header type within a packet segment. A segment consists of one or
+/* Protocol header type within a packet segment. A segment consists of one or
  * more protocol headers that make up a logical group of protocol headers. Each
  * logical group of protocol headers encapsulates or is encapsulated using/by
  * tunneling or encapsulation protocols for network virtualization.
  */
-/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ANY			0
-#define VIRTCHNL2_PROTO_HDR_PRE_MAC		1
-/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_MAC			2
-#define VIRTCHNL2_PROTO_HDR_POST_MAC		3
-#define VIRTCHNL2_PROTO_HDR_ETHERTYPE		4
-#define VIRTCHNL2_PROTO_HDR_VLAN		5
-#define VIRTCHNL2_PROTO_HDR_SVLAN		6
-#define VIRTCHNL2_PROTO_HDR_CVLAN		7
-#define VIRTCHNL2_PROTO_HDR_MPLS		8
-#define VIRTCHNL2_PROTO_HDR_UMPLS		9
-#define VIRTCHNL2_PROTO_HDR_MMPLS		10
-#define VIRTCHNL2_PROTO_HDR_PTP			11
-#define VIRTCHNL2_PROTO_HDR_CTRL		12
-#define VIRTCHNL2_PROTO_HDR_LLDP		13
-#define VIRTCHNL2_PROTO_HDR_ARP			14
-#define VIRTCHNL2_PROTO_HDR_ECP			15
-#define VIRTCHNL2_PROTO_HDR_EAPOL		16
-#define VIRTCHNL2_PROTO_HDR_PPPOD		17
-#define VIRTCHNL2_PROTO_HDR_PPPOE		18
-/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4		19
-/* IPv4 and IPv6 Fragment header types are only associated to
- * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
- * cannot be used independently.
- */
-/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG		20
-/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6		21
-/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG		22
-#define VIRTCHNL2_PROTO_HDR_IPV6_EH		23
-/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_UDP			24
-/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TCP			25
-/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_SCTP		26
-/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMP		27
-/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMPV6		28
-#define VIRTCHNL2_PROTO_HDR_IGMP		29
-#define VIRTCHNL2_PROTO_HDR_AH			30
-#define VIRTCHNL2_PROTO_HDR_ESP			31
-#define VIRTCHNL2_PROTO_HDR_IKE			32
-#define VIRTCHNL2_PROTO_HDR_NATT_KEEP		33
-/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_PAY			34
-#define VIRTCHNL2_PROTO_HDR_L2TPV2		35
-#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	36
-#define VIRTCHNL2_PROTO_HDR_L2TPV3		37
-#define VIRTCHNL2_PROTO_HDR_GTP			38
-#define VIRTCHNL2_PROTO_HDR_GTP_EH		39
-#define VIRTCHNL2_PROTO_HDR_GTPCV2		40
-#define VIRTCHNL2_PROTO_HDR_GTPC_TEID		41
-#define VIRTCHNL2_PROTO_HDR_GTPU		42
-#define VIRTCHNL2_PROTO_HDR_GTPU_UL		43
-#define VIRTCHNL2_PROTO_HDR_GTPU_DL		44
-#define VIRTCHNL2_PROTO_HDR_ECPRI		45
-#define VIRTCHNL2_PROTO_HDR_VRRP		46
-#define VIRTCHNL2_PROTO_HDR_OSPF		47
-/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TUN			48
-#define VIRTCHNL2_PROTO_HDR_GRE			49
-#define VIRTCHNL2_PROTO_HDR_NVGRE		50
-#define VIRTCHNL2_PROTO_HDR_VXLAN		51
-#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE		52
-#define VIRTCHNL2_PROTO_HDR_GENEVE		53
-#define VIRTCHNL2_PROTO_HDR_NSH			54
-#define VIRTCHNL2_PROTO_HDR_QUIC		55
-#define VIRTCHNL2_PROTO_HDR_PFCP		56
-#define VIRTCHNL2_PROTO_HDR_PFCP_NODE		57
-#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION	58
-#define VIRTCHNL2_PROTO_HDR_RTP			59
-#define VIRTCHNL2_PROTO_HDR_ROCE		60
-#define VIRTCHNL2_PROTO_HDR_ROCEV1		61
-#define VIRTCHNL2_PROTO_HDR_ROCEV2		62
-/* protocol ids up to 32767 are reserved for AVF use */
-/* 32768 - 65534 are used for user defined protocol ids */
-/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_NO_PROTO		65535
-
-#define VIRTCHNL2_VERSION_MAJOR_2        2
-#define VIRTCHNL2_VERSION_MINOR_0        0
-
-
-/* VIRTCHNL2_OP_VERSION
+enum virtchnl2_proto_hdr_type {
+	/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ANY			= 0,
+	VIRTCHNL2_PROTO_HDR_PRE_MAC		= 1,
+	/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_MAC			= 2,
+	VIRTCHNL2_PROTO_HDR_POST_MAC		= 3,
+	VIRTCHNL2_PROTO_HDR_ETHERTYPE		= 4,
+	VIRTCHNL2_PROTO_HDR_VLAN		= 5,
+	VIRTCHNL2_PROTO_HDR_SVLAN		= 6,
+	VIRTCHNL2_PROTO_HDR_CVLAN		= 7,
+	VIRTCHNL2_PROTO_HDR_MPLS		= 8,
+	VIRTCHNL2_PROTO_HDR_UMPLS		= 9,
+	VIRTCHNL2_PROTO_HDR_MMPLS		= 10,
+	VIRTCHNL2_PROTO_HDR_PTP			= 11,
+	VIRTCHNL2_PROTO_HDR_CTRL		= 12,
+	VIRTCHNL2_PROTO_HDR_LLDP		= 13,
+	VIRTCHNL2_PROTO_HDR_ARP			= 14,
+	VIRTCHNL2_PROTO_HDR_ECP			= 15,
+	VIRTCHNL2_PROTO_HDR_EAPOL		= 16,
+	VIRTCHNL2_PROTO_HDR_PPPOD		= 17,
+	VIRTCHNL2_PROTO_HDR_PPPOE		= 18,
+	/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4		= 19,
+	/* IPv4 and IPv6 Fragment header types are only associated to
+	 * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+	 * cannot be used independently.
+	 */
+	/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4_FRAG		= 20,
+	/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6		= 21,
+	/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6_FRAG		= 22,
+	VIRTCHNL2_PROTO_HDR_IPV6_EH		= 23,
+	/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_UDP			= 24,
+	/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TCP			= 25,
+	/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_SCTP		= 26,
+	/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMP		= 27,
+	/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMPV6		= 28,
+	VIRTCHNL2_PROTO_HDR_IGMP		= 29,
+	VIRTCHNL2_PROTO_HDR_AH			= 30,
+	VIRTCHNL2_PROTO_HDR_ESP			= 31,
+	VIRTCHNL2_PROTO_HDR_IKE			= 32,
+	VIRTCHNL2_PROTO_HDR_NATT_KEEP		= 33,
+	/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_PAY			= 34,
+	VIRTCHNL2_PROTO_HDR_L2TPV2		= 35,
+	VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	= 36,
+	VIRTCHNL2_PROTO_HDR_L2TPV3		= 37,
+	VIRTCHNL2_PROTO_HDR_GTP			= 38,
+	VIRTCHNL2_PROTO_HDR_GTP_EH		= 39,
+	VIRTCHNL2_PROTO_HDR_GTPCV2		= 40,
+	VIRTCHNL2_PROTO_HDR_GTPC_TEID		= 41,
+	VIRTCHNL2_PROTO_HDR_GTPU		= 42,
+	VIRTCHNL2_PROTO_HDR_GTPU_UL		= 43,
+	VIRTCHNL2_PROTO_HDR_GTPU_DL		= 44,
+	VIRTCHNL2_PROTO_HDR_ECPRI		= 45,
+	VIRTCHNL2_PROTO_HDR_VRRP		= 46,
+	VIRTCHNL2_PROTO_HDR_OSPF		= 47,
+	/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TUN			= 48,
+	VIRTCHNL2_PROTO_HDR_GRE			= 49,
+	VIRTCHNL2_PROTO_HDR_NVGRE		= 50,
+	VIRTCHNL2_PROTO_HDR_VXLAN		= 51,
+	VIRTCHNL2_PROTO_HDR_VXLAN_GPE		= 52,
+	VIRTCHNL2_PROTO_HDR_GENEVE		= 53,
+	VIRTCHNL2_PROTO_HDR_NSH			= 54,
+	VIRTCHNL2_PROTO_HDR_QUIC		= 55,
+	VIRTCHNL2_PROTO_HDR_PFCP		= 56,
+	VIRTCHNL2_PROTO_HDR_PFCP_NODE		= 57,
+	VIRTCHNL2_PROTO_HDR_PFCP_SESSION	= 58,
+	VIRTCHNL2_PROTO_HDR_RTP			= 59,
+	VIRTCHNL2_PROTO_HDR_ROCE		= 60,
+	VIRTCHNL2_PROTO_HDR_ROCEV1		= 61,
+	VIRTCHNL2_PROTO_HDR_ROCEV2		= 62,
+	/* Protocol ids up to 32767 are reserved */
+	/* 32768 - 65534 are used for user defined protocol ids */
+	/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_NO_PROTO		= 65535,
+};
+
+enum virtchl2_version {
+	VIRTCHNL2_VERSION_MINOR_0		= 0,
+	VIRTCHNL2_VERSION_MAJOR_2		= 2,
+};
+
+/**
+ * struct virtchnl2_version_info - Version information
+ * @major: Major version
+ * @minor: Minor version
+ *
  * PF/VF posts its version number to the CP. CP responds with its version number
  * in the same format, along with a return code.
  * If there is a major version mismatch, then the PF/VF cannot operate.
@@ -469,15 +577,54 @@
  * This version opcode MUST always be specified as == 1, regardless of other
  * changes in the API. The CP must always respond to this message without
  * error regardless of version mismatch.
+ *
+ * Associated with VIRTCHNL2_OP_VERSION.
  */
 struct virtchnl2_version_info {
-        u32 major;
-        u32 minor;
+	__le32 major;
+	__le32 minor;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
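The negotiation rule in the comment above (major must match, minor falls back to the lower value) can be sketched as follows. This is an illustrative sketch only, not part of the patch; the struct is a simplified host-endian stand-in for virtchnl2_version_info and the function name is hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

/* Simplified stand-in for virtchnl2_version_info (host-endian). */
struct version_info {
	uint32_t major;
	uint32_t minor;
};

/* Returns true and fills 'agreed' when PF/VF and CP can operate
 * together: majors must match exactly, the agreed minor is the
 * lower of the two. A major mismatch means the PF/VF cannot operate. */
static bool negotiate_version(const struct version_info *ours,
			      const struct version_info *cp,
			      struct version_info *agreed)
{
	if (ours->major != cp->major)
		return false;	/* major mismatch: cannot operate */

	agreed->major = ours->major;
	agreed->minor = ours->minor < cp->minor ? ours->minor : cp->minor;
	return true;
}
```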
 
-/* VIRTCHNL2_OP_GET_CAPS
+/**
+ * struct virtchnl2_get_capabilities - Capabilities info
+ * @csum_caps: See enum virtchnl2_cap_txrx_csum
+ * @seg_caps: See enum virtchnl2_cap_seg
+ * @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at
+ * @rsc_caps: See enum virtchnl2_cap_rsc
+ * @rss_caps: See enum virtchnl2_cap_rss
+ * @other_caps: See enum virtchnl2_cap_other
+ * @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
+ *		     provided by CP.
+ * @mailbox_vector_id: Mailbox vector id
+ * @num_allocated_vectors: Maximum number of allocated vectors for the device
+ * @max_rx_q: Maximum number of supported Rx queues
+ * @max_tx_q: Maximum number of supported Tx queues
+ * @max_rx_bufq: Maximum number of supported buffer queues
+ * @max_tx_complq: Maximum number of supported completion queues
+ * @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
+ *		   responds with the maximum VFs granted.
+ * @max_vports: Maximum number of vports that can be supported
+ * @default_num_vports: Default number of vports driver should allocate on load
+ * @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes
+ * @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
+ *			    sent per transmit packet without needing to be
+ *			    linearized.
+ * @reserved: Reserved field
+ * @max_adis: Max number of ADIs
+#ifdef NOT_FOR_UPSTREAM
+ * @oem_cp_ver_major: OEM CP major version number
+ * @oem_cp_ver_minor: OEM CP minor version number
+#else
+ * @reserved1: Reserved field
+#endif
+ * @device_type: See enum virtchl2_device_type
+ * @min_sso_packet_len: Min packet length supported by device for single
+ *			segment offload
+ * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
+ *			 an LSO
+ * @pad1: Padding for future extensions
+ *
  * Dataplane driver sends this message to CP to negotiate capabilities and
  * provides a virtchnl2_get_capabilities structure with its desired
  * capabilities, max_sriov_vfs and num_allocated_vectors.
@@ -495,317 +642,344 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
  * mailbox_vector_id and the number of itr index registers in itr_idx_map.
  * It also responds with default number of vports that the dataplane driver
  * should comeup with in default_num_vports and maximum number of vports that
- * can be supported in max_vports
+ * can be supported in max_vports.
+ *
+ * Associated with VIRTCHNL2_OP_GET_CAPS.
  */
 struct virtchnl2_get_capabilities {
-	/* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
 	__le32 csum_caps;
-
-	/* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
 	__le32 seg_caps;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 hsplit_caps;
-
-	/* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
 	__le32 rsc_caps;
-
-	/* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions  */
 	__le64 rss_caps;
-
-
-	/* see VIRTCHNL2_OTHER_CAPS definitions  */
 	__le64 other_caps;
-
-	/* DYN_CTL register offset and vector id for mailbox provided by CP */
 	__le32 mailbox_dyn_ctl;
 	__le16 mailbox_vector_id;
-	/* Maximum number of allocated vectors for the device */
 	__le16 num_allocated_vectors;
-
-	/* Maximum number of queues that can be supported */
 	__le16 max_rx_q;
 	__le16 max_tx_q;
 	__le16 max_rx_bufq;
 	__le16 max_tx_complq;
-
-	/* The PF sends the maximum VFs it is requesting. The CP responds with
-	 * the maximum VFs granted.
-	 */
 	__le16 max_sriov_vfs;
-
-	/* maximum number of vports that can be supported */
 	__le16 max_vports;
-	/* default number of vports driver should allocate on load */
 	__le16 default_num_vports;
-
-	/* Max header length hardware can parse/checksum, in bytes */
 	__le16 max_tx_hdr_size;
-
-	/* Max number of scatter gather buffers that can be sent per transmit
-	 * packet without needing to be linearized
-	 */
 	u8 max_sg_bufs_per_tx_pkt;
-
-	u8 reserved1;
-	/* upper bound of number of ADIs supported */
+	u8 reserved;
 	__le16 max_adis;
-
-	/* version of Control Plane that is running */
+#ifdef NOT_FOR_UPSTREAM
 	__le16 oem_cp_ver_major;
 	__le16 oem_cp_ver_minor;
-	/* see VIRTCHNL2_DEVICE_TYPE definitions */
+#else
+	u8 reserved1[4];
+#endif /* NOT_FOR_UPSTREAM */
 	__le32 device_type;
-
-	/* min packet length supported by device for single segment offload */
 	u8 min_sso_packet_len;
-	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
-
-	u8 reserved[10];
+	u8 pad1[10];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
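Since the CP responds to GET_CAPS with the granted subset of the requested capabilities, the driver should gate each feature on the returned bitmask rather than on what it asked for. A minimal sketch, not part of the patch; the bit values below are hypothetical placeholders, not the real VIRTCHNL2 capability bits.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical capability bits for illustration only. */
#define CAP_RSS		(1ULL << 0)
#define CAP_RSC		(1ULL << 1)

/* True only if every bit in 'want' was granted by the CP. */
static bool cap_granted(uint64_t granted_caps, uint64_t want)
{
	return (granted_caps & want) == want;
}
```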
 
+/**
+ * struct virtchnl2_queue_reg_chunk - Single queue chunk
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Start Queue ID
+ * @num_queues: Number of queues in the chunk
+ * @pad: Padding
+ * @qtail_reg_start: Queue tail register offset
+ * @qtail_reg_spacing: Queue tail register spacing
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_reg_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
 	__le32 pad;
-
-	/* Queue tail register offset and spacing provided by CP */
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
-
-	u8 reserved[4];
+	u8 pad1[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
+ *				       queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info
+ */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
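Because the trailing array is declared as chunks[1], one element is already counted inside sizeof(), so a message carrying N chunks needs sizeof() plus N-1 extra elements. An illustrative sketch, not part of the patch, using simplified stand-in layouts that match the 32/40 byte sizes checked above.

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the virtchnl2 structs above. */
struct queue_reg_chunk {
	uint32_t type;
	uint32_t start_queue_id;
	uint32_t num_queues;
	uint32_t pad;
	uint64_t qtail_reg_start;
	uint32_t qtail_reg_spacing;
	uint8_t pad1[4];
};

struct queue_reg_chunks {
	uint16_t num_chunks;
	uint8_t pad[6];
	struct queue_reg_chunk chunks[1];
};

/* One chunk is already inside sizeof(), so only chunks beyond the
 * first add to the required buffer length. */
static size_t chunks_msg_size(uint16_t num_chunks)
{
	return sizeof(struct queue_reg_chunks) +
	       (size_t)(num_chunks - 1) * sizeof(struct queue_reg_chunk);
}
```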
 
-/* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
-#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
+/**
+ * enum virtchnl2_vport_flags - Vport flags
+ * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+#ifdef NOT_FOR_UPSTREAM
+ * @VIRTCHNL2_VPORT_PORT2PORT_PORT: Port2port port flag
+#endif
+ */
+enum virtchnl2_vport_flags {
+	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+#ifdef NOT_FOR_UPSTREAM
+	VIRTCHNL2_VPORT_PORT2PORT_PORT		= BIT(15),
+#endif /* NOT_FOR_UPSTREAM */
+};
 
+#ifndef LINUX_SUPPORT
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
+#endif /* !LINUX_SUPPORT */
 
-/* VIRTCHNL2_OP_CREATE_VPORT
- * PF sends this message to CP to create a vport by filling in required
+/**
+ * struct virtchnl2_create_vport - Create vport config info
+ * @vport_type: See enum virtchnl2_vport_type
+ * @txq_model: See virtchnl2_queue_model
+ * @rxq_model: See virtchnl2_queue_model
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Valid only if txq_model is split queue
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Valid only if rxq_model is split queue
+ * @default_rx_q: Relative receive queue index to be used as default
+ * @vport_index: Used to align PF and CP in case of default multiple vports,
+ *		 it is filled by the PF and CP returns the same value, to
+ *		 enable the driver to support multiple asynchronous parallel
+ *		 CREATE_VPORT requests and associate a response to a specific
+ *		 request.
+ * @max_mtu: Max MTU. CP populates this field on response
+ * @vport_id: Vport id. CP populates this field on response
+ * @default_mac_addr: Default MAC address
+ * @vport_flags: See enum virtchnl2_vport_flags
+ * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
+ * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
+ * @reserved: Reserved bytes and cannot be used
+ * @rss_algorithm: RSS algorithm
+ * @rss_key_size: RSS key size
+ * @rss_lut_size: RSS LUT size
+ * @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to CP to create a vport by filling in required
  * fields of virtchnl2_create_vport structure.
  * CP responds with the updated virtchnl2_create_vport structure containing the
  * necessary fields followed by chunks which in turn will have an array of
  * num_chunks entries of virtchnl2_queue_chunk structures.
  */
 struct virtchnl2_create_vport {
-	/* PF/VF populates the following fields on request */
-	/* see VIRTCHNL2_VPORT_TYPE definitions */
 	__le16 vport_type;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 txq_model;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 rxq_model;
 	__le16 num_tx_q;
-	/* valid only if txq_model is split queue */
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
-	/* valid only if rxq_model is split queue */
 	__le16 num_rx_bufq;
-	/* relative receive queue index to be used as default */
 	__le16 default_rx_q;
-	/* used to align PF and CP in case of default multiple vports, it is
-	 * filled by the PF and CP returns the same value, to enable the driver
-	 * to support multiple asynchronous parallel CREATE_VPORT requests and
-	 * associate a response to a specific request
-	 */
 	__le16 vport_index;
-
-	/* CP populates the following fields on response */
 	__le16 max_mtu;
 	__le32 vport_id;
+#ifndef LINUX_SUPPORT
 	u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_VPORT_FLAGS definitions */
+#else
+	u8 default_mac_addr[ETH_ALEN];
+#endif /* !LINUX_SUPPORT */
 	__le16 vport_flags;
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 rx_desc_ids;
-	/* see VIRTCHNL2_TX_DESC_IDS definitions */
 	__le64 tx_desc_ids;
-
-	u8 reserved1[72];
-
-	/* see VIRTCHNL2_RSS_ALGORITHM definitions */
+	u8 reserved[72];
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
-
-	u8 reserved[20];
+	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
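The field docs above note that num_tx_complq and num_rx_bufq are valid only when the corresponding queue model is split. A driver-side sanity check might look like the sketch below; it is not part of the patch, and the queue-model enum values are stand-ins for virtchnl2_queue_model.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the virtchnl2 queue model values. */
enum queue_model { QUEUE_MODEL_SINGLE = 0, QUEUE_MODEL_SPLIT = 1 };

/* In split-queue model the completion/buffer queue counts must be
 * non-zero; in single-queue model they must be zero. */
static bool check_q_counts(enum queue_model txq_model, uint16_t num_tx_complq,
			   enum queue_model rxq_model, uint16_t num_rx_bufq)
{
	bool tx_split = (txq_model == QUEUE_MODEL_SPLIT);
	bool rx_split = (rxq_model == QUEUE_MODEL_SPLIT);

	if (tx_split && num_tx_complq == 0)
		return false;
	if (!tx_split && num_tx_complq != 0)
		return false;
	if (rx_split && num_rx_bufq == 0)
		return false;
	if (!rx_split && num_rx_bufq != 0)
		return false;
	return true;
}
```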
 
-/* VIRTCHNL2_OP_DESTROY_VPORT
- * VIRTCHNL2_OP_ENABLE_VPORT
- * VIRTCHNL2_OP_DISABLE_VPORT
- * PF sends this message to CP to destroy, enable or disable a vport by filling
- * in the vport_id in virtchnl2_vport structure.
+/**
+ * struct virtchnl2_vport - Vport identifier information
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends this message to CP to destroy, enable or disable a vport by
+ * filling in the vport_id in virtchnl2_vport structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
+ * VIRTCHNL2_OP_DISABLE_VPORT.
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
 
-/* Transmit queue config info */
+/**
+ * struct virtchnl2_txq_info - Transmit queue config info
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue ID
+ * @relative_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue. Used in many to one mapping of transmit queues to
+ *		       completion queue.
+ * @model: See enum virtchnl2_queue_model
+ * @sched_mode: See enum virtchnl2_txq_sched_mode
+ * @qflags: TX queue feature flags
+ * @ring_len: Ring length
+ * @tx_compl_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue.
+ * @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX
+ * @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
+ *		      messages for the respective CONFIG_TX queue.
+ * @pad: Padding
+ * @egress_pasid: Egress PASID info
+ * @egress_hdr_pasid: Egress header PASID
+ * @egress_buf_pasid: Egress buffer PASID
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_txq_info {
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
-
 	__le32 queue_id;
-	/* valid only if queue model is split and type is transmit queue. Used
-	 * in many to one mapping of transmit queues to completion queue
-	 */
 	__le16 relative_queue_id;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 model;
-
-	/* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
 	__le16 sched_mode;
-
-	/* see VIRTCHNL2_TXQ_FLAGS definitions */
 	__le16 qflags;
 	__le16 ring_len;
-
-	/* valid only if queue model is split and type is transmit queue */
 	__le16 tx_compl_queue_id;
-	/* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
-	/* see VIRTCHNL2_PEER_TYPE definitions */
 	__le16 peer_type;
-	/* valid only if queue type is CONFIG_TX and used to deliver messages
-	 * for the respective CONFIG_TX queue
-	 */
 	__le16 peer_rx_queue_id;
-
 	u8 pad[4];
-
-	/* Egress pasid is used for SIOV use case */
 	__le32 egress_pasid;
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
-
-	u8 reserved[8];
+	u8 pad1[8];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
 
-/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
- * PF sends this message to set up parameters for one or more transmit queues.
- * This message contains an array of num_qinfo instances of virtchnl2_txq_info
- * structures. CP configures requested queues and returns a status code. If
- * num_qinfo specified is greater than the number of queues associated with the
- * vport, an error is returned and no queues are configured.
+/**
+ * struct virtchnl2_config_tx_queues - TX queue config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of virtchnl2_txq_info structs
+ * @pad: Padding for future extensions
+ * @qinfo: Tx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more transmit
+ * queues. This message contains an array of num_qinfo instances of
+ * virtchnl2_txq_info structures. CP configures requested queues and returns
+ * a status code. If num_qinfo specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
  */
 struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
-
-	u8 reserved[10];
+	u8 pad[10];
 	struct virtchnl2_txq_info qinfo[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
 
-/* Receive queue config info */
+/**
+ * struct virtchnl2_rxq_info - Receive queue config info
+ * @desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue id
+ * @model: See enum virtchnl2_queue_model
+ * @hdr_buffer_size: Header buffer size
+ * @data_buffer_size: Data buffer size
+ * @max_pkt_size: Max packet size
+ * @ring_len: Ring length
+ * @buffer_notif_stride: Buffer notification stride in units of 32-descriptors.
+ *			 This field must be a power of 2.
+ * @pad: Padding
+ * @dma_head_wb_addr: Applicable only for receive buffer queues
+ * @qflags: Applicable only for receive completion queues.
+ *	    See enum virtchnl2_rxq_flags.
+ * @rx_buffer_low_watermark: Rx buffer low watermark
+ * @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @bufq2_ena: Indicates if there is a second buffer queue; rx_bufq2_id is
+ *	       valid only if this field is set.
+ * @pad1: Padding
+ * @ingress_pasid: Ingress PASID
+ * @ingress_hdr_pasid: Ingress header PASID
+ * @ingress_buf_pasid: Ingress buffer PASID
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_rxq_info {
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 desc_ids;
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 queue_id;
-
-	/* see QUEUE_MODEL definitions */
 	__le16 model;
-
 	__le16 hdr_buffer_size;
 	__le32 data_buffer_size;
 	__le32 max_pkt_size;
-
 	__le16 ring_len;
 	u8 buffer_notif_stride;
-	u8 pad[1];
-
-	/* Applicable only for receive buffer queues */
+	u8 pad;
 	__le64 dma_head_wb_addr;
-
-	/* Applicable only for receive completion queues */
-	/* see VIRTCHNL2_RXQ_FLAGS definitions */
 	__le16 qflags;
-
 	__le16 rx_buffer_low_watermark;
-
-	/* valid only in split queue model */
 	__le16 rx_bufq1_id;
-	/* valid only in split queue model */
 	__le16 rx_bufq2_id;
-	/* it indicates if there is a second buffer, rx_bufq2_id is valid only
-	 * if this field is set
-	 */
 	u8 bufq2_ena;
-	u8 pad2[3];
-
-	/* Ingress pasid is used for SIOV use case */
+	u8 pad1[3];
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
-
-	u8 reserved[16];
+	u8 pad2[16];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
-/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
- * PF sends this message to set up parameters for one or more receive queues.
+/**
+ * struct virtchnl2_config_rx_queues - Rx queues config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of virtchnl2_rxq_info structs
+ * @pad: Padding for future extensions
+ * @qinfo: Rx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more receive queues.
  * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
  * structures. CP configures requested queues and returns a status code.
  * If the number of queues specified is greater than the number of queues
  * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
  */
 struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
-
-	u8 reserved[18];
+	u8 pad[18];
 	struct virtchnl2_rxq_info qinfo[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
 
-/* VIRTCHNL2_OP_ADD_QUEUES
- * PF sends this message to request additional transmit/receive queues beyond
+/**
+ * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
+ * @vport_id: Vport id
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Number of Tx completion queues
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Number of Rx buffer queues
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to request additional transmit/receive queues beyond
  * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
  * structure is used to specify the number of each type of queues.
  * CP responds with the same structure with the actual number of queues assigned
  * followed by num_chunks of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUES.
  */
 struct virtchnl2_add_queues {
 	__le32 vport_id;
@@ -813,72 +987,84 @@ struct virtchnl2_add_queues {
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
 	__le16 num_rx_bufq;
-	u8 reserved[4];
+	u8 pad[4];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
 
 /* Queue Groups Extension */
-
+/**
+ * struct virtchnl2_rx_queue_group_info - RX queue group info
+ * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
+ *		  allocated by CreateVport command. New size will be returned
+ *		  if allocation succeeded, otherwise original rss_size from
+ *		  CreateVport will be returned.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_rx_queue_group_info {
-	/* IN/OUT, user can ask to update rss_lut size originally allocated
-	 * by CreateVport command. New size will be returned if allocation
-	 * succeeded, otherwise original rss_size from CreateVport will
-	 * be returned.
-	 */
 	__le16 rss_lut_size;
-	/* Future extension purpose */
 	u8 pad[6];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
 
+/**
+ * struct virtchnl2_tx_queue_group_info - TX queue group info
+ * @tx_tc: TX TC queue group will be connected to
+ * @priority: Each group can have its own priority, value 0-7. A group with a
+ *	      unique priority is treated as strict priority. Queue groups
+ *	      configured with the same priority are assumed to be part of a
+ *	      WFQ arbitration group and are expected to be assigned a weight.
+ * @is_sp: Determines if queue group is expected to be Strict Priority according
+ *	   to its priority.
+ * @pad: Padding
+ * @pir_weight: Peak Info Rate Weight in case Queue Group is part of WFQ
+ *		arbitration set.
+ *		The weights of the groups are independent of each other.
+ *		Possible values: 1-200
+ * @cir_pad: Future extension purpose for CIR only
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_tx_queue_group_info { /* IN */
-	/* TX TC queue group will be connected to */
 	u8 tx_tc;
-	/* Each group can have its own priority, value 0-7, while each group
-	 * with unique priority is strict priority.
-	 * It can be single set of queue groups which configured with
-	 * same priority, then they are assumed part of WFQ arbitration
-	 * group and are expected to be assigned with weight.
-	 */
 	u8 priority;
-	/* Determines if queue group is expected to be Strict Priority
-	 * according to its priority
-	 */
 	u8 is_sp;
 	u8 pad;
-
-	/* Peak Info Rate Weight in case Queue Group is part of WFQ
-	 * arbitration set.
-	 * The weights of the groups are independent of each other.
-	 * Possible values: 1-200
-	 */
 	__le16 pir_weight;
-	/* Future extension purpose for CIR only */
 	u8 cir_pad[2];
-	/* Future extension purpose*/
 	u8 pad2[8];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_tx_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_group_id - Queue group ID
+ * @queue_group_id: Queue group ID - dependent on its type
+ *		    Data: ID which is relative to the vport
+ *		    Config & Mailbox: ID which is relative to the function
+ *		    This ID is used in future calls, i.e. delete.
+ *		    Requested by host and assigned by Control plane.
+ * @queue_group_type: Functional type: See enum virtchnl2_queue_group_type
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_group_id {
-	/* Queue group ID - depended on it's type
-	 * Data: is an ID which is relative to Vport
-	 * Config & Mailbox: is an ID which is relative to func.
-	 * This ID is use in future calls, i.e. delete.
-	 * Requested by host and assigned by Control plane.
-	 */
 	__le16 queue_group_id;
-	/* Functional type: see VIRTCHNL2_QUEUE_GROUP_TYPE definitions */
 	__le16 queue_group_type;
 	u8 pad[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 
+/**
+ * struct virtchnl2_queue_group_info - Queue group info
+ * @qg_id: Queue group ID
+ * @num_tx_q: Number of TX queues
+ * @num_tx_complq: Number of completion queues
+ * @num_rx_q: Number of RX queues
+ * @num_rx_bufq: Number of RX buffer queues
+ * @tx_q_grp_info: TX queue group info
+ * @rx_q_grp_info: RX queue group info
+ * @pad: Padding for future extensions
+ * @chunks: Queue register chunks
+ */
 struct virtchnl2_queue_group_info {
 	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
@@ -890,242 +1076,309 @@ struct virtchnl2_queue_group_info {
 
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
-	/* Future extension purpose */
 	u8 pad[40];
 	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_groups - Queue groups list
+ * @num_queue_groups: Total number of queue groups
+ * @pad: Padding for future extensions
+ * @groups: Array of queue group info
+ */
 struct virtchnl2_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[6];
 	struct virtchnl2_queue_group_info groups[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
 
-/* VIRTCHNL2_OP_ADD_QUEUE_GROUPS
+/**
+ * struct virtchnl2_add_queue_groups - Add queue groups
+ * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @pad: Padding for future extensions
+ * @qg_info: IN/OUT. List of all the queue groups
+ *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
  * groups and queues assigned followed by num_queue_groups and num_chunks of
  * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
-	/* IN, vport_id to add queue group to, same as allocated by CreateVport.
-	 * NA for mailbox and other types not assigned to vport
-	 */
 	__le32 vport_id;
 	u8 pad[4];
-	/* IN/OUT */
 	struct virtchnl2_queue_groups qg_info;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
-/* VIRTCHNL2_OP_DEL_QUEUE_GROUPS
+/**
+ * struct virtchnl2_delete_queue_groups - Delete queue groups
+ * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ *	      CreateVport.
+ * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @pad: Padding
+ * @qg_ids: IN, IDs & types of Queue Groups to delete
+ *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
  * to be deleted. CP performs requested action and returns status and update
  * num_queue_groups with number of successfully deleted queue groups.
+ *
+ * Associated with VIRTCHNL2_OP_DEL_QUEUE_GROUPS.
  */
 struct virtchnl2_delete_queue_groups {
-	/* IN, vport_id to delete queue group from, same as
-	 * allocated by CreateVport.
-	 */
 	__le32 vport_id;
-	/* IN/OUT, Defines number of groups provided below */
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	/* IN, IDs & types of Queue Groups to delete */
 	struct virtchnl2_queue_group_id qg_ids[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
 
-/* Structure to specify a chunk of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
+ *				   interrupt vectors.
+ * @start_vector_id: Start vector id
+ * @start_evv_id: Start EVV id
+ * @num_vectors: Number of vectors
+ * @pad: Padding
+ * @dynctl_reg_start: DYN_CTL register offset
+ * @dynctl_reg_spacing: Register spacing between DYN_CTL registers of 2
+ *			consecutive vectors.
+ * @itrn_reg_start: ITRN register offset
+ * @itrn_reg_spacing: Register spacing between itrn registers of 2
+ *		      consecutive vectors.
+ * @itrn_index_spacing: Register spacing between itrn registers of the same
+ *			vector where n=0..2.
+ * @pad1: Padding for future extensions
+ *
+ * Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
 struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
-	__le16 pad1;
-
-	/* Register offsets and spacing provided by CP.
-	 * dynamic control registers are used for enabling/disabling/re-enabling
-	 * interrupts and updating interrupt rates in the hotpath. Any changes
-	 * to interrupt rates in the dynamic control registers will be reflected
-	 * in the interrupt throttling rate registers.
-	 * itrn registers are used to update interrupt rates for specific
-	 * interrupt indices without modifying the state of the interrupt.
-	 */
+	__le16 pad;
+
 	__le32 dynctl_reg_start;
-	/* register spacing between dynctl registers of 2 consecutive vectors */
 	__le32 dynctl_reg_spacing;
 
 	__le32 itrn_reg_start;
-	/* register spacing between itrn registers of 2 consecutive vectors */
 	__le32 itrn_reg_spacing;
-	/* register spacing between itrn registers of the same vector
-	 * where n=0..2
-	 */
 	__le32 itrn_index_spacing;
-	u8 reserved[4];
+	u8 pad1[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
-/* Structure to specify several chunks of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunks - Chunks of contiguous interrupt vectors
+ * @num_vchunks: number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends virtchnl2_vector_chunks struct to specify the vectors it is
+ * giving away. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
+ */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunk vchunks[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
 
-/* VIRTCHNL2_OP_ALLOC_VECTORS
- * PF sends this message to request additional interrupt vectors beyond the
+/**
+ * struct virtchnl2_alloc_vectors - Vector allocation info
+ * @num_vectors: Number of vectors
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends this message to request additional interrupt vectors beyond the
  * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
  * structure is used to specify the number of vectors requested. CP responds
  * with the same structure with the actual number of vectors assigned followed
  * by virtchnl2_vector_chunks structure identifying the vector ids.
+ *
+ * Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunks vchunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
 
-/* VIRTCHNL2_OP_DEALLOC_VECTORS
- * PF sends this message to release the vectors.
- * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
- * away. CP performs requested action and returns status.
- */
-
-/* VIRTCHNL2_OP_GET_RSS_LUT
- * VIRTCHNL2_OP_SET_RSS_LUT
- * PF sends this message to get or set RSS lookup table. Only supported if
+/**
+ * struct virtchnl2_rss_lut - RSS LUT info
+ * @vport_id: Vport id
+ * @lut_entries_start: Start of LUT entries
+ * @lut_entries: Number of LUT entries
+ * @pad: Padding
+ * @lut: RSS lookup table
+ *
+ * PF/VF sends this message to get or set RSS lookup table. Only supported if
  * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_lut structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
  */
 struct virtchnl2_rss_lut {
 	__le32 vport_id;
+
 	__le16 lut_entries_start;
 	__le16 lut_entries;
-	u8 reserved[4];
-	__le32 lut[1]; /* RSS lookup table */
+	u8 pad[4];
+	__le32 lut[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * PF sends this message to get RSS key. Only supported if both PF and CP
- * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
- * the virtchnl2_rss_key structure
- */
-
-/* VIRTCHNL2_OP_GET_RSS_HASH
- * VIRTCHNL2_OP_SET_RSS_HASH
- * PF sends these messages to get and set the hash filter enable bits for RSS.
- * By default, the CP sets these to all possible traffic types that the
+/**
+ * struct virtchnl2_rss_hash - RSS hash info
+ * @ptype_groups: Packet type groups bitmap
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends these messages to get and set the hash filter enable bits for
+ * RSS. By default, the CP sets these to all possible traffic types that the
  * hardware supports. The PF can query this value if it wants to change the
  * traffic types that are hashed by the hardware.
  * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
  * during configuration negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH
  */
 struct virtchnl2_rss_hash {
-	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
+
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
 
-/* VIRTCHNL2_OP_SET_SRIOV_VFS
+/**
+ * struct virtchnl2_sriov_vfs_info - VFs info
+ * @num_vfs: Number of VFs
+ * @pad: Padding for future extensions
+ *
  * This message is used to set number of SRIOV VFs to be created. The actual
  * allocation of resources for the VFs in terms of vport, queues and interrupts
- * is done by CP. When this call completes, the APF driver calls
+ * is done by CP. When this call completes, the IDPF driver calls
  * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
  * The number of VFs set to 0 will destroy all the VFs of this function.
+ *
+ * Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
  */
-
 struct virtchnl2_sriov_vfs_info {
 	__le16 num_vfs;
 	__le16 pad;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 
-/* structure to specify single chunk of queue */
-/* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_queue_reg_chunks - Specify several chunks of
+ *						contiguous queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info. 'chunks' is fixed size(not flexible) and
+ *	    will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 
-/* structure to specify single chunk of interrupt vector */
-/* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_vector_chunks - Chunks of contiguous interrupt
+ *					     vectors.
+ * @num_vchunks: Number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info. 'vchunks' is fixed size
+ *	     (not flexible) and will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
 	struct virtchnl2_vector_chunk vchunks[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_non_flex_vector_chunks);
 
-/* VIRTCHNL2_OP_NON_FLEX_CREATE_ADI
+/**
+ * struct virtchnl2_non_flex_create_adi - Create ADI
+ * @pasid: PF sends PASID to CP
+ * @mbx_id: Set to 1 by the PF when requesting CP to provide a HW mailbox
+ *	    id, otherwise set to 0.
+ * @mbx_vec_id: PF sends mailbox vector id to CP
+ * @adi_index: PF populates this ADI index
+ * @adi_id: CP populates ADI id
+ * @pad: Padding
+ * @chunks: CP populates queue chunks
+ * @vchunks: PF sends vector chunks to CP
+ *
  * PF sends this message to CP to create ADI by filling in required
  * fields of virtchnl2_non_flex_create_adi structure.
- * CP responds with the updated virtchnl2_non_flex_create_adi structure containing
- * the necessary fields followed by chunks which in turn will have an array of
- * num_chunks entries of virtchnl2_queue_chunk structures.
+ * CP responds with the updated virtchnl2_non_flex_create_adi structure
+ * containing the necessary fields followed by chunks which in turn will have
+ * an array of num_chunks entries of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_CREATE_ADI.
  */
 struct virtchnl2_non_flex_create_adi {
-	/* PF sends PASID to CP */
 	__le32 pasid;
-	/*
-	 * mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
-	 * id else it is set to 0 by PF
-	 */
 	__le16 mbx_id;
-	/* PF sends mailbox vector id to CP */
 	__le16 mbx_vec_id;
-	/* PF populates this ADI index */
 	__le16 adi_index;
-	/* CP populates ADI id */
 	__le16 adi_id;
-	u8 reserved[64];
-	u8 pad[4];
-	/* CP populates queue chunks */
+	u8 pad[68];
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
-	/* PF sends vector chunks to CP */
 	struct virtchnl2_non_flex_vector_chunks vchunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
 
-/* VIRTCHNL2_OP_DESTROY_ADI
+/**
+ * struct virtchnl2_non_flex_destroy_adi - Destroy ADI
+ * @adi_id: ADI id to destroy
+ * @pad: Padding
+ *
  * PF sends this message to CP to destroy ADI by filling
 * in the adi_id in virtchnl2_non_flex_destroy_adi structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI.
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
-	u8 reserved[2];
+	u8 pad[2];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 
-/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+/**
+ * struct virtchnl2_ptype - Packet type info
+ * @ptype_id_10: 10-bit packet type
+ * @ptype_id_8: 8-bit packet type
+ * @proto_id_count: Number of protocol ids the packet supports, maximum of 32
+ *		    protocol ids are supported.
+ * @pad: Padding
+ * @proto_id: proto_id_count decides the allocation of protocol id array.
+ *	      See enum virtchnl2_proto_hdr_type.
+ *
+ * Based on the descriptor type the PF supports, CP fills ptype_id_10 or
  * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
  * is set to 0xFFFF, PF should consider this ptype as dummy one and it is the
  * last ptype.
@@ -1133,70 +1386,101 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 struct virtchnl2_ptype {
 	__le16 ptype_id_10;
 	u8 ptype_id_8;
-	/* number of protocol ids the packet supports, maximum of 32
-	 * protocol ids are supported
-	 */
 	u8 proto_id_count;
 	__le16 pad;
-	/* proto_id_count decides the allocation of protocol id array */
-	/* see VIRTCHNL2_PROTO_HDR_TYPE */
 	__le16 proto_id[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
 
-/* VIRTCHNL2_OP_GET_PTYPE_INFO
- * PF sends this message to CP to get all supported packet types. It does by
- * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
- * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
- * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
- * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
- * doesn't fit into one mailbox buffer, CP splits ptype info into multiple
- * messages, where each message will have the start ptype id, number of ptypes
- * sent in that message and the ptype array itself. When CP is done updating
- * all ptype information it extracted from the package (number of ptypes
- * extracted might be less than what PF expects), it will append a dummy ptype
- * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
- * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
- * messages.
+/**
+ * struct virtchnl2_get_ptype_info - Packet type info
+ * @start_ptype_id: Starting ptype ID
+ * @num_ptypes: Number of packet types from start_ptype_id
+ * @pad: Padding for future extensions
+ * @ptype: Array of packet type info
+ *
+ * The total number of supported packet types is based on the descriptor type.
+ * For the flex descriptor, it is 1024 (10-bit ptype), and for the base
+ * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
+ * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
+ * 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
+ * are added at the end of the 'virtchnl2_get_ptype_info' message (Note: There
+ * is no specific field for the ptypes; they are added at the end of the
+ * ptype info message. PF/VF is expected to extract the ptypes accordingly.
+ * This is done because the compiler doesn't allow nested flexible
+ * array fields).
+ *
+ * If all the ptypes don't fit into one mailbox buffer, CP splits the
+ * ptype info into multiple messages, where each message will have its own
+ * 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
+ * updating all the ptype information extracted from the package (the number of
+ * ptypes extracted might be less than what PF/VF expects), it will append a
+ * dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
+ * to the ptype array.
+ *
+ * PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
  */
 struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
+
 	struct virtchnl2_ptype ptype[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
 
-/* VIRTCHNL2_OP_GET_STATS
+/**
+ * struct virtchnl2_vport_stats - Vport statistics
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @rx_bytes: Received bytes
+ * @rx_unicast: Received unicast packets
+ * @rx_multicast: Received multicast packets
+ * @rx_broadcast: Received broadcast packets
+ * @rx_discards: Discarded packets on receive
+ * @rx_errors: Receive errors
+ * @rx_unknown_protocol: Unknown protocol
+ * @tx_bytes: Transmitted bytes
+ * @tx_unicast: Transmitted unicast packets
+ * @tx_multicast: Transmitted multicast packets
+ * @tx_broadcast: Transmitted broadcast packets
+ * @tx_discards: Discarded packets on transmit
+ * @tx_errors: Transmit errors
+ * @rx_invalid_frame_length: Packets with invalid frame length
+ * @rx_overflow_drop: Packets dropped on buffer overflow
+ *
  * PF/VF sends this message to CP to get the update stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ *
+ * Associated with VIRTCHNL2_OP_GET_STATS.
  */
 struct virtchnl2_vport_stats {
 	__le32 vport_id;
 	u8 pad[4];
 
-	__le64 rx_bytes;		/* received bytes */
-	__le64 rx_unicast;		/* received unicast pkts */
-	__le64 rx_multicast;		/* received multicast pkts */
-	__le64 rx_broadcast;		/* received broadcast pkts */
+	__le64 rx_bytes;
+	__le64 rx_unicast;
+	__le64 rx_multicast;
+	__le64 rx_broadcast;
 	__le64 rx_discards;
 	__le64 rx_errors;
 	__le64 rx_unknown_protocol;
-	__le64 tx_bytes;		/* transmitted bytes */
-	__le64 tx_unicast;		/* transmitted unicast pkts */
-	__le64 tx_multicast;		/* transmitted multicast pkts */
-	__le64 tx_broadcast;		/* transmitted broadcast pkts */
+	__le64 tx_bytes;
+	__le64 tx_unicast;
+	__le64 tx_multicast;
+	__le64 tx_broadcast;
 	__le64 tx_discards;
 	__le64 tx_errors;
 	__le64 rx_invalid_frame_length;
 	__le64 rx_overflow_drop;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
 
-/* physical port statistics */
+/**
+ * struct virtchnl2_phy_port_stats - Physical port statistics
+ */
 struct virtchnl2_phy_port_stats {
 	__le64 rx_bytes;
 	__le64 rx_unicast_pkts;
@@ -1223,7 +1507,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 rx_runt_errors;
 	__le64 rx_illegal_bytes;
 	__le64 rx_total_pkts;
-	u8 rx_reserved[128];
+	u8 rx_pad[128];
 
 	__le64 tx_bytes;
 	__le64 tx_unicast_pkts;
@@ -1242,17 +1526,23 @@ struct virtchnl2_phy_port_stats {
 	__le64 tx_xoff_events;
 	__le64 tx_dropped_link_down_pkts;
 	__le64 tx_total_pkts;
-	u8 tx_reserved[128];
+	u8 tx_pad[128];
 	__le64 mac_local_faults;
 	__le64 mac_remote_faults;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(600, virtchnl2_phy_port_stats);
 
-/* VIRTCHNL2_OP_GET_PORT_STATS
- * PF/VF sends this message to CP to get the updated stats by specifying the
+/**
+ * struct virtchnl2_port_stats - Port statistics
+ * @vport_id: Vport ID
+ * @pad: Padding
+ * @phy_port_stats: Physical port statistics
+ * @virt_port_stats: Vport statistics
+ *
+ * PF/VF sends this message to CP to get the updated stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_port_stats that
  * includes both physical port as well as vport statistics.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PORT_STATS.
  */
 struct virtchnl2_port_stats {
 	__le32 vport_id;
@@ -1261,198 +1551,267 @@ struct virtchnl2_port_stats {
 	struct virtchnl2_phy_port_stats phy_port_stats;
 	struct virtchnl2_vport_stats virt_port_stats;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(736, virtchnl2_port_stats);
 
-/* VIRTCHNL2_OP_EVENT
+/**
+ * struct virtchnl2_event - Event info
+ * @event: Event opcode. See enum virtchnl2_event_codes
+ * @link_speed: Link_speed provided in Mbps
+ * @vport_id: Vport ID
+ * @link_status: Link status
+ * @pad: Padding
+ * @adi_id: ADI id
+ *
  * CP sends this message to inform the PF/VF driver of events that may affect
  * it. No direct response is expected from the driver, though it may generate
  * other messages in response to this one.
+ *
+ * Associated with VIRTCHNL2_OP_EVENT.
  */
 struct virtchnl2_event {
-	/* see VIRTCHNL2_EVENT_CODES definitions */
 	__le32 event;
-	/* link_speed provided in Mbps */
 	__le32 link_speed;
+
 	__le32 vport_id;
+
 	u8 link_status;
-	u8 pad[1];
-	/* CP sends reset notification to PF with corresponding ADI ID */
+	u8 pad;
 	__le16 adi_id;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * VIRTCHNL2_OP_SET_RSS_KEY
+/**
+ * struct virtchnl2_rss_key - RSS key info
+ * @vport_id: Vport id
+ * @key_len: Length of RSS key
+ * @pad: Padding
+ * @key: RSS hash key, packed bytes
+ *
  * PF/VF sends this message to get or set RSS key. Only supported if both
  * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_key structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
  */
 struct virtchnl2_rss_key {
 	__le32 vport_id;
+
 	__le16 key_len;
 	u8 pad;
-	u8 key[1];         /* RSS hash key, packed bytes */
+	u8 key[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
 
-/* structure to specify a chunk of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunk - Chunk of contiguous queues
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Starting queue id
+ * @num_queues: Number of queues
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
-	u8 reserved[4];
+	u8 pad[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunks - Chunks of contiguous queues
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
+ */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_chunk chunks[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
 
-/* VIRTCHNL2_OP_ENABLE_QUEUES
- * VIRTCHNL2_OP_DISABLE_QUEUES
- * VIRTCHNL2_OP_DEL_QUEUES
+/**
+ * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
  *
- * PF sends these messages to enable, disable or delete queues specified in
- * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * PF/VF sends these messages to enable, disable or delete queues specified in
+ * chunks. It sends virtchnl2_del_ena_dis_queues struct to specify the queues
  * to be enabled/disabled/deleted. Also applicable to single queue receive or
  * transmit. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
+ * VIRTCHNL2_OP_DEL_QUEUES.
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
 
-/* Queue to vector mapping */
+/**
+ * struct virtchnl2_queue_vector - Queue to vector mapping
+ * @queue_id: Queue id
+ * @vector_id: Vector id
+ * @pad: Padding
+ * @itr_idx: See enum virtchnl2_itr_idx
+ * @queue_type: See enum virtchnl2_queue_type
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_vector {
 	__le32 queue_id;
 	__le16 vector_id;
 	u8 pad[2];
 
-	/* see VIRTCHNL2_ITR_IDX definitions */
 	__le32 itr_idx;
 
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
-	u8 reserved[8];
+	u8 pad1[8];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
 
-/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
- * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+/**
+ * struct virtchnl2_queue_vector_maps - Map/unmap queues info
+ * @vport_id: Vport id
+ * @num_qv_maps: Number of queue vector maps
+ * @pad: Padding
+ * @qv_maps: Queue to vector maps
  *
- * PF sends this message to map or unmap queues to vectors and interrupt
+ * PF/VF sends this message to map or unmap queues to vectors and interrupt
  * throttling rate index registers. External data buffer contains
  * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
  * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
  * after validating the queue and vector ids and returns a status code.
+ *
+ * Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
  */
 struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
+
 	__le16 num_qv_maps;
 	u8 pad[10];
+
 	struct virtchnl2_queue_vector qv_maps[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
 
-/* VIRTCHNL2_OP_LOOPBACK
+/**
+ * struct virtchnl2_loopback - Loopback info
+ * @vport_id: Vport id
+ * @enable: Enable/disable
+ * @pad: Padding for future extensions
  *
  * PF/VF sends this message to transition to/from the loopback state. Setting
  * the 'enable' to 1 enables the loopback state and setting 'enable' to 0
  * disables it. CP configures the state to loopback and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_LOOPBACK.
  */
 struct virtchnl2_loopback {
 	__le32 vport_id;
+
 	u8 enable;
 	u8 pad[3];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
 
-/* structure to specify each MAC address */
+/**
+ * struct virtchnl2_mac_addr - MAC address info
+ * @addr: MAC address
+ * @type: MAC type. See enum virtchnl2_mac_addr_type.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_mac_addr {
+#ifndef LINUX_SUPPORT
 	u8 addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_MAC_TYPE definitions */
+#else
+	u8 addr[ETH_ALEN];
+#endif
 	u8 type;
 	u8 pad;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
 
-/* VIRTCHNL2_OP_ADD_MAC_ADDR
- * VIRTCHNL2_OP_DEL_MAC_ADDR
+/**
+ * struct virtchnl2_mac_addr_list - List of MAC addresses
+ * @vport_id: Vport id
+ * @num_mac_addr: Number of MAC addresses
+ * @pad: Padding
+ * @mac_addr_list: List with MAC address info
  *
  * PF/VF driver uses this structure to send list of MAC addresses to be
  * added/deleted to the CP where as CP performs the action and returns the
  * status.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
 struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
+
 	__le16 num_mac_addr;
 	u8 pad[2];
+
 	struct virtchnl2_mac_addr mac_addr_list[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
 
-/* VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE
+/**
+ * struct virtchnl2_promisc_info - Promiscuous type information
+ * @vport_id: Vport id
+ * @flags: See enum virtchnl2_promisc_flags
+ * @pad: Padding for future extensions
  *
  * PF/VF sends vport id and flags to the CP where as CP performs the action
  * and returns the status.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
  */
 struct virtchnl2_promisc_info {
-        __le32 vport_id;
-	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
-        __le16 flags;
+	__le32 vport_id;
+	__le16 flags;
 	u8 pad[2];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
 
-/* VIRTCHNL2_PTP_CAPS
- * PTP capabilities
+/**
+ * enum virtchnl2_ptp_caps - PTP capabilities
  */
-#define VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	BIT(0)
-#define VIRTCHNL2_PTP_CAP_PTM			BIT(1)
-#define VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	BIT(2)
-#define VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	BIT(3)
-#define	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	BIT(4)
+enum virtchnl2_ptp_caps {
+	VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	= BIT(0),
+	VIRTCHNL2_PTP_CAP_PTM			= BIT(1),
+	VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	= BIT(2),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	= BIT(3),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	= BIT(4),
+};
 
-/* Legacy cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_legacy_cross_time_reg - Legacy cross time registers
+ *						offsets.
+ */
 struct virtchnl2_ptp_legacy_cross_time_reg {
 	__le32 shadow_time_0;
 	__le32 shadow_time_l;
 	__le32 shadow_time_h;
 	__le32 cmd_sync;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_legacy_cross_time_reg);
 
-/* PTM cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_ptm_cross_time_reg - PTM cross time registers offsets
+ */
 struct virtchnl2_ptp_ptm_cross_time_reg {
 	__le32 art_l;
 	__le32 art_h;
 	__le32 cmd_sync;
 	u8 pad[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_ptm_cross_time_reg);
 
-/* Registers needed to control the main clock */
+/**
+ * struct virtchnl2_ptp_device_clock_control - Registers needed to control the
+ *					       main clock.
+ */
 struct virtchnl2_ptp_device_clock_control {
 	__le32 cmd;
 	__le32 incval_l;
@@ -1461,39 +1820,53 @@ struct virtchnl2_ptp_device_clock_control {
 	__le32 shadj_h;
 	u8 pad[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_device_clock_control);
 
-/* Structure that defines tx tstamp entry - index and register offset */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_entry - PTP TX timestamp entry
+ * @tx_latch_register_base: TX latch register base
+ * @tx_latch_register_offset: TX latch register offset
+ * @index: Index
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_entry {
 	__le32 tx_latch_register_base;
 	__le32 tx_latch_register_offset;
 	u8 index;
 	u8 pad[7];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_entry);
 
-/* Structure that defines tx tstamp entries - total number of latches
- * and the array of entries.
+/**
+ * struct virtchnl2_ptp_tx_tstamp - Structure that defines tx tstamp entries
+ * @num_latches: Total number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @ptp_tx_tstamp_entries: Array of TX timestamp entries
  */
 struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
 	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
 
-/* VIRTCHNL2_OP_GET_PTP_CAPS
+/**
+ * struct virtchnl2_get_ptp_caps - Get PTP capabilities
+ * @ptp_caps: PTP capability bitmap. See enum virtchnl2_ptp_caps.
+ * @pad: Padding
+ * @legacy_cross_time_reg: Legacy cross time register
+ * @ptm_cross_time_reg: PTM cross time register
+ * @device_clock_control: Device clock control
+ * @tx_tstamp: TX timestamp
+ *
 * PF/VF sends this message to negotiate PTP capabilities. CP updates bitmap
  * with supported features and fulfills appropriate structures.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_CAPS.
  */
 struct virtchnl2_get_ptp_caps {
-	/* PTP capability bitmap */
-	/* see VIRTCHNL2_PTP_CAPS definitions */
 	__le32 ptp_caps;
 	u8 pad[4];
 
@@ -1502,10 +1875,17 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
 
-/* Structure that describes tx tstamp values, index and validity */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
+ *					  values, index and validity.
+ * @tstamp_h: Timestamp high
+ * @tstamp_l: Timestamp low
+ * @index: Index
+ * @valid: Timestamp validity
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_latch {
 	__le32 tstamp_h;
 	__le32 tstamp_l;
@@ -1513,16 +1893,22 @@ struct virtchnl2_ptp_tx_tstamp_latch {
 	u8 valid;
 	u8 pad[6];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
 
-/* VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latches - PTP TX timestamp latches
+ * @num_latches: Number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @tstamp_latches: PTP TX timestamp latch
+ *
  * PF/VF sends this message to receive a specified number of timestamps
  * entries.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES.
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
 	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
@@ -1613,7 +1999,7 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
  * @msg: pointer to the msg buffer
  * @msglen: msg length
  *
- * validate msg format against struct for each opcode
+ * Validate msg format against struct for each opcode.
  */
 static inline int
 virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
@@ -1622,7 +2008,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	bool err_msg_format = false;
 	__le32 valid_len = 0;
 
-	/* Validate message length. */
+	/* Validate message length */
 	switch (v_opcode) {
 	case VIRTCHNL2_OP_VERSION:
 		valid_len = sizeof(struct virtchnl2_version_info);
@@ -1637,7 +2023,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_create_vport *)msg;
 
 			if (cvport->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1652,7 +2038,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_non_flex_create_adi *)msg;
 
 			if (cadi->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1707,7 +2093,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_add_queues *)msg;
 
 			if (add_q->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1802,7 +2188,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_alloc_vectors *)msg;
 
 			if (v_av->vchunks.num_vchunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1831,7 +2217,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_key *)msg;
 
 			if (vrk->key_len == 0) {
-				/* zero length is allowed as input */
+				/* Zero length is allowed as input */
 				break;
 			}
 
@@ -1846,7 +2232,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_lut *)msg;
 
 			if (vrl->lut_entries == 0) {
-				/* zero entries is allowed as input */
+				/* Zero entries is allowed as input */
 				break;
 			}
 
@@ -1903,13 +2289,13 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
 		}
 		break;
-	/* These are always errors coming from the VF. */
+	/* These are always errors coming from the VF */
 	case VIRTCHNL2_OP_EVENT:
 	case VIRTCHNL2_OP_UNKNOWN:
-        default:
-                return VIRTCHNL2_STATUS_ERR_ESRCH;
+	default:
+		return VIRTCHNL2_STATUS_ERR_ESRCH;
 	}
-	/* few more checks */
+	/* Few more checks */
 	if (err_msg_format || valid_len != msglen)
 		return VIRTCHNL2_STATUS_ERR_EINVAL;
 
diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index e6e782a219..1ed09a7372 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 /*
  * Copyright (C) 2019 Intel Corporation
@@ -9,201 +9,228 @@
 #ifndef _VIRTCHNL2_LAN_DESC_H_
 #define _VIRTCHNL2_LAN_DESC_H_
 
-/* VIRTCHNL2_TX_DESC_IDS
+/* This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
+ */
+
+/**
+ * VIRTCHNL2_TX_DESC_IDS
  * Transmit descriptor ID flags
  */
-#define VIRTCHNL2_TXDID_DATA				BIT(0)
-#define VIRTCHNL2_TXDID_CTX				BIT(1)
-#define VIRTCHNL2_TXDID_REINJECT_CTX			BIT(2)
-#define VIRTCHNL2_TXDID_FLEX_DATA			BIT(3)
-#define VIRTCHNL2_TXDID_FLEX_CTX			BIT(4)
-#define VIRTCHNL2_TXDID_FLEX_TSO_CTX			BIT(5)
-#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		BIT(6)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		BIT(7)
-#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	BIT(8)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	BIT(9)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		BIT(10)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			BIT(11)
-#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			BIT(12)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		BIT(13)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		BIT(14)
-#define VIRTCHNL2_TXDID_DESC_DONE			BIT(15)
-
-/* VIRTCHNL2_RX_DESC_IDS
+enum virtchnl2_tx_desc_ids {
+	VIRTCHNL2_TXDID_DATA				= BIT(0),
+	VIRTCHNL2_TXDID_CTX				= BIT(1),
+	VIRTCHNL2_TXDID_REINJECT_CTX			= BIT(2),
+	VIRTCHNL2_TXDID_FLEX_DATA			= BIT(3),
+	VIRTCHNL2_TXDID_FLEX_CTX			= BIT(4),
+	VIRTCHNL2_TXDID_FLEX_TSO_CTX			= BIT(5),
+	VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		= BIT(6),
+	VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		= BIT(7),
+	VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	= BIT(8),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	= BIT(9),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		= BIT(10),
+	VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			= BIT(11),
+	VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			= BIT(12),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		= BIT(13),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		= BIT(14),
+	VIRTCHNL2_TXDID_DESC_DONE			= BIT(15),
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_IDS
  * Receive descriptor IDs (range from 0 to 63)
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE			0
-#define VIRTCHNL2_RXDID_1_32B_BASE			1
-/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
- * differentiated based on queue model; e.g. single queue model can
- * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
- * for DID 2.
- */
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ			2
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC			2
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW			3
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB		4
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL		5
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2			6
-#define VIRTCHNL2_RXDID_7_HW_RSVD			7
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC		16
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN		17
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4		18
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6		19
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW		20
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP		21
-/* 22 through 63 are reserved */
-
-/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+enum virtchnl2_rx_desc_ids {
+	VIRTCHNL2_RXDID_0_16B_BASE,
+	VIRTCHNL2_RXDID_1_32B_BASE,
+	/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+	 * differentiated based on queue model; e.g. single queue model can
+	 * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+	 * for DID 2.
+	 */
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ		= 2,
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC		= VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW		= 3,
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB	= 4,
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL	= 5,
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2		= 6,
+	VIRTCHNL2_RXDID_7_HW_RSVD		= 7,
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC	= 16,
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN	= 17,
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4	= 18,
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6	= 19,
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW	= 20,
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP	= 21,
+	/* 22 through 63 are reserved */
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		BIT(VIRTCHNL2_RXDID_0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		BIT(VIRTCHNL2_RXDID_1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
-/* 22 through 63 are reserved */
-
-/* Rx */
+#define VIRTCHNL2_RXDID_M(bit)			BIT_ULL(VIRTCHNL2_RXDID_##bit)
+
+enum virtchnl2_rx_desc_id_bitmasks {
+	VIRTCHNL2_RXDID_0_16B_BASE_M		= VIRTCHNL2_RXDID_M(0_16B_BASE),
+	VIRTCHNL2_RXDID_1_32B_BASE_M		= VIRTCHNL2_RXDID_M(1_32B_BASE),
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		= VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		= VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		= VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW),
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	= VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB),
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	= VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL),
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	= VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2),
+	VIRTCHNL2_RXDID_7_HW_RSVD_M		= VIRTCHNL2_RXDID_M(7_HW_RSVD),
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	= VIRTCHNL2_RXDID_M(16_COMMS_GENERIC),
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	= VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN),
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	= VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4),
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	= VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6),
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	= VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW),
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	= VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP),
+	/* 22 through 63 are reserved */
+};
+
 /* For splitq virtchnl2_rx_flex_desc_adv desc members */
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		GENMASK(3, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		GENMASK(7, 6)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		\
-	IDPF_M(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M			\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M		GENMASK(15, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M	\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M		GENMASK(13, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M		GENMASK(14, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
 
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S		7
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST			8 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		0 /* 2 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			8 /* this entry must be last!!! */
-
-/* for singleq (flex) virtchnl2_rx_flex_desc fields */
-/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
+	/* 2 bits */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		= 0,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		= 2,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		= 3,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	= 4,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	= 5,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	= 6,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	= 7,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			= 8,
+};
+
+/* For singleq (flex) virtchnl2_rx_flex_desc fields,
+ * for the virtchnl2_rx_flex_desc.ptype_flex_flags0 member
+ */
 #define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			GENMASK(9, 0)
 
-/* for virtchnl2_rx_flex_desc.pkt_length member */
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M			\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+/* For virtchnl2_rx_flex_desc.pkt_len member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S		0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M		GENMASK(13, 0)
 
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S			1
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S			2
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S			3
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S		7
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S			8
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S		9
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S			10
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S			11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST			16 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+enum virtchnl2_rx_flex_desc_status_error_0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			0 /* 4 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			5
-/* [10:6] reserved */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			16 /* this entry must be last!!! */
-
-/* for virtchnl2_rx_flex_desc.ts_low member */
+enum virtchnl2_rx_flex_desc_status_error_1_bits {
+	/* 4 bits */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			= 0,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			= 4,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			= 5,
+	/* [10:6] reserved */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		= 11,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		= 12,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		= 13,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		= 14,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		= 15,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			= 16,
+};
+
+/* For virtchnl2_rx_flex_desc.ts_low member */
 #define VIRTCHNL2_RX_FLEX_TSTAMP_VALID				BIT(0)
 
 /* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
@@ -211,72 +238,89 @@
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M	\
 	BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S	52
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	\
-	IDPF_M(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	GENMASK_ULL(62, 52)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S	38
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	\
-	IDPF_M(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	GENMASK_ULL(51, 38)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S	30
-#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	\
-	IDPF_M(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	GENMASK_ULL(37, 30)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S	19
-#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	\
-	IDPF_M(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	GENMASK_ULL(26, 19)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S	0
-#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	\
-	IDPF_M(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	GENMASK_ULL(18, 0)
 
-/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		0
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		1
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		2
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		3
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		4
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		5 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	8
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		9 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		11
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		12 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		14
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	15
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		16 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	18
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		19 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		= 1,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		= 5, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	= 8,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		= 9, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		= 11,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		= 12, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		= 14,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	= 15,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		= 16, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	= 18,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		= 19, /* this entry must be last!!! */
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S	0
+enum virtchnl2_rx_base_desc_ext_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S,
+};
 
-/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		0
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	1
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		2
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		3 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		3
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		4
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		5
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		6
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		7
-
-/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_error_bits {
+	VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	= 1,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		= 3, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		= 5,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		= 6,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		= 7,
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA		0
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID		1
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV		2
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH		3
+enum virtchnl2_rx_base_desc_fltstat_values {
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH,
+};
 
-/* Receive Descriptors */
-/* splitq buf
+/**
+ * struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
+ * @qword0: RX buffer struct
+ * @qword0.buf_id: Buffer identifier
+ * @qword0.rsvd0: Reserved
+ * @qword0.rsvd1: Reserved
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd2: Reserved
+ *
+ * Receive Descriptors
+ * SplitQ buffer
  * |                                       16|                   0|
  * ----------------------------------------------------------------
  * | RSV                                     | Buffer ID          |
@@ -291,16 +335,23 @@
  */
 struct virtchnl2_splitq_rx_buf_desc {
 	struct {
-		__le16  buf_id; /* Buffer Identifier */
+		__le16  buf_id;
 		__le16  rsvd0;
 		__le32  rsvd1;
 	} qword0;
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
-/* singleq buf
+/**
+ * struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd1: Reserved
+ * @rsvd2: Reserved
+ *
+ * SingleQ buffer
  * |                                                             0|
  * ----------------------------------------------------------------
  * | Rx packet buffer address                                     |
@@ -314,18 +365,44 @@ struct virtchnl2_splitq_rx_buf_desc {
  * |                                                             0|
  */
 struct virtchnl2_singleq_rx_buf_desc {
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd1;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
+/**
+ * union virtchnl2_rx_buf_desc - RX buffer descriptor
+ * @read: Singleq RX buffer descriptor format
+ * @split_rd: Splitq RX buffer descriptor format
+ */
 union virtchnl2_rx_buf_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
 	struct virtchnl2_splitq_rx_buf_desc		split_rd;
 };
 
-/* (0x00) singleq wb(compl) */
+/**
+ * struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format
+ * @qword0: First quad word struct
+ * @qword0.lo_dword: Lower dual word struct
+ * @qword0.lo_dword.mirroring_status: Mirrored packet status
+ * @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet
+ * @qword0.hi_dword: High dual word union
+ * @qword0.hi_dword.rss: RSS hash
+ * @qword0.hi_dword.fd_id: Flow director filter id
+ * @qword1: Second quad word struct
+ * @qword1.status_error_ptype_len: Status/error/PTYPE/length
+ * @qword2: Third quad word struct
+ * @qword2.ext_status: Extended status
+ * @qword2.rsvd: Reserved
+ * @qword2.l2tag2_1: Extracted L2 tag 2 from the packet
+ * @qword2.l2tag2_2: Reserved
+ * @qword3: Fourth quad word struct
+ * @qword3.reserved: Reserved
+ * @qword3.fd_id: Flow director filter id
+ *
+ * Profile ID 0x1, SingleQ, base writeback format.
+ */
 struct virtchnl2_singleq_base_rx_desc {
 	struct {
 		struct {
@@ -333,16 +410,15 @@ struct virtchnl2_singleq_base_rx_desc {
 			__le16 l2tag1;
 		} lo_dword;
 		union {
-			__le32 rss; /* RSS Hash */
-			__le32 fd_id; /* Flow Director filter id */
+			__le32 rss;
+			__le32 fd_id;
 		} hi_dword;
 	} qword0;
 	struct {
-		/* status/error/PTYPE/length */
 		__le64 status_error_ptype_len;
 	} qword1;
 	struct {
-		__le16 ext_status; /* extended status */
+		__le16 ext_status;
 		__le16 rsvd;
 		__le16 l2tag2_1;
 		__le16 l2tag2_2;
@@ -351,32 +427,51 @@ struct virtchnl2_singleq_base_rx_desc {
 		__le32 reserved;
 		__le32 fd_id;
 	} qword3;
-}; /* writeback */
+};
 
-/* (0x01) singleq flex compl */
+/**
+ * struct virtchnl2_rx_flex_desc - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @flex_meta0: Flexible metadata container 0
+ * @flex_meta1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @time_stamp_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flex_meta2: Flexible metadata container 2
+ * @flex_meta3: Flexible metadata container 3
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.flex_meta4: Flexible metadata container 4
+ * @flex_ts.flex.flex_meta5: Flexible metadata container 5
+ * @flex_ts.ts_high: Higher word of the timestamp value
+ *
+ * Profile ID 0x1, SingleQ, flex completion writeback format.
+ */
 struct virtchnl2_rx_flex_desc {
 	/* Qword 0 */
-	u8 rxdid; /* descriptor builder profile id */
-	u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
-	__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
-	__le16 pkt_len; /* [15:14] are reserved */
-	__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
-					/* sph=[11:11] */
-					/* ff1/ext=[15:12] */
-
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
 	__le16 flex_meta0;
 	__le16 flex_meta1;
-
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flex_flags2;
 	u8 time_stamp_low;
 	__le16 l2tag2_1st;
 	__le16 l2tag2_2nd;
-
 	/* Qword 3 */
 	__le16 flex_meta2;
 	__le16 flex_meta3;
@@ -389,7 +484,29 @@ struct virtchnl2_rx_flex_desc {
 	} flex_ts;
 };
 
-/* (0x02) */
+/**
+ * struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Higher word of the timestamp value
+ *
+ * Profile ID 0x2, SingleQ, flex writeback format.
+ */
 struct virtchnl2_rx_flex_desc_nic {
 	/* Qword 0 */
 	u8 rxdid;
@@ -397,19 +514,16 @@ struct virtchnl2_rx_flex_desc_nic {
 	__le16 ptype_flex_flags0;
 	__le16 pkt_len;
 	__le16 hdr_len_sph_flex_flags1;
-
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
 	__le32 rss_hash;
-
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flexi_flags2;
 	u8 ts_low;
 	__le16 l2tag2_1st;
 	__le16 l2tag2_2nd;
-
 	/* Qword 3 */
 	__le32 flow_id;
 	union {
@@ -421,8 +535,27 @@ struct virtchnl2_rx_flex_desc_nic {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Switch Profile
- * RxDID Profile Id 3
+/**
+ * struct virtchnl2_rx_flex_desc_sw - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @src_vsi: Source VSI, [15:10] are reserved
+ * @flex_md1_rsvd: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Reserved
+ * @ts_high: Higher word of the timestamp value
+ *
+ * Rx Flex Descriptor Switch Profile
+ * RxDID Profile ID 0x3, SingleQ
  * Flex-field 0: Source Vsi
  */
 struct virtchnl2_rx_flex_desc_sw {
@@ -432,28 +565,144 @@ struct virtchnl2_rx_flex_desc_sw {
 	__le16 ptype_flex_flags0;
 	__le16 pkt_len;
 	__le16 hdr_len_sph_flex_flags1;
-
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
-	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 src_vsi;
 	__le16 flex_md1_rsvd;
-
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flex_flags2;
 	u8 ts_low;
 	__le16 l2tag2_1st;
 	__le16 l2tag2_2nd;
-
 	/* Qword 3 */
-	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 rsvd;
 	__le32 ts_high;
 };
 
+#ifndef EXTERNAL_RELEASE
+/**
+ * struct virtchnl2_rx_flex_desc_nic_veb_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @dst_vsi: Destination VSI, [15:10] are reserved
+ * @flex_field_1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 0x4
+ * Flex-field 0: Destination Vsi
+ */
+struct virtchnl2_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi;
+	__le16 flex_field_1;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le32 rsvd;
+	__le32 ts_high;
+};
 
-/* Rx Flex Descriptor NIC Profile
- * RxDID Profile Id 6
+/**
+ * struct virtchnl2_rx_flex_desc_nic_acl_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @acl_ctr0: ACL counter 0
+ * @acl_ctr1: ACL counter 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @acl_ctr2: ACL counter 2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile ID 0x5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct virtchnl2_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd;
+	__le32 ts_high;
+};
+#endif /* !EXTERNAL_RELEASE */
+
+/**
+ * struct virtchnl2_rx_flex_desc_nic_2 - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @src_vsi: Source VSI
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 0x6
  * Flex-field 0: RSS hash lower 16-bits
  * Flex-field 1: RSS hash upper 16-bits
  * Flex-field 2: Flow Id lower 16-bits
@@ -467,19 +716,16 @@ struct virtchnl2_rx_flex_desc_nic_2 {
 	__le16 ptype_flex_flags0;
 	__le16 pkt_len;
 	__le16 hdr_len_sph_flex_flags1;
-
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
 	__le32 rss_hash;
-
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flexi_flags2;
 	u8 ts_low;
 	__le16 l2tag2_1st;
 	__le16 l2tag2_2nd;
-
 	/* Qword 3 */
 	__le16 flow_id;
 	__le16 src_vsi;
@@ -492,29 +738,43 @@ struct virtchnl2_rx_flex_desc_nic_2 {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Advanced (Split Queue Model)
- * RxDID Profile Id 7
+/**
+ * struct virtchnl2_rx_flex_desc_adv - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @fmd0: Flexible metadata container 0
+ * @fmd1: Flexible metadata container 1
+ * @fmd2: Flexible metadata container 2
+ * @fflags2: Flags
+ * @hash3: Upper bits of Rx hash value
+ * @fmd3: Flexible metadata container 3
+ * @fmd4: Flexible metadata container 4
+ * @fmd5: Flexible metadata container 5
+ * @fmd6: Flexible metadata container 6
+ * @fmd7_0: Flexible metadata container 7.0
+ * @fmd7_1: Flexible metadata container 7.1
+ *
+ * RX Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile ID 0x2
  */
 struct virtchnl2_rx_flex_desc_adv {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
@@ -533,10 +793,42 @@ struct virtchnl2_rx_flex_desc_adv {
 	__le16 fmd6;
 	__le16 fmd7_0;
 	__le16 fmd7_1;
-}; /* writeback */
+};
 
-/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
- * RxDID Profile Id 8
+/**
+ * struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @buf_id: Buffer identifier. Only in splitq mode.
+ * @misc: Union
+ * @misc.raw_cs: Raw checksum
+ * @misc.l2tag1: Stripped L2 tag from the received packet
+ * @misc.rscseglen: RSC segment length
+ * @hash1: Lower 16 bits of Rx hash value, hash[15:0]
+ * @ff2_mirrid_hash2: Union
+ * @ff2_mirrid_hash2.fflags2: Flexible flags section 2
+ * @ff2_mirrid_hash2.mirrorid: Mirror id
+ * @ff2_mirrid_hash2.hash2: 8 bits of Rx hash value, hash[23:16]
+ * @hash3: Upper 8 bits of Rx hash value, hash[31:24]
+ * @l2tag2: Extracted L2 tag 2 from the packet
+ * @fmd4: Flexible metadata container 4
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @fmd6: Flexible metadata container 6
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SplitQ, flex writeback format.
+ *
  * Flex-field 0: BufferID
  * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
  * Flex-field 2: Hash[15:0]
@@ -547,30 +839,17 @@ struct virtchnl2_rx_flex_desc_adv {
  */
 struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
 	u8 fflags1;
 	u8 ts_low;
-	__le16 buf_id; /* only in splitq */
+	__le16 buf_id;
 	union {
 		__le16 raw_cs;
 		__le16 l2tag1;
@@ -590,7 +869,7 @@ struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	__le16 l2tag1;
 	__le16 fmd6;
 	__le32 ts_high;
-}; /* writeback */
+};
 
 union virtchnl2_rx_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
-- 
2.43.0



* [PATCH 09/25] common/idpf: add flex array support to virtchnl2 structures
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (7 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 08/25] common/idpf: move related defines into enums Soumyadeep Hore
@ 2024-05-28  7:28 ` Soumyadeep Hore
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
  9 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-05-28  7:28 UTC (permalink / raw)
  To: yuying.zhang, jingjing.wu; +Cc: dev

The current virtchnl header uses 1-sized arrays to represent
dynamically sized virtchnl structures. For example, in the following
structure the size of the struct depends on 'num_chunks', and
'chunks[1]' is used to dereference each chunk's information.

struct virtchnl2_queue_reg_chunks {
	__le16 num_chunks;
	u8 pad[6];
	struct virtchnl2_queue_reg_chunk chunks[1];
};

Based on internal Linux upstream feedback received on the IDPF
driver, as well as references available online, using 1-sized
array fields in structures is discouraged, especially in new
Linux drivers that are going to be upstreamed. Instead, flex
array fields are recommended for dynamically sized structures.

The problem with this approach is that C++ doesn't support flex
array fields, which might be a problem for the Windows driver.

This patch introduces flex array support for the dynamically sized
structures, wrapped with the 'FLEX_ARRAY_SUPPORT' flag, which should
be defined only if flex array fields are supported.

There is also a special case in virtchnl2_get_ptype_info, where the
struct has nested flex arrays, which are not supported. To support
flex arrays without breaking the message format, the top-level flex
array field is removed and the sender/receiver is expected to parse
the message accordingly.

The above reasoning applies to virtchnl2_add_queue_groups as well,
but that struct is modified slightly by removing the
virtchnl2_queue_groups structure to better support the flex array.

The virtchnl2_vc_validate_vf_msg function is refactored to handle the
cases where the CP/driver does or doesn't support flex arrays.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 600 ++++++++++++++++-----------
 1 file changed, 352 insertions(+), 248 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 45e77bbb94..355e2e3038 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -63,9 +63,19 @@ enum virtchnl2_status {
  * This macro is used to generate compilation errors if a structure
  * is not exactly the correct length.
  */
-#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
-	static_assert((n) == sizeof(struct X),	\
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)		\
+	static_assert((n) == sizeof(struct X),		\
 		      "Structure length does not match with the expected value")
+#ifdef FLEX_ARRAY_SUPPORT
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	static_assert((n) == struct_size_t(struct X, T, 1),\
+		      "Structure length with flex array does not match with the expected value")
+#else
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
+
+#define STRUCT_VAR_LEN		1
+#endif /* FLEX_ARRAY_SUPPORT */
 
 /**
  * New major set of opcodes introduced and so leaving room for
@@ -270,6 +280,43 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
 };
 
+/**
+ * enum virtchnl2_action_types - Available actions for sideband flow steering
+ * @VIRTCHNL2_ACTION_DROP: Drop the packet
+ * @VIRTCHNL2_ACTION_PASSTHRU: Forward the packet to the next classifier/stage
+ * @VIRTCHNL2_ACTION_QUEUE: Forward the packet to a receive queue
+ * @VIRTCHNL2_ACTION_Q_GROUP: Forward the packet to a receive queue group
+ * @VIRTCHNL2_ACTION_MARK: Mark the packet with specific marker value
+ * @VIRTCHNL2_ACTION_COUNT: Increment the corresponding counter
+ */
+
+enum virtchnl2_action_types {
+	VIRTCHNL2_ACTION_DROP		= BIT(0),
+	VIRTCHNL2_ACTION_PASSTHRU	= BIT(1),
+	VIRTCHNL2_ACTION_QUEUE		= BIT(2),
+	VIRTCHNL2_ACTION_Q_GROUP	= BIT(3),
+	VIRTCHNL2_ACTION_MARK		= BIT(4),
+	VIRTCHNL2_ACTION_COUNT		= BIT(5),
+};
+
+/* Flow type capabilities for Flow Steering and Receive-Side Scaling */
+enum virtchnl2_flow_types {
+	VIRTCHNL2_FLOW_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_FLOW_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_FLOW_IPV4_SCTP	= BIT(2),
+	VIRTCHNL2_FLOW_IPV4_OTHER	= BIT(3),
+	VIRTCHNL2_FLOW_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_FLOW_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_FLOW_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_FLOW_IPV6_OTHER	= BIT(7),
+	VIRTCHNL2_FLOW_IPV4_AH		= BIT(8),
+	VIRTCHNL2_FLOW_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_FLOW_IPV4_AH_ESP	= BIT(10),
+	VIRTCHNL2_FLOW_IPV6_AH		= BIT(11),
+	VIRTCHNL2_FLOW_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_FLOW_IPV6_AH_ESP	= BIT(13),
+};
+
 /**
  * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
  * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
@@ -711,21 +758,26 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_reg_chunk chunks[1];
+	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 
 /**
  * enum virtchnl2_vport_flags - Vport flags
  * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
- * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER: Inline flow steering enabled
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ: Inline flow steering enabled
+ * with explicit Rx queue action
+ * @VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER: Sideband flow steering enabled
 #ifdef NOT_FOR_UPSTREAM
  * @VIRTCHNL2_VPORT_PORT2PORT_PORT: Port2port port flag
 #endif
  */
 enum virtchnl2_vport_flags {
 	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
-	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ	= BIT(2),
+	VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	= BIT(3),
 #ifdef NOT_FOR_UPSTREAM
 	VIRTCHNL2_VPORT_PORT2PORT_PORT		= BIT(15),
 #endif /* NOT_FOR_UPSTREAM */
@@ -757,6 +809,14 @@ enum virtchnl2_vport_flags {
  * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
  * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
  * @reserved: Reserved bytes and cannot be used
+ * @inline_flow_types: Bit mask of supported inline-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_types: Bit mask of supported sideband-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_actions: Bit mask of supported action types
+ *  for sideband flow steering (See enum virtchnl2_action_types)
+ * @flow_steer_max_rules: Max rules allowed for inline and sideband
+ *  flow steering combined
  * @rss_algorithm: RSS algorithm
  * @rss_key_size: RSS key size
  * @rss_lut_size: RSS LUT size
@@ -790,7 +850,11 @@ struct virtchnl2_create_vport {
 	__le16 vport_flags;
 	__le64 rx_desc_ids;
 	__le64 tx_desc_ids;
-	u8 reserved[72];
+	u8 reserved[48];
+	__le64 inline_flow_types;
+	__le64 sideband_flow_types;
+	__le32 sideband_flow_actions;
+	__le32 flow_steer_max_rules;
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
@@ -798,7 +862,7 @@ struct virtchnl2_create_vport {
 	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(192, virtchnl2_create_vport, chunks.chunks);
 
 /**
  * struct virtchnl2_vport - Vport identifier information
@@ -880,9 +944,9 @@ struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 	u8 pad[10];
-	struct virtchnl2_txq_info qinfo[1];
+	struct virtchnl2_txq_info qinfo[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(72, virtchnl2_config_tx_queues, qinfo);
 
 /**
  * struct virtchnl2_rxq_info - Receive queue config info
@@ -959,9 +1023,9 @@ struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 	u8 pad[18];
-	struct virtchnl2_rxq_info qinfo[1];
+	struct virtchnl2_rxq_info qinfo[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(112, virtchnl2_config_rx_queues, qinfo);
 
 /**
  * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
@@ -990,15 +1054,15 @@ struct virtchnl2_add_queues {
 	u8 pad[4];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(56, virtchnl2_add_queues, chunks.chunks);
 
 /* Queue Groups Extension */
 /**
  * struct virtchnl2_rx_queue_group_info - RX queue group info
- * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
- *		  allocated by CreateVport command. New size will be returned
- *		  if allocation succeeded, otherwise original rss_size from
- *		  CreateVport will be returned.
+ * @rss_lut_size: User can ask to update rss_lut size originally allocated by
+ *		  CreateVport command. New size will be returned if allocation
+ *		  succeeded, otherwise original rss_size from CreateVport
+ *		  will be returned.
  * @pad: Padding for future extensions
  */
 struct virtchnl2_rx_queue_group_info {
@@ -1025,7 +1089,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
  * @cir_pad: Future extension purpose for CIR only
  * @pad2: Padding for future extensions
  */
-struct virtchnl2_tx_queue_group_info { /* IN */
+struct virtchnl2_tx_queue_group_info {
 	u8 tx_tc;
 	u8 priority;
 	u8 is_sp;
@@ -1056,19 +1120,17 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 /**
  * struct virtchnl2_queue_group_info - Queue group info
  * @qg_id: Queue group ID
- * @num_tx_q: Number of TX queues
- * @num_tx_complq: Number of completion queues
- * @num_rx_q: Number of RX queues
- * @num_rx_bufq: Number of RX buffer queues
+ * @num_tx_q: Number of TX queues requested
+ * @num_tx_complq: Number of completion queues requested
+ * @num_rx_q: Number of RX queues requested
+ * @num_rx_bufq: Number of RX buffer queues requested
  * @tx_q_grp_info: TX queue group info
  * @rx_q_grp_info: RX queue group info
  * @pad: Padding for future extensions
- * @chunks: Queue register chunks
+ * @chunks: Queue register chunks from CP
  */
 struct virtchnl2_queue_group_info {
-	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
-	/* IN, Number of queue of different types in the group. */
 	__le16 num_tx_q;
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
@@ -1077,53 +1139,56 @@ struct virtchnl2_queue_group_info {
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
 	u8 pad[40];
-	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
-};
-VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
-
-/**
- * struct virtchnl2_queue_groups - Queue groups list
- * @num_queue_groups: Total number of queue groups
- * @pad: Padding for future extensions
- * @groups: Array of queue group info
- */
-struct virtchnl2_queue_groups {
-	__le16 num_queue_groups;
-	u8 pad[6];
-	struct virtchnl2_queue_group_info groups[1];
+	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(120, virtchnl2_queue_group_info, chunks.chunks);
 
 /**
  * struct virtchnl2_add_queue_groups - Add queue groups
- * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ * @vport_id: Vport_id to add queue group to, same as allocated by
  *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @num_queue_groups: Total number of queue groups
  * @pad: Padding for future extensions
- * @qg_info: IN/OUT. List of all the queue groups
+#ifndef FLEX_ARRAY_SUPPORT
+ * @groups: List of all the queue group info structures
+#endif
  *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
- * groups and queues assigned followed by num_queue_groups and num_chunks of
- * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ * groups and queues assigned followed by num_queue_groups and groups of
+ * virtchnl2_queue_group_info and virtchnl2_queue_chunk structures.
+#ifdef FLEX_ARRAY_SUPPORT
+ * (Note: There is no specific field for the queue group info but are added at
+ * the end of the add queue groups message. Receiver of this message is expected
+ * to extract the queue group info accordingly. Reason for doing this is because
+ * compiler doesn't allow nested flexible array fields).
+#endif
  *
  * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
 	__le32 vport_id;
-	u8 pad[4];
-	struct virtchnl2_queue_groups qg_info;
+	__le16 num_queue_groups;
+	u8 pad[10];
+#ifndef FLEX_ARRAY_SUPPORT
+	struct virtchnl2_queue_group_info groups[STRUCT_VAR_LEN];
+#endif /* !FLEX_ARRAY_SUPPORT */
 };
+#ifndef FLEX_ARRAY_SUPPORT
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
+#else
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_add_queue_groups);
+#endif /* !FLEX_ARRAY_SUPPORT */
 
 /**
  * struct virtchnl2_delete_queue_groups - Delete queue groups
- * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ * @vport_id: Vport ID to delete queue group from, same as allocated by
  *	      CreateVport.
- * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @num_queue_groups: Defines number of groups provided
  * @pad: Padding
- * @qg_ids: IN, IDs & types of Queue Groups to delete
+ * @qg_ids: IDs & types of Queue Groups to delete
  *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
@@ -1137,9 +1202,9 @@ struct virtchnl2_delete_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	struct virtchnl2_queue_group_id qg_ids[1];
+	struct virtchnl2_queue_group_id qg_ids[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_delete_queue_groups, qg_ids);
 
 /**
  * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
@@ -1197,9 +1262,9 @@ struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
 
-	struct virtchnl2_vector_chunk vchunks[1];
+	struct virtchnl2_vector_chunk vchunks[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(48, virtchnl2_vector_chunks, vchunks);
 
 /**
  * struct virtchnl2_alloc_vectors - Vector allocation info
@@ -1221,7 +1286,7 @@ struct virtchnl2_alloc_vectors {
 
 	struct virtchnl2_vector_chunks vchunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(64, virtchnl2_alloc_vectors, vchunks.vchunks);
 
 /**
  * struct virtchnl2_rss_lut - RSS LUT info
@@ -1243,9 +1308,9 @@ struct virtchnl2_rss_lut {
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	__le32 lut[1];
+	__le32 lut[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_rss_lut, lut);
 
 /**
  * struct virtchnl2_rss_hash - RSS hash info
@@ -1388,9 +1453,9 @@ struct virtchnl2_ptype {
 	u8 ptype_id_8;
 	u8 proto_id_count;
 	__le16 pad;
-	__le16 proto_id[1];
+	__le16 proto_id[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(8, virtchnl2_ptype, proto_id);
 
 /**
  * struct virtchnl2_get_ptype_info - Packet type info
@@ -1404,11 +1469,13 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
  * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
  * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
  * 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
- * are added at the end of the 'virtchnl2_get_ptype_info' message (Note: There
- * is no specific field for the ptypes but are added at the end of the
- * ptype info message. PF/VF is expected to extract the ptypes accordingly.
+ * are added at the end of the 'virtchnl2_get_ptype_info' message.
+#ifdef FLEX_ARRAY_SUPPORT
+ * (Note: There is no specific field for the ptypes but are added at the end of
+ * the ptype info message. PF/VF is expected to extract the ptypes accordingly.
  * Reason for doing this is because compiler doesn't allow nested flexible
  * array fields).
+#endif
  *
  * If all the ptypes don't fit into one mailbox buffer, CP splits the
  * ptype info into multiple messages, where each message will have its own
@@ -1426,10 +1493,15 @@ struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
-
-	struct virtchnl2_ptype ptype[1];
+#ifndef FLEX_ARRAY_SUPPORT
+	struct virtchnl2_ptype ptype[STRUCT_VAR_LEN];
+#endif /* !FLEX_ARRAY_SUPPORT */
 };
+#ifndef FLEX_ARRAY_SUPPORT
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
+#else
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_get_ptype_info);
+#endif /* !FLEX_ARRAY_SUPPORT */
 
 /**
  * struct virtchnl2_vport_stats - Vport statistics
@@ -1597,7 +1669,11 @@ struct virtchnl2_rss_key {
 
 	__le16 key_len;
 	u8 pad;
+#ifdef FLEX_ARRAY_SUPPORT
+	DECLARE_FLEX_ARRAY(u8, key);
+#else
 	u8 key[1];
+#endif /* FLEX_ARRAY_SUPPORT */
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
 
@@ -1624,9 +1700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_chunk chunks[1];
+	struct virtchnl2_queue_chunk chunks[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_queue_chunks, chunks);
 
 /**
  * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
@@ -1648,7 +1724,7 @@ struct virtchnl2_del_ena_dis_queues {
 
 	struct virtchnl2_queue_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(32, virtchnl2_del_ena_dis_queues, chunks.chunks);
 
 /**
  * struct virtchnl2_queue_vector - Queue to vector mapping
@@ -1693,9 +1769,9 @@ struct virtchnl2_queue_vector_maps {
 	__le16 num_qv_maps;
 	u8 pad[10];
 
-	struct virtchnl2_queue_vector qv_maps[1];
+	struct virtchnl2_queue_vector qv_maps[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_vector_maps, qv_maps);
 
 /**
  * struct virtchnl2_loopback - Loopback info
@@ -1752,9 +1828,9 @@ struct virtchnl2_mac_addr_list {
 	__le16 num_mac_addr;
 	u8 pad[2];
 
-	struct virtchnl2_mac_addr mac_addr_list[1];
+	struct virtchnl2_mac_addr mac_addr_list[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_mac_addr_list, mac_addr_list);
 
 /**
  * struct virtchnl2_promisc_info - Promiscuous type information
@@ -1848,9 +1924,10 @@ struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
+	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[STRUCT_VAR_LEN];
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp,
+			       ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_get_ptp_caps - Get PTP capabilities
@@ -1875,7 +1952,8 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(88, virtchnl2_get_ptp_caps,
+			       tx_tstamp.ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
@@ -1911,10 +1989,10 @@ struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
+	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp_latches);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp_latches,
+			       tstamp_latches);
 
 static inline const char *virtchnl2_op_str(__le32 v_opcode)
 {
@@ -2002,12 +2080,31 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
  * Validate msg format against struct for each opcode.
  */
 static inline int
-virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
+virtchnl2_vc_validate_vf_msg(struct virtchnl2_version_info *ver, u32 v_opcode,
 			     u8 *msg, __le16 msglen)
 {
 	bool err_msg_format = false;
+#ifdef FLEX_ARRAY_SUPPORT
+	bool is_flex_array = true;
+#else
+	bool is_flex_array = false;
+#endif /* !FLEX_ARRAY_SUPPORT */
 	__le32 valid_len = 0;
-
+	__le32 num_chunks;
+	__le32 num_qgrps;
+
+	/* It is possible that the FLEX_ARRAY_SUPPORT flag is not defined
+	 * by all the users of virtchnl2 header file. Let's take an example
+	 * where the driver doesn't support flex array and CP does. In this
+	 * case, the size of the VIRTCHNL2_OP_CREATE_VPORT message sent from
+	 * the driver would be 192 bytes because of the 1-sized array in the
+	 * virtchnl2_create_vport structure whereas the message size expected
+	 * by the CP would be 160 bytes (as the driver doesn't send any chunk
+	 * information on create vport). This means, both 160 and 192 byte
+	 * message length are valid. The math for the message size check of the
+	 * opcodes consider the said scenarios for the flex array supported
+	 * structures.
+	 */
 	/* Validate message length */
 	switch (v_opcode) {
 	case VIRTCHNL2_OP_VERSION:
@@ -2017,19 +2114,21 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 		valid_len = sizeof(struct virtchnl2_get_capabilities);
 		break;
 	case VIRTCHNL2_OP_CREATE_VPORT:
-		valid_len = sizeof(struct virtchnl2_create_vport);
-		if (msglen >= valid_len) {
-			struct virtchnl2_create_vport *cvport =
-				(struct virtchnl2_create_vport *)msg;
+		num_chunks = ((struct virtchnl2_create_vport *)msg)->chunks.num_chunks;
+		valid_len = struct_size_t(struct virtchnl2_create_vport,
+					  chunks.chunks, num_chunks);
 
-			if (cvport->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
+		if (!is_flex_array)
+			/* Remove the additional chunk included in the
+			 * struct_size_t calculation in case of no flex array
+			 * support, due to the 1-sized array.
+			 */
+			valid_len -= sizeof(struct virtchnl2_queue_reg_chunk);
+
+		/* Zero chunks is allowed as input */
+		if (!num_chunks && msglen > valid_len)
+			valid_len += sizeof(struct virtchnl2_queue_reg_chunk);
 
-			valid_len += (cvport->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
 		break;
 	case VIRTCHNL2_OP_NON_FLEX_CREATE_ADI:
 		valid_len = sizeof(struct virtchnl2_non_flex_create_adi);
@@ -2061,183 +2160,165 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 		valid_len = sizeof(struct virtchnl2_vport);
 		break;
 	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_tx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_tx_queues *ctq =
-				(struct virtchnl2_config_tx_queues *)msg;
-			if (ctq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (ctq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_txq_info);
+		num_chunks = ((struct virtchnl2_config_tx_queues *)msg)->num_qinfo;
+		if (!num_chunks) {
+			err_msg_format = true;
+			break;
 		}
+
+		valid_len = struct_size_t(struct virtchnl2_config_tx_queues,
+					  qinfo, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_txq_info);
+
 		break;
 	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_rx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_rx_queues *crq =
-				(struct virtchnl2_config_rx_queues *)msg;
-			if (crq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (crq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_rxq_info);
+		num_chunks = ((struct virtchnl2_config_rx_queues *)msg)->num_qinfo;
+		if (!num_chunks) {
+			err_msg_format = true;
+			break;
 		}
+
+		valid_len = struct_size_t(struct virtchnl2_config_rx_queues,
+					  qinfo, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_rxq_info);
+
 		break;
 	case VIRTCHNL2_OP_ADD_QUEUES:
-		valid_len = sizeof(struct virtchnl2_add_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_add_queues *add_q =
-				(struct virtchnl2_add_queues *)msg;
+		num_chunks = ((struct virtchnl2_add_queues *)msg)->chunks.num_chunks;
+		valid_len = struct_size_t(struct virtchnl2_add_queues,
+					  chunks.chunks, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_queue_reg_chunk);
 
-			if (add_q->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
+		/* Zero chunks is allowed as input */
+		if (!num_chunks && msglen > valid_len)
+			valid_len += sizeof(struct virtchnl2_queue_reg_chunk);
 
-			valid_len += (add_q->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
 		break;
 	case VIRTCHNL2_OP_ENABLE_QUEUES:
 	case VIRTCHNL2_OP_DISABLE_QUEUES:
 	case VIRTCHNL2_OP_DEL_QUEUES:
-		valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_del_ena_dis_queues *qs =
-				(struct virtchnl2_del_ena_dis_queues *)msg;
-			if (qs->chunks.num_chunks == 0 ||
-			    qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (qs->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_chunk);
+		num_chunks = ((struct virtchnl2_del_ena_dis_queues *)msg)->chunks.num_chunks;
+		if (!num_chunks ||
+		    num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
+			err_msg_format = true;
+			break;
 		}
+
+		valid_len = struct_size_t(struct virtchnl2_del_ena_dis_queues,
+					  chunks.chunks, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_queue_chunk);
+
 		break;
 	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
+		num_qgrps = ((struct virtchnl2_add_queue_groups *)msg)->num_queue_groups;
+		if (!num_qgrps) {
+			err_msg_format = true;
+			break;
+		}
+
+		/* valid_len is also used as an offset to find the array of
+		 * virtchnl2_queue_group_info structures
+		 */
 		valid_len = sizeof(struct virtchnl2_add_queue_groups);
-		if (msglen != valid_len) {
-			__le64 offset;
-			__le32 i;
-			struct virtchnl2_add_queue_groups *add_queue_grp =
-				(struct virtchnl2_add_queue_groups *)msg;
-			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
-			struct virtchnl2_queue_group_info *grp_info;
-			__le32 chunk_size = sizeof(struct virtchnl2_queue_reg_chunk);
-			__le32 group_size = sizeof(struct virtchnl2_queue_group_info);
-			__le32 total_chunks_size;
-
-			if (groups->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (groups->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_info);
-			offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
-
-			for (i = 0; i < groups->num_queue_groups; i++) {
-				grp_info = (struct virtchnl2_queue_group_info *)
-						   ((u8 *)groups + offset);
-				if (grp_info->chunks.num_chunks == 0) {
-					offset += group_size;
-					continue;
-				}
-				total_chunks_size = (grp_info->chunks.num_chunks - 1) * chunk_size;
-				offset += group_size + total_chunks_size;
-				valid_len += total_chunks_size;
-			}
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_queue_group_info);
+
+		while (num_qgrps--) {
+			struct virtchnl2_queue_group_info *qgrp_info;
+
+			qgrp_info = (struct virtchnl2_queue_group_info *)
+					((u8 *)msg + valid_len);
+			num_chunks = qgrp_info->chunks.num_chunks;
+
+			valid_len += struct_size_t(struct virtchnl2_queue_group_info,
+						   chunks.chunks, num_chunks);
+			if (!is_flex_array)
+				valid_len -= sizeof(struct virtchnl2_queue_reg_chunk);
 		}
+
 		break;
 	case VIRTCHNL2_OP_DEL_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_delete_queue_groups);
-		if (msglen != valid_len) {
-			struct virtchnl2_delete_queue_groups *del_queue_grp =
-				(struct virtchnl2_delete_queue_groups *)msg;
+		num_qgrps = ((struct virtchnl2_delete_queue_groups *)msg)->num_queue_groups;
+		if (!num_qgrps) {
+			err_msg_format = true;
+			break;
+		}
 
-			if (del_queue_grp->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
+		valid_len = struct_size_t(struct virtchnl2_delete_queue_groups,
+					  qg_ids, num_qgrps);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_queue_group_id);
 
-			valid_len += (del_queue_grp->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_id);
-		}
 		break;
 	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-		valid_len = sizeof(struct virtchnl2_queue_vector_maps);
-		if (msglen >= valid_len) {
-			struct virtchnl2_queue_vector_maps *v_qp =
-				(struct virtchnl2_queue_vector_maps *)msg;
-			if (v_qp->num_qv_maps == 0 ||
-			    v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_qp->num_qv_maps - 1) *
-				      sizeof(struct virtchnl2_queue_vector);
+		num_chunks = ((struct virtchnl2_queue_vector_maps *)msg)->num_qv_maps;
+		if (!num_chunks ||
+		    num_chunks > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
+			err_msg_format = true;
+			break;
 		}
+
+		valid_len = struct_size_t(struct virtchnl2_queue_vector_maps,
+					  qv_maps, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_queue_vector);
+
 		break;
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_alloc_vectors);
-		if (msglen >= valid_len) {
-			struct virtchnl2_alloc_vectors *v_av =
-				(struct virtchnl2_alloc_vectors *)msg;
+		num_chunks = ((struct virtchnl2_alloc_vectors *)msg)->vchunks.num_vchunks;
+		valid_len = struct_size_t(struct virtchnl2_alloc_vectors,
+					  vchunks.vchunks, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_vector_chunk);
 
-			if (v_av->vchunks.num_vchunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
+		/* Zero chunks is allowed as input */
+		if (!num_chunks && msglen > valid_len)
+			valid_len += sizeof(struct virtchnl2_vector_chunk);
 
-			valid_len += (v_av->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
 		break;
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_vector_chunks);
-		if (msglen >= valid_len) {
-			struct virtchnl2_vector_chunks *v_chunks =
-				(struct virtchnl2_vector_chunks *)msg;
-			if (v_chunks->num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_chunks->num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
+		num_chunks = ((struct virtchnl2_vector_chunks *)msg)->num_vchunks;
+		if (!num_chunks) {
+			err_msg_format = true;
+			break;
 		}
+
+		valid_len = struct_size_t(struct virtchnl2_vector_chunks,
+					  vchunks, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_vector_chunk);
+
 		break;
 	case VIRTCHNL2_OP_GET_RSS_KEY:
 	case VIRTCHNL2_OP_SET_RSS_KEY:
-		valid_len = sizeof(struct virtchnl2_rss_key);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_key *vrk =
-				(struct virtchnl2_rss_key *)msg;
+		num_chunks = ((struct virtchnl2_rss_key *)msg)->key_len;
+		valid_len = struct_size_t(struct virtchnl2_rss_key,
+					  key, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(u8);
 
-			if (vrk->key_len == 0) {
-				/* Zero length is allowed as input */
-				break;
-			}
+		/* Zero entries is allowed as input */
+		if (!num_chunks && msglen > valid_len)
+			valid_len += sizeof(u8);
 
-			valid_len += vrk->key_len - 1;
-		}
 		break;
 	case VIRTCHNL2_OP_GET_RSS_LUT:
 	case VIRTCHNL2_OP_SET_RSS_LUT:
-		valid_len = sizeof(struct virtchnl2_rss_lut);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_lut *vrl =
-				(struct virtchnl2_rss_lut *)msg;
+		num_chunks = ((struct virtchnl2_rss_lut *)msg)->lut_entries;
+		valid_len = struct_size_t(struct virtchnl2_rss_lut,
+					  lut, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(__le32);
 
-			if (vrl->lut_entries == 0) {
-				/* Zero entries is allowed as input */
-				break;
-			}
+		/* Zero entries is allowed as input */
+		if (!num_chunks && msglen > valid_len)
+			valid_len += sizeof(__le32);
 
-			valid_len += (vrl->lut_entries - 1) * sizeof(vrl->lut);
-		}
 		break;
 	case VIRTCHNL2_OP_GET_RSS_HASH:
 	case VIRTCHNL2_OP_SET_RSS_HASH:
@@ -2257,37 +2338,60 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 		break;
 	case VIRTCHNL2_OP_RESET_VF:
 		break;
-	case VIRTCHNL2_OP_GET_PTP_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_get_ptp_caps *ptp_caps =
-			(struct virtchnl2_get_ptp_caps *)msg;
+#ifdef VIRTCHNL2_EDT_SUPPORT
+	case VIRTCHNL2_OP_GET_EDT_CAPS:
+		valid_len = sizeof(struct virtchnl2_edt_caps);
+		break;
+#endif /* VIRTCHNL2_EDT_SUPPORT */
+#ifdef NOT_FOR_UPSTREAM
+	case VIRTCHNL2_OP_GET_OEM_CAPS:
+		valid_len = sizeof(struct virtchnl2_oem_caps);
+		break;
+#endif /* NOT_FOR_UPSTREAM */
+#ifdef VIRTCHNL2_IWARP
+	case VIRTCHNL2_OP_RDMA:
+		/* These messages are opaque to us and will be validated in
+		 * the RDMA client code. We just need to check for nonzero
+		 * length. The firmware will enforce max length restrictions.
+		 */
+		if (msglen)
+			valid_len = msglen;
+		else
+			err_msg_format = true;
+		break;
+	case VIRTCHNL2_OP_RELEASE_RDMA_IRQ_MAP:
+		break;
+	case VIRTCHNL2_OP_CONFIG_RDMA_IRQ_MAP:
+		num_chunks = ((struct virtchnl2_rdma_qvlist_info *)msg)->num_vectors;
+		if (!num_chunks ||
+		    num_chunks > VIRTCHNL2_OP_CONFIG_RDMA_IRQ_MAP_MAX) {
+			err_msg_format = true;
+			break;
+		}
 
-			if (ptp_caps->tx_tstamp.num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
+		valid_len = struct_size_t(struct virtchnl2_rdma_qvlist_info,
+					  qv_info, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_rdma_qv_info);
 
-			valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_entry));
-		}
+		break;
+#endif /* VIRTCHNL2_IWARP */
+	case VIRTCHNL2_OP_GET_PTP_CAPS:
+		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
 		break;
 	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
 		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
+		num_chunks = ((struct virtchnl2_ptp_tx_tstamp_latches *)msg)->num_latches;
+		if (!num_chunks) {
+			err_msg_format = true;
+			break;
+		}
 
-		if (msglen > valid_len) {
-			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
-			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
-
-			if (tx_tstamp_latches->num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
+		valid_len = struct_size_t(struct virtchnl2_ptp_tx_tstamp_latches,
+					  tstamp_latches, num_chunks);
+		if (!is_flex_array)
+			valid_len -= sizeof(struct virtchnl2_ptp_tx_tstamp_latch);
 
-			valid_len += ((tx_tstamp_latches->num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
-		}
 		break;
 	/* These are always errors coming from the VF */
 	case VIRTCHNL2_OP_EVENT:
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH 01/25] common/idpf: added NVME CPF specific code with defines
  2024-05-28  7:28 ` [PATCH 01/25] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
@ 2024-05-29 12:32   ` Bruce Richardson
  0 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-05-29 12:32 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: yuying.zhang, jingjing.wu, dev

On Tue, May 28, 2024 at 07:28:31AM +0000, Soumyadeep Hore wrote:
> The aim of the changes is to remove NVME dependency on

Hi,

The intro words "The aim of the changes" are unnecessary. Just shorten the
commit log by stating what the patch does, without any intro:

"Remove NVME dependency on memory allocations..."

If rewording, it would be worth including detail about when the define is
expected to be used - will it be defined/configured in a later patch, or is
it expected to be a build-time setting set by the user? If it's the latter,
we need a doc update here.

One further minor nit below too.

/Bruce



> memory allocations, and to use a prepared buffer instead.
> 
> The changes do not affect other components.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>  drivers/common/idpf/base/idpf_controlq.c     | 27 +++++++++++++++++---
>  drivers/common/idpf/base/idpf_controlq_api.h |  9 +++++--
>  2 files changed, 31 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
> index a82ca628de..0ba7281a45 100644
> --- a/drivers/common/idpf/base/idpf_controlq.c
> +++ b/drivers/common/idpf/base/idpf_controlq.c
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2001-2023 Intel Corporation
> + * Copyright(c) 2001-2024 Intel Corporation
>   */
>  
>  #include "idpf_controlq.h"
> @@ -145,8 +145,12 @@ int idpf_ctlq_add(struct idpf_hw *hw,
>  	    qinfo->buf_size > IDPF_CTLQ_MAX_BUF_LEN)
>  		return -EINVAL;
>  
> +#ifndef NVME_CPF
>  	cq = (struct idpf_ctlq_info *)
>  	     idpf_calloc(hw, 1, sizeof(struct idpf_ctlq_info));
> +#else
> +	cq = *cq_out;
> +#endif
>  	if (!cq)
>  		return -ENOMEM;
>  
> @@ -172,10 +176,15 @@ int idpf_ctlq_add(struct idpf_hw *hw,
>  	}
>  
>  	if (status)
> +#ifdef NVME_CPF
> +		return status;
> +#else
>  		goto init_free_q;
> +#endif
>  
>  	if (is_rxq) {
>  		idpf_ctlq_init_rxq_bufs(cq);
> +#ifndef NVME_CPF
>  	} else {
>  		/* Allocate the array of msg pointers for TX queues */
>  		cq->bi.tx_msg = (struct idpf_ctlq_msg **)
> @@ -185,6 +194,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
>  			status = -ENOMEM;
>  			goto init_dealloc_q_mem;
>  		}
> +#endif
>  	}
>  
>  	idpf_ctlq_setup_regs(cq, qinfo);
> @@ -195,6 +205,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
>  
>  	LIST_INSERT_HEAD(&hw->cq_list_head, cq, cq_list);
>  
> +#ifndef NVME_CPF
>  	*cq_out = cq;
>  	return status;
>  
> @@ -205,6 +216,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
>  	idpf_free(hw, cq);
>  	cq = NULL;
>  
> +#endif
>  	return status;
>  }
>  
> @@ -232,8 +244,13 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
>   * destroyed. This must be called prior to using the individual add/remove
>   * APIs.
>   */
> +#ifdef NVME_CPF
> +int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
> +                   struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq)
> +#else
>  int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
>  		   struct idpf_ctlq_create_info *q_info)
> +#endif
>  {
>  	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
>  	int ret_code = 0;
> @@ -244,6 +261,10 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
>  	for (i = 0; i < num_q; i++) {
>  		struct idpf_ctlq_create_info *qinfo = q_info + i;
>  
> +#ifdef NVME_CPF
> +		cq = *(ctlq + i);
> +
> +#endif	
>  		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
>  		if (ret_code)
>  			goto init_destroy_qs;
> @@ -398,7 +419,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
>   * ctlq_msgs and free or reuse the DMA buffers.
>   */
>  static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
> -				struct idpf_ctlq_msg *msg_status[], bool force)
> +		                struct idpf_ctlq_msg *msg_status[], bool force)
>  {
>  	struct idpf_ctlq_desc *desc;
>  	u16 i = 0, num_to_clean;
> @@ -469,7 +490,7 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
>   * ctlq_msgs and free or reuse the DMA buffers.
>   */
>  int idpf_ctlq_clean_sq_force(struct idpf_ctlq_info *cq, u16 *clean_count,
> -			     struct idpf_ctlq_msg *msg_status[])
> +		             struct idpf_ctlq_msg *msg_status[])
>  {
>  	return __idpf_ctlq_clean_sq(cq, clean_count, msg_status, true);
>  }
> diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
> index 38f5d2df3c..bce5187981 100644
> --- a/drivers/common/idpf/base/idpf_controlq_api.h
> +++ b/drivers/common/idpf/base/idpf_controlq_api.h
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2001-2023 Intel Corporation
> + * Copyright(c) 2001-2024 Intel Corporation
>   */
>  
>  #ifndef _IDPF_CONTROLQ_API_H_
> @@ -158,8 +158,13 @@ enum idpf_mbx_opc {
>  /* Will init all required q including default mb.  "q_info" is an array of
>   * create_info structs equal to the number of control queues to be created.
>   */
> +#ifdef NVME_CPF
> +int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
> +                   struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq);
> +#else
>  int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
>  		   struct idpf_ctlq_create_info *q_info);
> +#endif
>  
>  /* Allocate and initialize a single control queue, which will be added to the
>   * control queue list; returns a handle to the created control queue
> @@ -186,7 +191,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
>  
>  /* Reclaims all descriptors on HW write back */
>  int idpf_ctlq_clean_sq_force(struct idpf_ctlq_info *cq, u16 *clean_count,
> -			     struct idpf_ctlq_msg *msg_status[]);
> +		             struct idpf_ctlq_msg *msg_status[]);

This is a whitespace change that has slipped in unrelated to the rest of
the patch. Not a big deal, just keep an eye out for such things. If there
are a few like this, you can consider just putting them in a misc patch at
the end.

>  
>  /* Reclaims send descriptors on HW write back */
>  int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH 03/25] common/idpf: update ADD QUEUE GROUPS offset
  2024-05-28  7:28 ` [PATCH 03/25] common/idpf: update ADD QUEUE GROUPS offset Soumyadeep Hore
@ 2024-05-29 12:38   ` Bruce Richardson
  0 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-05-29 12:38 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: yuying.zhang, jingjing.wu, dev

On Tue, May 28, 2024 at 07:28:33AM +0000, Soumyadeep Hore wrote:
> Some compilers will use 64-bit addressing and compiler will detect
> such loss of data
> 
> virtchnl2.h(1890,40): warning C4244: '=': conversion from '__int64' to
> '__le32', possible loss of data
> 
> on line 1890
> offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
> 
> Removed unnecessary zero init
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>

There seem to be lots of whitespace changes here unrelated to the actual
change in the patch description. Please try and keep the patches "clean"
with only changes described in the commit log present, as far as is
possible.

Thanks,
/Bruce

> ---
>  drivers/common/idpf/base/virtchnl2.h | 21 +++++++++++----------
>  1 file changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
> index 3900b784d0..f44c0965b4 100644
> --- a/drivers/common/idpf/base/virtchnl2.h
> +++ b/drivers/common/idpf/base/virtchnl2.h
> @@ -1,5 +1,5 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2001-2023 Intel Corporation
> + * Copyright(c) 2001-2024 Intel Corporation
>   */
>  
>  #ifndef _VIRTCHNL2_H_
> @@ -47,9 +47,9 @@
>   * that is never used.
>   */
>  #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
> -	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
> +        { virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
>  #define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
> -	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
> +        { virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
>  
>  /* New major set of opcodes introduced and so leaving room for
>   * old misc opcodes to be added in future. Also these opcodes may only
> @@ -471,8 +471,8 @@
>   * error regardless of version mismatch.
>   */
>  struct virtchnl2_version_info {
> -	u32 major;
> -	u32 minor;
> +        u32 major;
> +        u32 minor;
>  };
>  
>  VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
> @@ -1414,9 +1414,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
>   * and returns the status.
>   */
>  struct virtchnl2_promisc_info {
> -	__le32 vport_id;
> +        __le32 vport_id;
>  	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
> -	__le16 flags;
> +        __le16 flags;
>  	u8 pad[2];
>  };
>  
> @@ -1733,7 +1733,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
>  	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
>  		valid_len = sizeof(struct virtchnl2_add_queue_groups);
>  		if (msglen != valid_len) {
> -			__le32 i = 0, offset = 0;
> +			__le64 offset;
> +			__le32 i;
>  			struct virtchnl2_add_queue_groups *add_queue_grp =
>  				(struct virtchnl2_add_queue_groups *)msg;
>  			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
> @@ -1904,8 +1905,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
>  	/* These are always errors coming from the VF. */
>  	case VIRTCHNL2_OP_EVENT:
>  	case VIRTCHNL2_OP_UNKNOWN:
> -	default:
> -		return VIRTCHNL2_STATUS_ERR_ESRCH;
> +        default:
> +                return VIRTCHNL2_STATUS_ERR_ESRCH;
>  	}
>  	/* few more checks */
>  	if (err_msg_format || valid_len != msglen)
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH 04/25] common/idpf: update in PTP message validation
  2024-05-28  7:28 ` [PATCH 04/25] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-05-29 13:03   ` Bruce Richardson
  0 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-05-29 13:03 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: jingjing.wu, dev

On Tue, May 28, 2024 at 07:28:34AM +0000, Soumyadeep Hore wrote:
> When the message for getting timestamp latches is sent by the driver,
> number of latches is equal to 0. Current implementation of message
> validation function incorrectly notifies this kind of message length as
> invalid.
> 

This description doesn't seem to match what the code is doing here.
The code change below only alters the message-length check, from >= 88 bytes
to > 88 bytes. This doesn't seem to have
anything to do with not returning an error when latches == 0. That check is a
few lines further in the code block and is unmodified by this patch:

1872 >       case VIRTCHNL2_OP_GET_PTP_CAPS:
1873 >       >       valid_len = sizeof(struct virtchnl2_get_ptp_caps);
1874 
1875 >       >       if (msglen >= valid_len) {
1876 >       >       >       struct virtchnl2_get_ptp_caps *ptp_caps =
1877 >       >       >       (struct virtchnl2_get_ptp_caps *)msg;
1878
1879 >       >       >       if (ptp_caps->tx_tstamp.num_latches == 0) {

This is the check for 0         ^^^^

1880 >       >       >       >       err_msg_format = true;
1881 >       >       >       >       break;
1882 >       >       >       }
1883 
1884 >       >       >       valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
1885 >       >       >       >             sizeof(struct virtchnl2_ptp_tx_tstamp_entry));

But allowing num_latches == 0 would make this calculation overflow ^^^^

1886 >       >       }
1887 >       >       break;

> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>  drivers/common/idpf/base/virtchnl2.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
> index f44c0965b4..9a1310ca24 100644
> --- a/drivers/common/idpf/base/virtchnl2.h
> +++ b/drivers/common/idpf/base/virtchnl2.h
> @@ -1873,7 +1873,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
>  	case VIRTCHNL2_OP_GET_PTP_CAPS:
>  		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
>  
> -		if (msglen >= valid_len) {
> +		if (msglen > valid_len) {
>  			struct virtchnl2_get_ptp_caps *ptp_caps =
>  			(struct virtchnl2_get_ptp_caps *)msg;
>  
> @@ -1889,7 +1889,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
>  	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
>  		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
>  
> -		if (msglen >= valid_len) {
> +		if (msglen > valid_len) {
>  			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
>  			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
>  
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 00/21] Update MEV TS Base Driver
  2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
                   ` (8 preceding siblings ...)
  2024-05-28  7:28 ` [PATCH 09/25] common/idpf: add flex array support to virtchnl2 structures Soumyadeep Hore
@ 2024-06-04  8:05 ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 01/21] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
                     ` (21 more replies)
  9 siblings, 22 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

These patches integrate the latest changes in the MEV TS IDPF base driver.

---
v2:
- Changed implementation based on review comments
- Fixed compilation errors for Windows, Alpine and FreeBSD
---

Soumyadeep Hore (21):
  common/idpf: added NVME CPF specific code with defines
  common/idpf: updated IDPF VF device ID
  common/idpf: added new virtchnl2 capability and vport flag
  common/idpf: moved the idpf HW into API header file
  common/idpf: avoid defensive programming
  common/idpf: use BIT ULL for large bitmaps
  common/idpf: convert data type to 'le'
  common/idpf: compress RXDID mask definitions
  common/idpf: refactor size check macro
  common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  common/idpf: use 'pad' and 'reserved' fields appropriately
  common/idpf: move related defines into enums
  common/idpf: avoid variable 0-init
  common/idpf: update in PTP message validation
  common/idpf: rename INLINE FLOW STEER to FLOW STEER
  common/idpf: add wmb before tail
  drivers: add flex array support and fix issues
  common/idpf: enable flow steer capability for vports
  common/idpf: add a new Tx context descriptor structure
  common/idpf: remove idpf common file
  drivers: adding type to idpf vc queue switch

 drivers/common/idpf/base/idpf_common.c        |  382 ---
 drivers/common/idpf/base/idpf_controlq.c      |   90 +-
 drivers/common/idpf/base/idpf_controlq.h      |  107 +-
 drivers/common/idpf/base/idpf_controlq_api.h  |   42 +-
 .../common/idpf/base/idpf_controlq_setup.c    |   18 +-
 drivers/common/idpf/base/idpf_devids.h        |    7 +-
 drivers/common/idpf/base/idpf_lan_txrx.h      |   20 +-
 drivers/common/idpf/base/idpf_osdep.h         |   72 +-
 drivers/common/idpf/base/idpf_type.h          |    4 +-
 drivers/common/idpf/base/meson.build          |    1 -
 drivers/common/idpf/base/virtchnl2.h          | 2390 +++++++++--------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  842 ++++--
 drivers/common/idpf/idpf_common_virtchnl.c    |   10 +-
 drivers/common/idpf/idpf_common_virtchnl.h    |    2 +-
 drivers/net/cpfl/cpfl_ethdev.c                |   40 +-
 drivers/net/cpfl/cpfl_rxtx.c                  |   12 +-
 drivers/net/idpf/idpf_rxtx.c                  |   12 +-
 17 files changed, 2065 insertions(+), 1986 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 01/21] common/idpf: added NVME CPF specific code with defines
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 02/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
                     ` (20 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Remove NVME dependency on memory allocations and
use a prepared buffer instead.

The changes do not affect other components.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c     | 23 +++++++++++++++++++-
 drivers/common/idpf/base/idpf_controlq_api.h |  7 +++++-
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index a82ca628de..bada75abfc 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #include "idpf_controlq.h"
@@ -145,8 +145,12 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	    qinfo->buf_size > IDPF_CTLQ_MAX_BUF_LEN)
 		return -EINVAL;
 
+#ifndef NVME_CPF
 	cq = (struct idpf_ctlq_info *)
 	     idpf_calloc(hw, 1, sizeof(struct idpf_ctlq_info));
+#else
+	cq = *cq_out;
+#endif
 	if (!cq)
 		return -ENOMEM;
 
@@ -172,10 +176,15 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	}
 
 	if (status)
+#ifdef NVME_CPF
+		return status;
+#else
 		goto init_free_q;
+#endif
 
 	if (is_rxq) {
 		idpf_ctlq_init_rxq_bufs(cq);
+#ifndef NVME_CPF
 	} else {
 		/* Allocate the array of msg pointers for TX queues */
 		cq->bi.tx_msg = (struct idpf_ctlq_msg **)
@@ -185,6 +194,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			status = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
+#endif
 	}
 
 	idpf_ctlq_setup_regs(cq, qinfo);
@@ -195,6 +205,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 
 	LIST_INSERT_HEAD(&hw->cq_list_head, cq, cq_list);
 
+#ifndef NVME_CPF
 	*cq_out = cq;
 	return status;
 
@@ -204,6 +215,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 init_free_q:
 	idpf_free(hw, cq);
 	cq = NULL;
+#endif
 
 	return status;
 }
@@ -232,8 +244,13 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
  * destroyed. This must be called prior to using the individual add/remove
  * APIs.
  */
+#ifdef NVME_CPF
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+			struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq)
+#else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info)
+#endif
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
 	int ret_code = 0;
@@ -244,6 +261,10 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 	for (i = 0; i < num_q; i++) {
 		struct idpf_ctlq_create_info *qinfo = q_info + i;
 
+#ifdef NVME_CPF
+		cq = *(ctlq + i);
+#endif
+
 		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
 		if (ret_code)
 			goto init_destroy_qs;
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 38f5d2df3c..6b6f3e84c2 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_API_H_
@@ -158,8 +158,13 @@ enum idpf_mbx_opc {
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
  */
+#ifdef NVME_CPF
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+			struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq);
+#else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
+#endif
 
 /* Allocate and initialize a single control queue, which will be added to the
  * control queue list; returns a handle to the created control queue
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 02/21] common/idpf: updated IDPF VF device ID
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 01/21] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 03/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
                     ` (19 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Update the IDPF VF device ID to 0x145C.

Also add a device ID for the S-IOV device.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_devids.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_devids.h b/drivers/common/idpf/base/idpf_devids.h
index c47762d5b7..0eb2def264 100644
--- a/drivers/common/idpf/base/idpf_devids.h
+++ b/drivers/common/idpf/base/idpf_devids.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_DEVIDS_H_
@@ -10,7 +10,10 @@
 
 /* Device IDs */
 #define IDPF_DEV_ID_PF			0x1452
-#define IDPF_DEV_ID_VF			0x1889
+#define IDPF_DEV_ID_VF			0x145C
+#ifdef SIOV_SUPPORT
+#define IDPF_DEV_ID_VF_SIOV		0x0DD5
+#endif /* SIOV_SUPPORT */
 
 
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 03/21] common/idpf: added new virtchnl2 capability and vport flag
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 01/21] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 02/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 04/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
                     ` (18 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Remove the unused VIRTCHNL2_CAP_ADQ capability and reuse its bit for
the VIRTCHNL2_CAP_INLINE_FLOW_STEER capability.

Add the VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA vport flag to allow
enabling/disabling the feature per vport.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 3900b784d0..6eff0f1ea1 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _VIRTCHNL2_H_
@@ -220,7 +220,7 @@
 #define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
 #define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
 #define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_ADQ			BIT(6)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
 #define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
 #define VIRTCHNL2_CAP_PROMISC			BIT(8)
 #define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
@@ -593,7 +593,8 @@ struct virtchnl2_queue_reg_chunks {
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
 /* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT	BIT(0)
+#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
+#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 04/21] common/idpf: moved the idpf HW into API header file
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (2 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 03/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 05/21] common/idpf: avoid defensive programming Soumyadeep Hore
                     ` (17 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

There is a circular include problem around the idpf_hw structure:
controlq.h contains the structure definition, which the osdep header
needs, while controlq.h in turn needs the contents of the osdep
header, so the two files depend on each other.

Moving the definition from controlq.h into api.h resolves the problem.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c       |   4 +-
 drivers/common/idpf/base/idpf_controlq.h     | 107 +------------------
 drivers/common/idpf/base/idpf_controlq_api.h |  35 ++++++
 drivers/common/idpf/base/idpf_osdep.h        |  72 ++++++++++++-
 drivers/common/idpf/base/idpf_type.h         |   4 +-
 5 files changed, 111 insertions(+), 111 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 7181a7f14c..bb540345c2 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
-#include "idpf_type.h"
 #include "idpf_prototype.h"
+#include "idpf_type.h"
 #include <virtchnl.h>
 
 
diff --git a/drivers/common/idpf/base/idpf_controlq.h b/drivers/common/idpf/base/idpf_controlq.h
index 80ca06e632..3f74b5a898 100644
--- a/drivers/common/idpf/base/idpf_controlq.h
+++ b/drivers/common/idpf/base/idpf_controlq.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_H_
@@ -96,111 +96,6 @@ struct idpf_mbxq_desc {
 	u32 pf_vf_id;		/* used by CP when sending to PF */
 };
 
-enum idpf_mac_type {
-	IDPF_MAC_UNKNOWN = 0,
-	IDPF_MAC_PF,
-	IDPF_MAC_VF,
-	IDPF_MAC_GENERIC
-};
-
-#define ETH_ALEN 6
-
-struct idpf_mac_info {
-	enum idpf_mac_type type;
-	u8 addr[ETH_ALEN];
-	u8 perm_addr[ETH_ALEN];
-};
-
-#define IDPF_AQ_LINK_UP 0x1
-
-/* PCI bus types */
-enum idpf_bus_type {
-	idpf_bus_type_unknown = 0,
-	idpf_bus_type_pci,
-	idpf_bus_type_pcix,
-	idpf_bus_type_pci_express,
-	idpf_bus_type_reserved
-};
-
-/* PCI bus speeds */
-enum idpf_bus_speed {
-	idpf_bus_speed_unknown	= 0,
-	idpf_bus_speed_33	= 33,
-	idpf_bus_speed_66	= 66,
-	idpf_bus_speed_100	= 100,
-	idpf_bus_speed_120	= 120,
-	idpf_bus_speed_133	= 133,
-	idpf_bus_speed_2500	= 2500,
-	idpf_bus_speed_5000	= 5000,
-	idpf_bus_speed_8000	= 8000,
-	idpf_bus_speed_reserved
-};
-
-/* PCI bus widths */
-enum idpf_bus_width {
-	idpf_bus_width_unknown	= 0,
-	idpf_bus_width_pcie_x1	= 1,
-	idpf_bus_width_pcie_x2	= 2,
-	idpf_bus_width_pcie_x4	= 4,
-	idpf_bus_width_pcie_x8	= 8,
-	idpf_bus_width_32	= 32,
-	idpf_bus_width_64	= 64,
-	idpf_bus_width_reserved
-};
-
-/* Bus parameters */
-struct idpf_bus_info {
-	enum idpf_bus_speed speed;
-	enum idpf_bus_width width;
-	enum idpf_bus_type type;
-
-	u16 func;
-	u16 device;
-	u16 lan_id;
-	u16 bus_id;
-};
-
-/* Function specific capabilities */
-struct idpf_hw_func_caps {
-	u32 num_alloc_vfs;
-	u32 vf_base_id;
-};
-
-/* Define the APF hardware struct to replace other control structs as needed
- * Align to ctlq_hw_info
- */
-struct idpf_hw {
-	/* Some part of BAR0 address space is not mapped by the LAN driver.
-	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
-	 * will have its own base hardware address when mapped.
-	 */
-	u8 *hw_addr;
-	u8 *hw_addr_region2;
-	u64 hw_addr_len;
-	u64 hw_addr_region2_len;
-
-	void *back;
-
-	/* control queue - send and receive */
-	struct idpf_ctlq_info *asq;
-	struct idpf_ctlq_info *arq;
-
-	/* subsystem structs */
-	struct idpf_mac_info mac;
-	struct idpf_bus_info bus;
-	struct idpf_hw_func_caps func_caps;
-
-	/* pci info */
-	u16 device_id;
-	u16 vendor_id;
-	u16 subsystem_device_id;
-	u16 subsystem_vendor_id;
-	u8 revision_id;
-	bool adapter_stopped;
-
-	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
-};
-
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
 			     struct idpf_ctlq_info *cq);
 
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 6b6f3e84c2..f3a397ea58 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -154,6 +154,41 @@ enum idpf_mbx_opc {
 	idpf_mbq_opc_send_msg_to_peer_drv	= 0x0804,
 };
 
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct idpf_hw {
+	/* Some part of BAR0 address space is not mapped by the LAN driver.
+	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
+	 * will have its own base hardware address when mapped.
+	 */
+	u8 *hw_addr;
+	u8 *hw_addr_region2;
+	u64 hw_addr_len;
+	u64 hw_addr_region2_len;
+
+	void *back;
+
+	/* control queue - send and receive */
+	struct idpf_ctlq_info *asq;
+	struct idpf_ctlq_info *arq;
+
+	/* subsystem structs */
+	struct idpf_mac_info mac;
+	struct idpf_bus_info bus;
+	struct idpf_hw_func_caps func_caps;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	bool adapter_stopped;
+
+	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
+};
+
 /* API supported for control queue management */
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
diff --git a/drivers/common/idpf/base/idpf_osdep.h b/drivers/common/idpf/base/idpf_osdep.h
index 74a376cb13..b2af8f443d 100644
--- a/drivers/common/idpf/base/idpf_osdep.h
+++ b/drivers/common/idpf/base/idpf_osdep.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_OSDEP_H_
@@ -353,4 +353,74 @@ idpf_hweight32(u32 num)
 
 #endif
 
+enum idpf_mac_type {
+	IDPF_MAC_UNKNOWN = 0,
+	IDPF_MAC_PF,
+	IDPF_MAC_VF,
+	IDPF_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct idpf_mac_info {
+	enum idpf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+};
+
+#define IDPF_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum idpf_bus_type {
+	idpf_bus_type_unknown = 0,
+	idpf_bus_type_pci,
+	idpf_bus_type_pcix,
+	idpf_bus_type_pci_express,
+	idpf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum idpf_bus_speed {
+	idpf_bus_speed_unknown	= 0,
+	idpf_bus_speed_33	= 33,
+	idpf_bus_speed_66	= 66,
+	idpf_bus_speed_100	= 100,
+	idpf_bus_speed_120	= 120,
+	idpf_bus_speed_133	= 133,
+	idpf_bus_speed_2500	= 2500,
+	idpf_bus_speed_5000	= 5000,
+	idpf_bus_speed_8000	= 8000,
+	idpf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum idpf_bus_width {
+	idpf_bus_width_unknown	= 0,
+	idpf_bus_width_pcie_x1	= 1,
+	idpf_bus_width_pcie_x2	= 2,
+	idpf_bus_width_pcie_x4	= 4,
+	idpf_bus_width_pcie_x8	= 8,
+	idpf_bus_width_32	= 32,
+	idpf_bus_width_64	= 64,
+	idpf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct idpf_bus_info {
+	enum idpf_bus_speed speed;
+	enum idpf_bus_width width;
+	enum idpf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct idpf_hw_func_caps {
+	u32 num_alloc_vfs;
+	u32 vf_base_id;
+};
+
 #endif /* _IDPF_OSDEP_H_ */
diff --git a/drivers/common/idpf/base/idpf_type.h b/drivers/common/idpf/base/idpf_type.h
index a22d28f448..2ff818035b 100644
--- a/drivers/common/idpf/base/idpf_type.h
+++ b/drivers/common/idpf/base/idpf_type.h
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_TYPE_H_
 #define _IDPF_TYPE_H_
 
-#include "idpf_controlq.h"
+#include "idpf_osdep.h"
 
 #define UNREFERENCED_XPARAMETER
 #define UNREFERENCED_1PARAMETER(_p)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 05/21] common/idpf: avoid defensive programming
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (3 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 04/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 06/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
                     ` (16 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on upstream feedback, the driver should not rely on defensive
programming, i.e. unnecessary NULL-pointer and other conditional
checks in the code flow just to fall back; instead it should fail
fast so that the underlying bug gets fixed properly.

Some of the checks identified and removed/wrapped in this patch:
- As the control queue is freed and deleted from the list after the
idpf_ctlq_shutdown call, there is no need for the ring_size check
in idpf_ctlq_shutdown.
- From the upstream perspective the shared code is part of the Linux
driver, so it makes no sense for idpf_ctlq_add to check for zero
'len' and 'buf_size': the driver provides valid sizes to start
with, and if it does not, that is a bug.
- Remove the cq NULL and zero ring_size checks wherever possible, as
the IDPF driver code flow never passes a NULL cq pointer to the
control queue callbacks. If it does, that is a bug that should be
fixed rather than masked by a NULL check and a silent fallback.
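The two styles can be contrasted in a minimal sketch (invented names,
not the driver's actual functions):

```c
struct cq { int ring_size; };

/* Defensive style (the kind being removed): silently tolerate bad
 * state, which hides caller bugs. */
static int shutdown_defensive(struct cq *cq)
{
	if (!cq || !cq->ring_size)
		return -1;	/* fall back, bug goes unnoticed */
	cq->ring_size = 0;
	return 0;
}

/* Fail-fast style (the kind being kept): callers guarantee a valid,
 * initialized queue; a NULL here is a bug that should crash loudly,
 * not be papered over. */
static int shutdown_failfast(struct cq *cq)
{
	cq->ring_size = 0;
	return 0;
}
```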

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index bada75abfc..b5ba9c3bd0 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -98,9 +98,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
 	idpf_acquire_lock(&cq->cq_lock);
 
-	if (!cq->ring_size)
-		goto shutdown_sq_out;
-
 #ifdef SIMICS_BUILD
 	wr32(hw, cq->reg.head, 0);
 	wr32(hw, cq->reg.tail, 0);
@@ -115,7 +112,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 	/* Set ring_size to 0 to indicate uninitialized queue */
 	cq->ring_size = 0;
 
-shutdown_sq_out:
 	idpf_release_lock(&cq->cq_lock);
 	idpf_destroy_lock(&cq->cq_lock);
 }
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 06/21] common/idpf: use BIT ULL for large bitmaps
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (4 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 05/21] common/idpf: avoid defensive programming Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 07/21] common/idpf: convert data type to 'le' Soumyadeep Hore
                     ` (15 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

For bitmaps wider than 32 bits, use the BIT_ULL macro instead of
BIT, as flagged by the compiler.
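Why this matters can be shown with the usual Linux-style definitions
(illustrative only; the idpf osdep header's versions may differ in
detail):

```c
/* Illustrative definitions: on 32-bit targets 1UL is only 32 bits
 * wide, so BIT(n) for n >= 32 shifts out of range (undefined
 * behaviour).  1ULL is at least 64 bits on every target. */
#define BIT(n)		(1UL << (n))
#define BIT_ULL(n)	(1ULL << (n))

#define VIRTCHNL2_CAP_OEM	BIT_ULL(63)	/* safe on all targets */
```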

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 70 ++++++++++++++--------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 6eff0f1ea1..851c6629dd 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -175,20 +175,20 @@
 /* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
  * Receive Side Scaling Flow type capability flags
  */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT(13)
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
 
 /* VIRTCHNL2_HEADER_SPLIT_CAPS
  * Header split capability flags
@@ -214,32 +214,32 @@
  * TX_VLAN: VLAN tag insertion
  * RX_VLAN: VLAN tag stripping
  */
-#define VIRTCHNL2_CAP_RDMA			BIT(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
-#define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT(11)
+#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
+#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
+#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
+#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
+#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
+#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
 /* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT(12)
-#define VIRTCHNL2_CAP_PTP			BIT(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT(15)
-#define VIRTCHNL2_CAP_FDIR			BIT(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT(19)
+#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
+#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
+#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
+#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
+#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
+#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
 /* Enable miss completion types plus ability to detect a miss completion if a
  * reserved bit is set in a standared completion's tag.
  */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT(20)
+#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
 /* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT(63)
+#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
 
 /* VIRTCHNL2_TXQ_SCHED_MODE
  * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 07/21] common/idpf: convert data type to 'le'
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (5 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 06/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 08/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
                     ` (14 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The struct members in 'virtchnl2_version_info' use the 'u32' data
type, but as wire fields they should be '__le32'. Make the change
accordingly.
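The intent can be sketched in portable C; `__le32` itself is a
kernel/sparse annotation, so the typedef and conversion helper below
are illustrative stand-ins, not the driver's code:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for the kernel's __le32 annotation: the value
 * is stored in little-endian byte order on the wire. */
typedef uint32_t le32;

static le32 cpu_to_le32(uint32_t v)
{
	uint8_t b[4] = { v & 0xff, (v >> 8) & 0xff,
			 (v >> 16) & 0xff, (v >> 24) & 0xff };
	le32 out;

	memcpy(&out, b, sizeof(out));	/* least-significant byte first */
	return out;
}

struct virtchnl2_version_info {
	le32 major;
	le32 minor;
};
```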

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 851c6629dd..1f59730297 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -471,8 +471,8 @@
  * error regardless of version mismatch.
  */
 struct virtchnl2_version_info {
-	u32 major;
-	u32 minor;
+	__le32 major;
+	__le32 minor;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 08/21] common/idpf: compress RXDID mask definitions
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (6 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 07/21] common/idpf: convert data type to 'le' Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:05   ` [PATCH v2 09/21] common/idpf: refactor size check macro Soumyadeep Hore
                     ` (13 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of repeating the long RXDID definitions, introduce a macro
that pastes the common VIRTCHNL2_RXDID_ prefix onto the identifier
passed in and generates the corresponding bitmask.
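The token-pasting trick can be illustrated in isolation; the ID
values below are assumed for illustration (only a subset of the
real definitions):

```c
/* Assumed descriptor ID values (subset, for illustration only) */
#define VIRTCHNL2_RXDID_1_32B_BASE	1
#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ	2

#define BIT(n)	(1UL << (n))	/* illustrative */

/* Pastes the suffix onto the common VIRTCHNL2_RXDID_ prefix via the
 * ## operator and turns the resulting ID into a bitmask. */
#define VIRTCHNL2_RXDID_M(bit)		BIT(VIRTCHNL2_RXDID_##bit)

#define VIRTCHNL2_RXDID_1_32B_BASE_M	VIRTCHNL2_RXDID_M(1_32B_BASE)
#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M	VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
```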

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 31 ++++++++++---------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index e6e782a219..f632271788 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -58,22 +58,23 @@
 /* VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		BIT(VIRTCHNL2_RXDID_0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		BIT(VIRTCHNL2_RXDID_1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
+#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
 /* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
 /* 22 through 63 are reserved */
 
 /* Rx */
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 09/21] common/idpf: refactor size check macro
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (7 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 08/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
@ 2024-06-04  8:05   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 10/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
                     ` (12 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:05 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of using a 'divide by zero' trick to check the struct length
at compile time, use the static_assert macro.
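Both techniques side by side, as a compilable sketch (the struct name
and simplified macro names are invented):

```c
#include <assert.h>	/* static_assert, C11 */
#include <stdint.h>

struct example { uint32_t major; uint32_t minor; };

/* Old style: a division by zero in a constant expression forces a
 * (cryptic) compile error when the size does not match. */
#define CHECK_STRUCT_LEN_OLD(n, X) enum static_assert_enum_##X \
	{ static_assert_val_##X = (n) / ((sizeof(struct X) == (n)) ? 1 : 0) }

/* New style: same compile-time check, readable diagnostic. */
#define CHECK_STRUCT_LEN(n, X)			\
	static_assert((n) == sizeof(struct X),	\
		      "Structure length does not match with the expected value")

CHECK_STRUCT_LEN_OLD(8, example);
CHECK_STRUCT_LEN(8, example);
```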

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 1f59730297..f8b97f2e06 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -41,15 +41,12 @@
 /* State Machine error - Command sequence problem */
 #define	VIRTCHNL2_STATUS_ERR_ESM	201
 
-/* These macros are used to generate compilation errors if a structure/union
- * is not exactly the correct length. It gives a divide by zero error if the
- * structure/union is not of the correct size, otherwise it creates an enum
- * that is never used.
+/* This macro is used to generate compilation errors if a structure
+ * is not exactly the correct length.
  */
-#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
-#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
+	static_assert((n) == sizeof(struct X),	\
+		      "Structure length does not match with the expected value")
 
 /* New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 10/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (8 preceding siblings ...)
  2024-06-04  8:05   ` [PATCH v2 09/21] common/idpf: refactor size check macro Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 11/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
                     ` (11 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M mask was wrongly defined in
terms of the mask macro itself instead of the corresponding shift
(_S); this patch fixes it.
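The corrected definition can be checked in isolation; the IDPF_M
definition below is a plausible sketch (the real one lives in the
idpf osdep header):

```c
/* Assumed definition of IDPF_M: place a field mask at a shift. */
#define IDPF_M(m, s)	((m) << (s))

#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S	12
/* Fixed: the 3-bit field mask is built from the shift (_S), not from
 * the mask macro (_M) itself. */
#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M	\
	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
```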

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index f632271788..9e04cf8628 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -111,7 +111,7 @@
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M)
+	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 11/21] common/idpf: use 'pad' and 'reserved' fields appropriately
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (9 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 10/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 12/21] common/idpf: move related defines into enums Soumyadeep Hore
                     ` (10 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The 'pad' name is used when a field is an actual padding byte, or
when bytes are set aside for the future addition of new fields,
whereas 'reserved' is used only when the field is reserved and
cannot be used for any other purpose.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 75 ++++++++++++++--------------
 1 file changed, 38 insertions(+), 37 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index f8b97f2e06..f638e434db 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -84,7 +84,7 @@
 #define		VIRTCHNL2_OP_GET_PTYPE_INFO		526
 	/* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
 	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
-	 */
+ */
 	/* opcodes 529, 530, and 531 are reserved */
 #define		VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	532
 #define		VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	533
@@ -95,7 +95,7 @@
 #define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
 #define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
 #define		VIRTCHNL2_OP_GET_PORT_STATS		540
-	/* TimeSync opcodes */
+/* TimeSync opcodes */
 #define		VIRTCHNL2_OP_GET_PTP_CAPS		541
 #define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
 
@@ -559,7 +559,7 @@ struct virtchnl2_get_capabilities {
 	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
-	u8 reserved[10];
+	u8 pad1[10];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
@@ -575,7 +575,7 @@ struct virtchnl2_queue_reg_chunk {
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
-	u8 reserved[4];
+	u8 pad1[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
@@ -583,7 +583,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -648,7 +648,7 @@ struct virtchnl2_create_vport {
 	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
 
-	u8 reserved[20];
+	u8 pad2[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -663,7 +663,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
@@ -708,7 +708,7 @@ struct virtchnl2_txq_info {
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
 
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
@@ -723,8 +723,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
 struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
-
-	u8 reserved[10];
+	u8 pad[10];
 	struct virtchnl2_txq_info qinfo[1];
 };
 
@@ -749,7 +748,7 @@ struct virtchnl2_rxq_info {
 
 	__le16 ring_len;
 	u8 buffer_notif_stride;
-	u8 pad[1];
+	u8 pad;
 
 	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
@@ -768,16 +767,15 @@ struct virtchnl2_rxq_info {
 	 * if this field is set
 	 */
 	u8 bufq2_ena;
-	u8 pad2[3];
+	u8 pad1[3];
 
 	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
 
-	u8 reserved[16];
+	u8 pad2[16];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
 /* VIRTCHNL2_OP_CONFIG_RX_QUEUES
@@ -790,8 +788,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
-
-	u8 reserved[18];
+	u8 pad[18];
 	struct virtchnl2_rxq_info qinfo[1];
 };
 
@@ -810,7 +807,8 @@ struct virtchnl2_add_queues {
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
 	__le16 num_rx_bufq;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -948,7 +946,7 @@ struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
-	__le16 pad1;
+	__le16 pad;
 
 	/* Register offsets and spacing provided by CP.
 	 * dynamic control registers are used for enabling/disabling/re-enabling
@@ -969,15 +967,15 @@ struct virtchnl2_vector_chunk {
 	 * where n=0..2
 	 */
 	__le32 itrn_index_spacing;
-	u8 reserved[4];
+	u8 pad1[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
 /* Structure to specify several chunks of contiguous interrupt vectors */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -992,7 +990,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunks vchunks;
 };
 
@@ -1014,8 +1013,9 @@ struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
-	u8 reserved[4];
-	__le32 lut[1]; /* RSS lookup table */
+	u8 pad[4];
+	/* RSS lookup table */
+	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
@@ -1039,7 +1039,7 @@ struct virtchnl2_rss_hash {
 	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
@@ -1063,7 +1063,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 /* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -1073,7 +1073,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 /* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -1100,8 +1100,7 @@ struct virtchnl2_non_flex_create_adi {
 	__le16 adi_index;
 	/* CP populates ADI id */
 	__le16 adi_id;
-	u8 reserved[64];
-	u8 pad[4];
+	u8 pad[68];
 	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
 	/* PF sends vector chunks to CP */
@@ -1117,7 +1116,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
-	u8 reserved[2];
+	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
@@ -1220,7 +1219,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 rx_runt_errors;
 	__le64 rx_illegal_bytes;
 	__le64 rx_total_pkts;
-	u8 rx_reserved[128];
+	u8 rx_pad[128];
 
 	__le64 tx_bytes;
 	__le64 tx_unicast_pkts;
@@ -1239,7 +1238,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 tx_xoff_events;
 	__le64 tx_dropped_link_down_pkts;
 	__le64 tx_total_pkts;
-	u8 tx_reserved[128];
+	u8 tx_pad[128];
 	__le64 mac_local_faults;
 	__le64 mac_remote_faults;
 };
@@ -1273,7 +1272,8 @@ struct virtchnl2_event {
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
-	u8 pad[1];
+	u8 pad;
+
 	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
@@ -1301,7 +1301,7 @@ struct virtchnl2_queue_chunk {
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
@@ -1309,7 +1309,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_chunk chunks[1];
 };
 
@@ -1326,7 +1326,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_chunks chunks;
 };
 
@@ -1343,7 +1344,7 @@ struct virtchnl2_queue_vector {
 
 	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 12/21] common/idpf: move related defines into enums
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (10 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 11/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 13/21] common/idpf: avoid variable 0-init Soumyadeep Hore
                     ` (9 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Change all groups of related defines to enums. The enum names are
chosen to follow the common part of the naming pattern as much as
possible.

Replace the common labels in the comments with the enum names.

While at it, update the header description based on upstream
feedback.

Some variable names are also modified and comments made more
descriptive.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h          | 1849 ++++++++++-------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  843 +++++---
 2 files changed, 1687 insertions(+), 1005 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index f638e434db..35ff1942c2 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -8,317 +8,396 @@
 /* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
  * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
  * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ *
+ * PF/VF uses the virtchnl interface defined in this header file to communicate
+ * with device Control Plane (CP). Driver and the CP may run on different
+ * platforms with different endianness. To avoid byte order discrepancies,
+ * all the structures in this header follow little-endian format.
+ *
+ * This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
  */
 
 #include "virtchnl2_lan_desc.h"
 
-/* VIRTCHNL2_ERROR_CODES */
-/* success */
-#define	VIRTCHNL2_STATUS_SUCCESS	0
-/* Operation not permitted, used in case of command not permitted for sender */
-#define	VIRTCHNL2_STATUS_ERR_EPERM	1
-/* Bad opcode - virtchnl interface problem */
-#define	VIRTCHNL2_STATUS_ERR_ESRCH	3
-/* I/O error - HW access error */
-#define	VIRTCHNL2_STATUS_ERR_EIO	5
-/* No such resource - Referenced resource is not allacated */
-#define	VIRTCHNL2_STATUS_ERR_ENXIO	6
-/* Permission denied - Resource is not permitted to caller */
-#define	VIRTCHNL2_STATUS_ERR_EACCES	13
-/* Device or resource busy - In case shared resource is in use by others */
-#define	VIRTCHNL2_STATUS_ERR_EBUSY	16
-/* Object already exists and not free */
-#define	VIRTCHNL2_STATUS_ERR_EEXIST	17
-/* Invalid input argument in command */
-#define	VIRTCHNL2_STATUS_ERR_EINVAL	22
-/* No space left or allocation failure */
-#define	VIRTCHNL2_STATUS_ERR_ENOSPC	28
-/* Parameter out of range */
-#define	VIRTCHNL2_STATUS_ERR_ERANGE	34
-
-/* Op not allowed in current dev mode */
-#define	VIRTCHNL2_STATUS_ERR_EMODE	200
-/* State Machine error - Command sequence problem */
-#define	VIRTCHNL2_STATUS_ERR_ESM	201
-
-/* This macro is used to generate compilation errors if a structure
+/**
+ * enum virtchnl2_status - Error codes.
+ * @VIRTCHNL2_STATUS_SUCCESS: Success
+ * @VIRTCHNL2_STATUS_ERR_EPERM: Operation not permitted, used in case of command
+ *				not permitted for sender
+ * @VIRTCHNL2_STATUS_ERR_ESRCH: Bad opcode - virtchnl interface problem
+ * @VIRTCHNL2_STATUS_ERR_EIO: I/O error - HW access error
+ * @VIRTCHNL2_STATUS_ERR_ENXIO: No such resource - Referenced resource is not
+ *				allocated
+ * @VIRTCHNL2_STATUS_ERR_EACCES: Permission denied - Resource is not permitted
+ *				 to caller
+ * @VIRTCHNL2_STATUS_ERR_EBUSY: Device or resource busy - In case shared
+ *				resource is in use by others
+ * @VIRTCHNL2_STATUS_ERR_EEXIST: Object already exists and not free
+ * @VIRTCHNL2_STATUS_ERR_EINVAL: Invalid input argument in command
+ * @VIRTCHNL2_STATUS_ERR_ENOSPC: No space left or allocation failure
+ * @VIRTCHNL2_STATUS_ERR_ERANGE: Parameter out of range
+ * @VIRTCHNL2_STATUS_ERR_EMODE: Operation not allowed in current dev mode
+ * @VIRTCHNL2_STATUS_ERR_ESM: State Machine error - Command sequence problem
+ */
+enum virtchnl2_status {
+	VIRTCHNL2_STATUS_SUCCESS	= 0,
+	VIRTCHNL2_STATUS_ERR_EPERM	= 1,
+	VIRTCHNL2_STATUS_ERR_ESRCH	= 3,
+	VIRTCHNL2_STATUS_ERR_EIO	= 5,
+	VIRTCHNL2_STATUS_ERR_ENXIO	= 6,
+	VIRTCHNL2_STATUS_ERR_EACCES	= 13,
+	VIRTCHNL2_STATUS_ERR_EBUSY	= 16,
+	VIRTCHNL2_STATUS_ERR_EEXIST	= 17,
+	VIRTCHNL2_STATUS_ERR_EINVAL	= 22,
+	VIRTCHNL2_STATUS_ERR_ENOSPC	= 28,
+	VIRTCHNL2_STATUS_ERR_ERANGE	= 34,
+	VIRTCHNL2_STATUS_ERR_EMODE	= 200,
+	VIRTCHNL2_STATUS_ERR_ESM	= 201,
+};
+
+/**
+ * This macro is used to generate compilation errors if a structure
  * is not exactly the correct length.
  */
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
 
-/* New major set of opcodes introduced and so leaving room for
+/**
+ * New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
  * be used if both the PF and VF have successfully negotiated the
- * VIRTCHNL version as 2.0 during VIRTCHNL22_OP_VERSION exchange.
- */
-#define		VIRTCHNL2_OP_UNKNOWN			0
-#define		VIRTCHNL2_OP_VERSION			1
-#define		VIRTCHNL2_OP_GET_CAPS			500
-#define		VIRTCHNL2_OP_CREATE_VPORT		501
-#define		VIRTCHNL2_OP_DESTROY_VPORT		502
-#define		VIRTCHNL2_OP_ENABLE_VPORT		503
-#define		VIRTCHNL2_OP_DISABLE_VPORT		504
-#define		VIRTCHNL2_OP_CONFIG_TX_QUEUES		505
-#define		VIRTCHNL2_OP_CONFIG_RX_QUEUES		506
-#define		VIRTCHNL2_OP_ENABLE_QUEUES		507
-#define		VIRTCHNL2_OP_DISABLE_QUEUES		508
-#define		VIRTCHNL2_OP_ADD_QUEUES			509
-#define		VIRTCHNL2_OP_DEL_QUEUES			510
-#define		VIRTCHNL2_OP_MAP_QUEUE_VECTOR		511
-#define		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		512
-#define		VIRTCHNL2_OP_GET_RSS_KEY		513
-#define		VIRTCHNL2_OP_SET_RSS_KEY		514
-#define		VIRTCHNL2_OP_GET_RSS_LUT		515
-#define		VIRTCHNL2_OP_SET_RSS_LUT		516
-#define		VIRTCHNL2_OP_GET_RSS_HASH		517
-#define		VIRTCHNL2_OP_SET_RSS_HASH		518
-#define		VIRTCHNL2_OP_SET_SRIOV_VFS		519
-#define		VIRTCHNL2_OP_ALLOC_VECTORS		520
-#define		VIRTCHNL2_OP_DEALLOC_VECTORS		521
-#define		VIRTCHNL2_OP_EVENT			522
-#define		VIRTCHNL2_OP_GET_STATS			523
-#define		VIRTCHNL2_OP_RESET_VF			524
-	/* opcode 525 is reserved */
-#define		VIRTCHNL2_OP_GET_PTYPE_INFO		526
-	/* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
-	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
- */
-	/* opcodes 529, 530, and 531 are reserved */
-#define		VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	532
-#define		VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	533
-#define		VIRTCHNL2_OP_LOOPBACK			534
-#define		VIRTCHNL2_OP_ADD_MAC_ADDR		535
-#define		VIRTCHNL2_OP_DEL_MAC_ADDR		536
-#define		VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	537
-#define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
-#define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
-#define		VIRTCHNL2_OP_GET_PORT_STATS		540
-/* TimeSync opcodes */
-#define		VIRTCHNL2_OP_GET_PTP_CAPS		541
-#define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
+ * VIRTCHNL version as 2.0 during VIRTCHNL2_OP_VERSION exchange.
+ */
+enum virtchnl2_op {
+	VIRTCHNL2_OP_UNKNOWN			= 0,
+	VIRTCHNL2_OP_VERSION			= 1,
+	VIRTCHNL2_OP_GET_CAPS			= 500,
+	VIRTCHNL2_OP_CREATE_VPORT		= 501,
+	VIRTCHNL2_OP_DESTROY_VPORT		= 502,
+	VIRTCHNL2_OP_ENABLE_VPORT		= 503,
+	VIRTCHNL2_OP_DISABLE_VPORT		= 504,
+	VIRTCHNL2_OP_CONFIG_TX_QUEUES		= 505,
+	VIRTCHNL2_OP_CONFIG_RX_QUEUES		= 506,
+	VIRTCHNL2_OP_ENABLE_QUEUES		= 507,
+	VIRTCHNL2_OP_DISABLE_QUEUES		= 508,
+	VIRTCHNL2_OP_ADD_QUEUES			= 509,
+	VIRTCHNL2_OP_DEL_QUEUES			= 510,
+	VIRTCHNL2_OP_MAP_QUEUE_VECTOR		= 511,
+	VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		= 512,
+	VIRTCHNL2_OP_GET_RSS_KEY		= 513,
+	VIRTCHNL2_OP_SET_RSS_KEY		= 514,
+	VIRTCHNL2_OP_GET_RSS_LUT		= 515,
+	VIRTCHNL2_OP_SET_RSS_LUT		= 516,
+	VIRTCHNL2_OP_GET_RSS_HASH		= 517,
+	VIRTCHNL2_OP_SET_RSS_HASH		= 518,
+	VIRTCHNL2_OP_SET_SRIOV_VFS		= 519,
+	VIRTCHNL2_OP_ALLOC_VECTORS		= 520,
+	VIRTCHNL2_OP_DEALLOC_VECTORS		= 521,
+	VIRTCHNL2_OP_EVENT			= 522,
+	VIRTCHNL2_OP_GET_STATS			= 523,
+	VIRTCHNL2_OP_RESET_VF			= 524,
+	/* Opcode 525 is reserved */
+	VIRTCHNL2_OP_GET_PTYPE_INFO		= 526,
+	/* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
+	 */
+/* Opcodes 529, 530, and 531 are reserved */
+	VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	= 532,
+	VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	= 533,
+	VIRTCHNL2_OP_LOOPBACK			= 534,
+	VIRTCHNL2_OP_ADD_MAC_ADDR		= 535,
+	VIRTCHNL2_OP_DEL_MAC_ADDR		= 536,
+	VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	= 537,
+	VIRTCHNL2_OP_ADD_QUEUE_GROUPS		= 538,
+	VIRTCHNL2_OP_DEL_QUEUE_GROUPS		= 539,
+	VIRTCHNL2_OP_GET_PORT_STATS		= 540,
+	/* TimeSync opcodes */
+	VIRTCHNL2_OP_GET_PTP_CAPS		= 541,
+	VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	= 542,
+};
 
 #define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX	0xFFFF
 
-/* VIRTCHNL2_VPORT_TYPE
- * Type of virtual port
+/**
+ * enum virtchnl2_vport_type - Type of virtual port
+ * @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SRIOV: SRIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SIOV: SIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SUBDEV: Subdevice virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_MNG: Management virtual port type
  */
-#define VIRTCHNL2_VPORT_TYPE_DEFAULT		0
-#define VIRTCHNL2_VPORT_TYPE_SRIOV		1
-#define VIRTCHNL2_VPORT_TYPE_SIOV		2
-#define VIRTCHNL2_VPORT_TYPE_SUBDEV		3
-#define VIRTCHNL2_VPORT_TYPE_MNG		4
+enum virtchnl2_vport_type {
+	VIRTCHNL2_VPORT_TYPE_DEFAULT		= 0,
+	VIRTCHNL2_VPORT_TYPE_SRIOV		= 1,
+	VIRTCHNL2_VPORT_TYPE_SIOV		= 2,
+	VIRTCHNL2_VPORT_TYPE_SUBDEV		= 3,
+	VIRTCHNL2_VPORT_TYPE_MNG		= 4,
+};
 
-/* VIRTCHNL2_QUEUE_MODEL
- * Type of queue model
+/**
+ * enum virtchnl2_queue_model - Type of queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model
  *
  * In the single queue model, the same transmit descriptor queue is used by
  * software to post descriptors to hardware and by hardware to post completed
  * descriptors to software.
  * Likewise, the same receive descriptor queue is used by hardware to post
  * completions to software and by software to post buffers to hardware.
- */
-#define VIRTCHNL2_QUEUE_MODEL_SINGLE		0
-/* In the split queue model, hardware uses transmit completion queues to post
+ *
+ * In the split queue model, hardware uses transmit completion queues to post
  * descriptor/buffer completions to software, while software uses transmit
  * descriptor queues to post descriptors to hardware.
  * Likewise, hardware posts descriptor completions to the receive descriptor
  * queue, while software uses receive buffer queues to post buffers to hardware.
  */
-#define VIRTCHNL2_QUEUE_MODEL_SPLIT		1
-
-/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
- * Checksum offload capability flags
- */
-#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		BIT(0)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	BIT(1)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	BIT(2)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	BIT(3)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	BIT(4)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	BIT(5)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	BIT(6)
-#define VIRTCHNL2_CAP_TX_CSUM_GENERIC		BIT(7)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		BIT(8)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	BIT(9)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	BIT(10)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	BIT(11)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	BIT(12)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	BIT(13)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	BIT(14)
-#define VIRTCHNL2_CAP_RX_CSUM_GENERIC		BIT(15)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	BIT(16)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	BIT(17)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	BIT(18)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	BIT(19)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	BIT(20)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	BIT(21)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	BIT(22)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	BIT(23)
-
-/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
- * Segmentation offload capability flags
- */
-#define VIRTCHNL2_CAP_SEG_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_SEG_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_SEG_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_SEG_IPV6_TCP		BIT(3)
-#define VIRTCHNL2_CAP_SEG_IPV6_UDP		BIT(4)
-#define VIRTCHNL2_CAP_SEG_IPV6_SCTP		BIT(5)
-#define VIRTCHNL2_CAP_SEG_GENERIC		BIT(6)
-#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	BIT(7)
-#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	BIT(8)
-
-/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
- * Receive Side Scaling Flow type capability flags
- */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
-
-/* VIRTCHNL2_HEADER_SPLIT_CAPS
- * Header split capability flags
- */
-/* for prepended metadata  */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		BIT(0)
-/* all VLANs go into header buffer */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		BIT(1)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		BIT(2)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		BIT(3)
-
-/* VIRTCHNL2_RSC_OFFLOAD_CAPS
- * Receive Side Coalescing offload capability flags
- */
-#define VIRTCHNL2_CAP_RSC_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSC_IPV4_SCTP		BIT(1)
-#define VIRTCHNL2_CAP_RSC_IPV6_TCP		BIT(2)
-#define VIRTCHNL2_CAP_RSC_IPV6_SCTP		BIT(3)
-
-/* VIRTCHNL2_OTHER_CAPS
- * Other capability flags
- * SPLITQ_QSCHED: Queue based scheduling using split queue model
- * TX_VLAN: VLAN tag insertion
- * RX_VLAN: VLAN tag stripping
- */
-#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
-#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
-/* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
-#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
-#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
-/* Enable miss completion types plus ability to detect a miss completion if a
- * reserved bit is set in a standared completion's tag.
- */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
-/* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
-
-/* VIRTCHNL2_TXQ_SCHED_MODE
- * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
- * completions where descriptors and buffers are completed at the same time.
- * Flow scheduling mode allows for out of order packet processing where
- * descriptors are cleaned in order, but buffers can be completed out of order.
- */
-#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		0
-#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW		1
-
-/* VIRTCHNL2_TXQ_FLAGS
- * Transmit Queue feature flags
- *
- * Enable rule miss completion type; packet completion for a packet
- * sent on exception path; only relevant in flow scheduling mode
- */
-#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		BIT(0)
-
-/* VIRTCHNL2_PEER_TYPE
- * Transmit mailbox peer type
- */
-#define VIRTCHNL2_RDMA_CPF			0
-#define VIRTCHNL2_NVME_CPF			1
-#define VIRTCHNL2_ATE_CPF			2
-#define VIRTCHNL2_LCE_CPF			3
-
-/* VIRTCHNL2_RXQ_FLAGS
- * Receive Queue Feature flags
- */
-#define VIRTCHNL2_RXQ_RSC			BIT(0)
-#define VIRTCHNL2_RXQ_HDR_SPLIT			BIT(1)
-/* When set, packet descriptors are flushed by hardware immediately after
- * processing each packet.
- */
-#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	BIT(2)
-#define VIRTCHNL2_RX_DESC_SIZE_16BYTE		BIT(3)
-#define VIRTCHNL2_RX_DESC_SIZE_32BYTE		BIT(4)
-
-/* VIRTCHNL2_RSS_ALGORITHM
- * Type of RSS algorithm
- */
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC		0
-#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC			1
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC		2
-#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC			3
-
-/* VIRTCHNL2_EVENT_CODES
- * Type of event
- */
-#define VIRTCHNL2_EVENT_UNKNOWN			0
-#define VIRTCHNL2_EVENT_LINK_CHANGE		1
-/* These messages are only sent to PF from CP */
-#define VIRTCHNL2_EVENT_START_RESET_ADI		2
-#define VIRTCHNL2_EVENT_FINISH_RESET_ADI	3
-#define VIRTCHNL2_EVENT_ADI_ACTIVE		4
-
-/* VIRTCHNL2_QUEUE_TYPE
- * Transmit and Receive queue types are valid in legacy as well as split queue
- * models. With Split Queue model, 2 additional types are introduced -
- * TX_COMPLETION and RX_BUFFER. In split queue model, receive  corresponds to
+enum virtchnl2_queue_model {
+	VIRTCHNL2_QUEUE_MODEL_SINGLE		= 0,
+	VIRTCHNL2_QUEUE_MODEL_SPLIT		= 1,
+};
+
+/* Checksum offload capability flags */
+enum virtchnl2_cap_txrx_csum {
+	VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		= BIT(0),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	= BIT(1),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	= BIT(2),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	= BIT(3),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	= BIT(4),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	= BIT(5),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_CAP_TX_CSUM_GENERIC		= BIT(7),
+	VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		= BIT(8),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	= BIT(9),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	= BIT(10),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	= BIT(11),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	= BIT(12),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	= BIT(13),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	= BIT(14),
+	VIRTCHNL2_CAP_RX_CSUM_GENERIC		= BIT(15),
+	VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	= BIT(16),
+	VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	= BIT(17),
+	VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	= BIT(18),
+	VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	= BIT(19),
+	VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	= BIT(20),
+	VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	= BIT(21),
+	VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	= BIT(22),
+	VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	= BIT(23),
+};
+
+/* Segmentation offload capability flags */
+enum virtchnl2_cap_seg {
+	VIRTCHNL2_CAP_SEG_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_SEG_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_SEG_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_SEG_IPV6_TCP		= BIT(3),
+	VIRTCHNL2_CAP_SEG_IPV6_UDP		= BIT(4),
+	VIRTCHNL2_CAP_SEG_IPV6_SCTP		= BIT(5),
+	VIRTCHNL2_CAP_SEG_GENERIC		= BIT(6),
+	VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	= BIT(7),
+	VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	= BIT(8),
+};
+
+/* Receive Side Scaling Flow type capability flags */
+enum virtchnl2_cap_rss {
+	VIRTCHNL2_CAP_RSS_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSS_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_RSS_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_RSS_IPV4_OTHER		= BIT(3),
+	VIRTCHNL2_CAP_RSS_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_CAP_RSS_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_CAP_RSS_IPV6_SCTP		= BIT(6),
+	VIRTCHNL2_CAP_RSS_IPV6_OTHER		= BIT(7),
+	VIRTCHNL2_CAP_RSS_IPV4_AH		= BIT(8),
+	VIRTCHNL2_CAP_RSS_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		= BIT(10),
+	VIRTCHNL2_CAP_RSS_IPV6_AH		= BIT(11),
+	VIRTCHNL2_CAP_RSS_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		= BIT(13),
+};
+
+/* Header split capability flags */
+enum virtchnl2_cap_rx_hsplit_at {
+	/* For prepended metadata  */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		= BIT(0),
+	/* All VLANs go into header buffer */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		= BIT(1),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		= BIT(2),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		= BIT(3),
+};
+
+/* Receive Side Coalescing offload capability flags */
+enum virtchnl2_cap_rsc {
+	VIRTCHNL2_CAP_RSC_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSC_IPV4_SCTP		= BIT(1),
+	VIRTCHNL2_CAP_RSC_IPV6_TCP		= BIT(2),
+	VIRTCHNL2_CAP_RSC_IPV6_SCTP		= BIT(3),
+};
+
+/* Other capability flags */
+enum virtchnl2_cap_other {
+	VIRTCHNL2_CAP_RDMA			= BIT_ULL(0),
+	VIRTCHNL2_CAP_SRIOV			= BIT_ULL(1),
+	VIRTCHNL2_CAP_MACFILTER			= BIT_ULL(2),
+	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
+	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
+	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
+	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
+	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
+	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
+	VIRTCHNL2_CAP_INLINE_IPSEC		= BIT_ULL(10),
+	VIRTCHNL2_CAP_LARGE_NUM_QUEUES		= BIT_ULL(11),
+	/* Require additional info */
+	VIRTCHNL2_CAP_VLAN			= BIT_ULL(12),
+	VIRTCHNL2_CAP_PTP			= BIT_ULL(13),
+	VIRTCHNL2_CAP_ADV_RSS			= BIT_ULL(15),
+	VIRTCHNL2_CAP_FDIR			= BIT_ULL(16),
+	VIRTCHNL2_CAP_RX_FLEX_DESC		= BIT_ULL(17),
+	VIRTCHNL2_CAP_PTYPE			= BIT_ULL(18),
+	VIRTCHNL2_CAP_LOOPBACK			= BIT_ULL(19),
+	/* Enable miss completion types plus ability to detect a miss completion
+	 * if a reserved bit is set in a standard completion's tag.
+	 */
+	VIRTCHNL2_CAP_MISS_COMPL_TAG		= BIT_ULL(20),
+	/* This must be the last capability */
+	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
+};
+
+/**
+ * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
+ * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
+ *				    completions where descriptors and buffers
+ *				    are completed at the same time.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
+ *				   packet processing where descriptors are
+ *				   cleaned in order, but buffers can be
+ *				   completed out of order.
+ */
+enum virtchnl2_txq_sched_mode {
+	VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		= 0,
+	VIRTCHNL2_TXQ_SCHED_MODE_FLOW		= 1,
+};
+
+/**
+ * enum virtchnl2_txq_flags - Transmit Queue feature flags
+ * @VIRTCHNL2_TXQ_ENABLE_MISS_COMPL: Enable rule miss completion type. Packet
+ *				     completion for a packet sent on exception
+ *				     path and only relevant in flow scheduling
+ *				     mode.
+ */
+enum virtchnl2_txq_flags {
+	VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		= BIT(0),
+};
+
+/**
+ * enum virtchnl2_peer_type - Transmit mailbox peer type
+ * @VIRTCHNL2_RDMA_CPF: RDMA peer type
+ * @VIRTCHNL2_NVME_CPF: NVME peer type
+ * @VIRTCHNL2_ATE_CPF: ATE peer type
+ * @VIRTCHNL2_LCE_CPF: LCE peer type
+ */
+enum virtchnl2_peer_type {
+	VIRTCHNL2_RDMA_CPF			= 0,
+	VIRTCHNL2_NVME_CPF			= 1,
+	VIRTCHNL2_ATE_CPF			= 2,
+	VIRTCHNL2_LCE_CPF			= 3,
+};
+
+/**
+ * enum virtchnl2_rxq_flags - Receive Queue Feature flags
+ * @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag
+ * @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag
+ * @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
+ *					by hardware immediately after processing
+ *					each packet.
+ * @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size
+ * @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size
+ */
+enum virtchnl2_rxq_flags {
+	VIRTCHNL2_RXQ_RSC			= BIT(0),
+	VIRTCHNL2_RXQ_HDR_SPLIT			= BIT(1),
+	VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	= BIT(2),
+	VIRTCHNL2_RX_DESC_SIZE_16BYTE		= BIT(3),
+	VIRTCHNL2_RX_DESC_SIZE_32BYTE		= BIT(4),
+};
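
[Reviewer aside, not part of the diff: the two descriptor-size bits above are mutually exclusive. A minimal validity check, using stand-in macros mirroring the bit layout (names here are illustrative, not the driver's), might look like:]

```c
#include <stdint.h>

/* Stand-in copies of the virtchnl2_rxq_flags bits (illustrative only) */
#define RXQ_RSC                   (1u << 0)
#define RXQ_HDR_SPLIT             (1u << 1)
#define RXQ_IMMEDIATE_WRITE_BACK  (1u << 2)
#define RX_DESC_SIZE_16BYTE       (1u << 3)
#define RX_DESC_SIZE_32BYTE       (1u << 4)

/* Return 1 when at most one Rx descriptor size flag is set */
static inline int rxq_desc_size_valid(uint16_t qflags)
{
	uint16_t size_bits = qflags &
		(RX_DESC_SIZE_16BYTE | RX_DESC_SIZE_32BYTE);

	/* 16-byte and 32-byte descriptor sizes cannot both be requested */
	return size_bits != (RX_DESC_SIZE_16BYTE | RX_DESC_SIZE_32BYTE);
}
```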
+
+/**
+ * enum virtchnl2_rss_alg - Type of RSS algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC: TOEPLITZ_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_R_ASYMMETRIC: R_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC: TOEPLITZ_SYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC: XOR_SYMMETRIC algorithm
+ */
+enum virtchnl2_rss_alg {
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC	= 0,
+	VIRTCHNL2_RSS_ALG_R_ASYMMETRIC		= 1,
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC	= 2,
+	VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC		= 3,
+};
+
+/**
+ * enum virtchnl2_event_codes - Type of event
+ * @VIRTCHNL2_EVENT_UNKNOWN: Unknown event type
+ * @VIRTCHNL2_EVENT_LINK_CHANGE: Link change event type
+ * @VIRTCHNL2_EVENT_START_RESET_ADI: Start reset ADI event type
+ * @VIRTCHNL2_EVENT_FINISH_RESET_ADI: Finish reset ADI event type
+ * @VIRTCHNL2_EVENT_ADI_ACTIVE: Event type to indicate 'function active' state
+ *				of ADI.
+ */
+enum virtchnl2_event_codes {
+	VIRTCHNL2_EVENT_UNKNOWN			= 0,
+	VIRTCHNL2_EVENT_LINK_CHANGE		= 1,
+	/* These messages are only sent to PF from CP */
+	VIRTCHNL2_EVENT_START_RESET_ADI		= 2,
+	VIRTCHNL2_EVENT_FINISH_RESET_ADI	= 3,
+	VIRTCHNL2_EVENT_ADI_ACTIVE		= 4,
+};
+
+/**
+ * enum virtchnl2_queue_type - Various queue types
+ * @VIRTCHNL2_QUEUE_TYPE_TX: TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX: RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: TX completion queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: RX buffer queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_TX: Config TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_RX: Config RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_TX: TX mailbox queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_RX: RX mailbox queue type
+ *
+ * Transmit and Receive queue types are valid in single as well as split queue
+ * models. With Split Queue model, 2 additional types are introduced which are
+ * TX_COMPLETION and RX_BUFFER. In split queue model, receive corresponds to
  * the queue where hardware posts completions.
  */
-#define VIRTCHNL2_QUEUE_TYPE_TX			0
-#define VIRTCHNL2_QUEUE_TYPE_RX			1
-#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	2
-#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		3
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		4
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		5
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX		6
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX		7
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	8
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	9
-#define VIRTCHNL2_QUEUE_TYPE_MBX_TX		10
-#define VIRTCHNL2_QUEUE_TYPE_MBX_RX		11
-
-/* VIRTCHNL2_ITR_IDX
- * Virtchannel interrupt throttling rate index
- */
-#define VIRTCHNL2_ITR_IDX_0			0
-#define VIRTCHNL2_ITR_IDX_1			1
-
-/* VIRTCHNL2_VECTOR_LIMITS
+enum virtchnl2_queue_type {
+	VIRTCHNL2_QUEUE_TYPE_TX			= 0,
+	VIRTCHNL2_QUEUE_TYPE_RX			= 1,
+	VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	= 2,
+	VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		= 3,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		= 4,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		= 5,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX		= 6,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX		= 7,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	= 8,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	= 9,
+	VIRTCHNL2_QUEUE_TYPE_MBX_TX		= 10,
+	VIRTCHNL2_QUEUE_TYPE_MBX_RX		= 11,
+};
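
[Reviewer aside, not part of the diff: per the kernel-doc above, TX_COMPLETION and RX_BUFFER (and their P2P counterparts) only exist in the split queue model. A sketch of how a consumer might classify them, using a local mirror of the enum values:]

```c
#include <stdbool.h>

/* Illustrative mirror of enum virtchnl2_queue_type from the patch */
enum queue_type {
	Q_TX = 0, Q_RX = 1, Q_TX_COMPLETION = 2, Q_RX_BUFFER = 3,
	Q_CONFIG_TX = 4, Q_CONFIG_RX = 5, Q_P2P_TX = 6, Q_P2P_RX = 7,
	Q_P2P_TX_COMPLETION = 8, Q_P2P_RX_BUFFER = 9,
	Q_MBX_TX = 10, Q_MBX_RX = 11,
};

/* Completion and buffer queues are meaningful only in split queue model */
static bool queue_type_split_only(enum queue_type t)
{
	return t == Q_TX_COMPLETION || t == Q_RX_BUFFER ||
	       t == Q_P2P_TX_COMPLETION || t == Q_P2P_RX_BUFFER;
}
```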
+
+/**
+ * enum virtchnl2_itr_idx - Interrupt throttling rate index
+ * @VIRTCHNL2_ITR_IDX_0: ITR index 0
+ * @VIRTCHNL2_ITR_IDX_1: ITR index 1
+ */
+enum virtchnl2_itr_idx {
+	VIRTCHNL2_ITR_IDX_0			= 0,
+	VIRTCHNL2_ITR_IDX_1			= 1,
+};
+
+/**
+ * VIRTCHNL2_VECTOR_LIMITS
  * Since PF/VF messages are limited by __le16 size, precalculate the maximum
  * possible values of nested elements in virtchnl structures that virtual
  * channel can possibly handle in a single message.
@@ -332,131 +411,150 @@
 		((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
 		sizeof(struct virtchnl2_queue_vector))
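
[Reviewer aside, not part of the diff: the VECTOR_LIMITS macros precalculate how many nested elements fit in a message whose length is bounded by a 16-bit field. The pattern can be sketched with hypothetical struct sizes (the real sizes come from the virtchnl2 structures, not these stand-ins):]

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-ins; sizes are assumptions chosen only to
 * demonstrate the precalculation pattern, not the real layouts. */
struct qv_maps_hdr {
	uint32_t vport_id;
	uint16_t num_qv_maps;
	uint8_t pad[10];	/* header totals 16 bytes */
};

struct qv_elem {
	uint8_t bytes[24];	/* one nested element, 24 bytes */
};

/* Max nested elements that fit in a message bounded by a __le16 length */
static size_t vector_limit(void)
{
	return ((size_t)UINT16_MAX - sizeof(struct qv_maps_hdr)) /
	       sizeof(struct qv_elem);
}
```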
 
-/* VIRTCHNL2_MAC_TYPE
- * VIRTCHNL2_MAC_ADDR_PRIMARY
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_PRIMARY for the
- * primary/device unicast MAC address filter for VIRTCHNL2_OP_ADD_MAC_ADDR and
- * VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the underlying control plane
- * function to accurately track the MAC address and for VM/function reset.
- *
- * VIRTCHNL2_MAC_ADDR_EXTRA
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_EXTRA for any extra
- * unicast and/or multicast filters that are being added/deleted via
- * VIRTCHNL2_OP_ADD_MAC_ADDR/VIRTCHNL2_OP_DEL_MAC_ADDR respectively.
+/**
+ * enum virtchnl2_mac_addr_type - MAC address types
+ * @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
+ *				primary/device unicast MAC address filter for
+ *				VIRTCHNL2_OP_ADD_MAC_ADDR and
+ *				VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
+ *				underlying control plane function to accurately
+ *				track the MAC address and for VM/function reset.
+ * @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
+ *			      unicast and/or multicast filters that are being
+ *			      added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
+ *			      VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
-#define VIRTCHNL2_MAC_ADDR_PRIMARY		1
-#define VIRTCHNL2_MAC_ADDR_EXTRA		2
+enum virtchnl2_mac_addr_type {
+	VIRTCHNL2_MAC_ADDR_PRIMARY		= 1,
+	VIRTCHNL2_MAC_ADDR_EXTRA		= 2,
+};
 
-/* VIRTCHNL2_PROMISC_FLAGS
- * Flags used for promiscuous mode
+/**
+ * enum virtchnl2_promisc_flags - Flags used for promiscuous mode
+ * @VIRTCHNL2_UNICAST_PROMISC: Unicast promiscuous mode
+ * @VIRTCHNL2_MULTICAST_PROMISC: Multicast promiscuous mode
  */
-#define VIRTCHNL2_UNICAST_PROMISC		BIT(0)
-#define VIRTCHNL2_MULTICAST_PROMISC		BIT(1)
+enum virtchnl2_promisc_flags {
+	VIRTCHNL2_UNICAST_PROMISC		= BIT(0),
+	VIRTCHNL2_MULTICAST_PROMISC		= BIT(1),
+};
 
-/* VIRTCHNL2_QUEUE_GROUP_TYPE
- * Type of queue groups
+/**
+ * enum virtchnl2_queue_group_type - Type of queue groups
+ * @VIRTCHNL2_QUEUE_GROUP_DATA: Data queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_MBX: Mailbox queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_CONFIG: Config queue group type
+ *
  * 0 till 0xFF is for general use
  */
-#define VIRTCHNL2_QUEUE_GROUP_DATA		1
-#define VIRTCHNL2_QUEUE_GROUP_MBX		2
-#define VIRTCHNL2_QUEUE_GROUP_CONFIG		3
+enum virtchnl2_queue_group_type {
+	VIRTCHNL2_QUEUE_GROUP_DATA		= 1,
+	VIRTCHNL2_QUEUE_GROUP_MBX		= 2,
+	VIRTCHNL2_QUEUE_GROUP_CONFIG		= 3,
+};
 
-/* VIRTCHNL2_PROTO_HDR_TYPE
- * Protocol header type within a packet segment. A segment consists of one or
+/* Protocol header type within a packet segment. A segment consists of one or
  * more protocol headers that make up a logical group of protocol headers. Each
  * logical group of protocol headers encapsulates or is encapsulated using/by
  * tunneling or encapsulation protocols for network virtualization.
  */
-/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ANY			0
-#define VIRTCHNL2_PROTO_HDR_PRE_MAC		1
-/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_MAC			2
-#define VIRTCHNL2_PROTO_HDR_POST_MAC		3
-#define VIRTCHNL2_PROTO_HDR_ETHERTYPE		4
-#define VIRTCHNL2_PROTO_HDR_VLAN		5
-#define VIRTCHNL2_PROTO_HDR_SVLAN		6
-#define VIRTCHNL2_PROTO_HDR_CVLAN		7
-#define VIRTCHNL2_PROTO_HDR_MPLS		8
-#define VIRTCHNL2_PROTO_HDR_UMPLS		9
-#define VIRTCHNL2_PROTO_HDR_MMPLS		10
-#define VIRTCHNL2_PROTO_HDR_PTP			11
-#define VIRTCHNL2_PROTO_HDR_CTRL		12
-#define VIRTCHNL2_PROTO_HDR_LLDP		13
-#define VIRTCHNL2_PROTO_HDR_ARP			14
-#define VIRTCHNL2_PROTO_HDR_ECP			15
-#define VIRTCHNL2_PROTO_HDR_EAPOL		16
-#define VIRTCHNL2_PROTO_HDR_PPPOD		17
-#define VIRTCHNL2_PROTO_HDR_PPPOE		18
-/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4		19
-/* IPv4 and IPv6 Fragment header types are only associated to
- * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
- * cannot be used independently.
- */
-/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG		20
-/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6		21
-/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG		22
-#define VIRTCHNL2_PROTO_HDR_IPV6_EH		23
-/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_UDP			24
-/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TCP			25
-/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_SCTP		26
-/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMP		27
-/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMPV6		28
-#define VIRTCHNL2_PROTO_HDR_IGMP		29
-#define VIRTCHNL2_PROTO_HDR_AH			30
-#define VIRTCHNL2_PROTO_HDR_ESP			31
-#define VIRTCHNL2_PROTO_HDR_IKE			32
-#define VIRTCHNL2_PROTO_HDR_NATT_KEEP		33
-/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_PAY			34
-#define VIRTCHNL2_PROTO_HDR_L2TPV2		35
-#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	36
-#define VIRTCHNL2_PROTO_HDR_L2TPV3		37
-#define VIRTCHNL2_PROTO_HDR_GTP			38
-#define VIRTCHNL2_PROTO_HDR_GTP_EH		39
-#define VIRTCHNL2_PROTO_HDR_GTPCV2		40
-#define VIRTCHNL2_PROTO_HDR_GTPC_TEID		41
-#define VIRTCHNL2_PROTO_HDR_GTPU		42
-#define VIRTCHNL2_PROTO_HDR_GTPU_UL		43
-#define VIRTCHNL2_PROTO_HDR_GTPU_DL		44
-#define VIRTCHNL2_PROTO_HDR_ECPRI		45
-#define VIRTCHNL2_PROTO_HDR_VRRP		46
-#define VIRTCHNL2_PROTO_HDR_OSPF		47
-/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TUN			48
-#define VIRTCHNL2_PROTO_HDR_GRE			49
-#define VIRTCHNL2_PROTO_HDR_NVGRE		50
-#define VIRTCHNL2_PROTO_HDR_VXLAN		51
-#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE		52
-#define VIRTCHNL2_PROTO_HDR_GENEVE		53
-#define VIRTCHNL2_PROTO_HDR_NSH			54
-#define VIRTCHNL2_PROTO_HDR_QUIC		55
-#define VIRTCHNL2_PROTO_HDR_PFCP		56
-#define VIRTCHNL2_PROTO_HDR_PFCP_NODE		57
-#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION	58
-#define VIRTCHNL2_PROTO_HDR_RTP			59
-#define VIRTCHNL2_PROTO_HDR_ROCE		60
-#define VIRTCHNL2_PROTO_HDR_ROCEV1		61
-#define VIRTCHNL2_PROTO_HDR_ROCEV2		62
-/* protocol ids up to 32767 are reserved for AVF use */
-/* 32768 - 65534 are used for user defined protocol ids */
-/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_NO_PROTO		65535
-
-#define VIRTCHNL2_VERSION_MAJOR_2        2
-#define VIRTCHNL2_VERSION_MINOR_0        0
-
-
-/* VIRTCHNL2_OP_VERSION
+enum virtchnl2_proto_hdr_type {
+	/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ANY			= 0,
+	VIRTCHNL2_PROTO_HDR_PRE_MAC		= 1,
+	/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_MAC			= 2,
+	VIRTCHNL2_PROTO_HDR_POST_MAC		= 3,
+	VIRTCHNL2_PROTO_HDR_ETHERTYPE		= 4,
+	VIRTCHNL2_PROTO_HDR_VLAN		= 5,
+	VIRTCHNL2_PROTO_HDR_SVLAN		= 6,
+	VIRTCHNL2_PROTO_HDR_CVLAN		= 7,
+	VIRTCHNL2_PROTO_HDR_MPLS		= 8,
+	VIRTCHNL2_PROTO_HDR_UMPLS		= 9,
+	VIRTCHNL2_PROTO_HDR_MMPLS		= 10,
+	VIRTCHNL2_PROTO_HDR_PTP			= 11,
+	VIRTCHNL2_PROTO_HDR_CTRL		= 12,
+	VIRTCHNL2_PROTO_HDR_LLDP		= 13,
+	VIRTCHNL2_PROTO_HDR_ARP			= 14,
+	VIRTCHNL2_PROTO_HDR_ECP			= 15,
+	VIRTCHNL2_PROTO_HDR_EAPOL		= 16,
+	VIRTCHNL2_PROTO_HDR_PPPOD		= 17,
+	VIRTCHNL2_PROTO_HDR_PPPOE		= 18,
+	/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4		= 19,
+	/* IPv4 and IPv6 Fragment header types are only associated to
+	 * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+	 * cannot be used independently.
+	 */
+	/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4_FRAG		= 20,
+	/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6		= 21,
+	/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6_FRAG		= 22,
+	VIRTCHNL2_PROTO_HDR_IPV6_EH		= 23,
+	/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_UDP			= 24,
+	/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TCP			= 25,
+	/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_SCTP		= 26,
+	/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMP		= 27,
+	/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMPV6		= 28,
+	VIRTCHNL2_PROTO_HDR_IGMP		= 29,
+	VIRTCHNL2_PROTO_HDR_AH			= 30,
+	VIRTCHNL2_PROTO_HDR_ESP			= 31,
+	VIRTCHNL2_PROTO_HDR_IKE			= 32,
+	VIRTCHNL2_PROTO_HDR_NATT_KEEP		= 33,
+	/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_PAY			= 34,
+	VIRTCHNL2_PROTO_HDR_L2TPV2		= 35,
+	VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	= 36,
+	VIRTCHNL2_PROTO_HDR_L2TPV3		= 37,
+	VIRTCHNL2_PROTO_HDR_GTP			= 38,
+	VIRTCHNL2_PROTO_HDR_GTP_EH		= 39,
+	VIRTCHNL2_PROTO_HDR_GTPCV2		= 40,
+	VIRTCHNL2_PROTO_HDR_GTPC_TEID		= 41,
+	VIRTCHNL2_PROTO_HDR_GTPU		= 42,
+	VIRTCHNL2_PROTO_HDR_GTPU_UL		= 43,
+	VIRTCHNL2_PROTO_HDR_GTPU_DL		= 44,
+	VIRTCHNL2_PROTO_HDR_ECPRI		= 45,
+	VIRTCHNL2_PROTO_HDR_VRRP		= 46,
+	VIRTCHNL2_PROTO_HDR_OSPF		= 47,
+	/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TUN			= 48,
+	VIRTCHNL2_PROTO_HDR_GRE			= 49,
+	VIRTCHNL2_PROTO_HDR_NVGRE		= 50,
+	VIRTCHNL2_PROTO_HDR_VXLAN		= 51,
+	VIRTCHNL2_PROTO_HDR_VXLAN_GPE		= 52,
+	VIRTCHNL2_PROTO_HDR_GENEVE		= 53,
+	VIRTCHNL2_PROTO_HDR_NSH			= 54,
+	VIRTCHNL2_PROTO_HDR_QUIC		= 55,
+	VIRTCHNL2_PROTO_HDR_PFCP		= 56,
+	VIRTCHNL2_PROTO_HDR_PFCP_NODE		= 57,
+	VIRTCHNL2_PROTO_HDR_PFCP_SESSION	= 58,
+	VIRTCHNL2_PROTO_HDR_RTP			= 59,
+	VIRTCHNL2_PROTO_HDR_ROCE		= 60,
+	VIRTCHNL2_PROTO_HDR_ROCEV1		= 61,
+	VIRTCHNL2_PROTO_HDR_ROCEV2		= 62,
+	/* Protocol ids up to 32767 are reserved */
+	/* 32768 - 65534 are used for user defined protocol ids */
+	/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_NO_PROTO		= 65535,
+};
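
[Reviewer aside, not part of the diff: the protocol id space above is partitioned into a reserved range (0-32767), a user-defined range (32768-65534), and the mandatory NO_PROTO sentinel (65535). A minimal classifier illustrating that partitioning:]

```c
/* Illustrative classifier over the virtchnl2 protocol id ranges */
enum proto_id_class {
	PROTO_RESERVED,		/* 0 - 32767, reserved ids */
	PROTO_USER_DEFINED,	/* 32768 - 65534, user defined ids */
	PROTO_NO_PROTO,		/* 65535, mandatory NO_PROTO id */
};

static enum proto_id_class classify_proto_id(unsigned int id)
{
	if (id == 65535)
		return PROTO_NO_PROTO;
	if (id >= 32768)
		return PROTO_USER_DEFINED;
	return PROTO_RESERVED;
}
```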
+
+enum virtchl2_version {
+	VIRTCHNL2_VERSION_MINOR_0		= 0,
+	VIRTCHNL2_VERSION_MAJOR_2		= 2,
+};
+
+/**
+ * struct virtchnl2_version_info - Version information
+ * @major: Major version
+ * @minor: Minor version
+ *
  * PF/VF posts its version number to the CP. CP responds with its version number
  * in the same format, along with a return code.
  * If there is a major version mismatch, then the PF/VF cannot operate.
@@ -466,6 +564,8 @@
  * This version opcode MUST always be specified as == 1, regardless of other
  * changes in the API. The CP must always respond to this message without
  * error regardless of version mismatch.
+ *
+ * Associated with VIRTCHNL2_OP_VERSION.
  */
 struct virtchnl2_version_info {
 	__le32 major;
@@ -474,7 +574,39 @@ struct virtchnl2_version_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
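
[Reviewer aside, not part of the diff: the negotiation rule documented above — a major mismatch is fatal, while the lower minor wins — can be sketched as two small helpers (names are illustrative, not driver API):]

```c
#include <stdint.h>

/* A major version mismatch means the PF/VF cannot operate */
static int major_compatible(uint32_t local_major, uint32_t peer_major)
{
	return local_major == peer_major;
}

/* On a minor mismatch, both sides fall back to the lower minor */
static uint32_t negotiated_minor(uint32_t local_minor, uint32_t peer_minor)
{
	return local_minor < peer_minor ? local_minor : peer_minor;
}
```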
 
-/* VIRTCHNL2_OP_GET_CAPS
+/**
+ * struct virtchnl2_get_capabilities - Capabilities info
+ * @csum_caps: See enum virtchnl2_cap_txrx_csum
+ * @seg_caps: See enum virtchnl2_cap_seg
+ * @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at
+ * @rsc_caps: See enum virtchnl2_cap_rsc
+ * @rss_caps: See enum virtchnl2_cap_rss
+ * @other_caps: See enum virtchnl2_cap_other
+ * @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
+ *		     provided by CP.
+ * @mailbox_vector_id: Mailbox vector id
+ * @num_allocated_vectors: Maximum number of allocated vectors for the device
+ * @max_rx_q: Maximum number of supported Rx queues
+ * @max_tx_q: Maximum number of supported Tx queues
+ * @max_rx_bufq: Maximum number of supported buffer queues
+ * @max_tx_complq: Maximum number of supported completion queues
+ * @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
+ *		   responds with the maximum VFs granted.
+ * @max_vports: Maximum number of vports that can be supported
+ * @default_num_vports: Default number of vports driver should allocate on load
+ * @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes
+ * @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
+ *			    sent per transmit packet without needing to be
+ *			    linearized.
+ * @reserved: Reserved field
+ * @max_adis: Max number of ADIs
+ * @device_type: See enum virtchnl2_device_type
+ * @min_sso_packet_len: Min packet length supported by device for single
+ *			segment offload
+ * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
+ *			 an LSO
+ * @pad1: Padding for future extensions
+ *
  * Dataplane driver sends this message to CP to negotiate capabilities and
  * provides a virtchnl2_get_capabilities structure with its desired
  * capabilities, max_sriov_vfs and num_allocated_vectors.
@@ -492,60 +624,30 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
  * mailbox_vector_id and the number of itr index registers in itr_idx_map.
  * It also responds with default number of vports that the dataplane driver
  * should comeup with in default_num_vports and maximum number of vports that
- * can be supported in max_vports
+ * can be supported in max_vports.
+ *
+ * Associated with VIRTCHNL2_OP_GET_CAPS.
  */
 struct virtchnl2_get_capabilities {
-	/* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
 	__le32 csum_caps;
-
-	/* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
 	__le32 seg_caps;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 hsplit_caps;
-
-	/* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
 	__le32 rsc_caps;
-
-	/* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions  */
 	__le64 rss_caps;
-
-
-	/* see VIRTCHNL2_OTHER_CAPS definitions  */
 	__le64 other_caps;
-
-	/* DYN_CTL register offset and vector id for mailbox provided by CP */
 	__le32 mailbox_dyn_ctl;
 	__le16 mailbox_vector_id;
-	/* Maximum number of allocated vectors for the device */
 	__le16 num_allocated_vectors;
-
-	/* Maximum number of queues that can be supported */
 	__le16 max_rx_q;
 	__le16 max_tx_q;
 	__le16 max_rx_bufq;
 	__le16 max_tx_complq;
-
-	/* The PF sends the maximum VFs it is requesting. The CP responds with
-	 * the maximum VFs granted.
-	 */
 	__le16 max_sriov_vfs;
-
-	/* maximum number of vports that can be supported */
 	__le16 max_vports;
-	/* default number of vports driver should allocate on load */
 	__le16 default_num_vports;
-
-	/* Max header length hardware can parse/checksum, in bytes */
 	__le16 max_tx_hdr_size;
-
-	/* Max number of scatter gather buffers that can be sent per transmit
-	 * packet without needing to be linearized
-	 */
 	u8 max_sg_bufs_per_tx_pkt;
-
-	u8 reserved1;
-	/* upper bound of number of ADIs supported */
+	u8 reserved;
 	__le16 max_adis;
 
 	/* version of Control Plane that is running */
@@ -553,10 +655,7 @@ struct virtchnl2_get_capabilities {
 	__le16 oem_cp_ver_minor;
 	/* see VIRTCHNL2_DEVICE_TYPE definitions */
 	__le32 device_type;
-
-	/* min packet length supported by device for single segment offload */
 	u8 min_sso_packet_len;
-	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
 	u8 pad1[10];
@@ -564,14 +663,21 @@ struct virtchnl2_get_capabilities {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
 
+/**
+ * struct virtchnl2_queue_reg_chunk - Single queue chunk
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Start Queue ID
+ * @num_queues: Number of queues in the chunk
+ * @pad: Padding
+ * @qtail_reg_start: Queue tail register offset
+ * @qtail_reg_spacing: Queue tail register spacing
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_reg_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
 	__le32 pad;
-
-	/* Queue tail register offset and spacing provided by CP */
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
@@ -580,7 +686,13 @@ struct virtchnl2_queue_reg_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
+ *				       queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info
+ */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -589,77 +701,91 @@ struct virtchnl2_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
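
[Reviewer aside, not part of the diff: messages carrying a variable number of chunks must be sized as header plus n elements. A sketch using a C99 flexible array member and stand-in sizes matching the 32-byte chunk above (the real struct embeds one chunk, hence the 40-byte check; this mirror is illustrative only):]

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative mirrors: a 32-byte chunk, and an 8-byte chunks header
 * followed by a flexible array of chunks. */
struct chunk {
	uint8_t bytes[32];
};

struct chunks_hdr {
	uint16_t num_chunks;
	uint8_t pad[6];
	struct chunk chunks[];	/* flexible array member */
};

/* Buffer size needed for a message carrying n chunks */
static size_t chunks_msg_size(uint16_t n)
{
	return sizeof(struct chunks_hdr) + (size_t)n * sizeof(struct chunk);
}
```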
 
-/* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
-#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
+/**
+ * enum virtchnl2_vport_flags - Vport flags
+ * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ */
+enum virtchnl2_vport_flags {
+	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+};
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-/* VIRTCHNL2_OP_CREATE_VPORT
- * PF sends this message to CP to create a vport by filling in required
+
+/**
+ * struct virtchnl2_create_vport - Create vport config info
+ * @vport_type: See enum virtchnl2_vport_type
+ * @txq_model: See virtchnl2_queue_model
+ * @rxq_model: See virtchnl2_queue_model
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Number of Tx completion queues; valid only if txq_model is
+ *		   split queue
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Number of Rx buffer queues; valid only if rxq_model is split
+ *		 queue
+ * @default_rx_q: Relative receive queue index to be used as default
+ * @vport_index: Used to align PF and CP in case of default multiple vports,
+ *		 it is filled by the PF and CP returns the same value, to
+ *		 enable the driver to support multiple asynchronous parallel
+ *		 CREATE_VPORT requests and associate a response to a specific
+ *		 request.
+ * @max_mtu: Max MTU. CP populates this field on response
+ * @vport_id: Vport id. CP populates this field on response
+ * @default_mac_addr: Default MAC address
+ * @vport_flags: See enum virtchnl2_vport_flags
+ * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
+ * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
+ * @reserved: Reserved bytes, cannot be used
+ * @rss_algorithm: RSS algorithm
+ * @rss_key_size: RSS key size
+ * @rss_lut_size: RSS LUT size
+ * @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to CP to create a vport by filling in required
  * fields of virtchnl2_create_vport structure.
  * CP responds with the updated virtchnl2_create_vport structure containing the
  * necessary fields followed by chunks which in turn will have an array of
  * num_chunks entries of virtchnl2_queue_chunk structures.
  */
 struct virtchnl2_create_vport {
-	/* PF/VF populates the following fields on request */
-	/* see VIRTCHNL2_VPORT_TYPE definitions */
 	__le16 vport_type;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 txq_model;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 rxq_model;
 	__le16 num_tx_q;
-	/* valid only if txq_model is split queue */
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
-	/* valid only if rxq_model is split queue */
 	__le16 num_rx_bufq;
-	/* relative receive queue index to be used as default */
 	__le16 default_rx_q;
-	/* used to align PF and CP in case of default multiple vports, it is
-	 * filled by the PF and CP returns the same value, to enable the driver
-	 * to support multiple asynchronous parallel CREATE_VPORT requests and
-	 * associate a response to a specific request
-	 */
 	__le16 vport_index;
-
-	/* CP populates the following fields on response */
 	__le16 max_mtu;
 	__le32 vport_id;
 	u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_VPORT_FLAGS definitions */
 	__le16 vport_flags;
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 rx_desc_ids;
-	/* see VIRTCHNL2_TX_DESC_IDS definitions */
 	__le64 tx_desc_ids;
-
-	u8 reserved1[72];
-
-	/* see VIRTCHNL2_RSS_ALGORITHM definitions */
+	u8 reserved[72];
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
-
-	u8 pad2[20];
+	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
 
-/* VIRTCHNL2_OP_DESTROY_VPORT
- * VIRTCHNL2_OP_ENABLE_VPORT
- * VIRTCHNL2_OP_DISABLE_VPORT
- * PF sends this message to CP to destroy, enable or disable a vport by filling
- * in the vport_id in virtchnl2_vport structure.
+/**
+ * struct virtchnl2_vport - Vport identifier information
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends this message to CP to destroy, enable or disable a vport by
+ * filling in the vport_id in virtchnl2_vport structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
+ * VIRTCHNL2_OP_DISABLE_VPORT.
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
@@ -668,42 +794,43 @@ struct virtchnl2_vport {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
 
-/* Transmit queue config info */
+/**
+ * struct virtchnl2_txq_info - Transmit queue config info
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue ID
+ * @relative_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue. Used in many to one mapping of transmit queues to
+ *		       completion queue.
+ * @model: See enum virtchnl2_queue_model
+ * @sched_mode: See enum virtchnl2_txq_sched_mode
+ * @qflags: TX queue feature flags; see enum virtchnl2_txq_flags
+ * @ring_len: Ring length
+ * @tx_compl_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue.
+ * @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX
+ * @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
+ *		      messages for the respective CONFIG_TX queue.
+ * @pad: Padding
+ * @egress_pasid: Egress PASID info
+ * @egress_hdr_pasid: Egress header PASID
+ * @egress_buf_pasid: Egress buffer PASID
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_txq_info {
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
-
 	__le32 queue_id;
-	/* valid only if queue model is split and type is transmit queue. Used
-	 * in many to one mapping of transmit queues to completion queue
-	 */
 	__le16 relative_queue_id;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 model;
-
-	/* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
 	__le16 sched_mode;
-
-	/* see VIRTCHNL2_TXQ_FLAGS definitions */
 	__le16 qflags;
 	__le16 ring_len;
-
-	/* valid only if queue model is split and type is transmit queue */
 	__le16 tx_compl_queue_id;
-	/* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
-	/* see VIRTCHNL2_PEER_TYPE definitions */
 	__le16 peer_type;
-	/* valid only if queue type is CONFIG_TX and used to deliver messages
-	 * for the respective CONFIG_TX queue
-	 */
 	__le16 peer_rx_queue_id;
 
 	u8 pad[4];
-
-	/* Egress pasid is used for SIOV use case */
 	__le32 egress_pasid;
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
@@ -713,12 +840,20 @@ struct virtchnl2_txq_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
 
-/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
- * PF sends this message to set up parameters for one or more transmit queues.
- * This message contains an array of num_qinfo instances of virtchnl2_txq_info
- * structures. CP configures requested queues and returns a status code. If
- * num_qinfo specified is greater than the number of queues associated with the
- * vport, an error is returned and no queues are configured.
+/**
+ * struct virtchnl2_config_tx_queues - TX queue config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of virtchnl2_txq_info structs
+ * @pad: Padding for future extensions
+ * @qinfo: Tx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more transmit
+ * queues. This message contains an array of num_qinfo instances of
+ * virtchnl2_txq_info structures. CP configures requested queues and returns
+ * a status code. If num_qinfo specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
  */
 struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
@@ -729,47 +864,55 @@ struct virtchnl2_config_tx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
 
-/* Receive queue config info */
+/**
+ * struct virtchnl2_rxq_info - Receive queue config info
+ * @desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions
+ * @dma_ring_addr: DMA address of the receive descriptor ring
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue id
+ * @model: See enum virtchnl2_queue_model
+ * @hdr_buffer_size: Header buffer size
+ * @data_buffer_size: Data buffer size
+ * @max_pkt_size: Max packet size
+ * @ring_len: Ring length
+ * @buffer_notif_stride: Buffer notification stride in units of 32 descriptors.
+ *			 This field must be a power of 2.
+ * @pad: Padding
+ * @dma_head_wb_addr: Applicable only for receive buffer queues
+ * @qflags: Applicable only for receive completion queues.
+ *	    See enum virtchnl2_rxq_flags.
+ * @rx_buffer_low_watermark: Rx buffer low watermark
+ * @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @bufq2_ena: Indicates if there is a second buffer queue; rx_bufq2_id is
+ *	       valid only if this field is set.
+ * @pad1: Padding
+ * @ingress_pasid: Ingress PASID
+ * @ingress_hdr_pasid: Ingress header PASID
+ * @ingress_buf_pasid: Ingress buffer PASID
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_rxq_info {
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 desc_ids;
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 queue_id;
-
-	/* see QUEUE_MODEL definitions */
 	__le16 model;
-
 	__le16 hdr_buffer_size;
 	__le32 data_buffer_size;
 	__le32 max_pkt_size;
-
 	__le16 ring_len;
 	u8 buffer_notif_stride;
 	u8 pad;
-
-	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
-
-	/* Applicable only for receive completion queues */
-	/* see VIRTCHNL2_RXQ_FLAGS definitions */
 	__le16 qflags;
-
 	__le16 rx_buffer_low_watermark;
-
-	/* valid only in split queue model */
 	__le16 rx_bufq1_id;
-	/* valid only in split queue model */
 	__le16 rx_bufq2_id;
-	/* it indicates if there is a second buffer, rx_bufq2_id is valid only
-	 * if this field is set
-	 */
 	u8 bufq2_ena;
 	u8 pad1[3];
-
-	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
@@ -778,12 +921,20 @@ struct virtchnl2_rxq_info {
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
-/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
- * PF sends this message to set up parameters for one or more receive queues.
+/**
+ * struct virtchnl2_config_rx_queues - Rx queues config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of instances
+ * @pad: Padding for future extensions
+ * @qinfo: Rx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more receive queues.
  * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
  * structures. CP configures requested queues and returns a status code.
  * If the number of queues specified is greater than the number of queues
  * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
  */
 struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
@@ -794,12 +945,23 @@ struct virtchnl2_config_rx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
 
-/* VIRTCHNL2_OP_ADD_QUEUES
- * PF sends this message to request additional transmit/receive queues beyond
+/**
+ * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
+ * @vport_id: Vport id
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Number of Tx completion queues
+ * @num_rx_q:  Number of Rx queues
+ * @num_rx_bufq:  Number of Rx buffer queues
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to request additional transmit/receive queues beyond
  * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
  * structure is used to specify the number of each type of queues.
  * CP responds with the same structure with the actual number of queues assigned
  * followed by num_chunks of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUES.
  */
 struct virtchnl2_add_queues {
 	__le32 vport_id;
@@ -815,65 +977,81 @@ struct virtchnl2_add_queues {
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
 
 /* Queue Groups Extension */
-
+/**
+ * struct virtchnl2_rx_queue_group_info - RX queue group info
+ * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
+ *		  allocated by CreateVport command. New size will be returned
+ *		  if allocation succeeded, otherwise original rss_size from
+ *		  CreateVport will be returned.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_rx_queue_group_info {
-	/* IN/OUT, user can ask to update rss_lut size originally allocated
-	 * by CreateVport command. New size will be returned if allocation
-	 * succeeded, otherwise original rss_size from CreateVport will
-	 * be returned.
-	 */
 	__le16 rss_lut_size;
-	/* Future extension purpose */
 	u8 pad[6];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
 
+/**
+ * struct virtchnl2_tx_queue_group_info - TX queue group info
+ * @tx_tc: TX TC queue group will be connected to
+ * @priority: Each group can have its own priority, value 0-7, while each group
+ *	      with a unique priority is strict priority. A set of queue groups
+ *	      configured with the same priority is assumed to be part of a WFQ
+ *	      arbitration group and is expected to be assigned a weight.
+ * @is_sp: Determines if queue group is expected to be Strict Priority according
+ *	   to its priority.
+ * @pad: Padding
+ * @pir_weight: Peak Info Rate Weight in case Queue Group is part of WFQ
+ *		arbitration set.
+ *		The weights of the groups are independent of each other.
+ *		Possible values: 1-200
+ * @cir_pad: Future extension purpose for CIR only
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_tx_queue_group_info { /* IN */
-	/* TX TC queue group will be connected to */
 	u8 tx_tc;
-	/* Each group can have its own priority, value 0-7, while each group
-	 * with unique priority is strict priority.
-	 * It can be single set of queue groups which configured with
-	 * same priority, then they are assumed part of WFQ arbitration
-	 * group and are expected to be assigned with weight.
-	 */
 	u8 priority;
-	/* Determines if queue group is expected to be Strict Priority
-	 * according to its priority
-	 */
 	u8 is_sp;
 	u8 pad;
-
-	/* Peak Info Rate Weight in case Queue Group is part of WFQ
-	 * arbitration set.
-	 * The weights of the groups are independent of each other.
-	 * Possible values: 1-200
-	 */
 	__le16 pir_weight;
-	/* Future extension purpose for CIR only */
 	u8 cir_pad[2];
-	/* Future extension purpose*/
 	u8 pad2[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_tx_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_group_id - Queue group ID
+ * @queue_group_id: Queue group ID - Depends on its type
+ *		    Data: Is an ID which is relative to Vport
+ *		    Config & Mailbox: Is an ID which is relative to func
+ *		    This ID is used in future calls, i.e. delete.
+ *		    Requested by host and assigned by Control plane.
+ * @queue_group_type: Functional type: See enum virtchnl2_queue_group_type
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_group_id {
-	/* Queue group ID - depended on it's type
-	 * Data: is an ID which is relative to Vport
-	 * Config & Mailbox: is an ID which is relative to func.
-	 * This ID is use in future calls, i.e. delete.
-	 * Requested by host and assigned by Control plane.
-	 */
 	__le16 queue_group_id;
-	/* Functional type: see VIRTCHNL2_QUEUE_GROUP_TYPE definitions */
 	__le16 queue_group_type;
 	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 
+/**
+ * struct virtchnl2_queue_group_info - Queue group info
+ * @qg_id: Queue group ID
+ * @num_tx_q: Number of TX queues
+ * @num_tx_complq: Number of completion queues
+ * @num_rx_q: Number of RX queues
+ * @num_rx_bufq: Number of RX buffer queues
+ * @tx_q_grp_info: TX queue group info
+ * @rx_q_grp_info: RX queue group info
+ * @pad: Padding for future extensions
+ * @chunks: Queue register chunks
+ */
 struct virtchnl2_queue_group_info {
 	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
@@ -885,13 +1063,18 @@ struct virtchnl2_queue_group_info {
 
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
-	/* Future extension purpose */
 	u8 pad[40];
 	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_groups - Queue groups list
+ * @num_queue_groups: Total number of queue groups
+ * @pad: Padding for future extensions
+ * @groups: Array of queue group info
+ */
 struct virtchnl2_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[6];
@@ -900,78 +1083,107 @@ struct virtchnl2_queue_groups {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
 
-/* VIRTCHNL2_OP_ADD_QUEUE_GROUPS
+/**
+ * struct virtchnl2_add_queue_groups - Add queue groups
+ * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @pad: Padding for future extensions
+ * @qg_info: IN/OUT. List of all the queue groups
+ *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
  * groups and queues assigned followed by num_queue_groups and num_chunks of
  * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
-	/* IN, vport_id to add queue group to, same as allocated by CreateVport.
-	 * NA for mailbox and other types not assigned to vport
-	 */
 	__le32 vport_id;
 	u8 pad[4];
-	/* IN/OUT */
 	struct virtchnl2_queue_groups qg_info;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
-/* VIRTCHNL2_OP_DEL_QUEUE_GROUPS
+/**
+ * struct virtchnl2_delete_queue_groups - Delete queue groups
+ * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ *	      CreateVport.
+ * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @pad: Padding
+ * @qg_ids: IN, IDs & types of Queue Groups to delete
+ *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
  * to be deleted. CP performs requested action and returns status and update
  * num_queue_groups with number of successfully deleted queue groups.
+ *
+ * Associated with VIRTCHNL2_OP_DEL_QUEUE_GROUPS.
  */
 struct virtchnl2_delete_queue_groups {
-	/* IN, vport_id to delete queue group from, same as
-	 * allocated by CreateVport.
-	 */
 	__le32 vport_id;
-	/* IN/OUT, Defines number of groups provided below */
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	/* IN, IDs & types of Queue Groups to delete */
 	struct virtchnl2_queue_group_id qg_ids[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
 
-/* Structure to specify a chunk of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
+ *				   interrupt vectors.
+ * @start_vector_id: Start vector id
+ * @start_evv_id: Start EVV id
+ * @num_vectors: Number of vectors
+ * @pad: Padding
+ * @dynctl_reg_start: DYN_CTL register offset
+ * @dynctl_reg_spacing: Register spacing between DYN_CTL registers of 2
+ *			consecutive vectors.
+ * @itrn_reg_start: ITRN register offset
+ * @itrn_reg_spacing: Register spacing between ITRN registers of 2
+ *		      consecutive vectors.
+ * @itrn_index_spacing: Register spacing between itrn registers of the same
+ *			vector where n=0..2.
+ * @pad1: Padding for future extensions
+ *
+ * Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
 struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
 	__le16 pad;
 
-	/* Register offsets and spacing provided by CP.
-	 * dynamic control registers are used for enabling/disabling/re-enabling
-	 * interrupts and updating interrupt rates in the hotpath. Any changes
-	 * to interrupt rates in the dynamic control registers will be reflected
-	 * in the interrupt throttling rate registers.
-	 * itrn registers are used to update interrupt rates for specific
-	 * interrupt indices without modifying the state of the interrupt.
-	 */
 	__le32 dynctl_reg_start;
-	/* register spacing between dynctl registers of 2 consecutive vectors */
 	__le32 dynctl_reg_spacing;
 
 	__le32 itrn_reg_start;
-	/* register spacing between itrn registers of 2 consecutive vectors */
 	__le32 itrn_reg_spacing;
-	/* register spacing between itrn registers of the same vector
-	 * where n=0..2
-	 */
 	__le32 itrn_index_spacing;
 	u8 pad1[4];
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
-/* Structure to specify several chunks of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunks - Chunks of contiguous interrupt vectors
+ * @num_vchunks: number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends virtchnl2_vector_chunks struct to specify the vectors it is
+ * giving away. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
+ */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -981,12 +1193,19 @@ struct virtchnl2_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
 
-/* VIRTCHNL2_OP_ALLOC_VECTORS
- * PF sends this message to request additional interrupt vectors beyond the
+/**
+ * struct virtchnl2_alloc_vectors - Vector allocation info
+ * @num_vectors: Number of vectors
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends this message to request additional interrupt vectors beyond the
  * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
  * structure is used to specify the number of vectors requested. CP responds
  * with the same structure with the actual number of vectors assigned followed
  * by virtchnl2_vector_chunks structure identifying the vector ids.
+ *
+ * Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
@@ -997,46 +1216,46 @@ struct virtchnl2_alloc_vectors {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
 
-/* VIRTCHNL2_OP_DEALLOC_VECTORS
- * PF sends this message to release the vectors.
- * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
- * away. CP performs requested action and returns status.
- */
-
-/* VIRTCHNL2_OP_GET_RSS_LUT
- * VIRTCHNL2_OP_SET_RSS_LUT
- * PF sends this message to get or set RSS lookup table. Only supported if
+/**
+ * struct virtchnl2_rss_lut - RSS LUT info
+ * @vport_id: Vport id
+ * @lut_entries_start: Start of LUT entries
+ * @lut_entries: Number of LUT entries
+ * @pad: Padding
+ * @lut: RSS lookup table
+ *
+ * PF/VF sends this message to get or set RSS lookup table. Only supported if
  * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_lut structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
  */
 struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	/* RSS lookup table */
 	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * PF sends this message to get RSS key. Only supported if both PF and CP
- * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
- * the virtchnl2_rss_key structure
- */
-
-/* VIRTCHNL2_OP_GET_RSS_HASH
- * VIRTCHNL2_OP_SET_RSS_HASH
- * PF sends these messages to get and set the hash filter enable bits for RSS.
- * By default, the CP sets these to all possible traffic types that the
+/**
+ * struct virtchnl2_rss_hash - RSS hash info
+ * @ptype_groups: Packet type groups bitmap
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends these messages to get and set the hash filter enable bits for
+ * RSS. By default, the CP sets these to all possible traffic types that the
  * hardware supports. The PF can query this value if it wants to change the
  * traffic types that are hashed by the hardware.
  * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
  * during configuration negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH.
  */
 struct virtchnl2_rss_hash {
-	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
 	u8 pad[4];
@@ -1044,12 +1263,18 @@ struct virtchnl2_rss_hash {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
 
-/* VIRTCHNL2_OP_SET_SRIOV_VFS
+/**
+ * struct virtchnl2_sriov_vfs_info - VFs info
+ * @num_vfs: Number of VFs
+ * @pad: Padding for future extensions
+ *
  * This message is used to set number of SRIOV VFs to be created. The actual
  * allocation of resources for the VFs in terms of vport, queues and interrupts
- * is done by CP. When this call completes, the APF driver calls
+ * is done by CP. When this call completes, the IDPF driver calls
  * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
  * The number of VFs set to 0 will destroy all the VFs of this function.
+ *
+ * Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
  */
 
 struct virtchnl2_sriov_vfs_info {
@@ -1059,8 +1284,14 @@ struct virtchnl2_sriov_vfs_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 
-/* structure to specify single chunk of queue */
-/* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_queue_reg_chunks - Specify several chunks of
+ *						contiguous queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info. 'chunks' is fixed size (not flexible) and
+ *	    will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1069,8 +1300,14 @@ struct virtchnl2_non_flex_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 
-/* structure to specify single chunk of interrupt vector */
-/* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_vector_chunks - Chunks of contiguous interrupt
+ *					     vectors.
+ * @num_vchunks: Number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info. 'vchunks' is fixed size
+ *	     (not flexible) and will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -1079,40 +1316,49 @@ struct virtchnl2_non_flex_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_non_flex_vector_chunks);
 
-/* VIRTCHNL2_OP_NON_FLEX_CREATE_ADI
+/**
+ * struct virtchnl2_non_flex_create_adi - Create ADI
+ * @pasid: PF sends PASID to CP
+ * @mbx_id: Set to 1 by the PF when requesting the CP to provide a HW mailbox
+ *	    id, otherwise set to 0.
+ * @mbx_vec_id: PF sends mailbox vector id to CP
+ * @adi_index: PF populates this ADI index
+ * @adi_id: CP populates ADI id
+ * @pad: Padding
+ * @chunks: CP populates queue chunks
+ * @vchunks: PF sends vector chunks to CP
+ *
  * PF sends this message to CP to create ADI by filling in required
  * fields of virtchnl2_non_flex_create_adi structure.
- * CP responds with the updated virtchnl2_non_flex_create_adi structure containing
- * the necessary fields followed by chunks which in turn will have an array of
- * num_chunks entries of virtchnl2_queue_chunk structures.
+ * CP responds with the updated virtchnl2_non_flex_create_adi structure
+ * containing the necessary fields followed by chunks which in turn will have
+ * an array of num_chunks entries of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_CREATE_ADI.
  */
 struct virtchnl2_non_flex_create_adi {
-	/* PF sends PASID to CP */
 	__le32 pasid;
-	/*
-	 * mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
-	 * id else it is set to 0 by PF
-	 */
 	__le16 mbx_id;
-	/* PF sends mailbox vector id to CP */
 	__le16 mbx_vec_id;
-	/* PF populates this ADI index */
 	__le16 adi_index;
-	/* CP populates ADI id */
 	__le16 adi_id;
 	u8 pad[68];
-	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
-	/* PF sends vector chunks to CP */
 	struct virtchnl2_non_flex_vector_chunks vchunks;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
 
-/* VIRTCHNL2_OP_DESTROY_ADI
+/**
+ * struct virtchnl2_non_flex_destroy_adi - Destroy ADI
+ * @adi_id: ADI id to destroy
+ * @pad: Padding
+ *
  * PF sends this message to CP to destroy ADI by filling
  * in the adi_id in virtchnl2_destropy_adi structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI.
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
@@ -1121,7 +1367,17 @@ struct virtchnl2_non_flex_destroy_adi {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 
-/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+/**
+ * struct virtchnl2_ptype - Packet type info
+ * @ptype_id_10: 10-bit packet type
+ * @ptype_id_8: 8-bit packet type
+ * @proto_id_count: Number of protocol ids the packet supports, maximum of 32
+ *		    protocol ids are supported.
+ * @pad: Padding
+ * @proto_id: proto_id_count decides the allocation of protocol id array.
+ *	      See enum virtchnl2_proto_hdr_type.
+ *
+ * Based on the descriptor type the PF supports, CP fills ptype_id_10 or
  * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
  * is set to 0xFFFF, PF should consider this ptype as dummy one and it is the
  * last ptype.
@@ -1129,32 +1385,42 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 struct virtchnl2_ptype {
 	__le16 ptype_id_10;
 	u8 ptype_id_8;
-	/* number of protocol ids the packet supports, maximum of 32
-	 * protocol ids are supported
-	 */
 	u8 proto_id_count;
 	__le16 pad;
-	/* proto_id_count decides the allocation of protocol id array */
-	/* see VIRTCHNL2_PROTO_HDR_TYPE */
 	__le16 proto_id[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
 
-/* VIRTCHNL2_OP_GET_PTYPE_INFO
- * PF sends this message to CP to get all supported packet types. It does by
- * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
- * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
- * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
- * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
- * doesn't fit into one mailbox buffer, CP splits ptype info into multiple
- * messages, where each message will have the start ptype id, number of ptypes
- * sent in that message and the ptype array itself. When CP is done updating
- * all ptype information it extracted from the package (number of ptypes
- * extracted might be less than what PF expects), it will append a dummy ptype
- * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
- * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
- * messages.
+/**
+ * struct virtchnl2_get_ptype_info - Packet type info
+ * @start_ptype_id: Starting ptype ID
+ * @num_ptypes: Number of packet types from start_ptype_id
+ * @pad: Padding for future extensions
+ * @ptype: Array of packet type info
+ *
+ * The total number of supported packet types is based on the descriptor type.
+ * For the flex descriptor, it is 1024 (10-bit ptype), and for the base
+ * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
+ * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
+ * 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
+ * are added at the end of the 'virtchnl2_get_ptype_info' message (Note: There
+ * is no specific field for the ptypes but are added at the end of the
+ * ptype info message. PF/VF is expected to extract the ptypes accordingly.
+ * Reason for doing this is because compiler doesn't allow nested flexible
+ * array fields).
+ *
+ * If all the ptypes don't fit into one mailbox buffer, CP splits the
+ * ptype info into multiple messages, where each message will have its own
+ * 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
+ * updating all the ptype information extracted from the package (the number of
+ * ptypes extracted might be less than what PF/VF expects), it will append a
+ * dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
+ * to the ptype array.
+ *
+ * PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
  */
 struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
@@ -1165,25 +1431,46 @@ struct virtchnl2_get_ptype_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
 
-/* VIRTCHNL2_OP_GET_STATS
+/**
+ * struct virtchnl2_vport_stats - Vport statistics
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @rx_bytes: Received bytes
+ * @rx_unicast: Received unicast packets
+ * @rx_multicast: Received multicast packets
+ * @rx_broadcast: Received broadcast packets
+ * @rx_discards: Discarded packets on receive
+ * @rx_errors: Receive errors
+ * @rx_unknown_protocol: Unknown protocol
+ * @tx_bytes: Transmitted bytes
+ * @tx_unicast: Transmitted unicast packets
+ * @tx_multicast: Transmitted multicast packets
+ * @tx_broadcast: Transmitted broadcast packets
+ * @tx_discards: Discarded packets on transmit
+ * @tx_errors: Transmit errors
+ * @rx_invalid_frame_length: Packets with invalid frame length
+ * @rx_overflow_drop: Packets dropped on buffer overflow
+ *
  * PF/VF sends this message to CP to get the update stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ *
+ * Associated with VIRTCHNL2_OP_GET_STATS.
  */
 struct virtchnl2_vport_stats {
 	__le32 vport_id;
 	u8 pad[4];
 
-	__le64 rx_bytes;		/* received bytes */
-	__le64 rx_unicast;		/* received unicast pkts */
-	__le64 rx_multicast;		/* received multicast pkts */
-	__le64 rx_broadcast;		/* received broadcast pkts */
+	__le64 rx_bytes;
+	__le64 rx_unicast;
+	__le64 rx_multicast;
+	__le64 rx_broadcast;
 	__le64 rx_discards;
 	__le64 rx_errors;
 	__le64 rx_unknown_protocol;
-	__le64 tx_bytes;		/* transmitted bytes */
-	__le64 tx_unicast;		/* transmitted unicast pkts */
-	__le64 tx_multicast;		/* transmitted multicast pkts */
-	__le64 tx_broadcast;		/* transmitted broadcast pkts */
+	__le64 tx_bytes;
+	__le64 tx_unicast;
+	__le64 tx_multicast;
+	__le64 tx_broadcast;
 	__le64 tx_discards;
 	__le64 tx_errors;
 	__le64 rx_invalid_frame_length;
@@ -1192,7 +1479,9 @@ struct virtchnl2_vport_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
 
-/* physical port statistics */
+/**
+ * struct virtchnl2_phy_port_stats - Physical port statistics
+ */
 struct virtchnl2_phy_port_stats {
 	__le64 rx_bytes;
 	__le64 rx_unicast_pkts;
@@ -1245,10 +1534,17 @@ struct virtchnl2_phy_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(600, virtchnl2_phy_port_stats);
 
-/* VIRTCHNL2_OP_GET_PORT_STATS
- * PF/VF sends this message to CP to get the updated stats by specifying the
+/**
+ * struct virtchnl2_port_stats - Port statistics
+ * @vport_id: Vport ID
+ * @pad: Padding
+ * @phy_port_stats: Physical port statistics
+ * @virt_port_stats: Vport statistics
+ *
+ * PF/VF sends this message to CP to get the updated stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_port_stats that
  * includes both physical port as well as vport statistics.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PORT_STATS.
  */
 struct virtchnl2_port_stats {
 	__le32 vport_id;
@@ -1260,44 +1556,61 @@ struct virtchnl2_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(736, virtchnl2_port_stats);
 
-/* VIRTCHNL2_OP_EVENT
+/**
+ * struct virtchnl2_event - Event info
+ * @event: Event opcode. See enum virtchnl2_event_codes
+ * @link_speed: Link speed provided in Mbps
+ * @vport_id: Vport ID
+ * @link_status: Link status
+ * @pad: Padding
+ * @adi_id: ADI id. CP sends a reset notification to the PF with the
+ *	    corresponding ADI id.
+ *
  * CP sends this message to inform the PF/VF driver of events that may affect
  * it. No direct response is expected from the driver, though it may generate
  * other messages in response to this one.
+ *
+ * Associated with VIRTCHNL2_OP_EVENT.
  */
 struct virtchnl2_event {
-	/* see VIRTCHNL2_EVENT_CODES definitions */
 	__le32 event;
-	/* link_speed provided in Mbps */
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
 	u8 pad;
-
-	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * VIRTCHNL2_OP_SET_RSS_KEY
+/**
+ * struct virtchnl2_rss_key - RSS key info
+ * @vport_id: Vport id
+ * @key_len: Length of RSS key
+ * @pad: Padding
+ * @key: RSS hash key, packed bytes
+ *
  * PF/VF sends this message to get or set RSS key. Only supported if both
  * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_key structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
  */
 struct virtchnl2_rss_key {
 	__le32 vport_id;
 	__le16 key_len;
 	u8 pad;
-	u8 key[1];         /* RSS hash key, packed bytes */
+	u8 key[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
 
-/* structure to specify a chunk of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunk - Chunk of contiguous queues
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Starting queue id
+ * @num_queues: Number of queues
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
@@ -1306,7 +1619,11 @@ struct virtchnl2_queue_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunks - Chunks of contiguous queues
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
+ */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1315,14 +1632,19 @@ struct virtchnl2_queue_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
 
-/* VIRTCHNL2_OP_ENABLE_QUEUES
- * VIRTCHNL2_OP_DISABLE_QUEUES
- * VIRTCHNL2_OP_DEL_QUEUES
+/**
+ * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
  *
- * PF sends these messages to enable, disable or delete queues specified in
- * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * PF/VF sends these messages to enable, disable or delete queues specified in
+ * chunks. It sends virtchnl2_del_ena_dis_queues struct to specify the queues
  * to be enabled/disabled/deleted. Also applicable to single queue receive or
  * transmit. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
+ * VIRTCHNL2_OP_DEL_QUEUES.
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
@@ -1333,30 +1655,43 @@ struct virtchnl2_del_ena_dis_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
 
-/* Queue to vector mapping */
+/**
+ * struct virtchnl2_queue_vector - Queue to vector mapping
+ * @queue_id: Queue id
+ * @vector_id: Vector id
+ * @pad: Padding
+ * @itr_idx: See enum virtchnl2_itr_idx
+ * @queue_type: See enum virtchnl2_queue_type
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_vector {
 	__le32 queue_id;
 	__le16 vector_id;
 	u8 pad[2];
 
-	/* see VIRTCHNL2_ITR_IDX definitions */
 	__le32 itr_idx;
 
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
 	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
 
-/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
- * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+/**
+ * struct virtchnl2_queue_vector_maps - Map/unmap queues info
+ * @vport_id: Vport id
+ * @num_qv_maps: Number of queue vector maps
+ * @pad: Padding
+ * @qv_maps: Queue to vector maps
  *
- * PF sends this message to map or unmap queues to vectors and interrupt
+ * PF/VF sends this message to map or unmap queues to vectors and interrupt
  * throttling rate index registers. External data buffer contains
  * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
  * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
  * after validating the queue and vector ids and returns a status code.
+ *
+ * Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
  */
 struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
@@ -1367,11 +1702,17 @@ struct virtchnl2_queue_vector_maps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
 
-/* VIRTCHNL2_OP_LOOPBACK
+/**
+ * struct virtchnl2_loopback - Loopback info
+ * @vport_id: Vport id
+ * @enable: Enable/disable
+ * @pad: Padding for future extensions
  *
  * PF/VF sends this message to transition to/from the loopback state. Setting
  * the 'enable' to 1 enables the loopback state and setting 'enable' to 0
  * disables it. CP configures the state to loopback and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_LOOPBACK.
  */
 struct virtchnl2_loopback {
 	__le32 vport_id;
@@ -1381,22 +1722,31 @@ struct virtchnl2_loopback {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
 
-/* structure to specify each MAC address */
+/**
+ * struct virtchnl2_mac_addr - MAC address info
+ * @addr: MAC address
+ * @type: MAC type. See enum virtchnl2_mac_addr_type.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_mac_addr {
 	u8 addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_MAC_TYPE definitions */
 	u8 type;
 	u8 pad;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
 
-/* VIRTCHNL2_OP_ADD_MAC_ADDR
- * VIRTCHNL2_OP_DEL_MAC_ADDR
+/**
+ * struct virtchnl2_mac_addr_list - List of MAC addresses
+ * @vport_id: Vport id
+ * @num_mac_addr: Number of MAC addresses
+ * @pad: Padding
+ * @mac_addr_list: List with MAC address info
  *
  * PF/VF driver uses this structure to send list of MAC addresses to be
  * added/deleted to the CP where as CP performs the action and returns the
  * status.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
 struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
@@ -1407,30 +1757,40 @@ struct virtchnl2_mac_addr_list {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
 
-/* VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE
+/**
+ * struct virtchnl2_promisc_info - Promiscuous type information
+ * @vport_id: Vport id
+ * @flags: See enum virtchnl2_promisc_flags
+ * @pad: Padding for future extensions
  *
  * PF/VF sends vport id and flags to the CP where as CP performs the action
  * and returns the status.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
  */
 struct virtchnl2_promisc_info {
 	__le32 vport_id;
-	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
 	__le16 flags;
 	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
 
-/* VIRTCHNL2_PTP_CAPS
- * PTP capabilities
+/**
+ * enum virtchnl2_ptp_caps - PTP capabilities
  */
-#define VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	BIT(0)
-#define VIRTCHNL2_PTP_CAP_PTM			BIT(1)
-#define VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	BIT(2)
-#define VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	BIT(3)
-#define	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	BIT(4)
+enum virtchnl2_ptp_caps {
+	VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	= BIT(0),
+	VIRTCHNL2_PTP_CAP_PTM			= BIT(1),
+	VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	= BIT(2),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	= BIT(3),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	= BIT(4),
+};
 
-/* Legacy cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_legacy_cross_time_reg - Legacy cross time registers
+ *						offsets.
+ */
 struct virtchnl2_ptp_legacy_cross_time_reg {
 	__le32 shadow_time_0;
 	__le32 shadow_time_l;
@@ -1440,7 +1800,9 @@ struct virtchnl2_ptp_legacy_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_legacy_cross_time_reg);
 
-/* PTM cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_ptm_cross_time_reg - PTM cross time registers offsets
+ */
 struct virtchnl2_ptp_ptm_cross_time_reg {
 	__le32 art_l;
 	__le32 art_h;
@@ -1450,7 +1812,10 @@ struct virtchnl2_ptp_ptm_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_ptm_cross_time_reg);
 
-/* Registers needed to control the main clock */
+/**
+ * struct virtchnl2_ptp_device_clock_control - Registers needed to control the
+ *					       main clock.
+ */
 struct virtchnl2_ptp_device_clock_control {
 	__le32 cmd;
 	__le32 incval_l;
@@ -1462,7 +1827,13 @@ struct virtchnl2_ptp_device_clock_control {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_device_clock_control);
 
-/* Structure that defines tx tstamp entry - index and register offset */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_entry - PTP TX timestamp entry
+ * @tx_latch_register_base: TX latch register base
+ * @tx_latch_register_offset: TX latch register offset
+ * @index: Index
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_entry {
 	__le32 tx_latch_register_base;
 	__le32 tx_latch_register_offset;
@@ -1472,12 +1843,15 @@ struct virtchnl2_ptp_tx_tstamp_entry {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_entry);
 
-/* Structure that defines tx tstamp entries - total number of latches
- * and the array of entries.
+/**
+ * struct virtchnl2_ptp_tx_tstamp - Structure that defines tx tstamp entries
+ * @num_latches: Total number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @ptp_tx_tstamp_entries: Array of TX timestamp entries
  */
 struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
 	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
@@ -1485,13 +1859,21 @@ struct virtchnl2_ptp_tx_tstamp {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
 
-/* VIRTCHNL2_OP_GET_PTP_CAPS
+/**
+ * struct virtchnl2_get_ptp_caps - Get PTP capabilities
+ * @ptp_caps: PTP capability bitmap. See enum virtchnl2_ptp_caps.
+ * @pad: Padding
+ * @legacy_cross_time_reg: Legacy cross time register
+ * @ptm_cross_time_reg: PTM cross time register
+ * @device_clock_control: Device clock control
+ * @tx_tstamp: TX timestamp
+ *
  * PV/VF sends this message to negotiate PTP capabilities. CP updates bitmap
  * with supported features and fulfills appropriate structures.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_CAPS.
  */
 struct virtchnl2_get_ptp_caps {
-	/* PTP capability bitmap */
-	/* see VIRTCHNL2_PTP_CAPS definitions */
 	__le32 ptp_caps;
 	u8 pad[4];
 
@@ -1503,7 +1885,15 @@ struct virtchnl2_get_ptp_caps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
 
-/* Structure that describes tx tstamp values, index and validity */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
+ *					  values, index and validity.
+ * @tstamp_h: Timestamp high
+ * @tstamp_l: Timestamp low
+ * @index: Index
+ * @valid: Timestamp validity
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_latch {
 	__le32 tstamp_h;
 	__le32 tstamp_l;
@@ -1514,9 +1904,17 @@ struct virtchnl2_ptp_tx_tstamp_latch {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
 
-/* VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latches - PTP TX timestamp latches
+ * @num_latches: Number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @tstamp_latches: PTP TX timestamp latches
+ *
  * PF/VF sends this message to receive a specified number of timestamps
  * entries.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES.
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
@@ -1611,7 +2009,7 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
  * @msg: pointer to the msg buffer
  * @msglen: msg length
  *
- * validate msg format against struct for each opcode
+ * Validate msg format against struct for each opcode.
  */
 static inline int
 virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
@@ -1620,7 +2018,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	bool err_msg_format = false;
 	__le32 valid_len = 0;
 
-	/* Validate message length. */
+	/* Validate message length */
 	switch (v_opcode) {
 	case VIRTCHNL2_OP_VERSION:
 		valid_len = sizeof(struct virtchnl2_version_info);
@@ -1635,7 +2033,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_create_vport *)msg;
 
 			if (cvport->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1650,7 +2048,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_non_flex_create_adi *)msg;
 
 			if (cadi->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1705,7 +2103,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_add_queues *)msg;
 
 			if (add_q->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1732,7 +2130,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
 		valid_len = sizeof(struct virtchnl2_add_queue_groups);
 		if (msglen != valid_len) {
-			__le32 i = 0, offset = 0;
+			__le64 offset;
+			__le32 i;
 			struct virtchnl2_add_queue_groups *add_queue_grp =
 				(struct virtchnl2_add_queue_groups *)msg;
 			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
@@ -1799,7 +2198,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_alloc_vectors *)msg;
 
 			if (v_av->vchunks.num_vchunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1828,7 +2227,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_key *)msg;
 
 			if (vrk->key_len == 0) {
-				/* zero length is allowed as input */
+				/* Zero length is allowed as input */
 				break;
 			}
 
@@ -1843,7 +2242,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_lut *)msg;
 
 			if (vrl->lut_entries == 0) {
-				/* zero entries is allowed as input */
+				/* Zero entries is allowed as input */
 				break;
 			}
 
@@ -1900,13 +2299,13 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
 		}
 		break;
-	/* These are always errors coming from the VF. */
+	/* These are always errors coming from the VF */
 	case VIRTCHNL2_OP_EVENT:
 	case VIRTCHNL2_OP_UNKNOWN:
 	default:
 		return VIRTCHNL2_STATUS_ERR_ESRCH;
 	}
-	/* few more checks */
+	/* Few more checks */
 	if (err_msg_format || valid_len != msglen)
 		return VIRTCHNL2_STATUS_ERR_EINVAL;
 
diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index 9e04cf8628..f7521d87a7 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 /*
  * Copyright (C) 2019 Intel Corporation
@@ -12,199 +12,220 @@
 /* VIRTCHNL2_TX_DESC_IDS
  * Transmit descriptor ID flags
  */
-#define VIRTCHNL2_TXDID_DATA				BIT(0)
-#define VIRTCHNL2_TXDID_CTX				BIT(1)
-#define VIRTCHNL2_TXDID_REINJECT_CTX			BIT(2)
-#define VIRTCHNL2_TXDID_FLEX_DATA			BIT(3)
-#define VIRTCHNL2_TXDID_FLEX_CTX			BIT(4)
-#define VIRTCHNL2_TXDID_FLEX_TSO_CTX			BIT(5)
-#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		BIT(6)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		BIT(7)
-#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	BIT(8)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	BIT(9)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		BIT(10)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			BIT(11)
-#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			BIT(12)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		BIT(13)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		BIT(14)
-#define VIRTCHNL2_TXDID_DESC_DONE			BIT(15)
-
-/* VIRTCHNL2_RX_DESC_IDS
+enum virtchnl2_tx_desc_ids {
+	VIRTCHNL2_TXDID_DATA				= BIT(0),
+	VIRTCHNL2_TXDID_CTX				= BIT(1),
+	VIRTCHNL2_TXDID_REINJECT_CTX			= BIT(2),
+	VIRTCHNL2_TXDID_FLEX_DATA			= BIT(3),
+	VIRTCHNL2_TXDID_FLEX_CTX			= BIT(4),
+	VIRTCHNL2_TXDID_FLEX_TSO_CTX			= BIT(5),
+	VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		= BIT(6),
+	VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		= BIT(7),
+	VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	= BIT(8),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	= BIT(9),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		= BIT(10),
+	VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			= BIT(11),
+	VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			= BIT(12),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		= BIT(13),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		= BIT(14),
+	VIRTCHNL2_TXDID_DESC_DONE			= BIT(15),
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_IDS
  * Receive descriptor IDs (range from 0 to 63)
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE			0
-#define VIRTCHNL2_RXDID_1_32B_BASE			1
-/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
- * differentiated based on queue model; e.g. single queue model can
- * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
- * for DID 2.
- */
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ			2
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC			2
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW			3
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB		4
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL		5
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2			6
-#define VIRTCHNL2_RXDID_7_HW_RSVD			7
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC		16
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN		17
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4		18
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6		19
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW		20
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP		21
-/* 22 through 63 are reserved */
-
-/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+enum virtchnl2_rx_desc_ids {
+	VIRTCHNL2_RXDID_0_16B_BASE,
+	VIRTCHNL2_RXDID_1_32B_BASE,
+	/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+	 * differentiated based on queue model; e.g. single queue model can
+	 * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+	 * for DID 2.
+	 */
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ		= 2,
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC		= VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW		= 3,
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB	= 4,
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL	= 5,
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2		= 6,
+	VIRTCHNL2_RXDID_7_HW_RSVD		= 7,
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC	= 16,
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN	= 17,
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4	= 18,
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6	= 19,
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW	= 20,
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP	= 21,
+	/* 22 through 63 are reserved */
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
-/* 22 through 63 are reserved */
-
-/* Rx */
+#define VIRTCHNL2_RXDID_M(bit)			BIT_ULL(VIRTCHNL2_RXDID_##bit)
+
+enum virtchnl2_rx_desc_id_bitmasks {
+	VIRTCHNL2_RXDID_0_16B_BASE_M		= VIRTCHNL2_RXDID_M(0_16B_BASE),
+	VIRTCHNL2_RXDID_1_32B_BASE_M		= VIRTCHNL2_RXDID_M(1_32B_BASE),
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		= VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		= VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		= VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW),
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	= VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB),
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	= VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL),
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	= VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2),
+	VIRTCHNL2_RXDID_7_HW_RSVD_M		= VIRTCHNL2_RXDID_M(7_HW_RSVD),
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	= VIRTCHNL2_RXDID_M(16_COMMS_GENERIC),
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	= VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN),
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	= VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4),
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	= VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6),
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	= VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW),
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	= VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP),
+	/* 22 through 63 are reserved */
+};
+
 /* For splitq virtchnl2_rx_flex_desc_adv desc members */
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		GENMASK(3, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		GENMASK(7, 6)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		\
-	IDPF_M(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M			\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M		GENMASK(15, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M	\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M		GENMASK(13, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M		GENMASK(14, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
 
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S		7
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST			8 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		0 /* 2 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			8 /* this entry must be last!!! */
-
-/* for singleq (flex) virtchnl2_rx_flex_desc fields */
-/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		= 0,
+	/* 2 bits */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		= 2,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		= 3,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	= 4,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	= 5,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	= 6,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	= 7,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			= 8,
+};
+
+/* For singleq (flex) virtchnl2_rx_flex_desc fields,
+ * for virtchnl2_rx_flex_desc.ptype_flex_flags0 member
+ */
 #define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			GENMASK(9, 0)
 
-/* for virtchnl2_rx_flex_desc.pkt_length member */
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M			\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+/* For virtchnl2_rx_flex_desc.pkt_len member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S		0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M		GENMASK(13, 0)
 
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S			1
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S			2
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S			3
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S		7
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S			8
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S		9
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S			10
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S			11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST			16 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+enum virtchnl2_rx_flex_desc_status_error_0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			0 /* 4 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			5
-/* [10:6] reserved */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			16 /* this entry must be last!!! */
-
-/* for virtchnl2_rx_flex_desc.ts_low member */
+enum virtchnl2_rx_flex_desc_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			= 0,
+	/* 4 bits */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			= 4,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			= 5,
+	/* [10:6] reserved */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		= 11,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		= 12,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		= 13,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		= 14,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		= 15,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			= 16,
+};
+
+/* For virtchnl2_rx_flex_desc.ts_low member */
 #define VIRTCHNL2_RX_FLEX_TSTAMP_VALID				BIT(0)
 
 /* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
@@ -212,72 +233,89 @@
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M	\
 	BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S	52
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	\
-	IDPF_M(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	GENMASK_ULL(62, 52)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S	38
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	\
-	IDPF_M(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	GENMASK_ULL(51, 38)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S	30
-#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	\
-	IDPF_M(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	GENMASK_ULL(37, 30)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S	19
-#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	\
-	IDPF_M(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	GENMASK_ULL(26, 19)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S	0
-#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	\
-	IDPF_M(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	GENMASK_ULL(18, 0)
 
-/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		0
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		1
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		2
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		3
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		4
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		5 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	8
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		9 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		11
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		12 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		14
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	15
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		16 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	18
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		19 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		= 1,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		= 5, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	= 8,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		= 9, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		= 11,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		= 12, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		= 14,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	= 15,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		= 16, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	= 18,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		= 19, /* this entry must be last!!! */
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S	0
+enum virtchnl2_rx_base_desc_ext_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S,
+};
 
-/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		0
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	1
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		2
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		3 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		3
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		4
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		5
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		6
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		7
-
-/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_error_bits {
+	VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	= 1,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		= 3, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		= 5,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		= 6,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		= 7,
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA		0
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID		1
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV		2
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH		3
+enum virtchnl2_rx_base_desc_fltstat_values {
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH,
+};
 
-/* Receive Descriptors */
-/* splitq buf
+/**
+ * struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
+ * @qword0: RX buffer struct
+ * @qword0.buf_id: Buffer identifier
+ * @qword0.rsvd0: Reserved
+ * @qword0.rsvd1: Reserved
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd2: Reserved
+ *
+ * Receive Descriptors
+ * SplitQ buffer
  * |                                       16|                   0|
  * ----------------------------------------------------------------
  * | RSV                                     | Buffer ID          |
@@ -292,16 +330,23 @@
  */
 struct virtchnl2_splitq_rx_buf_desc {
 	struct {
-		__le16  buf_id; /* Buffer Identifier */
+		__le16  buf_id;
 		__le16  rsvd0;
 		__le32  rsvd1;
 	} qword0;
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
-/* singleq buf
+/**
+ * struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd1: Reserved
+ * @rsvd2: Reserved
+ *
+ * SingleQ buffer
  * |                                                             0|
  * ----------------------------------------------------------------
  * | Rx packet buffer address                                     |
@@ -315,18 +360,44 @@ struct virtchnl2_splitq_rx_buf_desc {
  * |                                                             0|
  */
 struct virtchnl2_singleq_rx_buf_desc {
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd1;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
+/**
+ * union virtchnl2_rx_buf_desc - RX buffer descriptor
+ * @read: Singleq RX buffer descriptor format
+ * @split_rd: Splitq RX buffer descriptor format
+ */
 union virtchnl2_rx_buf_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
 	struct virtchnl2_splitq_rx_buf_desc		split_rd;
 };
 
-/* (0x00) singleq wb(compl) */
+/**
+ * struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format
+ * @qword0: First quad word struct
+ * @qword0.lo_dword: Lower dual word struct
+ * @qword0.lo_dword.mirroring_status: Mirrored packet status
+ * @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet
+ * @qword0.hi_dword: High dual word union
+ * @qword0.hi_dword.rss: RSS hash
+ * @qword0.hi_dword.fd_id: Flow director filter id
+ * @qword1: Second quad word struct
+ * @qword1.status_error_ptype_len: Status/error/PTYPE/length
+ * @qword2: Third quad word struct
+ * @qword2.ext_status: Extended status
+ * @qword2.rsvd: Reserved
+ * @qword2.l2tag2_1: Extracted L2 tag 2 from the packet
+ * @qword2.l2tag2_2: Reserved
+ * @qword3: Fourth quad word struct
+ * @qword3.reserved: Reserved
+ * @qword3.fd_id: Flow director filter id
+ *
+ * Profile ID 0x1, SingleQ, base writeback format.
+ */
 struct virtchnl2_singleq_base_rx_desc {
 	struct {
 		struct {
@@ -334,16 +405,15 @@ struct virtchnl2_singleq_base_rx_desc {
 			__le16 l2tag1;
 		} lo_dword;
 		union {
-			__le32 rss; /* RSS Hash */
-			__le32 fd_id; /* Flow Director filter id */
+			__le32 rss;
+			__le32 fd_id;
 		} hi_dword;
 	} qword0;
 	struct {
-		/* status/error/PTYPE/length */
 		__le64 status_error_ptype_len;
 	} qword1;
 	struct {
-		__le16 ext_status; /* extended status */
+		__le16 ext_status;
 		__le16 rsvd;
 		__le16 l2tag2_1;
 		__le16 l2tag2_2;
@@ -352,19 +422,40 @@ struct virtchnl2_singleq_base_rx_desc {
 		__le32 reserved;
 		__le32 fd_id;
 	} qword3;
-}; /* writeback */
+};
 
-/* (0x01) singleq flex compl */
+/**
+ * struct virtchnl2_rx_flex_desc - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @flex_meta0: Flexible metadata container 0
+ * @flex_meta1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @time_stamp_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flex_meta2: Flexible metadata container 2
+ * @flex_meta3: Flexible metadata container 3
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.flex_meta4: Flexible metadata container 4
+ * @flex_ts.flex.flex_meta5: Flexible metadata container 5
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x1, SingleQ, flex completion writeback format.
+ */
 struct virtchnl2_rx_flex_desc {
 	/* Qword 0 */
-	u8 rxdid; /* descriptor builder profile id */
-	u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
-	__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
-	__le16 pkt_len; /* [15:14] are reserved */
-	__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
-					/* sph=[11:11] */
-					/* ff1/ext=[15:12] */
-
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
@@ -390,7 +481,29 @@ struct virtchnl2_rx_flex_desc {
 	} flex_ts;
 };
 
-/* (0x02) */
+/**
+ * struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SingleQ, flex writeback format.
+ */
 struct virtchnl2_rx_flex_desc_nic {
 	/* Qword 0 */
 	u8 rxdid;
@@ -422,8 +535,27 @@ struct virtchnl2_rx_flex_desc_nic {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Switch Profile
- * RxDID Profile Id 3
+/**
+ * struct virtchnl2_rx_flex_desc_sw - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @src_vsi: Source VSI, [10:15] are reserved
+ * @flex_md1_rsvd: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor Switch Profile
+ * RxDID Profile ID 0x3, SingleQ
  * Flex-field 0: Source Vsi
  */
 struct virtchnl2_rx_flex_desc_sw {
@@ -437,9 +569,55 @@ struct virtchnl2_rx_flex_desc_sw {
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
-	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 src_vsi;
 	__le16 flex_md1_rsvd;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le32 rsvd;
+	__le32 ts_high;
+};
 
+#ifndef EXTERNAL_RELEASE
+/**
+ * struct virtchnl2_rx_flex_desc_nic_veb_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @dst_vsi: Destination VSI, [10:15] are reserved
+ * @flex_field_1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 0x4
+ * Flex-field 0: Destination Vsi
+ */
+struct virtchnl2_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi;
+	__le16 flex_field_1;
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flex_flags2;
@@ -448,13 +626,85 @@ struct virtchnl2_rx_flex_desc_sw {
 	__le16 l2tag2_2nd;
 
 	/* Qword 3 */
-	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 rsvd;
 	__le32 ts_high;
 };
 
-
-/* Rx Flex Descriptor NIC Profile
- * RxDID Profile Id 6
+/**
+ * struct virtchnl2_rx_flex_desc_nic_acl_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @acl_ctr0: ACL counter 0
+ * @acl_ctr1: ACL counter 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @acl_ctr2: ACL counter 2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile ID 0x5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct virtchnl2_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd;
+	__le32 ts_high;
+};
+#endif /* !EXTERNAL_RELEASE */
+
+/**
+ * struct virtchnl2_rx_flex_desc_nic_2 - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @src_vsi: Source VSI
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 0x6
  * Flex-field 0: RSS hash lower 16-bits
  * Flex-field 1: RSS hash upper 16-bits
  * Flex-field 2: Flow Id lower 16-bits
@@ -493,29 +743,43 @@ struct virtchnl2_rx_flex_desc_nic_2 {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Advanced (Split Queue Model)
- * RxDID Profile Id 7
+/**
+ * struct virtchnl2_rx_flex_desc_adv - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @fmd0: Flexible metadata container 0
+ * @fmd1: Flexible metadata container 1
+ * @fmd2: Flexible metadata container 2
+ * @fflags2: Flags
+ * @hash3: Upper bits of Rx hash value
+ * @fmd3: Flexible metadata container 3
+ * @fmd4: Flexible metadata container 4
+ * @fmd5: Flexible metadata container 5
+ * @fmd6: Flexible metadata container 6
+ * @fmd7_0: Flexible metadata container 7.0
+ * @fmd7_1: Flexible metadata container 7.1
+ *
+ * RX Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile ID 0x2
  */
 struct virtchnl2_rx_flex_desc_adv {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
@@ -534,10 +798,42 @@ struct virtchnl2_rx_flex_desc_adv {
 	__le16 fmd6;
 	__le16 fmd7_0;
 	__le16 fmd7_1;
-}; /* writeback */
+};
 
-/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
- * RxDID Profile Id 8
+/**
+ * struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @buf_id: Buffer identifier. Only in splitq mode.
+ * @misc: Union
+ * @misc.raw_cs: Raw checksum
+ * @misc.l2tag1: Stripped L2 tag from the received packet
+ * @misc.rscseglen: RSC segment length
+ * @hash1: Lower 16 bits of Rx hash value, hash[15:0]
+ * @ff2_mirrid_hash2: Union
+ * @ff2_mirrid_hash2.fflags2: Flexible flags section 2
+ * @ff2_mirrid_hash2.mirrorid: Mirror id
+ * @ff2_mirrid_hash2.hash2: 8 bits of Rx hash value, hash[23:16]
+ * @hash3: Upper 8 bits of Rx hash value, hash[31:24]
+ * @l2tag2: Extracted L2 tag 2 from the packet
+ * @fmd4: Flexible metadata container 4
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @fmd6: Flexible metadata container 6
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SplitQ, flex writeback format.
+ *
  * Flex-field 0: BufferID
  * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
  * Flex-field 2: Hash[15:0]
@@ -548,30 +844,17 @@ struct virtchnl2_rx_flex_desc_adv {
  */
 struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
 	u8 fflags1;
 	u8 ts_low;
-	__le16 buf_id; /* only in splitq */
+	__le16 buf_id;
 	union {
 		__le16 raw_cs;
 		__le16 l2tag1;
@@ -591,7 +874,7 @@ struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	__le16 l2tag1;
 	__le16 fmd6;
 	__le32 ts_high;
-}; /* writeback */
+};
 
 union virtchnl2_rx_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 13/21] common/idpf: avoid variable 0-init
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (11 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 12/21] common/idpf: move related defines into enums Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 14/21] common/idpf: update in PTP message validation Soumyadeep Hore
                     ` (8 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Don't initialize variables when it is not needed.

Also use 'err' instead of 'status', 'ret_code', 'ret', etc.
for consistency, and rename the return label 'sq_send_command_out'
to 'err_unlock'.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c      | 63 +++++++++----------
 .../common/idpf/base/idpf_controlq_setup.c    | 18 +++---
 2 files changed, 39 insertions(+), 42 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index b5ba9c3bd0..bd23e54421 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -61,7 +61,7 @@ static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
  */
 static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	for (i = 0; i < cq->ring_size; i++) {
 		struct idpf_ctlq_desc *desc = IDPF_CTLQ_DESC(cq, i);
@@ -134,7 +134,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 {
 	struct idpf_ctlq_info *cq;
 	bool is_rxq = false;
-	int status = 0;
+	int err;
 
 	if (!qinfo->len || !qinfo->buf_size ||
 	    qinfo->len > IDPF_CTLQ_MAX_RING_SIZE ||
@@ -164,16 +164,16 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 		is_rxq = true;
 		/* fallthrough */
 	case IDPF_CTLQ_TYPE_MAILBOX_TX:
-		status = idpf_ctlq_alloc_ring_res(hw, cq);
+		err = idpf_ctlq_alloc_ring_res(hw, cq);
 		break;
 	default:
-		status = -EINVAL;
+		err = -EINVAL;
 		break;
 	}
 
-	if (status)
+	if (err)
 #ifdef NVME_CPF
-		return status;
+		return err;
 #else
 		goto init_free_q;
 #endif
@@ -187,7 +187,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			idpf_calloc(hw, qinfo->len,
 				    sizeof(struct idpf_ctlq_msg *));
 		if (!cq->bi.tx_msg) {
-			status = -ENOMEM;
+			err = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
 #endif
@@ -203,17 +203,16 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 
 #ifndef NVME_CPF
 	*cq_out = cq;
-	return status;
+	return 0;
 
 init_dealloc_q_mem:
 	/* free ring buffers and the ring itself */
 	idpf_ctlq_dealloc_ring_res(hw, cq);
 init_free_q:
 	idpf_free(hw, cq);
-	cq = NULL;
 #endif
 
-	return status;
+	return err;
 }
 
 /**
@@ -249,8 +248,8 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 #endif
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
-	int ret_code = 0;
-	int i = 0;
+	int err;
+	int i;
 
 	LIST_INIT(&hw->cq_list_head);
 
@@ -261,19 +260,19 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		cq = *(ctlq + i);
 #endif
 
-		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
-		if (ret_code)
+		err = idpf_ctlq_add(hw, qinfo, &cq);
+		if (err)
 			goto init_destroy_qs;
 	}
 
-	return ret_code;
+	return 0;
 
 init_destroy_qs:
 	LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
 				 idpf_ctlq_info, cq_list)
 		idpf_ctlq_remove(hw, cq);
 
-	return ret_code;
+	return err;
 }
 
 /**
@@ -307,9 +306,9 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 		   u16 num_q_msg, struct idpf_ctlq_msg q_msg[])
 {
 	struct idpf_ctlq_desc *desc;
-	int num_desc_avail = 0;
-	int status = 0;
-	int i = 0;
+	int num_desc_avail;
+	int err = 0;
+	int i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -319,8 +318,8 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* Ensure there are enough descriptors to send all messages */
 	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
 	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
-		status = -ENOSPC;
-		goto sq_send_command_out;
+		err = -ENOSPC;
+		goto err_unlock;
 	}
 
 	for (i = 0; i < num_q_msg; i++) {
@@ -391,10 +390,10 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 
 	wr32(hw, cq->reg.tail, cq->next_to_use);
 
-sq_send_command_out:
+err_unlock:
 	idpf_release_lock(&cq->cq_lock);
 
-	return status;
+	return err;
 }
 
 /**
@@ -418,9 +417,8 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 				struct idpf_ctlq_msg *msg_status[], bool force)
 {
 	struct idpf_ctlq_desc *desc;
-	u16 i = 0, num_to_clean;
+	u16 i, num_to_clean;
 	u16 ntc, desc_err;
-	int ret = 0;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -467,7 +465,7 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 	/* Return number of descriptors actually cleaned */
 	*clean_count = i;
 
-	return ret;
+	return 0;
 }
 
 /**
@@ -534,7 +532,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	u16 ntp = cq->next_to_post;
 	bool buffs_avail = false;
 	u16 tbp = ntp + 1;
-	int status = 0;
 	int i = 0;
 
 	if (*buff_count > cq->ring_size)
@@ -635,7 +632,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* return the number of buffers that were not posted */
 	*buff_count = *buff_count - i;
 
-	return status;
+	return 0;
 }
 
 /**
@@ -654,8 +651,8 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 {
 	u16 num_to_clean, ntc, ret_val, flags;
 	struct idpf_ctlq_desc *desc;
-	int ret_code = 0;
-	u16 i = 0;
+	int err = 0;
+	u16 i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -688,7 +685,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 				      IDPF_CTLQ_FLAG_FTYPE_S;
 
 		if (flags & IDPF_CTLQ_FLAG_ERR)
-			ret_code = -EBADMSG;
+			err = -EBADMSG;
 
 		q_msg[i].cookie.mbx.chnl_opcode = LE32_TO_CPU(desc->cookie_high);
 		q_msg[i].cookie.mbx.chnl_retval = LE32_TO_CPU(desc->cookie_low);
@@ -734,7 +731,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 
 	*num_q_msg = i;
 	if (*num_q_msg == 0)
-		ret_code = -ENOMSG;
+		err = -ENOMSG;
 
-	return ret_code;
+	return err;
 }
diff --git a/drivers/common/idpf/base/idpf_controlq_setup.c b/drivers/common/idpf/base/idpf_controlq_setup.c
index 21f43c74f5..cd6bcb1cf0 100644
--- a/drivers/common/idpf/base/idpf_controlq_setup.c
+++ b/drivers/common/idpf/base/idpf_controlq_setup.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 
@@ -34,7 +34,7 @@ static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
 static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
 				struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	/* Do not allocate DMA buffers for transmit queues */
 	if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
@@ -153,20 +153,20 @@ void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
  */
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
-	int ret_code;
+	int err;
 
 	/* verify input for valid configuration */
 	if (!cq->ring_size || !cq->buf_size)
 		return -EINVAL;
 
 	/* allocate the ring memory */
-	ret_code = idpf_ctlq_alloc_desc_ring(hw, cq);
-	if (ret_code)
-		return ret_code;
+	err = idpf_ctlq_alloc_desc_ring(hw, cq);
+	if (err)
+		return err;
 
 	/* allocate buffers in the rings */
-	ret_code = idpf_ctlq_alloc_bufs(hw, cq);
-	if (ret_code)
+	err = idpf_ctlq_alloc_bufs(hw, cq);
+	if (err)
 		goto idpf_init_cq_free_ring;
 
 	/* success! */
@@ -174,5 +174,5 @@ int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 
 idpf_init_cq_free_ring:
 	idpf_free_dma_mem(hw, &cq->desc_ring);
-	return ret_code;
+	return err;
 }
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 14/21] common/idpf: update in PTP message validation
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (12 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 13/21] common/idpf: avoid variable 0-init Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 15/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
                     ` (7 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

When the driver sends a message to get timestamp latches, the number of
latches is equal to 0. The current implementation of the message
validation function incorrectly reports the length of this kind of
message as invalid.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 35ff1942c2..b5703cb6ed 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -2270,7 +2270,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_CAPS:
 		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_get_ptp_caps *ptp_caps =
 			(struct virtchnl2_get_ptp_caps *)msg;
 
@@ -2286,7 +2286,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
 		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
 			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 15/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (13 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 14/21] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 16/21] common/idpf: add wmb before tail Soumyadeep Hore
                     ` (6 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

This capability bit indicates both inline and sideband flow
steering capability.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index b5703cb6ed..468b8f355d 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -243,7 +243,7 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
 	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
 	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
-	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_FLOW_STEER		= BIT_ULL(6),
 	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
 	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
 	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 16/21] common/idpf: add wmb before tail
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (14 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 15/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 17/21] drivers: add flex array support and fix issues Soumyadeep Hore
                     ` (5 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Introduced based on customer feedback received while addressing some
bugs, this adds a memory barrier before posting the ctlq tail. This
ensures that memory writes have completed before the HW starts
processing the descriptors.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index bd23e54421..ba2e328122 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -624,6 +624,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			/* Wrap to end of end ring since current ntp is 0 */
 			cq->next_to_post = cq->ring_size - 1;
 
+		idpf_wmb();
+
 		wr32(hw, cq->reg.tail, cq->next_to_post);
 	}
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 17/21] drivers: add flex array support and fix issues
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (15 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 16/21] common/idpf: add wmb before tail Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 18/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
                     ` (4 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on internal Linux upstream feedback received on the IDPF
driver, as well as references available online, using 1-sized
array fields in structures is discouraged, especially in new
Linux drivers that are going to be upstreamed. Instead, flexible
array fields are recommended for dynamically sized structures.

Some fixes based on these code changes are introduced so that
DPDK compiles.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h       | 466 ++++-----------------
 drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
 drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
 3 files changed, 86 insertions(+), 410 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 468b8f355d..fb017b1306 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -63,6 +63,10 @@ enum virtchnl2_status {
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
+
+#define STRUCT_VAR_LEN		1
 
 /**
  * New major set of opcodes introduced and so leaving room for
@@ -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_reg_chunk chunks[1];
+	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 
 /**
  * enum virtchnl2_vport_flags - Vport flags
@@ -773,7 +776,7 @@ struct virtchnl2_create_vport {
 	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(192, virtchnl2_create_vport, chunks.chunks);
 
 /**
  * struct virtchnl2_vport - Vport identifier information
@@ -859,10 +862,9 @@ struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 	u8 pad[10];
-	struct virtchnl2_txq_info qinfo[1];
+	struct virtchnl2_txq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(72, virtchnl2_config_tx_queues, qinfo);
 
 /**
  * struct virtchnl2_rxq_info - Receive queue config info
@@ -940,10 +942,9 @@ struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 	u8 pad[18];
-	struct virtchnl2_rxq_info qinfo[1];
+	struct virtchnl2_rxq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(112, virtchnl2_config_rx_queues, qinfo);
 
 /**
  * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
@@ -973,16 +974,15 @@ struct virtchnl2_add_queues {
 
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(56, virtchnl2_add_queues, chunks.chunks);
 
 /* Queue Groups Extension */
 /**
  * struct virtchnl2_rx_queue_group_info - RX queue group info
- * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
- *		  allocated by CreateVport command. New size will be returned
- *		  if allocation succeeded, otherwise original rss_size from
- *		  CreateVport will be returned.
+ * @rss_lut_size: User can ask to update rss_lut size originally allocated by
+ *		  CreateVport command. New size will be returned if allocation
+ *		  succeeded, otherwise original rss_size from CreateVport
+ *		  will be returned.
  * @pad: Padding for future extensions
  */
 struct virtchnl2_rx_queue_group_info {
@@ -1010,7 +1010,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
  * @cir_pad: Future extension purpose for CIR only
  * @pad2: Padding for future extensions
  */
-struct virtchnl2_tx_queue_group_info { /* IN */
+struct virtchnl2_tx_queue_group_info {
 	u8 tx_tc;
 	u8 priority;
 	u8 is_sp;
@@ -1043,19 +1043,17 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 /**
  * struct virtchnl2_queue_group_info - Queue group info
  * @qg_id: Queue group ID
- * @num_tx_q: Number of TX queues
- * @num_tx_complq: Number of completion queues
- * @num_rx_q: Number of RX queues
- * @num_rx_bufq: Number of RX buffer queues
+ * @num_tx_q: Number of TX queues requested
+ * @num_tx_complq: Number of completion queues requested
+ * @num_rx_q: Number of RX queues requested
+ * @num_rx_bufq: Number of RX buffer queues requested
  * @tx_q_grp_info: TX queue group info
  * @rx_q_grp_info: RX queue group info
  * @pad: Padding for future extensions
- * @chunks: Queue register chunks
+ * @chunks: Queue register chunks from CP
  */
 struct virtchnl2_queue_group_info {
-	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
-	/* IN, Number of queue of different types in the group. */
 	__le16 num_tx_q;
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
@@ -1064,56 +1062,52 @@ struct virtchnl2_queue_group_info {
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
 	u8 pad[40];
-	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
-};
-
-VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
-
-/**
- * struct virtchnl2_queue_groups - Queue groups list
- * @num_queue_groups: Total number of queue groups
- * @pad: Padding for future extensions
- * @groups: Array of queue group info
- */
-struct virtchnl2_queue_groups {
-	__le16 num_queue_groups;
-	u8 pad[6];
-	struct virtchnl2_queue_group_info groups[1];
+	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(120, virtchnl2_queue_group_info, chunks.chunks);
 
 /**
  * struct virtchnl2_add_queue_groups - Add queue groups
- * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ * @vport_id: Vport_id to add queue group to, same as allocated by
  *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @num_queue_groups: Total number of queue groups
  * @pad: Padding for future extensions
- * @qg_info: IN/OUT. List of all the queue groups
+#ifndef FLEX_ARRAY_SUPPORT
+ * @groups: List of all the queue group info structures
+#endif
  *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
- * groups and queues assigned followed by num_queue_groups and num_chunks of
- * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ * groups and queues assigned followed by num_queue_groups and groups of
+ * virtchnl2_queue_group_info and virtchnl2_queue_chunk structures.
+#ifdef FLEX_ARRAY_SUPPORT
+ * (Note: There is no specific field for the queue group info but are added at
+ * the end of the add queue groups message. Receiver of this message is expected
+ * to extract the queue group info accordingly. Reason for doing this is because
+ * compiler doesn't allow nested flexible array fields).
+#endif
  *
  * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
 	__le32 vport_id;
-	u8 pad[4];
-	struct virtchnl2_queue_groups qg_info;
+	__le16 num_queue_groups;
+	u8 pad[10];
+	struct virtchnl2_queue_group_info groups[STRUCT_VAR_LEN];
+
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
 /**
  * struct virtchnl2_delete_queue_groups - Delete queue groups
- * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ * @vport_id: Vport ID to delete queue group from, same as allocated by
  *	      CreateVport.
- * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @num_queue_groups: Defines number of groups provided
  * @pad: Padding
- * @qg_ids: IN, IDs & types of Queue Groups to delete
+ * @qg_ids: IDs & types of Queue Groups to delete
  *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
@@ -1127,10 +1121,9 @@ struct virtchnl2_delete_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	struct virtchnl2_queue_group_id qg_ids[1];
+	struct virtchnl2_queue_group_id qg_ids[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_delete_queue_groups, qg_ids);
 
 /**
  * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
@@ -1188,10 +1181,9 @@ struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
 
-	struct virtchnl2_vector_chunk vchunks[1];
+	struct virtchnl2_vector_chunk vchunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(48, virtchnl2_vector_chunks, vchunks);
 
 /**
  * struct virtchnl2_alloc_vectors - Vector allocation info
@@ -1213,8 +1205,7 @@ struct virtchnl2_alloc_vectors {
 
 	struct virtchnl2_vector_chunks vchunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(64, virtchnl2_alloc_vectors, vchunks.vchunks);
 
 /**
  * struct virtchnl2_rss_lut - RSS LUT info
@@ -1235,10 +1226,9 @@ struct virtchnl2_rss_lut {
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	__le32 lut[1];
+	__le32 lut[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_rss_lut, lut);
 
 /**
  * struct virtchnl2_rss_hash - RSS hash info
@@ -1387,10 +1377,9 @@ struct virtchnl2_ptype {
 	u8 ptype_id_8;
 	u8 proto_id_count;
 	__le16 pad;
-	__le16 proto_id[1];
+	__le16 proto_id[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(8, virtchnl2_ptype, proto_id);
 
 /**
  * struct virtchnl2_get_ptype_info - Packet type info
@@ -1426,7 +1415,7 @@ struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
-	struct virtchnl2_ptype ptype[1];
+	struct virtchnl2_ptype ptype[STRUCT_VAR_LEN];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
@@ -1627,10 +1616,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_chunk chunks[1];
+	struct virtchnl2_queue_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_queue_chunks, chunks);
 
 /**
  * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
@@ -1652,8 +1640,7 @@ struct virtchnl2_del_ena_dis_queues {
 
 	struct virtchnl2_queue_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(32, virtchnl2_del_ena_dis_queues, chunks.chunks);
 
 /**
  * struct virtchnl2_queue_vector - Queue to vector mapping
@@ -1697,10 +1684,10 @@ struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
 	__le16 num_qv_maps;
 	u8 pad[10];
-	struct virtchnl2_queue_vector qv_maps[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+	struct virtchnl2_queue_vector qv_maps[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_vector_maps, qv_maps);
 
 /**
  * struct virtchnl2_loopback - Loopback info
@@ -1752,10 +1739,10 @@ struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
 	__le16 num_mac_addr;
 	u8 pad[2];
-	struct virtchnl2_mac_addr mac_addr_list[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
+	struct virtchnl2_mac_addr mac_addr_list[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_mac_addr_list, mac_addr_list);
 
 /**
  * struct virtchnl2_promisc_info - Promiscuous type information
@@ -1854,10 +1841,10 @@ struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
+	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp,
+			       ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_get_ptp_caps - Get PTP capabilities
@@ -1882,8 +1869,8 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(88, virtchnl2_get_ptp_caps,
+			       tx_tstamp.ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
@@ -1918,13 +1905,12 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
+	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp_latches);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp_latches,
+			       tstamp_latches);
 
 static inline const char *virtchnl2_op_str(__le32 v_opcode)
 {
@@ -2002,314 +1988,4 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
 	}
 }
 
-/**
- * virtchnl2_vc_validate_vf_msg
- * @ver: Virtchnl2 version info
- * @v_opcode: Opcode for the message
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- *
- * Validate msg format against struct for each opcode.
- */
-static inline int
-virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
-			     u8 *msg, __le16 msglen)
-{
-	bool err_msg_format = false;
-	__le32 valid_len = 0;
-
-	/* Validate message length */
-	switch (v_opcode) {
-	case VIRTCHNL2_OP_VERSION:
-		valid_len = sizeof(struct virtchnl2_version_info);
-		break;
-	case VIRTCHNL2_OP_GET_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_capabilities);
-		break;
-	case VIRTCHNL2_OP_CREATE_VPORT:
-		valid_len = sizeof(struct virtchnl2_create_vport);
-		if (msglen >= valid_len) {
-			struct virtchnl2_create_vport *cvport =
-				(struct virtchnl2_create_vport *)msg;
-
-			if (cvport->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (cvport->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_CREATE_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_create_adi);
-		if (msglen >= valid_len) {
-			struct virtchnl2_non_flex_create_adi *cadi =
-				(struct virtchnl2_non_flex_create_adi *)msg;
-
-			if (cadi->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			if (cadi->vchunks.num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (cadi->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-			valid_len += (cadi->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_destroy_adi);
-		break;
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-		valid_len = sizeof(struct virtchnl2_vport);
-		break;
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_tx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_tx_queues *ctq =
-				(struct virtchnl2_config_tx_queues *)msg;
-			if (ctq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (ctq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_txq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_rx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_rx_queues *crq =
-				(struct virtchnl2_config_rx_queues *)msg;
-			if (crq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (crq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_rxq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUES:
-		valid_len = sizeof(struct virtchnl2_add_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_add_queues *add_q =
-				(struct virtchnl2_add_queues *)msg;
-
-			if (add_q->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (add_q->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_DEL_QUEUES:
-		valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_del_ena_dis_queues *qs =
-				(struct virtchnl2_del_ena_dis_queues *)msg;
-			if (qs->chunks.num_chunks == 0 ||
-			    qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (qs->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_add_queue_groups);
-		if (msglen != valid_len) {
-			__le64 offset;
-			__le32 i;
-			struct virtchnl2_add_queue_groups *add_queue_grp =
-				(struct virtchnl2_add_queue_groups *)msg;
-			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
-			struct virtchnl2_queue_group_info *grp_info;
-			__le32 chunk_size = sizeof(struct virtchnl2_queue_reg_chunk);
-			__le32 group_size = sizeof(struct virtchnl2_queue_group_info);
-			__le32 total_chunks_size;
-
-			if (groups->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (groups->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_info);
-			offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
-
-			for (i = 0; i < groups->num_queue_groups; i++) {
-				grp_info = (struct virtchnl2_queue_group_info *)
-						   ((u8 *)groups + offset);
-				if (grp_info->chunks.num_chunks == 0) {
-					offset += group_size;
-					continue;
-				}
-				total_chunks_size = (grp_info->chunks.num_chunks - 1) * chunk_size;
-				offset += group_size + total_chunks_size;
-				valid_len += total_chunks_size;
-			}
-		}
-		break;
-	case VIRTCHNL2_OP_DEL_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_delete_queue_groups);
-		if (msglen != valid_len) {
-			struct virtchnl2_delete_queue_groups *del_queue_grp =
-				(struct virtchnl2_delete_queue_groups *)msg;
-
-			if (del_queue_grp->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += (del_queue_grp->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_id);
-		}
-		break;
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-		valid_len = sizeof(struct virtchnl2_queue_vector_maps);
-		if (msglen >= valid_len) {
-			struct virtchnl2_queue_vector_maps *v_qp =
-				(struct virtchnl2_queue_vector_maps *)msg;
-			if (v_qp->num_qv_maps == 0 ||
-			    v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_qp->num_qv_maps - 1) *
-				      sizeof(struct virtchnl2_queue_vector);
-		}
-		break;
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_alloc_vectors);
-		if (msglen >= valid_len) {
-			struct virtchnl2_alloc_vectors *v_av =
-				(struct virtchnl2_alloc_vectors *)msg;
-
-			if (v_av->vchunks.num_vchunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (v_av->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_vector_chunks);
-		if (msglen >= valid_len) {
-			struct virtchnl2_vector_chunks *v_chunks =
-				(struct virtchnl2_vector_chunks *)msg;
-			if (v_chunks->num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_chunks->num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-		valid_len = sizeof(struct virtchnl2_rss_key);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_key *vrk =
-				(struct virtchnl2_rss_key *)msg;
-
-			if (vrk->key_len == 0) {
-				/* Zero length is allowed as input */
-				break;
-			}
-
-			valid_len += vrk->key_len - 1;
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-		valid_len = sizeof(struct virtchnl2_rss_lut);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_lut *vrl =
-				(struct virtchnl2_rss_lut *)msg;
-
-			if (vrl->lut_entries == 0) {
-				/* Zero entries is allowed as input */
-				break;
-			}
-
-			valid_len += (vrl->lut_entries - 1) * sizeof(vrl->lut);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-		valid_len = sizeof(struct virtchnl2_rss_hash);
-		break;
-	case VIRTCHNL2_OP_SET_SRIOV_VFS:
-		valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		valid_len = sizeof(struct virtchnl2_get_ptype_info);
-		break;
-	case VIRTCHNL2_OP_GET_STATS:
-		valid_len = sizeof(struct virtchnl2_vport_stats);
-		break;
-	case VIRTCHNL2_OP_GET_PORT_STATS:
-		valid_len = sizeof(struct virtchnl2_port_stats);
-		break;
-	case VIRTCHNL2_OP_RESET_VF:
-		break;
-	case VIRTCHNL2_OP_GET_PTP_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_get_ptp_caps *ptp_caps =
-			(struct virtchnl2_get_ptp_caps *)msg;
-
-			if (ptp_caps->tx_tstamp.num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_entry));
-		}
-		break;
-	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
-		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
-			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
-
-			if (tx_tstamp_latches->num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((tx_tstamp_latches->num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
-		}
-		break;
-	/* These are always errors coming from the VF */
-	case VIRTCHNL2_OP_EVENT:
-	case VIRTCHNL2_OP_UNKNOWN:
-	default:
-		return VIRTCHNL2_STATUS_ERR_ESRCH;
-	}
-	/* Few more checks */
-	if (err_msg_format || valid_len != msglen)
-		return VIRTCHNL2_STATUS_ERR_EINVAL;
-
-	return 0;
-}
-
 #endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index c46ed50eb5..f00202f43c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -366,7 +366,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	int err = -1;
 
 	size = sizeof(*p2p_queue_grps_info) +
-	       (p2p_queue_grps_info->qg_info.num_queue_groups - 1) *
+	       (p2p_queue_grps_info->num_queue_groups - 1) *
 		   sizeof(struct virtchnl2_queue_group_info);
 
 	memset(&args, 0, sizeof(args));
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e718e9e19..e707043bf7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2393,18 +2393,18 @@ cpfl_p2p_q_grps_add(struct idpf_vport *vport,
 	int ret;
 
 	p2p_queue_grps_info->vport_id = vport->vport_id;
-	p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
-	p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
+	p2p_queue_grps_info->num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
+	p2p_queue_grps_info->groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
+	p2p_queue_grps_info->groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
+	p2p_queue_grps_info->groups[0].rx_q_grp_info.rss_lut_size = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.tx_tc = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.priority = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.is_sp = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.pir_weight = 0;
 
 	ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info);
 	if (ret != 0) {
@@ -2423,13 +2423,13 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
 	struct virtchnl2_queue_reg_chunks *vc_chunks_out;
 	int i, type;
 
-	if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
+	if (p2p_q_vc_out_info->groups[0].qg_id.queue_group_type !=
 	    VIRTCHNL2_QUEUE_GROUP_P2P) {
 		PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
 		return -EINVAL;
 	}
 
-	vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
+	vc_chunks_out = &p2p_q_vc_out_info->groups[0].chunks;
 
 	for (i = 0; i < vc_chunks_out->num_chunks; i++) {
 		type = vc_chunks_out->chunks[i].type;
-- 
2.43.0



* [PATCH v2 18/21] common/idpf: enable flow steer capability for vports
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (16 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 17/21] drivers: add flex array support and fix issues Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 19/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
                     ` (3 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add virtchnl2_flow_types to be used for flow steering.

Add flow steering capability flags, flow types and action types
for vport create.
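A hedged sketch of how a driver might consume the new negotiation fields; the flag values mirror the patch, but the helper and its parameters are hypothetical and not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)	(1u << (n))

/* A few of the new flags, re-declared locally so the sketch is
 * self-contained; values mirror the patch. */
#define VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	BIT(3)
#define VIRTCHNL2_FLOW_IPV4_TCP			BIT(0)
#define VIRTCHNL2_ACTION_QUEUE			BIT(2)

/* Hypothetical check a driver could make before programming a rule:
 * the CP must have granted sideband steering on the vport, support
 * the requested flow type, and support the queue action. */
static int can_steer_ipv4_tcp_to_queue(uint16_t vport_flags,
				       uint64_t sideband_flow_types,
				       uint32_t sideband_flow_actions)
{
	return (vport_flags & VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER) &&
	       (sideband_flow_types & VIRTCHNL2_FLOW_IPV4_TCP) &&
	       (sideband_flow_actions & VIRTCHNL2_ACTION_QUEUE) ? 1 : 0;
}
```

All three fields come back from the CP in the virtchnl2_create_vport response, so a check like this would run once at vport init rather than per rule.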

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 60 ++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index fb017b1306..4ef4015429 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -269,6 +269,43 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
 };
 
+/**
+ * enum virtchnl2_action_types - Available actions for sideband flow steering
+ * @VIRTCHNL2_ACTION_DROP: Drop the packet
+ * @VIRTCHNL2_ACTION_PASSTHRU: Forward the packet to the next classifier/stage
+ * @VIRTCHNL2_ACTION_QUEUE: Forward the packet to a receive queue
+ * @VIRTCHNL2_ACTION_Q_GROUP: Forward the packet to a receive queue group
+ * @VIRTCHNL2_ACTION_MARK: Mark the packet with specific marker value
+ * @VIRTCHNL2_ACTION_COUNT: Increment the corresponding counter
+ */
+
+enum virtchnl2_action_types {
+	VIRTCHNL2_ACTION_DROP		= BIT(0),
+	VIRTCHNL2_ACTION_PASSTHRU	= BIT(1),
+	VIRTCHNL2_ACTION_QUEUE		= BIT(2),
+	VIRTCHNL2_ACTION_Q_GROUP	= BIT(3),
+	VIRTCHNL2_ACTION_MARK		= BIT(4),
+	VIRTCHNL2_ACTION_COUNT		= BIT(5),
+};
+
+/* Flow type capabilities for Flow Steering and Receive-Side Scaling */
+enum virtchnl2_flow_types {
+	VIRTCHNL2_FLOW_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_FLOW_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_FLOW_IPV4_SCTP	= BIT(2),
+	VIRTCHNL2_FLOW_IPV4_OTHER	= BIT(3),
+	VIRTCHNL2_FLOW_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_FLOW_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_FLOW_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_FLOW_IPV6_OTHER	= BIT(7),
+	VIRTCHNL2_FLOW_IPV4_AH		= BIT(8),
+	VIRTCHNL2_FLOW_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_FLOW_IPV4_AH_ESP	= BIT(10),
+	VIRTCHNL2_FLOW_IPV6_AH		= BIT(11),
+	VIRTCHNL2_FLOW_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_FLOW_IPV6_AH_ESP	= BIT(13),
+};
+
 /**
  * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
  * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
@@ -707,11 +744,16 @@ VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 /**
  * enum virtchnl2_vport_flags - Vport flags
  * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
- * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER: Inline flow steering enabled
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ: Inline flow steering enabled
+ * with explicit Rx queue action
+ * @VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER: Sideband flow steering enabled
  */
 enum virtchnl2_vport_flags {
 	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
-	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ	= BIT(2),
+	VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	= BIT(3),
 };
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
@@ -739,6 +781,14 @@ enum virtchnl2_vport_flags {
  * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
  * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
  * @reserved: Reserved bytes and cannot be used
+ * @inline_flow_types: Bit mask of supported inline-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_types: Bit mask of supported sideband-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_actions: Bit mask of supported action types
+ *  for sideband flow steering (See enum virtchnl2_action_types)
+ * @flow_steer_max_rules: Max rules allowed for inline and sideband
+ *  flow steering combined
  * @rss_algorithm: RSS algorithm
  * @rss_key_size: RSS key size
  * @rss_lut_size: RSS LUT size
@@ -768,7 +818,11 @@ struct virtchnl2_create_vport {
 	__le16 vport_flags;
 	__le64 rx_desc_ids;
 	__le64 tx_desc_ids;
-	u8 reserved[72];
+	u8 reserved[48];
+	__le64 inline_flow_types;
+	__le64 sideband_flow_types;
+	__le32 sideband_flow_actions;
+	__le32 flow_steer_max_rules;
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-- 
2.43.0



* [PATCH v2 19/21] common/idpf: add a new Tx context descriptor structure
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (17 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 18/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 20/21] common/idpf: remove idpf common file Soumyadeep Hore
                     ` (2 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a new structure for the context descriptor that contains
support for timesync packets, in which the index for timestamping
is set.
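As a sketch of how the new field masks might be used: the split of the timestamp index across tsyn_reg_l/tsyn_reg_h below is an assumption inferred from the masks, and pack_tsyn_regs() is a hypothetical helper, not a driver function:

```c
#include <assert.h>
#include <stdint.h>

/* Local 32-bit GENMASK stand-in and the masks from the patch. */
#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))

#define IDPF_TX_DESC_CTX_TSYN_L_M	GENMASK(15, 14)	/* bits 15:14 */
#define IDPF_TX_DESC_CTX_TSYN_H_M	GENMASK(15, 0)	/* bits 15:0  */

/* Hypothetical packing of a timestamp latch index into the two 16-bit
 * tsyn registers: the two low bits of the index land in bits 15:14 of
 * tsyn_reg_l, the remaining bits in tsyn_reg_h. Returns the registers
 * combined as (tsyn_reg_h << 16) | tsyn_reg_l for inspection. */
static uint32_t pack_tsyn_regs(uint32_t idx)
{
	uint16_t lo = (uint16_t)((idx << 14) & IDPF_TX_DESC_CTX_TSYN_L_M);
	uint16_t hi = (uint16_t)((idx >> 2) & IDPF_TX_DESC_CTX_TSYN_H_M);

	return ((uint32_t)hi << 16) | lo;
}
```

The actual hardware encoding of the index may differ; the point is only that the two GENMASK fields together leave room for an index wider than either 16-bit register field alone.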

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_lan_txrx.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/idpf_lan_txrx.h b/drivers/common/idpf/base/idpf_lan_txrx.h
index c9eaeb5d3f..be27973a33 100644
--- a/drivers/common/idpf/base/idpf_lan_txrx.h
+++ b/drivers/common/idpf/base/idpf_lan_txrx.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_LAN_TXRX_H_
@@ -286,6 +286,24 @@ struct idpf_flex_tx_tso_ctx_qw {
 };
 
 union idpf_flex_tx_ctx_desc {
+		/* DTYPE = IDPF_TX_DESC_DTYPE_CTX (0x01) */
+	struct  {
+		struct {
+			u8 rsv[4];
+			__le16 l2tag2;
+			u8 rsv_2[2];
+		} qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 tsyn_reg_l;
+#define IDPF_TX_DESC_CTX_TSYN_L_M	GENMASK(15, 14)
+			__le16 tsyn_reg_h;
+#define IDPF_TX_DESC_CTX_TSYN_H_M	GENMASK(15, 0)
+			__le16 mss;
+#define IDPF_TX_DESC_CTX_MSS_M		GENMASK(14, 2)
+		} qw1;
+	} tsyn;
+
 	/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
 	struct {
 		struct idpf_flex_tx_tso_ctx_qw qw0;
-- 
2.43.0



* [PATCH v2 20/21] common/idpf: remove idpf common file
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (18 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 19/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-04  8:06   ` [PATCH v2 21/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The idpf_common.c file is redundant in this implementation and is no
longer required, so remove it.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c | 382 -------------------------
 drivers/common/idpf/base/meson.build   |   1 -
 2 files changed, 383 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
deleted file mode 100644
index bb540345c2..0000000000
--- a/drivers/common/idpf/base/idpf_common.c
+++ /dev/null
@@ -1,382 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2024 Intel Corporation
- */
-
-#include "idpf_prototype.h"
-#include "idpf_type.h"
-#include <virtchnl.h>
-
-
-/**
- * idpf_set_mac_type - Sets MAC type
- * @hw: pointer to the HW structure
- *
- * This function sets the mac type of the adapter based on the
- * vendor ID and device ID stored in the hw structure.
- */
-int idpf_set_mac_type(struct idpf_hw *hw)
-{
-	int status = 0;
-
-	DEBUGFUNC("Set MAC type\n");
-
-	if (hw->vendor_id == IDPF_INTEL_VENDOR_ID) {
-		switch (hw->device_id) {
-		case IDPF_DEV_ID_PF:
-			hw->mac.type = IDPF_MAC_PF;
-			break;
-		case IDPF_DEV_ID_VF:
-			hw->mac.type = IDPF_MAC_VF;
-			break;
-		default:
-			hw->mac.type = IDPF_MAC_GENERIC;
-			break;
-		}
-	} else {
-		status = -ENODEV;
-	}
-
-	DEBUGOUT2("Setting MAC type found mac: %d, returns: %d\n",
-		  hw->mac.type, status);
-	return status;
-}
-
-/**
- *  idpf_init_hw - main initialization routine
- *  @hw: pointer to the hardware structure
- *  @ctlq_size: struct to pass ctlq size data
- */
-int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
-{
-	struct idpf_ctlq_create_info *q_info;
-	int status = 0;
-	struct idpf_ctlq_info *cq = NULL;
-
-	/* Setup initial control queues */
-	q_info = (struct idpf_ctlq_create_info *)
-		 idpf_calloc(hw, 2, sizeof(struct idpf_ctlq_create_info));
-	if (!q_info)
-		return -ENOMEM;
-
-	q_info[0].type             = IDPF_CTLQ_TYPE_MAILBOX_TX;
-	q_info[0].buf_size         = ctlq_size.asq_buf_size;
-	q_info[0].len              = ctlq_size.asq_ring_size;
-	q_info[0].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[0].reg.head         = PF_FW_ATQH;
-		q_info[0].reg.tail         = PF_FW_ATQT;
-		q_info[0].reg.len          = PF_FW_ATQLEN;
-		q_info[0].reg.bah          = PF_FW_ATQBAH;
-		q_info[0].reg.bal          = PF_FW_ATQBAL;
-		q_info[0].reg.len_mask     = PF_FW_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = PF_FW_ATQH_ATQH_M;
-	} else {
-		q_info[0].reg.head         = VF_ATQH;
-		q_info[0].reg.tail         = VF_ATQT;
-		q_info[0].reg.len          = VF_ATQLEN;
-		q_info[0].reg.bah          = VF_ATQBAH;
-		q_info[0].reg.bal          = VF_ATQBAL;
-		q_info[0].reg.len_mask     = VF_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = VF_ATQH_ATQH_M;
-	}
-
-	q_info[1].type             = IDPF_CTLQ_TYPE_MAILBOX_RX;
-	q_info[1].buf_size         = ctlq_size.arq_buf_size;
-	q_info[1].len              = ctlq_size.arq_ring_size;
-	q_info[1].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[1].reg.head         = PF_FW_ARQH;
-		q_info[1].reg.tail         = PF_FW_ARQT;
-		q_info[1].reg.len          = PF_FW_ARQLEN;
-		q_info[1].reg.bah          = PF_FW_ARQBAH;
-		q_info[1].reg.bal          = PF_FW_ARQBAL;
-		q_info[1].reg.len_mask     = PF_FW_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = PF_FW_ARQH_ARQH_M;
-	} else {
-		q_info[1].reg.head         = VF_ARQH;
-		q_info[1].reg.tail         = VF_ARQT;
-		q_info[1].reg.len          = VF_ARQLEN;
-		q_info[1].reg.bah          = VF_ARQBAH;
-		q_info[1].reg.bal          = VF_ARQBAL;
-		q_info[1].reg.len_mask     = VF_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = VF_ARQH_ARQH_M;
-	}
-
-	status = idpf_ctlq_init(hw, 2, q_info);
-	if (status) {
-		/* TODO return error */
-		idpf_free(hw, q_info);
-		return status;
-	}
-
-	LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, idpf_ctlq_info, cq_list) {
-		if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = cq;
-		else if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = cq;
-	}
-
-	/* TODO hardcode a mac addr for now */
-	hw->mac.addr[0] = 0x00;
-	hw->mac.addr[1] = 0x00;
-	hw->mac.addr[2] = 0x00;
-	hw->mac.addr[3] = 0x00;
-	hw->mac.addr[4] = 0x03;
-	hw->mac.addr[5] = 0x14;
-
-	idpf_free(hw, q_info);
-
-	return 0;
-}
-
-/**
- * idpf_send_msg_to_cp
- * @hw: pointer to the hardware structure
- * @v_opcode: opcodes for VF-PF communication
- * @v_retval: return error code
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- * @cmd_details: pointer to command details
- *
- * Send message to CP. By default, this message
- * is sent asynchronously, i.e. idpf_asq_send_command() does not wait for
- * completion before returning.
- */
-int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
-			int v_retval, u8 *msg, u16 msglen)
-{
-	struct idpf_ctlq_msg ctlq_msg = { 0 };
-	struct idpf_dma_mem dma_mem = { 0 };
-	int status;
-
-	ctlq_msg.opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg.func_id = 0;
-	ctlq_msg.data_len = msglen;
-	ctlq_msg.cookie.mbx.chnl_retval = v_retval;
-	ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
-
-	if (msglen > 0) {
-		dma_mem.va = (struct idpf_dma_mem *)
-			  idpf_alloc_dma_mem(hw, &dma_mem, msglen);
-		if (!dma_mem.va)
-			return -ENOMEM;
-
-		idpf_memcpy(dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
-		ctlq_msg.ctx.indirect.payload = &dma_mem;
-	}
-	status = idpf_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
-
-	if (dma_mem.va)
-		idpf_free_dma_mem(hw, &dma_mem);
-
-	return status;
-}
-
-/**
- *  idpf_asq_done - check if FW has processed the Admin Send Queue
- *  @hw: pointer to the hw struct
- *
- *  Returns true if the firmware has processed all descriptors on the
- *  admin send queue. Returns false if there are still requests pending.
- */
-bool idpf_asq_done(struct idpf_hw *hw)
-{
-	/* AQ designers suggest use of head for better
-	 * timing reliability than DD bit
-	 */
-	return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
-}
-
-/**
- * idpf_check_asq_alive
- * @hw: pointer to the hw struct
- *
- * Returns true if Queue is enabled else false.
- */
-bool idpf_check_asq_alive(struct idpf_hw *hw)
-{
-	if (hw->asq->reg.len)
-		return !!(rd32(hw, hw->asq->reg.len) &
-			  PF_FW_ATQLEN_ATQENABLE_M);
-
-	return false;
-}
-
-/**
- *  idpf_clean_arq_element
- *  @hw: pointer to the hw struct
- *  @e: event info from the receive descriptor, includes any buffers
- *  @pending: number of events that could be left to process
- *
- *  This function cleans one Admin Receive Queue element and returns
- *  the contents through e.  It can also return how many events are
- *  left to process through 'pending'
- */
-int idpf_clean_arq_element(struct idpf_hw *hw,
-			   struct idpf_arq_event_info *e, u16 *pending)
-{
-	struct idpf_dma_mem *dma_mem = NULL;
-	struct idpf_ctlq_msg msg = { 0 };
-	int status;
-	u16 msg_data_len;
-
-	*pending = 1;
-
-	status = idpf_ctlq_recv(hw->arq, pending, &msg);
-	if (status == -ENOMSG)
-		goto exit;
-
-	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
-	e->desc.opcode = msg.opcode;
-	e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
-	e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
-	e->desc.ret_val = msg.status;
-	e->desc.datalen = msg.data_len;
-	if (msg.data_len > 0) {
-		if (!msg.ctx.indirect.payload || !msg.ctx.indirect.payload->va ||
-		    !e->msg_buf) {
-			return -EFAULT;
-		}
-		e->buf_len = msg.data_len;
-		msg_data_len = msg.data_len;
-		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
-			    IDPF_DMA_TO_NONDMA);
-		dma_mem = msg.ctx.indirect.payload;
-	} else {
-		*pending = 0;
-	}
-
-	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
-
-exit:
-	return status;
-}
-
-/**
- *  idpf_deinit_hw - shutdown routine
- *  @hw: pointer to the hardware structure
- */
-void idpf_deinit_hw(struct idpf_hw *hw)
-{
-	hw->asq = NULL;
-	hw->arq = NULL;
-
-	idpf_ctlq_deinit(hw);
-}
-
-/**
- * idpf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a RESET message to the CPF. Does not wait for response from CPF
- * as none will be forthcoming. Immediately after calling this function,
- * the control queue should be shut down and (optionally) reinitialized.
- */
-int idpf_reset(struct idpf_hw *hw)
-{
-	return idpf_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
-				      0, NULL, 0);
-}
-
-/**
- * idpf_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- */
-STATIC int idpf_get_set_rss_lut(struct idpf_hw *hw, u16 vsi_id,
-				bool pf_lut, u8 *lut, u16 lut_size,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- */
-int idpf_get_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
-}
-
-/**
- * idpf_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- */
-int idpf_set_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * idpf_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- */
-STATIC int idpf_get_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-				struct idpf_get_set_rss_key_data *key,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- */
-int idpf_get_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * idpf_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-int idpf_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, true);
-}
-
-RTE_LOG_REGISTER_DEFAULT(idpf_common_logger, NOTICE);
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 96d7642209..649c44d0ae 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -2,7 +2,6 @@
 # Copyright(c) 2023 Intel Corporation
 
 sources += files(
-        'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
 )
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v2 21/21] drivers: adding type to idpf vc queue switch
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (19 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 20/21] common/idpf: remove idpf common file Soumyadeep Hore
@ 2024-06-04  8:06   ` Soumyadeep Hore
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-04  8:06 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a 'type' argument to idpf_vc_queue_switch() so callers specify
the queue type explicitly. This fixes the wrong queue type being
set in the virtchnl2 message.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/idpf_common_virtchnl.c |  8 ++------
 drivers/common/idpf/idpf_common_virtchnl.h |  2 +-
 drivers/net/cpfl/cpfl_ethdev.c             | 12 ++++++++----
 drivers/net/cpfl/cpfl_rxtx.c               | 12 ++++++++----
 drivers/net/idpf/idpf_rxtx.c               | 12 ++++++++----
 5 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f00202f43c..de511da788 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -769,15 +769,11 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
+		     bool rx, bool on, uint32_t type)
 {
-	uint32_t type;
 	int err, queue_id;
 
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+	if (rx)
 		queue_id = vport->chunks_info.rx_start_qid + qid;
 	else
 		queue_id = vport->chunks_info.tx_start_qid + qid;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 73446ded86..d6555978d5 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -31,7 +31,7 @@ int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
 int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-			 bool rx, bool on);
+			 bool rx, bool on, uint32_t type);
 __rte_internal
 int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e707043bf7..9e2a74371e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1907,7 +1907,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	int i, ret;
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
 			return ret;
@@ -1915,7 +1916,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
 			return ret;
@@ -1943,7 +1945,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
 			return ret;
@@ -1951,7 +1954,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
 			return ret;
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab8bec4645..47351ca102 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1200,7 +1200,8 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1252,7 +1253,8 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1283,7 +1285,8 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 						     rx_queue_id - cpfl_vport->nb_data_txq,
 						     true, false);
 	else
-		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+								VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1331,7 +1334,8 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 						     tx_queue_id - cpfl_vport->nb_data_txq,
 						     false, false);
 	else
-		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+								VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 64f2235580..858bbefe3b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -595,7 +595,8 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -646,7 +647,8 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -669,7 +671,8 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -701,7 +704,8 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 00/22] Update MEV TS Base Driver
  2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
                     ` (20 preceding siblings ...)
  2024-06-04  8:06   ` [PATCH v2 21/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
@ 2024-06-12  3:52   ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
                       ` (22 more replies)
  21 siblings, 23 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

These patches integrate the latest changes from the MEV TS IDPF base driver.

---
v3:
- Removed additional whitespace changes
- Fixed warnings of CI
- Updated documentation relating to MEV TS FW release
---

Soumyadeep Hore (22):
  common/idpf: added NVME CPF specific code with defines
  common/idpf: updated IDPF VF device ID
  common/idpf: added new virtchnl2 capability and vport flag
  common/idpf: moved the idpf HW into API header file
  common/idpf: avoid defensive programming
  common/idpf: use BIT ULL for large bitmaps
  common/idpf: convert data type to 'le'
  common/idpf: compress RXDID mask definitions
  common/idpf: refactor size check macro
  common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  common/idpf: use 'pad' and 'reserved' fields appropriately
  common/idpf: move related defines into enums
  common/idpf: avoid variable 0-init
  common/idpf: update in PTP message validation
  common/idpf: rename INLINE FLOW STEER to FLOW STEER
  common/idpf: add wmb before tail
  drivers: add flex array support and fix issues
  common/idpf: enable flow steer capability for vports
  common/idpf: add a new Tx context descriptor structure
  common/idpf: remove idpf common file
  drivers: adding type to idpf vc queue switch
  doc: updated the documentation for cpfl PMD

 doc/guides/nics/cpfl.rst                      |    2 +
 drivers/common/idpf/base/idpf_common.c        |  382 ---
 drivers/common/idpf/base/idpf_controlq.c      |   90 +-
 drivers/common/idpf/base/idpf_controlq.h      |  107 +-
 drivers/common/idpf/base/idpf_controlq_api.h  |   42 +-
 .../common/idpf/base/idpf_controlq_setup.c    |   18 +-
 drivers/common/idpf/base/idpf_devids.h        |    7 +-
 drivers/common/idpf/base/idpf_lan_txrx.h      |   20 +-
 drivers/common/idpf/base/idpf_osdep.h         |   72 +-
 drivers/common/idpf/base/idpf_type.h          |    4 +-
 drivers/common/idpf/base/meson.build          |    1 -
 drivers/common/idpf/base/virtchnl2.h          | 2388 +++++++++--------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  842 ++++--
 drivers/common/idpf/idpf_common_virtchnl.c    |   10 +-
 drivers/common/idpf/idpf_common_virtchnl.h    |    2 +-
 drivers/net/cpfl/cpfl_ethdev.c                |   40 +-
 drivers/net/cpfl/cpfl_rxtx.c                  |   12 +-
 drivers/net/idpf/idpf_rxtx.c                  |   12 +-
 18 files changed, 2067 insertions(+), 1984 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-14 10:33       ` Burakov, Anatoly
  2024-06-12  3:52     ` [PATCH v3 02/22] common/idpf: updated IDPF VF device ID Soumyadeep Hore
                       ` (21 subsequent siblings)
  22 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Remove the NVME dependency on dynamic memory allocations and use a
caller-prepared buffer instead (under the NVME_CPF define).

The changes do not affect other components.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c     | 23 +++++++++++++++++++-
 drivers/common/idpf/base/idpf_controlq_api.h |  7 +++++-
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index a82ca628de..bada75abfc 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #include "idpf_controlq.h"
@@ -145,8 +145,12 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	    qinfo->buf_size > IDPF_CTLQ_MAX_BUF_LEN)
 		return -EINVAL;
 
+#ifndef NVME_CPF
 	cq = (struct idpf_ctlq_info *)
 	     idpf_calloc(hw, 1, sizeof(struct idpf_ctlq_info));
+#else
+	cq = *cq_out;
+#endif
 	if (!cq)
 		return -ENOMEM;
 
@@ -172,10 +176,15 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	}
 
 	if (status)
+#ifdef NVME_CPF
+		return status;
+#else
 		goto init_free_q;
+#endif
 
 	if (is_rxq) {
 		idpf_ctlq_init_rxq_bufs(cq);
+#ifndef NVME_CPF
 	} else {
 		/* Allocate the array of msg pointers for TX queues */
 		cq->bi.tx_msg = (struct idpf_ctlq_msg **)
@@ -185,6 +194,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			status = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
+#endif
 	}
 
 	idpf_ctlq_setup_regs(cq, qinfo);
@@ -195,6 +205,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 
 	LIST_INSERT_HEAD(&hw->cq_list_head, cq, cq_list);
 
+#ifndef NVME_CPF
 	*cq_out = cq;
 	return status;
 
@@ -204,6 +215,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 init_free_q:
 	idpf_free(hw, cq);
 	cq = NULL;
+#endif
 
 	return status;
 }
@@ -232,8 +244,13 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
  * destroyed. This must be called prior to using the individual add/remove
  * APIs.
  */
+#ifdef NVME_CPF
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+			struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq)
+#else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info)
+#endif
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
 	int ret_code = 0;
@@ -244,6 +261,10 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 	for (i = 0; i < num_q; i++) {
 		struct idpf_ctlq_create_info *qinfo = q_info + i;
 
+#ifdef NVME_CPF
+		cq = *(ctlq + i);
+#endif
+
 		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
 		if (ret_code)
 			goto init_destroy_qs;
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 38f5d2df3c..6b6f3e84c2 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_API_H_
@@ -158,8 +158,13 @@ enum idpf_mbx_opc {
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
  */
+#ifdef NVME_CPF
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+			struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq);
+#else
 int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info);
+#endif
 
 /* Allocate and initialize a single control queue, which will be added to the
  * control queue list; returns a handle to the created control queue
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 02/22] common/idpf: updated IDPF VF device ID
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-14 10:36       ` Burakov, Anatoly
  2024-06-12  3:52     ` [PATCH v3 03/22] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
                       ` (20 subsequent siblings)
  22 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Update the IDPF VF device ID to 0x145C.

Also add a device ID for the S-IOV device.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_devids.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_devids.h b/drivers/common/idpf/base/idpf_devids.h
index c47762d5b7..0eb2def264 100644
--- a/drivers/common/idpf/base/idpf_devids.h
+++ b/drivers/common/idpf/base/idpf_devids.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_DEVIDS_H_
@@ -10,7 +10,10 @@
 
 /* Device IDs */
 #define IDPF_DEV_ID_PF			0x1452
-#define IDPF_DEV_ID_VF			0x1889
+#define IDPF_DEV_ID_VF			0x145C
+#ifdef SIOV_SUPPORT
+#define IDPF_DEV_ID_VF_SIOV		0x0DD5
+#endif /* SIOV_SUPPORT */
 
 
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 03/22] common/idpf: added new virtchnl2 capability and vport flag
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 02/22] common/idpf: updated IDPF VF device ID Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 04/22] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
                       ` (19 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Remove the unused VIRTCHNL2_CAP_ADQ capability and reuse its bit for
the VIRTCHNL2_CAP_INLINE_FLOW_STEER capability.

Add the VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA vport flag to allow
enabling/disabling inline flow steering per vport.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 3900b784d0..6eff0f1ea1 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _VIRTCHNL2_H_
@@ -220,7 +220,7 @@
 #define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
 #define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
 #define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_ADQ			BIT(6)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
 #define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
 #define VIRTCHNL2_CAP_PROMISC			BIT(8)
 #define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
@@ -593,7 +593,8 @@ struct virtchnl2_queue_reg_chunks {
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
 /* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT	BIT(0)
+#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
+#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 04/22] common/idpf: moved the idpf HW into API header file
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (2 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 03/22] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 05/22] common/idpf: avoid defensive programming Soumyadeep Hore
                       ` (18 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

There is a circular header dependency around the idpf_hw structure:
controlq.h contains the structure definition, which the osdep header
file needs, while controlq.h in turn needs the contents of the osdep
header file, so each depends on the other.

Moving the idpf_hw definition from controlq.h into the controlq API
header (and its supporting types into the osdep header) resolves the
problem.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c       |   4 +-
 drivers/common/idpf/base/idpf_controlq.h     | 107 +------------------
 drivers/common/idpf/base/idpf_controlq_api.h |  35 ++++++
 drivers/common/idpf/base/idpf_osdep.h        |  72 ++++++++++++-
 drivers/common/idpf/base/idpf_type.h         |   4 +-
 5 files changed, 111 insertions(+), 111 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 7181a7f14c..bb540345c2 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
-#include "idpf_type.h"
 #include "idpf_prototype.h"
+#include "idpf_type.h"
 #include <virtchnl.h>
 
 
diff --git a/drivers/common/idpf/base/idpf_controlq.h b/drivers/common/idpf/base/idpf_controlq.h
index 80ca06e632..3f74b5a898 100644
--- a/drivers/common/idpf/base/idpf_controlq.h
+++ b/drivers/common/idpf/base/idpf_controlq.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_H_
@@ -96,111 +96,6 @@ struct idpf_mbxq_desc {
 	u32 pf_vf_id;		/* used by CP when sending to PF */
 };
 
-enum idpf_mac_type {
-	IDPF_MAC_UNKNOWN = 0,
-	IDPF_MAC_PF,
-	IDPF_MAC_VF,
-	IDPF_MAC_GENERIC
-};
-
-#define ETH_ALEN 6
-
-struct idpf_mac_info {
-	enum idpf_mac_type type;
-	u8 addr[ETH_ALEN];
-	u8 perm_addr[ETH_ALEN];
-};
-
-#define IDPF_AQ_LINK_UP 0x1
-
-/* PCI bus types */
-enum idpf_bus_type {
-	idpf_bus_type_unknown = 0,
-	idpf_bus_type_pci,
-	idpf_bus_type_pcix,
-	idpf_bus_type_pci_express,
-	idpf_bus_type_reserved
-};
-
-/* PCI bus speeds */
-enum idpf_bus_speed {
-	idpf_bus_speed_unknown	= 0,
-	idpf_bus_speed_33	= 33,
-	idpf_bus_speed_66	= 66,
-	idpf_bus_speed_100	= 100,
-	idpf_bus_speed_120	= 120,
-	idpf_bus_speed_133	= 133,
-	idpf_bus_speed_2500	= 2500,
-	idpf_bus_speed_5000	= 5000,
-	idpf_bus_speed_8000	= 8000,
-	idpf_bus_speed_reserved
-};
-
-/* PCI bus widths */
-enum idpf_bus_width {
-	idpf_bus_width_unknown	= 0,
-	idpf_bus_width_pcie_x1	= 1,
-	idpf_bus_width_pcie_x2	= 2,
-	idpf_bus_width_pcie_x4	= 4,
-	idpf_bus_width_pcie_x8	= 8,
-	idpf_bus_width_32	= 32,
-	idpf_bus_width_64	= 64,
-	idpf_bus_width_reserved
-};
-
-/* Bus parameters */
-struct idpf_bus_info {
-	enum idpf_bus_speed speed;
-	enum idpf_bus_width width;
-	enum idpf_bus_type type;
-
-	u16 func;
-	u16 device;
-	u16 lan_id;
-	u16 bus_id;
-};
-
-/* Function specific capabilities */
-struct idpf_hw_func_caps {
-	u32 num_alloc_vfs;
-	u32 vf_base_id;
-};
-
-/* Define the APF hardware struct to replace other control structs as needed
- * Align to ctlq_hw_info
- */
-struct idpf_hw {
-	/* Some part of BAR0 address space is not mapped by the LAN driver.
-	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
-	 * will have its own base hardware address when mapped.
-	 */
-	u8 *hw_addr;
-	u8 *hw_addr_region2;
-	u64 hw_addr_len;
-	u64 hw_addr_region2_len;
-
-	void *back;
-
-	/* control queue - send and receive */
-	struct idpf_ctlq_info *asq;
-	struct idpf_ctlq_info *arq;
-
-	/* subsystem structs */
-	struct idpf_mac_info mac;
-	struct idpf_bus_info bus;
-	struct idpf_hw_func_caps func_caps;
-
-	/* pci info */
-	u16 device_id;
-	u16 vendor_id;
-	u16 subsystem_device_id;
-	u16 subsystem_vendor_id;
-	u8 revision_id;
-	bool adapter_stopped;
-
-	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
-};
-
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
 			     struct idpf_ctlq_info *cq);
 
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 6b6f3e84c2..f3a397ea58 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -154,6 +154,41 @@ enum idpf_mbx_opc {
 	idpf_mbq_opc_send_msg_to_peer_drv	= 0x0804,
 };
 
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct idpf_hw {
+	/* Some part of BAR0 address space is not mapped by the LAN driver.
+	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
+	 * will have its own base hardware address when mapped.
+	 */
+	u8 *hw_addr;
+	u8 *hw_addr_region2;
+	u64 hw_addr_len;
+	u64 hw_addr_region2_len;
+
+	void *back;
+
+	/* control queue - send and receive */
+	struct idpf_ctlq_info *asq;
+	struct idpf_ctlq_info *arq;
+
+	/* subsystem structs */
+	struct idpf_mac_info mac;
+	struct idpf_bus_info bus;
+	struct idpf_hw_func_caps func_caps;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	bool adapter_stopped;
+
+	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
+};
+
 /* API supported for control queue management */
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
diff --git a/drivers/common/idpf/base/idpf_osdep.h b/drivers/common/idpf/base/idpf_osdep.h
index 74a376cb13..b2af8f443d 100644
--- a/drivers/common/idpf/base/idpf_osdep.h
+++ b/drivers/common/idpf/base/idpf_osdep.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_OSDEP_H_
@@ -353,4 +353,74 @@ idpf_hweight32(u32 num)
 
 #endif
 
+enum idpf_mac_type {
+	IDPF_MAC_UNKNOWN = 0,
+	IDPF_MAC_PF,
+	IDPF_MAC_VF,
+	IDPF_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct idpf_mac_info {
+	enum idpf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+};
+
+#define IDPF_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum idpf_bus_type {
+	idpf_bus_type_unknown = 0,
+	idpf_bus_type_pci,
+	idpf_bus_type_pcix,
+	idpf_bus_type_pci_express,
+	idpf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum idpf_bus_speed {
+	idpf_bus_speed_unknown	= 0,
+	idpf_bus_speed_33	= 33,
+	idpf_bus_speed_66	= 66,
+	idpf_bus_speed_100	= 100,
+	idpf_bus_speed_120	= 120,
+	idpf_bus_speed_133	= 133,
+	idpf_bus_speed_2500	= 2500,
+	idpf_bus_speed_5000	= 5000,
+	idpf_bus_speed_8000	= 8000,
+	idpf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum idpf_bus_width {
+	idpf_bus_width_unknown	= 0,
+	idpf_bus_width_pcie_x1	= 1,
+	idpf_bus_width_pcie_x2	= 2,
+	idpf_bus_width_pcie_x4	= 4,
+	idpf_bus_width_pcie_x8	= 8,
+	idpf_bus_width_32	= 32,
+	idpf_bus_width_64	= 64,
+	idpf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct idpf_bus_info {
+	enum idpf_bus_speed speed;
+	enum idpf_bus_width width;
+	enum idpf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct idpf_hw_func_caps {
+	u32 num_alloc_vfs;
+	u32 vf_base_id;
+};
+
 #endif /* _IDPF_OSDEP_H_ */
diff --git a/drivers/common/idpf/base/idpf_type.h b/drivers/common/idpf/base/idpf_type.h
index a22d28f448..2ff818035b 100644
--- a/drivers/common/idpf/base/idpf_type.h
+++ b/drivers/common/idpf/base/idpf_type.h
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_TYPE_H_
 #define _IDPF_TYPE_H_
 
-#include "idpf_controlq.h"
+#include "idpf_osdep.h"
 
 #define UNREFERENCED_XPARAMETER
 #define UNREFERENCED_1PARAMETER(_p)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 05/22] common/idpf: avoid defensive programming
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (3 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 04/22] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-14 12:16       ` Burakov, Anatoly
  2024-06-12  3:52     ` [PATCH v3 06/22] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
                       ` (17 subsequent siblings)
  22 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on upstream feedback, the driver should not use defensive
programming, i.e. unnecessary NULL pointer and other conditional
checks in the code flow as a fallback; instead, it should fail and
the bug should be fixed properly.

Some of the checks identified and removed/wrapped in this patch:
- As the control queue is freed and deleted from the list after the
idpf_ctlq_shutdown call, there is no need for the ring_size check in
idpf_ctlq_shutdown.
- From the upstream perspective, the shared code is part of the Linux
driver, and it does not make sense to check for zero 'len' and
'buf_size' in idpf_ctlq_add: the driver provides valid sizes to
begin with, and if it does not, that is a bug.
- Remove the cq NULL and zero ring_size checks wherever possible, as
the IDPF driver code flow never passes a NULL cq pointer to the
control queue callbacks. If it does, that is a bug that should be
fixed rather than worked around by checking for a NULL pointer and
falling back.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index bada75abfc..b5ba9c3bd0 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -98,9 +98,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
 	idpf_acquire_lock(&cq->cq_lock);
 
-	if (!cq->ring_size)
-		goto shutdown_sq_out;
-
 #ifdef SIMICS_BUILD
 	wr32(hw, cq->reg.head, 0);
 	wr32(hw, cq->reg.tail, 0);
@@ -115,7 +112,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 	/* Set ring_size to 0 to indicate uninitialized queue */
 	cq->ring_size = 0;
 
-shutdown_sq_out:
 	idpf_release_lock(&cq->cq_lock);
 	idpf_destroy_lock(&cq->cq_lock);
 }
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 06/22] common/idpf: use BIT ULL for large bitmaps
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (4 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 05/22] common/idpf: avoid defensive programming Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 07/22] common/idpf: convert data type to 'le' Soumyadeep Hore
                       ` (16 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

For bitmaps wider than 32 bits, use the BIT_ULL macro instead of the
BIT macro, as reported by the compiler.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 70 ++++++++++++++--------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 6eff0f1ea1..851c6629dd 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -175,20 +175,20 @@
 /* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
  * Receive Side Scaling Flow type capability flags
  */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT(13)
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
 
 /* VIRTCHNL2_HEADER_SPLIT_CAPS
  * Header split capability flags
@@ -214,32 +214,32 @@
  * TX_VLAN: VLAN tag insertion
  * RX_VLAN: VLAN tag stripping
  */
-#define VIRTCHNL2_CAP_RDMA			BIT(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
-#define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT(11)
+#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
+#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
+#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
+#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
+#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
+#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
 /* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT(12)
-#define VIRTCHNL2_CAP_PTP			BIT(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT(15)
-#define VIRTCHNL2_CAP_FDIR			BIT(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT(19)
+#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
+#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
+#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
+#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
+#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
+#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
 /* Enable miss completion types plus ability to detect a miss completion if a
  * reserved bit is set in a standared completion's tag.
  */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT(20)
+#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
 /* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT(63)
+#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
 
 /* VIRTCHNL2_TXQ_SCHED_MODE
  * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 07/22] common/idpf: convert data type to 'le'
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (5 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 06/22] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-14 12:19       ` Burakov, Anatoly
  2024-06-12  3:52     ` [PATCH v3 08/22] common/idpf: compress RXDID mask definitions Soumyadeep Hore
                       ` (15 subsequent siblings)
  22 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The 'u32' data type is used for the struct members in
'virtchnl2_version_info', but they should be '__le32'.
Make the change accordingly.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 851c6629dd..1f59730297 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -471,8 +471,8 @@
  * error regardless of version mismatch.
  */
 struct virtchnl2_version_info {
-	u32 major;
-	u32 minor;
+	__le32 major;
+	__le32 minor;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 08/22] common/idpf: compress RXDID mask definitions
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (6 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 07/22] common/idpf: convert data type to 'le' Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 09/22] common/idpf: refactor size check macro Soumyadeep Hore
                       ` (14 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of spelling out the long RXDID mask definitions, introduce a
macro that combines the common prefix of the RXDID definitions, i.e.
VIRTCHNL2_RXDID_, with the bit passed in to generate the mask.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 31 ++++++++++---------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index e6e782a219..f632271788 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -58,22 +58,23 @@
 /* VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		BIT(VIRTCHNL2_RXDID_0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		BIT(VIRTCHNL2_RXDID_1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
+#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
 /* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
 /* 22 through 63 are reserved */
 
 /* Rx */
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 09/22] common/idpf: refactor size check macro
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (7 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 08/22] common/idpf: compress RXDID mask definitions Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-14 12:21       ` Burakov, Anatoly
  2024-06-12  3:52     ` [PATCH v3 10/22] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
                       ` (13 subsequent siblings)
  22 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of using 'divide by 0' to check the struct length,
use the static_assert macro.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 1f59730297..f8b97f2e06 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -41,15 +41,12 @@
 /* State Machine error - Command sequence problem */
 #define	VIRTCHNL2_STATUS_ERR_ESM	201
 
-/* These macros are used to generate compilation errors if a structure/union
- * is not exactly the correct length. It gives a divide by zero error if the
- * structure/union is not of the correct size, otherwise it creates an enum
- * that is never used.
+/* This macro is used to generate compilation errors if a structure
+ * is not exactly the correct length.
  */
-#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
-#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
+	static_assert((n) == sizeof(struct X),	\
+		      "Structure length does not match with the expected value")
 
 /* New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 10/22] common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (8 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 09/22] common/idpf: refactor size check macro Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 11/22] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
                       ` (12 subsequent siblings)
  22 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The mask for VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M was defined incorrectly:
its shift argument referenced the _M macro itself instead of the _S
shift value. This patch fixes it.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index f632271788..9e04cf8628 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -111,7 +111,7 @@
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M)
+	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 11/22] common/idpf: use 'pad' and 'reserved' fields appropriately
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (9 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 10/22] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 12/22] common/idpf: move related defines into enums Soumyadeep Hore
                       ` (11 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

'pad' naming is used when the field is an actual padding byte or is
meant for future addition of new fields, whereas 'reserved' is used
only when the field is reserved and cannot be used for any other
purpose.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 71 +++++++++++++++-------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index f8b97f2e06..d007c2f540 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -95,7 +95,7 @@
 #define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
 #define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
 #define		VIRTCHNL2_OP_GET_PORT_STATS		540
-	/* TimeSync opcodes */
+/* TimeSync opcodes */
 #define		VIRTCHNL2_OP_GET_PTP_CAPS		541
 #define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
 
@@ -559,7 +559,7 @@ struct virtchnl2_get_capabilities {
 	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
-	u8 reserved[10];
+	u8 pad1[10];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
@@ -575,7 +575,7 @@ struct virtchnl2_queue_reg_chunk {
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
-	u8 reserved[4];
+	u8 pad1[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
@@ -583,7 +583,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -648,7 +648,7 @@ struct virtchnl2_create_vport {
 	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
 
-	u8 reserved[20];
+	u8 pad2[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -663,7 +663,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
@@ -708,7 +708,7 @@ struct virtchnl2_txq_info {
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
 
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
@@ -724,7 +724,7 @@ struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 
-	u8 reserved[10];
+	u8 pad[10];
 	struct virtchnl2_txq_info qinfo[1];
 };
 
@@ -749,7 +749,7 @@ struct virtchnl2_rxq_info {
 
 	__le16 ring_len;
 	u8 buffer_notif_stride;
-	u8 pad[1];
+	u8 pad;
 
 	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
@@ -768,16 +768,15 @@ struct virtchnl2_rxq_info {
 	 * if this field is set
 	 */
 	u8 bufq2_ena;
-	u8 pad2[3];
+	u8 pad1[3];
 
 	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
 
-	u8 reserved[16];
+	u8 pad2[16];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
 /* VIRTCHNL2_OP_CONFIG_RX_QUEUES
@@ -791,7 +790,7 @@ struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 
-	u8 reserved[18];
+	u8 pad[18];
 	struct virtchnl2_rxq_info qinfo[1];
 };
 
@@ -810,7 +809,8 @@ struct virtchnl2_add_queues {
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
 	__le16 num_rx_bufq;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -948,7 +948,7 @@ struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
-	__le16 pad1;
+	__le16 pad;
 
 	/* Register offsets and spacing provided by CP.
 	 * dynamic control registers are used for enabling/disabling/re-enabling
@@ -969,15 +969,15 @@ struct virtchnl2_vector_chunk {
 	 * where n=0..2
 	 */
 	__le32 itrn_index_spacing;
-	u8 reserved[4];
+	u8 pad1[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
 /* Structure to specify several chunks of contiguous interrupt vectors */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -992,7 +992,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunks vchunks;
 };
 
@@ -1014,8 +1015,9 @@ struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
-	u8 reserved[4];
-	__le32 lut[1]; /* RSS lookup table */
+	u8 pad[4];
+	/* RSS lookup table */
+	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
@@ -1039,7 +1041,7 @@ struct virtchnl2_rss_hash {
 	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
@@ -1063,7 +1065,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 /* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -1073,7 +1075,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 /* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -1100,8 +1102,7 @@ struct virtchnl2_non_flex_create_adi {
 	__le16 adi_index;
 	/* CP populates ADI id */
 	__le16 adi_id;
-	u8 reserved[64];
-	u8 pad[4];
+	u8 pad[68];
 	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
 	/* PF sends vector chunks to CP */
@@ -1117,7 +1118,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
-	u8 reserved[2];
+	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
@@ -1220,7 +1221,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 rx_runt_errors;
 	__le64 rx_illegal_bytes;
 	__le64 rx_total_pkts;
-	u8 rx_reserved[128];
+	u8 rx_pad[128];
 
 	__le64 tx_bytes;
 	__le64 tx_unicast_pkts;
@@ -1239,7 +1240,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 tx_xoff_events;
 	__le64 tx_dropped_link_down_pkts;
 	__le64 tx_total_pkts;
-	u8 tx_reserved[128];
+	u8 tx_pad[128];
 	__le64 mac_local_faults;
 	__le64 mac_remote_faults;
 };
@@ -1273,7 +1274,8 @@ struct virtchnl2_event {
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
-	u8 pad[1];
+	u8 pad;
+
 	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
@@ -1301,7 +1303,7 @@ struct virtchnl2_queue_chunk {
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
@@ -1309,7 +1311,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_chunk chunks[1];
 };
 
@@ -1326,7 +1328,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_chunks chunks;
 };
 
@@ -1343,7 +1346,7 @@ struct virtchnl2_queue_vector {
 
 	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 12/22] common/idpf: move related defines into enums
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (10 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 11/22] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 13/22] common/idpf: avoid variable 0-init Soumyadeep Hore
                       ` (10 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Change all groups of related defines to enums. The enum names are
chosen to follow the common part of the naming pattern as closely
as possible.

Replace the common labels in the comments with the enum names.

While at it, modify the header description based on upstream feedback.

Some variable names are modified and comments are updated to be more
descriptive.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h          | 1847 ++++++++++-------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  843 +++++---
 2 files changed, 1686 insertions(+), 1004 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index d007c2f540..e76ccbd46f 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -8,317 +8,396 @@
 /* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
  * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
  * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ *
+ * PF/VF uses the virtchnl interface defined in this header file to communicate
+ * with device Control Plane (CP). Driver and the CP may run on different
+ * platforms with different endianness. To avoid byte order discrepancies,
+ * all the structures in this header follow little-endian format.
+ *
+ * This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
  */
 
 #include "virtchnl2_lan_desc.h"
 
-/* VIRTCHNL2_ERROR_CODES */
-/* success */
-#define	VIRTCHNL2_STATUS_SUCCESS	0
-/* Operation not permitted, used in case of command not permitted for sender */
-#define	VIRTCHNL2_STATUS_ERR_EPERM	1
-/* Bad opcode - virtchnl interface problem */
-#define	VIRTCHNL2_STATUS_ERR_ESRCH	3
-/* I/O error - HW access error */
-#define	VIRTCHNL2_STATUS_ERR_EIO	5
-/* No such resource - Referenced resource is not allacated */
-#define	VIRTCHNL2_STATUS_ERR_ENXIO	6
-/* Permission denied - Resource is not permitted to caller */
-#define	VIRTCHNL2_STATUS_ERR_EACCES	13
-/* Device or resource busy - In case shared resource is in use by others */
-#define	VIRTCHNL2_STATUS_ERR_EBUSY	16
-/* Object already exists and not free */
-#define	VIRTCHNL2_STATUS_ERR_EEXIST	17
-/* Invalid input argument in command */
-#define	VIRTCHNL2_STATUS_ERR_EINVAL	22
-/* No space left or allocation failure */
-#define	VIRTCHNL2_STATUS_ERR_ENOSPC	28
-/* Parameter out of range */
-#define	VIRTCHNL2_STATUS_ERR_ERANGE	34
-
-/* Op not allowed in current dev mode */
-#define	VIRTCHNL2_STATUS_ERR_EMODE	200
-/* State Machine error - Command sequence problem */
-#define	VIRTCHNL2_STATUS_ERR_ESM	201
-
-/* This macro is used to generate compilation errors if a structure
+/**
+ * enum virtchnl2_status - Error codes.
+ * @VIRTCHNL2_STATUS_SUCCESS: Success
+ * @VIRTCHNL2_STATUS_ERR_EPERM: Operation not permitted, used in case of command
+ *				not permitted for sender
+ * @VIRTCHNL2_STATUS_ERR_ESRCH: Bad opcode - virtchnl interface problem
+ * @VIRTCHNL2_STATUS_ERR_EIO: I/O error - HW access error
+ * @VIRTCHNL2_STATUS_ERR_ENXIO: No such resource - Referenced resource is not
+ *				allocated
+ * @VIRTCHNL2_STATUS_ERR_EACCES: Permission denied - Resource is not permitted
+ *				 to caller
+ * @VIRTCHNL2_STATUS_ERR_EBUSY: Device or resource busy - In case shared
+ *				resource is in use by others
+ * @VIRTCHNL2_STATUS_ERR_EEXIST: Object already exists and not free
+ * @VIRTCHNL2_STATUS_ERR_EINVAL: Invalid input argument in command
+ * @VIRTCHNL2_STATUS_ERR_ENOSPC: No space left or allocation failure
+ * @VIRTCHNL2_STATUS_ERR_ERANGE: Parameter out of range
+ * @VIRTCHNL2_STATUS_ERR_EMODE: Operation not allowed in current dev mode
+ * @VIRTCHNL2_STATUS_ERR_ESM: State Machine error - Command sequence problem
+ */
+enum virtchnl2_status {
+	VIRTCHNL2_STATUS_SUCCESS	= 0,
+	VIRTCHNL2_STATUS_ERR_EPERM	= 1,
+	VIRTCHNL2_STATUS_ERR_ESRCH	= 3,
+	VIRTCHNL2_STATUS_ERR_EIO	= 5,
+	VIRTCHNL2_STATUS_ERR_ENXIO	= 6,
+	VIRTCHNL2_STATUS_ERR_EACCES	= 13,
+	VIRTCHNL2_STATUS_ERR_EBUSY	= 16,
+	VIRTCHNL2_STATUS_ERR_EEXIST	= 17,
+	VIRTCHNL2_STATUS_ERR_EINVAL	= 22,
+	VIRTCHNL2_STATUS_ERR_ENOSPC	= 28,
+	VIRTCHNL2_STATUS_ERR_ERANGE	= 34,
+	VIRTCHNL2_STATUS_ERR_EMODE	= 200,
+	VIRTCHNL2_STATUS_ERR_ESM	= 201,
+};
+
+/**
+ * This macro is used to generate compilation errors if a structure
  * is not exactly the correct length.
  */
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
 
-/* New major set of opcodes introduced and so leaving room for
+/**
+ * New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
  * be used if both the PF and VF have successfully negotiated the
- * VIRTCHNL version as 2.0 during VIRTCHNL22_OP_VERSION exchange.
- */
-#define		VIRTCHNL2_OP_UNKNOWN			0
-#define		VIRTCHNL2_OP_VERSION			1
-#define		VIRTCHNL2_OP_GET_CAPS			500
-#define		VIRTCHNL2_OP_CREATE_VPORT		501
-#define		VIRTCHNL2_OP_DESTROY_VPORT		502
-#define		VIRTCHNL2_OP_ENABLE_VPORT		503
-#define		VIRTCHNL2_OP_DISABLE_VPORT		504
-#define		VIRTCHNL2_OP_CONFIG_TX_QUEUES		505
-#define		VIRTCHNL2_OP_CONFIG_RX_QUEUES		506
-#define		VIRTCHNL2_OP_ENABLE_QUEUES		507
-#define		VIRTCHNL2_OP_DISABLE_QUEUES		508
-#define		VIRTCHNL2_OP_ADD_QUEUES			509
-#define		VIRTCHNL2_OP_DEL_QUEUES			510
-#define		VIRTCHNL2_OP_MAP_QUEUE_VECTOR		511
-#define		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		512
-#define		VIRTCHNL2_OP_GET_RSS_KEY		513
-#define		VIRTCHNL2_OP_SET_RSS_KEY		514
-#define		VIRTCHNL2_OP_GET_RSS_LUT		515
-#define		VIRTCHNL2_OP_SET_RSS_LUT		516
-#define		VIRTCHNL2_OP_GET_RSS_HASH		517
-#define		VIRTCHNL2_OP_SET_RSS_HASH		518
-#define		VIRTCHNL2_OP_SET_SRIOV_VFS		519
-#define		VIRTCHNL2_OP_ALLOC_VECTORS		520
-#define		VIRTCHNL2_OP_DEALLOC_VECTORS		521
-#define		VIRTCHNL2_OP_EVENT			522
-#define		VIRTCHNL2_OP_GET_STATS			523
-#define		VIRTCHNL2_OP_RESET_VF			524
-	/* opcode 525 is reserved */
-#define		VIRTCHNL2_OP_GET_PTYPE_INFO		526
-	/* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
-	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ * VIRTCHNL version as 2.0 during VIRTCHNL2_OP_VERSION exchange.
+ */
+enum virtchnl2_op {
+	VIRTCHNL2_OP_UNKNOWN			= 0,
+	VIRTCHNL2_OP_VERSION			= 1,
+	VIRTCHNL2_OP_GET_CAPS			= 500,
+	VIRTCHNL2_OP_CREATE_VPORT		= 501,
+	VIRTCHNL2_OP_DESTROY_VPORT		= 502,
+	VIRTCHNL2_OP_ENABLE_VPORT		= 503,
+	VIRTCHNL2_OP_DISABLE_VPORT		= 504,
+	VIRTCHNL2_OP_CONFIG_TX_QUEUES		= 505,
+	VIRTCHNL2_OP_CONFIG_RX_QUEUES		= 506,
+	VIRTCHNL2_OP_ENABLE_QUEUES		= 507,
+	VIRTCHNL2_OP_DISABLE_QUEUES		= 508,
+	VIRTCHNL2_OP_ADD_QUEUES			= 509,
+	VIRTCHNL2_OP_DEL_QUEUES			= 510,
+	VIRTCHNL2_OP_MAP_QUEUE_VECTOR		= 511,
+	VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		= 512,
+	VIRTCHNL2_OP_GET_RSS_KEY		= 513,
+	VIRTCHNL2_OP_SET_RSS_KEY		= 514,
+	VIRTCHNL2_OP_GET_RSS_LUT		= 515,
+	VIRTCHNL2_OP_SET_RSS_LUT		= 516,
+	VIRTCHNL2_OP_GET_RSS_HASH		= 517,
+	VIRTCHNL2_OP_SET_RSS_HASH		= 518,
+	VIRTCHNL2_OP_SET_SRIOV_VFS		= 519,
+	VIRTCHNL2_OP_ALLOC_VECTORS		= 520,
+	VIRTCHNL2_OP_DEALLOC_VECTORS		= 521,
+	VIRTCHNL2_OP_EVENT			= 522,
+	VIRTCHNL2_OP_GET_STATS			= 523,
+	VIRTCHNL2_OP_RESET_VF			= 524,
+	/* Opcode 525 is reserved */
+	VIRTCHNL2_OP_GET_PTYPE_INFO		= 526,
+	/* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
 	 */
-	/* opcodes 529, 530, and 531 are reserved */
-#define		VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	532
-#define		VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	533
-#define		VIRTCHNL2_OP_LOOPBACK			534
-#define		VIRTCHNL2_OP_ADD_MAC_ADDR		535
-#define		VIRTCHNL2_OP_DEL_MAC_ADDR		536
-#define		VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	537
-#define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
-#define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
-#define		VIRTCHNL2_OP_GET_PORT_STATS		540
-/* TimeSync opcodes */
-#define		VIRTCHNL2_OP_GET_PTP_CAPS		541
-#define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
+	/* Opcodes 529, 530, and 531 are reserved */
+	VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	= 532,
+	VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	= 533,
+	VIRTCHNL2_OP_LOOPBACK			= 534,
+	VIRTCHNL2_OP_ADD_MAC_ADDR		= 535,
+	VIRTCHNL2_OP_DEL_MAC_ADDR		= 536,
+	VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	= 537,
+	VIRTCHNL2_OP_ADD_QUEUE_GROUPS		= 538,
+	VIRTCHNL2_OP_DEL_QUEUE_GROUPS		= 539,
+	VIRTCHNL2_OP_GET_PORT_STATS		= 540,
+	/* TimeSync opcodes */
+	VIRTCHNL2_OP_GET_PTP_CAPS		= 541,
+	VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	= 542,
+};
 
 #define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX	0xFFFF
 
-/* VIRTCHNL2_VPORT_TYPE
- * Type of virtual port
+/**
+ * enum virtchnl2_vport_type - Type of virtual port
+ * @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SRIOV: SRIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SIOV: SIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SUBDEV: Subdevice virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_MNG: Management virtual port type
  */
-#define VIRTCHNL2_VPORT_TYPE_DEFAULT		0
-#define VIRTCHNL2_VPORT_TYPE_SRIOV		1
-#define VIRTCHNL2_VPORT_TYPE_SIOV		2
-#define VIRTCHNL2_VPORT_TYPE_SUBDEV		3
-#define VIRTCHNL2_VPORT_TYPE_MNG		4
+enum virtchnl2_vport_type {
+	VIRTCHNL2_VPORT_TYPE_DEFAULT		= 0,
+	VIRTCHNL2_VPORT_TYPE_SRIOV		= 1,
+	VIRTCHNL2_VPORT_TYPE_SIOV		= 2,
+	VIRTCHNL2_VPORT_TYPE_SUBDEV		= 3,
+	VIRTCHNL2_VPORT_TYPE_MNG		= 4,
+};
 
-/* VIRTCHNL2_QUEUE_MODEL
- * Type of queue model
+/**
+ * enum virtchnl2_queue_model - Type of queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model
  *
  * In the single queue model, the same transmit descriptor queue is used by
  * software to post descriptors to hardware and by hardware to post completed
  * descriptors to software.
  * Likewise, the same receive descriptor queue is used by hardware to post
  * completions to software and by software to post buffers to hardware.
- */
-#define VIRTCHNL2_QUEUE_MODEL_SINGLE		0
-/* In the split queue model, hardware uses transmit completion queues to post
+ *
+ * In the split queue model, hardware uses transmit completion queues to post
  * descriptor/buffer completions to software, while software uses transmit
  * descriptor queues to post descriptors to hardware.
  * Likewise, hardware posts descriptor completions to the receive descriptor
  * queue, while software uses receive buffer queues to post buffers to hardware.
  */
-#define VIRTCHNL2_QUEUE_MODEL_SPLIT		1
-
-/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
- * Checksum offload capability flags
- */
-#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		BIT(0)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	BIT(1)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	BIT(2)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	BIT(3)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	BIT(4)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	BIT(5)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	BIT(6)
-#define VIRTCHNL2_CAP_TX_CSUM_GENERIC		BIT(7)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		BIT(8)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	BIT(9)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	BIT(10)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	BIT(11)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	BIT(12)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	BIT(13)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	BIT(14)
-#define VIRTCHNL2_CAP_RX_CSUM_GENERIC		BIT(15)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	BIT(16)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	BIT(17)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	BIT(18)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	BIT(19)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	BIT(20)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	BIT(21)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	BIT(22)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	BIT(23)
-
-/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
- * Segmentation offload capability flags
- */
-#define VIRTCHNL2_CAP_SEG_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_SEG_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_SEG_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_SEG_IPV6_TCP		BIT(3)
-#define VIRTCHNL2_CAP_SEG_IPV6_UDP		BIT(4)
-#define VIRTCHNL2_CAP_SEG_IPV6_SCTP		BIT(5)
-#define VIRTCHNL2_CAP_SEG_GENERIC		BIT(6)
-#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	BIT(7)
-#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	BIT(8)
-
-/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
- * Receive Side Scaling Flow type capability flags
- */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
-
-/* VIRTCHNL2_HEADER_SPLIT_CAPS
- * Header split capability flags
- */
-/* for prepended metadata  */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		BIT(0)
-/* all VLANs go into header buffer */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		BIT(1)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		BIT(2)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		BIT(3)
-
-/* VIRTCHNL2_RSC_OFFLOAD_CAPS
- * Receive Side Coalescing offload capability flags
- */
-#define VIRTCHNL2_CAP_RSC_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSC_IPV4_SCTP		BIT(1)
-#define VIRTCHNL2_CAP_RSC_IPV6_TCP		BIT(2)
-#define VIRTCHNL2_CAP_RSC_IPV6_SCTP		BIT(3)
-
-/* VIRTCHNL2_OTHER_CAPS
- * Other capability flags
- * SPLITQ_QSCHED: Queue based scheduling using split queue model
- * TX_VLAN: VLAN tag insertion
- * RX_VLAN: VLAN tag stripping
- */
-#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
-#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
-/* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
-#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
-#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
-/* Enable miss completion types plus ability to detect a miss completion if a
- * reserved bit is set in a standared completion's tag.
- */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
-/* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
-
-/* VIRTCHNL2_TXQ_SCHED_MODE
- * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
- * completions where descriptors and buffers are completed at the same time.
- * Flow scheduling mode allows for out of order packet processing where
- * descriptors are cleaned in order, but buffers can be completed out of order.
- */
-#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		0
-#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW		1
-
-/* VIRTCHNL2_TXQ_FLAGS
- * Transmit Queue feature flags
- *
- * Enable rule miss completion type; packet completion for a packet
- * sent on exception path; only relevant in flow scheduling mode
- */
-#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		BIT(0)
-
-/* VIRTCHNL2_PEER_TYPE
- * Transmit mailbox peer type
- */
-#define VIRTCHNL2_RDMA_CPF			0
-#define VIRTCHNL2_NVME_CPF			1
-#define VIRTCHNL2_ATE_CPF			2
-#define VIRTCHNL2_LCE_CPF			3
-
-/* VIRTCHNL2_RXQ_FLAGS
- * Receive Queue Feature flags
- */
-#define VIRTCHNL2_RXQ_RSC			BIT(0)
-#define VIRTCHNL2_RXQ_HDR_SPLIT			BIT(1)
-/* When set, packet descriptors are flushed by hardware immediately after
- * processing each packet.
- */
-#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	BIT(2)
-#define VIRTCHNL2_RX_DESC_SIZE_16BYTE		BIT(3)
-#define VIRTCHNL2_RX_DESC_SIZE_32BYTE		BIT(4)
-
-/* VIRTCHNL2_RSS_ALGORITHM
- * Type of RSS algorithm
- */
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC		0
-#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC			1
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC		2
-#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC			3
-
-/* VIRTCHNL2_EVENT_CODES
- * Type of event
- */
-#define VIRTCHNL2_EVENT_UNKNOWN			0
-#define VIRTCHNL2_EVENT_LINK_CHANGE		1
-/* These messages are only sent to PF from CP */
-#define VIRTCHNL2_EVENT_START_RESET_ADI		2
-#define VIRTCHNL2_EVENT_FINISH_RESET_ADI	3
-#define VIRTCHNL2_EVENT_ADI_ACTIVE		4
-
-/* VIRTCHNL2_QUEUE_TYPE
- * Transmit and Receive queue types are valid in legacy as well as split queue
- * models. With Split Queue model, 2 additional types are introduced -
- * TX_COMPLETION and RX_BUFFER. In split queue model, receive  corresponds to
+enum virtchnl2_queue_model {
+	VIRTCHNL2_QUEUE_MODEL_SINGLE		= 0,
+	VIRTCHNL2_QUEUE_MODEL_SPLIT		= 1,
+};
+
+/* Checksum offload capability flags */
+enum virtchnl2_cap_txrx_csum {
+	VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		= BIT(0),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	= BIT(1),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	= BIT(2),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	= BIT(3),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	= BIT(4),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	= BIT(5),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_CAP_TX_CSUM_GENERIC		= BIT(7),
+	VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		= BIT(8),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	= BIT(9),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	= BIT(10),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	= BIT(11),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	= BIT(12),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	= BIT(13),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	= BIT(14),
+	VIRTCHNL2_CAP_RX_CSUM_GENERIC		= BIT(15),
+	VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	= BIT(16),
+	VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	= BIT(17),
+	VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	= BIT(18),
+	VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	= BIT(19),
+	VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	= BIT(20),
+	VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	= BIT(21),
+	VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	= BIT(22),
+	VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	= BIT(23),
+};
+
+/* Segmentation offload capability flags */
+enum virtchnl2_cap_seg {
+	VIRTCHNL2_CAP_SEG_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_SEG_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_SEG_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_SEG_IPV6_TCP		= BIT(3),
+	VIRTCHNL2_CAP_SEG_IPV6_UDP		= BIT(4),
+	VIRTCHNL2_CAP_SEG_IPV6_SCTP		= BIT(5),
+	VIRTCHNL2_CAP_SEG_GENERIC		= BIT(6),
+	VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	= BIT(7),
+	VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	= BIT(8),
+};
+
+/* Receive Side Scaling Flow type capability flags */
+enum virtchnl2_cap_rss {
+	VIRTCHNL2_CAP_RSS_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSS_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_RSS_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_RSS_IPV4_OTHER		= BIT(3),
+	VIRTCHNL2_CAP_RSS_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_CAP_RSS_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_CAP_RSS_IPV6_SCTP		= BIT(6),
+	VIRTCHNL2_CAP_RSS_IPV6_OTHER		= BIT(7),
+	VIRTCHNL2_CAP_RSS_IPV4_AH		= BIT(8),
+	VIRTCHNL2_CAP_RSS_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		= BIT(10),
+	VIRTCHNL2_CAP_RSS_IPV6_AH		= BIT(11),
+	VIRTCHNL2_CAP_RSS_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		= BIT(13),
+};
+
+/* Header split capability flags */
+enum virtchnl2_cap_rx_hsplit_at {
+	/* For prepended metadata */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		= BIT(0),
+	/* All VLANs go into header buffer */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		= BIT(1),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		= BIT(2),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		= BIT(3),
+};
+
+/* Receive Side Coalescing offload capability flags */
+enum virtchnl2_cap_rsc {
+	VIRTCHNL2_CAP_RSC_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSC_IPV4_SCTP		= BIT(1),
+	VIRTCHNL2_CAP_RSC_IPV6_TCP		= BIT(2),
+	VIRTCHNL2_CAP_RSC_IPV6_SCTP		= BIT(3),
+};
+
+/* Other capability flags */
+enum virtchnl2_cap_other {
+	VIRTCHNL2_CAP_RDMA			= BIT_ULL(0),
+	VIRTCHNL2_CAP_SRIOV			= BIT_ULL(1),
+	VIRTCHNL2_CAP_MACFILTER			= BIT_ULL(2),
+	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
+	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
+	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
+	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
+	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
+	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
+	VIRTCHNL2_CAP_INLINE_IPSEC		= BIT_ULL(10),
+	VIRTCHNL2_CAP_LARGE_NUM_QUEUES		= BIT_ULL(11),
+	/* Require additional info */
+	VIRTCHNL2_CAP_VLAN			= BIT_ULL(12),
+	VIRTCHNL2_CAP_PTP			= BIT_ULL(13),
+	VIRTCHNL2_CAP_ADV_RSS			= BIT_ULL(15),
+	VIRTCHNL2_CAP_FDIR			= BIT_ULL(16),
+	VIRTCHNL2_CAP_RX_FLEX_DESC		= BIT_ULL(17),
+	VIRTCHNL2_CAP_PTYPE			= BIT_ULL(18),
+	VIRTCHNL2_CAP_LOOPBACK			= BIT_ULL(19),
+	/* Enable miss completion types plus ability to detect a miss completion
+	 * if a reserved bit is set in a standard completion's tag.
+	 */
+	VIRTCHNL2_CAP_MISS_COMPL_TAG		= BIT_ULL(20),
+	/* This must be the last capability */
+	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
+};
+
+/**
+ * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
+ * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. in-order
+ *				    completions where descriptors and buffers
+ *				    are completed at the same time.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
+ *				   packet processing where descriptors are
+ *				   cleaned in order, but buffers can be
+ *				   completed out of order.
+ */
+enum virtchnl2_txq_sched_mode {
+	VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		= 0,
+	VIRTCHNL2_TXQ_SCHED_MODE_FLOW		= 1,
+};
+
+/**
+ * enum virtchnl2_txq_flags - Transmit Queue feature flags
+ * @VIRTCHNL2_TXQ_ENABLE_MISS_COMPL: Enable rule miss completion type. Packet
+ *				     completion for a packet sent on exception
+ *				     path and only relevant in flow scheduling
+ *				     mode.
+ */
+enum virtchnl2_txq_flags {
+	VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		= BIT(0),
+};
+
+/**
+ * enum virtchnl2_peer_type - Transmit mailbox peer type
+ * @VIRTCHNL2_RDMA_CPF: RDMA peer type
+ * @VIRTCHNL2_NVME_CPF: NVME peer type
+ * @VIRTCHNL2_ATE_CPF: ATE peer type
+ * @VIRTCHNL2_LCE_CPF: LCE peer type
+ */
+enum virtchnl2_peer_type {
+	VIRTCHNL2_RDMA_CPF			= 0,
+	VIRTCHNL2_NVME_CPF			= 1,
+	VIRTCHNL2_ATE_CPF			= 2,
+	VIRTCHNL2_LCE_CPF			= 3,
+};
+
+/**
+ * enum virtchnl2_rxq_flags - Receive Queue Feature flags
+ * @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag
+ * @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag
+ * @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
+ *					by hardware immediately after processing
+ *					each packet.
+ * @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size
+ * @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size
+ */
+enum virtchnl2_rxq_flags {
+	VIRTCHNL2_RXQ_RSC			= BIT(0),
+	VIRTCHNL2_RXQ_HDR_SPLIT			= BIT(1),
+	VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	= BIT(2),
+	VIRTCHNL2_RX_DESC_SIZE_16BYTE		= BIT(3),
+	VIRTCHNL2_RX_DESC_SIZE_32BYTE		= BIT(4),
+};
+
+/**
+ * enum virtchnl2_rss_alg - Type of RSS algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC: TOEPLITZ_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_R_ASYMMETRIC: R_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC: TOEPLITZ_SYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC: XOR_SYMMETRIC algorithm
+ */
+enum virtchnl2_rss_alg {
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC	= 0,
+	VIRTCHNL2_RSS_ALG_R_ASYMMETRIC		= 1,
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC	= 2,
+	VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC		= 3,
+};
+
+/**
+ * enum virtchnl2_event_codes - Type of event
+ * @VIRTCHNL2_EVENT_UNKNOWN: Unknown event type
+ * @VIRTCHNL2_EVENT_LINK_CHANGE: Link change event type
+ * @VIRTCHNL2_EVENT_START_RESET_ADI: Start reset ADI event type
+ * @VIRTCHNL2_EVENT_FINISH_RESET_ADI: Finish reset ADI event type
+ * @VIRTCHNL2_EVENT_ADI_ACTIVE: Event type to indicate 'function active' state
+ *				of ADI.
+ */
+enum virtchnl2_event_codes {
+	VIRTCHNL2_EVENT_UNKNOWN			= 0,
+	VIRTCHNL2_EVENT_LINK_CHANGE		= 1,
+	/* These messages are only sent to PF from CP */
+	VIRTCHNL2_EVENT_START_RESET_ADI		= 2,
+	VIRTCHNL2_EVENT_FINISH_RESET_ADI	= 3,
+	VIRTCHNL2_EVENT_ADI_ACTIVE		= 4,
+};
+
+/**
+ * enum virtchnl2_queue_type - Various queue types
+ * @VIRTCHNL2_QUEUE_TYPE_TX: TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX: RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: TX completion queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: RX buffer queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_TX: Config TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_RX: Config RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_TX: TX mailbox queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_RX: RX mailbox queue type
+ *
+ * Transmit and Receive queue types are valid in single as well as split queue
+ * models. With Split Queue model, 2 additional types are introduced which are
+ * TX_COMPLETION and RX_BUFFER. In split queue model, receive corresponds to
  * the queue where hardware posts completions.
  */
-#define VIRTCHNL2_QUEUE_TYPE_TX			0
-#define VIRTCHNL2_QUEUE_TYPE_RX			1
-#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	2
-#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		3
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		4
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		5
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX		6
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX		7
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	8
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	9
-#define VIRTCHNL2_QUEUE_TYPE_MBX_TX		10
-#define VIRTCHNL2_QUEUE_TYPE_MBX_RX		11
-
-/* VIRTCHNL2_ITR_IDX
- * Virtchannel interrupt throttling rate index
- */
-#define VIRTCHNL2_ITR_IDX_0			0
-#define VIRTCHNL2_ITR_IDX_1			1
-
-/* VIRTCHNL2_VECTOR_LIMITS
+enum virtchnl2_queue_type {
+	VIRTCHNL2_QUEUE_TYPE_TX			= 0,
+	VIRTCHNL2_QUEUE_TYPE_RX			= 1,
+	VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	= 2,
+	VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		= 3,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		= 4,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		= 5,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX		= 6,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX		= 7,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	= 8,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	= 9,
+	VIRTCHNL2_QUEUE_TYPE_MBX_TX		= 10,
+	VIRTCHNL2_QUEUE_TYPE_MBX_RX		= 11,
+};
+
+/**
+ * enum virtchnl2_itr_idx - Interrupt throttling rate index
+ * @VIRTCHNL2_ITR_IDX_0: ITR index 0
+ * @VIRTCHNL2_ITR_IDX_1: ITR index 1
+ */
+enum virtchnl2_itr_idx {
+	VIRTCHNL2_ITR_IDX_0			= 0,
+	VIRTCHNL2_ITR_IDX_1			= 1,
+};
+
+/**
+ * VIRTCHNL2_VECTOR_LIMITS
  * Since PF/VF messages are limited by __le16 size, precalculate the maximum
  * possible values of nested elements in virtchnl structures that virtual
  * channel can possibly handle in a single message.
@@ -332,131 +411,150 @@
 		((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
 		sizeof(struct virtchnl2_queue_vector))
 
-/* VIRTCHNL2_MAC_TYPE
- * VIRTCHNL2_MAC_ADDR_PRIMARY
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_PRIMARY for the
- * primary/device unicast MAC address filter for VIRTCHNL2_OP_ADD_MAC_ADDR and
- * VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the underlying control plane
- * function to accurately track the MAC address and for VM/function reset.
- *
- * VIRTCHNL2_MAC_ADDR_EXTRA
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_EXTRA for any extra
- * unicast and/or multicast filters that are being added/deleted via
- * VIRTCHNL2_OP_ADD_MAC_ADDR/VIRTCHNL2_OP_DEL_MAC_ADDR respectively.
+/**
+ * enum virtchnl2_mac_addr_type - MAC address types
+ * @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
+ *				primary/device unicast MAC address filter for
+ *				VIRTCHNL2_OP_ADD_MAC_ADDR and
+ *				VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
+ *				underlying control plane function to accurately
+ *				track the MAC address and for VM/function reset.
+ * @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
+ *			      unicast and/or multicast filters that are being
+ *			      added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
+ *			      VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
-#define VIRTCHNL2_MAC_ADDR_PRIMARY		1
-#define VIRTCHNL2_MAC_ADDR_EXTRA		2
+enum virtchnl2_mac_addr_type {
+	VIRTCHNL2_MAC_ADDR_PRIMARY		= 1,
+	VIRTCHNL2_MAC_ADDR_EXTRA		= 2,
+};
 
-/* VIRTCHNL2_PROMISC_FLAGS
- * Flags used for promiscuous mode
+/**
+ * enum virtchnl2_promisc_flags - Flags used for promiscuous mode
+ * @VIRTCHNL2_UNICAST_PROMISC: Unicast promiscuous mode
+ * @VIRTCHNL2_MULTICAST_PROMISC: Multicast promiscuous mode
  */
-#define VIRTCHNL2_UNICAST_PROMISC		BIT(0)
-#define VIRTCHNL2_MULTICAST_PROMISC		BIT(1)
+enum virtchnl2_promisc_flags {
+	VIRTCHNL2_UNICAST_PROMISC		= BIT(0),
+	VIRTCHNL2_MULTICAST_PROMISC		= BIT(1),
+};
 
-/* VIRTCHNL2_QUEUE_GROUP_TYPE
- * Type of queue groups
+/**
+ * enum virtchnl2_queue_group_type - Type of queue groups
+ * @VIRTCHNL2_QUEUE_GROUP_DATA: Data queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_MBX: Mailbox queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_CONFIG: Config queue group type
+ *
  * 0 till 0xFF is for general use
  */
-#define VIRTCHNL2_QUEUE_GROUP_DATA		1
-#define VIRTCHNL2_QUEUE_GROUP_MBX		2
-#define VIRTCHNL2_QUEUE_GROUP_CONFIG		3
+enum virtchnl2_queue_group_type {
+	VIRTCHNL2_QUEUE_GROUP_DATA		= 1,
+	VIRTCHNL2_QUEUE_GROUP_MBX		= 2,
+	VIRTCHNL2_QUEUE_GROUP_CONFIG		= 3,
+};
 
-/* VIRTCHNL2_PROTO_HDR_TYPE
- * Protocol header type within a packet segment. A segment consists of one or
+/* Protocol header type within a packet segment. A segment consists of one or
  * more protocol headers that make up a logical group of protocol headers. Each
  * logical group of protocol headers encapsulates or is encapsulated using/by
  * tunneling or encapsulation protocols for network virtualization.
  */
-/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ANY			0
-#define VIRTCHNL2_PROTO_HDR_PRE_MAC		1
-/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_MAC			2
-#define VIRTCHNL2_PROTO_HDR_POST_MAC		3
-#define VIRTCHNL2_PROTO_HDR_ETHERTYPE		4
-#define VIRTCHNL2_PROTO_HDR_VLAN		5
-#define VIRTCHNL2_PROTO_HDR_SVLAN		6
-#define VIRTCHNL2_PROTO_HDR_CVLAN		7
-#define VIRTCHNL2_PROTO_HDR_MPLS		8
-#define VIRTCHNL2_PROTO_HDR_UMPLS		9
-#define VIRTCHNL2_PROTO_HDR_MMPLS		10
-#define VIRTCHNL2_PROTO_HDR_PTP			11
-#define VIRTCHNL2_PROTO_HDR_CTRL		12
-#define VIRTCHNL2_PROTO_HDR_LLDP		13
-#define VIRTCHNL2_PROTO_HDR_ARP			14
-#define VIRTCHNL2_PROTO_HDR_ECP			15
-#define VIRTCHNL2_PROTO_HDR_EAPOL		16
-#define VIRTCHNL2_PROTO_HDR_PPPOD		17
-#define VIRTCHNL2_PROTO_HDR_PPPOE		18
-/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4		19
-/* IPv4 and IPv6 Fragment header types are only associated to
- * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
- * cannot be used independently.
- */
-/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG		20
-/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6		21
-/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG		22
-#define VIRTCHNL2_PROTO_HDR_IPV6_EH		23
-/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_UDP			24
-/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TCP			25
-/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_SCTP		26
-/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMP		27
-/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMPV6		28
-#define VIRTCHNL2_PROTO_HDR_IGMP		29
-#define VIRTCHNL2_PROTO_HDR_AH			30
-#define VIRTCHNL2_PROTO_HDR_ESP			31
-#define VIRTCHNL2_PROTO_HDR_IKE			32
-#define VIRTCHNL2_PROTO_HDR_NATT_KEEP		33
-/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_PAY			34
-#define VIRTCHNL2_PROTO_HDR_L2TPV2		35
-#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	36
-#define VIRTCHNL2_PROTO_HDR_L2TPV3		37
-#define VIRTCHNL2_PROTO_HDR_GTP			38
-#define VIRTCHNL2_PROTO_HDR_GTP_EH		39
-#define VIRTCHNL2_PROTO_HDR_GTPCV2		40
-#define VIRTCHNL2_PROTO_HDR_GTPC_TEID		41
-#define VIRTCHNL2_PROTO_HDR_GTPU		42
-#define VIRTCHNL2_PROTO_HDR_GTPU_UL		43
-#define VIRTCHNL2_PROTO_HDR_GTPU_DL		44
-#define VIRTCHNL2_PROTO_HDR_ECPRI		45
-#define VIRTCHNL2_PROTO_HDR_VRRP		46
-#define VIRTCHNL2_PROTO_HDR_OSPF		47
-/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TUN			48
-#define VIRTCHNL2_PROTO_HDR_GRE			49
-#define VIRTCHNL2_PROTO_HDR_NVGRE		50
-#define VIRTCHNL2_PROTO_HDR_VXLAN		51
-#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE		52
-#define VIRTCHNL2_PROTO_HDR_GENEVE		53
-#define VIRTCHNL2_PROTO_HDR_NSH			54
-#define VIRTCHNL2_PROTO_HDR_QUIC		55
-#define VIRTCHNL2_PROTO_HDR_PFCP		56
-#define VIRTCHNL2_PROTO_HDR_PFCP_NODE		57
-#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION	58
-#define VIRTCHNL2_PROTO_HDR_RTP			59
-#define VIRTCHNL2_PROTO_HDR_ROCE		60
-#define VIRTCHNL2_PROTO_HDR_ROCEV1		61
-#define VIRTCHNL2_PROTO_HDR_ROCEV2		62
-/* protocol ids up to 32767 are reserved for AVF use */
-/* 32768 - 65534 are used for user defined protocol ids */
-/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_NO_PROTO		65535
-
-#define VIRTCHNL2_VERSION_MAJOR_2        2
-#define VIRTCHNL2_VERSION_MINOR_0        0
-
-
-/* VIRTCHNL2_OP_VERSION
+enum virtchnl2_proto_hdr_type {
+	/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ANY			= 0,
+	VIRTCHNL2_PROTO_HDR_PRE_MAC		= 1,
+	/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_MAC			= 2,
+	VIRTCHNL2_PROTO_HDR_POST_MAC		= 3,
+	VIRTCHNL2_PROTO_HDR_ETHERTYPE		= 4,
+	VIRTCHNL2_PROTO_HDR_VLAN		= 5,
+	VIRTCHNL2_PROTO_HDR_SVLAN		= 6,
+	VIRTCHNL2_PROTO_HDR_CVLAN		= 7,
+	VIRTCHNL2_PROTO_HDR_MPLS		= 8,
+	VIRTCHNL2_PROTO_HDR_UMPLS		= 9,
+	VIRTCHNL2_PROTO_HDR_MMPLS		= 10,
+	VIRTCHNL2_PROTO_HDR_PTP			= 11,
+	VIRTCHNL2_PROTO_HDR_CTRL		= 12,
+	VIRTCHNL2_PROTO_HDR_LLDP		= 13,
+	VIRTCHNL2_PROTO_HDR_ARP			= 14,
+	VIRTCHNL2_PROTO_HDR_ECP			= 15,
+	VIRTCHNL2_PROTO_HDR_EAPOL		= 16,
+	VIRTCHNL2_PROTO_HDR_PPPOD		= 17,
+	VIRTCHNL2_PROTO_HDR_PPPOE		= 18,
+	/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4		= 19,
+	/* IPv4 and IPv6 Fragment header types are only associated to
+	 * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+	 * cannot be used independently.
+	 */
+	/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4_FRAG		= 20,
+	/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6		= 21,
+	/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6_FRAG		= 22,
+	VIRTCHNL2_PROTO_HDR_IPV6_EH		= 23,
+	/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_UDP			= 24,
+	/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TCP			= 25,
+	/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_SCTP		= 26,
+	/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMP		= 27,
+	/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMPV6		= 28,
+	VIRTCHNL2_PROTO_HDR_IGMP		= 29,
+	VIRTCHNL2_PROTO_HDR_AH			= 30,
+	VIRTCHNL2_PROTO_HDR_ESP			= 31,
+	VIRTCHNL2_PROTO_HDR_IKE			= 32,
+	VIRTCHNL2_PROTO_HDR_NATT_KEEP		= 33,
+	/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_PAY			= 34,
+	VIRTCHNL2_PROTO_HDR_L2TPV2		= 35,
+	VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	= 36,
+	VIRTCHNL2_PROTO_HDR_L2TPV3		= 37,
+	VIRTCHNL2_PROTO_HDR_GTP			= 38,
+	VIRTCHNL2_PROTO_HDR_GTP_EH		= 39,
+	VIRTCHNL2_PROTO_HDR_GTPCV2		= 40,
+	VIRTCHNL2_PROTO_HDR_GTPC_TEID		= 41,
+	VIRTCHNL2_PROTO_HDR_GTPU		= 42,
+	VIRTCHNL2_PROTO_HDR_GTPU_UL		= 43,
+	VIRTCHNL2_PROTO_HDR_GTPU_DL		= 44,
+	VIRTCHNL2_PROTO_HDR_ECPRI		= 45,
+	VIRTCHNL2_PROTO_HDR_VRRP		= 46,
+	VIRTCHNL2_PROTO_HDR_OSPF		= 47,
+	/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TUN			= 48,
+	VIRTCHNL2_PROTO_HDR_GRE			= 49,
+	VIRTCHNL2_PROTO_HDR_NVGRE		= 50,
+	VIRTCHNL2_PROTO_HDR_VXLAN		= 51,
+	VIRTCHNL2_PROTO_HDR_VXLAN_GPE		= 52,
+	VIRTCHNL2_PROTO_HDR_GENEVE		= 53,
+	VIRTCHNL2_PROTO_HDR_NSH			= 54,
+	VIRTCHNL2_PROTO_HDR_QUIC		= 55,
+	VIRTCHNL2_PROTO_HDR_PFCP		= 56,
+	VIRTCHNL2_PROTO_HDR_PFCP_NODE		= 57,
+	VIRTCHNL2_PROTO_HDR_PFCP_SESSION	= 58,
+	VIRTCHNL2_PROTO_HDR_RTP			= 59,
+	VIRTCHNL2_PROTO_HDR_ROCE		= 60,
+	VIRTCHNL2_PROTO_HDR_ROCEV1		= 61,
+	VIRTCHNL2_PROTO_HDR_ROCEV2		= 62,
+	/* Protocol ids up to 32767 are reserved */
+	/* 32768 - 65534 are used for user defined protocol ids */
+	/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_NO_PROTO		= 65535,
+};
+
+enum virtchl2_version {
+	VIRTCHNL2_VERSION_MINOR_0		= 0,
+	VIRTCHNL2_VERSION_MAJOR_2		= 2,
+};
+
+/**
+ * struct virtchnl2_version_info - Version information
+ * @major: Major version
+ * @minor: Minor version
+ *
  * PF/VF posts its version number to the CP. CP responds with its version number
  * in the same format, along with a return code.
  * If there is a major version mismatch, then the PF/VF cannot operate.
@@ -466,6 +564,8 @@
  * This version opcode MUST always be specified as == 1, regardless of other
  * changes in the API. The CP must always respond to this message without
  * error regardless of version mismatch.
+ *
+ * Associated with VIRTCHNL2_OP_VERSION.
  */
 struct virtchnl2_version_info {
 	__le32 major;
@@ -474,7 +574,39 @@ struct virtchnl2_version_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
 
-/* VIRTCHNL2_OP_GET_CAPS
+/**
+ * struct virtchnl2_get_capabilities - Capabilities info
+ * @csum_caps: See enum virtchnl2_cap_txrx_csum
+ * @seg_caps: See enum virtchnl2_cap_seg
+ * @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at
+ * @rsc_caps: See enum virtchnl2_cap_rsc
+ * @rss_caps: See enum virtchnl2_cap_rss
+ * @other_caps: See enum virtchnl2_cap_other
+ * @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
+ *		     provided by CP.
+ * @mailbox_vector_id: Mailbox vector id
+ * @num_allocated_vectors: Maximum number of allocated vectors for the device
+ * @max_rx_q: Maximum number of supported Rx queues
+ * @max_tx_q: Maximum number of supported Tx queues
+ * @max_rx_bufq: Maximum number of supported buffer queues
+ * @max_tx_complq: Maximum number of supported completion queues
+ * @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
+ *		   responds with the maximum VFs granted.
+ * @max_vports: Maximum number of vports that can be supported
+ * @default_num_vports: Default number of vports driver should allocate on load
+ * @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes
+ * @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
+ *			    sent per transmit packet without needing to be
+ *			    linearized.
+ * @reserved: Reserved field
+ * @max_adis: Max number of ADIs
+ * @device_type: See enum virtchnl2_device_type
+ * @min_sso_packet_len: Min packet length supported by device for single
+ *			segment offload
+ * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
+ *			 an LSO
+ * @pad1: Padding for future extensions
+ *
  * Dataplane driver sends this message to CP to negotiate capabilities and
  * provides a virtchnl2_get_capabilities structure with its desired
  * capabilities, max_sriov_vfs and num_allocated_vectors.
@@ -492,60 +624,30 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
  * mailbox_vector_id and the number of itr index registers in itr_idx_map.
  * It also responds with default number of vports that the dataplane driver
  * should comeup with in default_num_vports and maximum number of vports that
- * can be supported in max_vports
+ * can be supported in max_vports.
+ *
+ * Associated with VIRTCHNL2_OP_GET_CAPS.
  */
 struct virtchnl2_get_capabilities {
-	/* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
 	__le32 csum_caps;
-
-	/* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
 	__le32 seg_caps;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 hsplit_caps;
-
-	/* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
 	__le32 rsc_caps;
-
-	/* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions  */
 	__le64 rss_caps;
-
-
-	/* see VIRTCHNL2_OTHER_CAPS definitions  */
 	__le64 other_caps;
-
-	/* DYN_CTL register offset and vector id for mailbox provided by CP */
 	__le32 mailbox_dyn_ctl;
 	__le16 mailbox_vector_id;
-	/* Maximum number of allocated vectors for the device */
 	__le16 num_allocated_vectors;
-
-	/* Maximum number of queues that can be supported */
 	__le16 max_rx_q;
 	__le16 max_tx_q;
 	__le16 max_rx_bufq;
 	__le16 max_tx_complq;
-
-	/* The PF sends the maximum VFs it is requesting. The CP responds with
-	 * the maximum VFs granted.
-	 */
 	__le16 max_sriov_vfs;
-
-	/* maximum number of vports that can be supported */
 	__le16 max_vports;
-	/* default number of vports driver should allocate on load */
 	__le16 default_num_vports;
-
-	/* Max header length hardware can parse/checksum, in bytes */
 	__le16 max_tx_hdr_size;
-
-	/* Max number of scatter gather buffers that can be sent per transmit
-	 * packet without needing to be linearized
-	 */
 	u8 max_sg_bufs_per_tx_pkt;
-
-	u8 reserved1;
-	/* upper bound of number of ADIs supported */
+	u8 reserved;
 	__le16 max_adis;
 
 	/* version of Control Plane that is running */
@@ -553,10 +655,7 @@ struct virtchnl2_get_capabilities {
 	__le16 oem_cp_ver_minor;
 	/* see VIRTCHNL2_DEVICE_TYPE definitions */
 	__le32 device_type;
-
-	/* min packet length supported by device for single segment offload */
 	u8 min_sso_packet_len;
-	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
 	u8 pad1[10];
@@ -564,14 +663,21 @@ struct virtchnl2_get_capabilities {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
 
+/**
+ * struct virtchnl2_queue_reg_chunk - Single queue chunk
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Start Queue ID
+ * @num_queues: Number of queues in the chunk
+ * @pad: Padding
+ * @qtail_reg_start: Queue tail register offset
+ * @qtail_reg_spacing: Queue tail register spacing
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_reg_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
 	__le32 pad;
-
-	/* Queue tail register offset and spacing provided by CP */
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
@@ -580,7 +686,13 @@ struct virtchnl2_queue_reg_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
+ *				       queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info
+ */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -589,77 +701,91 @@ struct virtchnl2_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
-/* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
-#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
+/**
+ * enum virtchnl2_vport_flags - Vport flags
+ * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ */
+enum virtchnl2_vport_flags {
+	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+};
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-/* VIRTCHNL2_OP_CREATE_VPORT
- * PF sends this message to CP to create a vport by filling in required
+
+/**
+ * struct virtchnl2_create_vport - Create vport config info
+ * @vport_type: See enum virtchnl2_vport_type
+ * @txq_model: See enum virtchnl2_queue_model
+ * @rxq_model: See enum virtchnl2_queue_model
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Valid only if txq_model is split queue
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Valid only if rxq_model is split queue
+ * @default_rx_q: Relative receive queue index to be used as default
+ * @vport_index: Used to align PF and CP in case of default multiple vports,
+ *		 it is filled by the PF and CP returns the same value, to
+ *		 enable the driver to support multiple asynchronous parallel
+ *		 CREATE_VPORT requests and associate a response to a specific
+ *		 request.
+ * @max_mtu: Max MTU. CP populates this field on response
+ * @vport_id: Vport id. CP populates this field on response
+ * @default_mac_addr: Default MAC address
+ * @vport_flags: See enum virtchnl2_vport_flags
+ * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
+ * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
+ * @reserved: Reserved bytes and cannot be used
+ * @rss_algorithm: RSS algorithm
+ * @rss_key_size: RSS key size
+ * @rss_lut_size: RSS LUT size
+ * @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to CP to create a vport by filling in required
  * fields of virtchnl2_create_vport structure.
  * CP responds with the updated virtchnl2_create_vport structure containing the
  * necessary fields followed by chunks which in turn will have an array of
  * num_chunks entries of virtchnl2_queue_chunk structures.
  */
 struct virtchnl2_create_vport {
-	/* PF/VF populates the following fields on request */
-	/* see VIRTCHNL2_VPORT_TYPE definitions */
 	__le16 vport_type;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 txq_model;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 rxq_model;
 	__le16 num_tx_q;
-	/* valid only if txq_model is split queue */
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
-	/* valid only if rxq_model is split queue */
 	__le16 num_rx_bufq;
-	/* relative receive queue index to be used as default */
 	__le16 default_rx_q;
-	/* used to align PF and CP in case of default multiple vports, it is
-	 * filled by the PF and CP returns the same value, to enable the driver
-	 * to support multiple asynchronous parallel CREATE_VPORT requests and
-	 * associate a response to a specific request
-	 */
 	__le16 vport_index;
-
-	/* CP populates the following fields on response */
 	__le16 max_mtu;
 	__le32 vport_id;
 	u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_VPORT_FLAGS definitions */
 	__le16 vport_flags;
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 rx_desc_ids;
-	/* see VIRTCHNL2_TX_DESC_IDS definitions */
 	__le64 tx_desc_ids;
-
-	u8 reserved1[72];
-
-	/* see VIRTCHNL2_RSS_ALGORITHM definitions */
+	u8 reserved[72];
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
-
-	u8 pad2[20];
+	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
 
-/* VIRTCHNL2_OP_DESTROY_VPORT
- * VIRTCHNL2_OP_ENABLE_VPORT
- * VIRTCHNL2_OP_DISABLE_VPORT
- * PF sends this message to CP to destroy, enable or disable a vport by filling
- * in the vport_id in virtchnl2_vport structure.
+/**
+ * struct virtchnl2_vport - Vport identifier information
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends this message to CP to destroy, enable or disable a vport by
+ * filling in the vport_id in virtchnl2_vport structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
+ * VIRTCHNL2_OP_DISABLE_VPORT.
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
@@ -668,42 +794,43 @@ struct virtchnl2_vport {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
 
-/* Transmit queue config info */
+/**
+ * struct virtchnl2_txq_info - Transmit queue config info
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue ID
+ * @relative_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue. Used in many to one mapping of transmit queues to
+ *		       completion queue.
+ * @model: See enum virtchnl2_queue_model
+ * @sched_mode: See enum virtchnl2_txq_sched_mode
+ * @qflags: TX queue feature flags
+ * @ring_len: Ring length
+ * @tx_compl_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue.
+ * @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX
+ * @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
+ *		      messages for the respective CONFIG_TX queue.
+ * @pad: Padding
+ * @egress_pasid: Egress PASID info
+ * @egress_hdr_pasid: Egress header PASID
+ * @egress_buf_pasid: Egress buffer PASID
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_txq_info {
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
-
 	__le32 queue_id;
-	/* valid only if queue model is split and type is transmit queue. Used
-	 * in many to one mapping of transmit queues to completion queue
-	 */
 	__le16 relative_queue_id;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 model;
-
-	/* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
 	__le16 sched_mode;
-
-	/* see VIRTCHNL2_TXQ_FLAGS definitions */
 	__le16 qflags;
 	__le16 ring_len;
-
-	/* valid only if queue model is split and type is transmit queue */
 	__le16 tx_compl_queue_id;
-	/* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
-	/* see VIRTCHNL2_PEER_TYPE definitions */
 	__le16 peer_type;
-	/* valid only if queue type is CONFIG_TX and used to deliver messages
-	 * for the respective CONFIG_TX queue
-	 */
 	__le16 peer_rx_queue_id;
 
 	u8 pad[4];
-
-	/* Egress pasid is used for SIOV use case */
 	__le32 egress_pasid;
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
@@ -713,12 +840,20 @@ struct virtchnl2_txq_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
 
-/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
- * PF sends this message to set up parameters for one or more transmit queues.
- * This message contains an array of num_qinfo instances of virtchnl2_txq_info
- * structures. CP configures requested queues and returns a status code. If
- * num_qinfo specified is greater than the number of queues associated with the
- * vport, an error is returned and no queues are configured.
+/**
+ * struct virtchnl2_config_tx_queues - TX queue config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of virtchnl2_txq_info structs
+ * @pad: Padding for future extensions
+ * @qinfo: Tx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more transmit
+ * queues. This message contains an array of num_qinfo instances of
+ * virtchnl2_txq_info structures. CP configures requested queues and returns
+ * a status code. If num_qinfo specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
  */
 struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
@@ -730,47 +865,55 @@ struct virtchnl2_config_tx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
 
-/* Receive queue config info */
+/**
+ * struct virtchnl2_rxq_info - Receive queue config info
+ * @desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue id
+ * @model: See enum virtchnl2_queue_model
+ * @hdr_buffer_size: Header buffer size
+ * @data_buffer_size: Data buffer size
+ * @max_pkt_size: Max packet size
+ * @ring_len: Ring length
+ * @buffer_notif_stride: Buffer notification stride in units of 32-descriptors.
+ *			 This field must be a power of 2.
+ * @pad: Padding
+ * @dma_head_wb_addr: Applicable only for receive buffer queues
+ * @qflags: Applicable only for receive completion queues.
+ *	    See enum virtchnl2_rxq_flags.
+ * @rx_buffer_low_watermark: Rx buffer low watermark
+ * @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @bufq2_ena: Indicates whether there is a second buffer queue; rx_bufq2_id
+ *	       is valid only if this field is set.
+ * @pad1: Padding
+ * @ingress_pasid: Ingress PASID
+ * @ingress_hdr_pasid: Ingress header PASID
+ * @ingress_buf_pasid: Ingress buffer PASID
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_rxq_info {
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 desc_ids;
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 queue_id;
-
-	/* see QUEUE_MODEL definitions */
 	__le16 model;
-
 	__le16 hdr_buffer_size;
 	__le32 data_buffer_size;
 	__le32 max_pkt_size;
-
 	__le16 ring_len;
 	u8 buffer_notif_stride;
 	u8 pad;
-
-	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
-
-	/* Applicable only for receive completion queues */
-	/* see VIRTCHNL2_RXQ_FLAGS definitions */
 	__le16 qflags;
-
 	__le16 rx_buffer_low_watermark;
-
-	/* valid only in split queue model */
 	__le16 rx_bufq1_id;
-	/* valid only in split queue model */
 	__le16 rx_bufq2_id;
-	/* it indicates if there is a second buffer, rx_bufq2_id is valid only
-	 * if this field is set
-	 */
 	u8 bufq2_ena;
 	u8 pad1[3];
-
-	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
@@ -779,12 +922,20 @@ struct virtchnl2_rxq_info {
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
-/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
- * PF sends this message to set up parameters for one or more receive queues.
+/**
+ * struct virtchnl2_config_rx_queues - Rx queues config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of instances
+ * @pad: Padding for future extensions
+ * @qinfo: Rx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more receive queues.
  * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
  * structures. CP configures requested queues and returns a status code.
  * If the number of queues specified is greater than the number of queues
  * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
  */
 struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
@@ -796,12 +947,23 @@ struct virtchnl2_config_rx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
 
-/* VIRTCHNL2_OP_ADD_QUEUES
- * PF sends this message to request additional transmit/receive queues beyond
+/**
+ * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
+ * @vport_id: Vport id
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Number of Tx completion queues
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Number of Rx buffer queues
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to request additional transmit/receive queues beyond
  * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
  * structure is used to specify the number of each type of queues.
  * CP responds with the same structure with the actual number of queues assigned
  * followed by num_chunks of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUES.
  */
 struct virtchnl2_add_queues {
 	__le32 vport_id;
@@ -817,65 +979,81 @@ struct virtchnl2_add_queues {
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
 
 /* Queue Groups Extension */
-
+/**
+ * struct virtchnl2_rx_queue_group_info - RX queue group info
+ * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
+ *		  allocated by CreateVport command. New size will be returned
+ *		  if allocation succeeded, otherwise original rss_size from
+ *		  CreateVport will be returned.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_rx_queue_group_info {
-	/* IN/OUT, user can ask to update rss_lut size originally allocated
-	 * by CreateVport command. New size will be returned if allocation
-	 * succeeded, otherwise original rss_size from CreateVport will
-	 * be returned.
-	 */
 	__le16 rss_lut_size;
-	/* Future extension purpose */
 	u8 pad[6];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
 
+/**
+ * struct virtchnl2_tx_queue_group_info - TX queue group info
+ * @tx_tc: TX TC queue group will be connected to
+ * @priority: Each group can have its own priority, value 0-7, while each group
+ *	      with unique priority is strict priority. A set of queue groups
+ *	      configured with the same priority are assumed to be part of a WFQ
+ *	      arbitration group and are expected to be assigned a weight.
+ * @is_sp: Determines if queue group is expected to be Strict Priority according
+ *	   to its priority.
+ * @pad: Padding
+ * @pir_weight: Peak Info Rate Weight in case Queue Group is part of WFQ
+ *		arbitration set.
+ *		The weights of the groups are independent of each other.
+ *		Possible values: 1-200
+ * @cir_pad: Future extension purpose for CIR only
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_tx_queue_group_info { /* IN */
-	/* TX TC queue group will be connected to */
 	u8 tx_tc;
-	/* Each group can have its own priority, value 0-7, while each group
-	 * with unique priority is strict priority.
-	 * It can be single set of queue groups which configured with
-	 * same priority, then they are assumed part of WFQ arbitration
-	 * group and are expected to be assigned with weight.
-	 */
 	u8 priority;
-	/* Determines if queue group is expected to be Strict Priority
-	 * according to its priority
-	 */
 	u8 is_sp;
 	u8 pad;
-
-	/* Peak Info Rate Weight in case Queue Group is part of WFQ
-	 * arbitration set.
-	 * The weights of the groups are independent of each other.
-	 * Possible values: 1-200
-	 */
 	__le16 pir_weight;
-	/* Future extension purpose for CIR only */
 	u8 cir_pad[2];
-	/* Future extension purpose*/
 	u8 pad2[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_tx_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_group_id - Queue group ID
+ * @queue_group_id: Queue group ID, dependent on its type
+ *		    Data: ID which is relative to Vport
+ *		    Config & Mailbox: ID which is relative to func
+ *		    This ID is used in future calls, i.e. delete.
+ *		    Requested by host and assigned by Control plane.
+ * @queue_group_type: Functional type: See enum virtchnl2_queue_group_type
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_group_id {
-	/* Queue group ID - depended on it's type
-	 * Data: is an ID which is relative to Vport
-	 * Config & Mailbox: is an ID which is relative to func.
-	 * This ID is use in future calls, i.e. delete.
-	 * Requested by host and assigned by Control plane.
-	 */
 	__le16 queue_group_id;
-	/* Functional type: see VIRTCHNL2_QUEUE_GROUP_TYPE definitions */
 	__le16 queue_group_type;
 	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 
+/**
+ * struct virtchnl2_queue_group_info - Queue group info
+ * @qg_id: Queue group ID
+ * @num_tx_q: Number of TX queues
+ * @num_tx_complq: Number of completion queues
+ * @num_rx_q: Number of RX queues
+ * @num_rx_bufq: Number of RX buffer queues
+ * @tx_q_grp_info: TX queue group info
+ * @rx_q_grp_info: RX queue group info
+ * @pad: Padding for future extensions
+ * @chunks: Queue register chunks
+ */
 struct virtchnl2_queue_group_info {
 	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
@@ -887,13 +1065,18 @@ struct virtchnl2_queue_group_info {
 
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
-	/* Future extension purpose */
 	u8 pad[40];
 	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_groups - Queue groups list
+ * @num_queue_groups: Total number of queue groups
+ * @pad: Padding for future extensions
+ * @groups: Array of queue group info
+ */
 struct virtchnl2_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[6];
@@ -902,78 +1085,107 @@ struct virtchnl2_queue_groups {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
 
-/* VIRTCHNL2_OP_ADD_QUEUE_GROUPS
+/**
+ * struct virtchnl2_add_queue_groups - Add queue groups
+ * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @pad: Padding for future extensions
+ * @qg_info: IN/OUT. List of all the queue groups
+ *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
  * groups and queues assigned followed by num_queue_groups and num_chunks of
  * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
-	/* IN, vport_id to add queue group to, same as allocated by CreateVport.
-	 * NA for mailbox and other types not assigned to vport
-	 */
 	__le32 vport_id;
 	u8 pad[4];
-	/* IN/OUT */
 	struct virtchnl2_queue_groups qg_info;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
-/* VIRTCHNL2_OP_DEL_QUEUE_GROUPS
+/**
+ * struct virtchnl2_delete_queue_groups - Delete queue groups
+ * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ *	      CreateVport.
+ * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @pad: Padding
+ * @qg_ids: IN, IDs & types of Queue Groups to delete
+ *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
  * to be deleted. CP performs requested action and returns status and update
  * num_queue_groups with number of successfully deleted queue groups.
+ *
+ * Associated with VIRTCHNL2_OP_DEL_QUEUE_GROUPS.
  */
 struct virtchnl2_delete_queue_groups {
-	/* IN, vport_id to delete queue group from, same as
-	 * allocated by CreateVport.
-	 */
 	__le32 vport_id;
-	/* IN/OUT, Defines number of groups provided below */
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	/* IN, IDs & types of Queue Groups to delete */
 	struct virtchnl2_queue_group_id qg_ids[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
 
-/* Structure to specify a chunk of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
+ *				   interrupt vectors.
+ * @start_vector_id: Start vector id
+ * @start_evv_id: Start EVV id
+ * @num_vectors: Number of vectors
+ * @pad: Padding
+ * @dynctl_reg_start: DYN_CTL register offset
+ * @dynctl_reg_spacing: Register spacing between DYN_CTL registers of 2
+ *			consecutive vectors.
+ * @itrn_reg_start: ITRN register offset
+ * @itrn_reg_spacing: Register spacing between itrn registers of 2
+ *		      consecutive vectors.
+ * @itrn_index_spacing: Register spacing between itrn registers of the same
+ *			vector where n=0..2.
+ * @pad1: Padding for future extensions
+ *
+ * Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
 struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
 	__le16 pad;
 
-	/* Register offsets and spacing provided by CP.
-	 * dynamic control registers are used for enabling/disabling/re-enabling
-	 * interrupts and updating interrupt rates in the hotpath. Any changes
-	 * to interrupt rates in the dynamic control registers will be reflected
-	 * in the interrupt throttling rate registers.
-	 * itrn registers are used to update interrupt rates for specific
-	 * interrupt indices without modifying the state of the interrupt.
-	 */
 	__le32 dynctl_reg_start;
-	/* register spacing between dynctl registers of 2 consecutive vectors */
 	__le32 dynctl_reg_spacing;
 
 	__le32 itrn_reg_start;
-	/* register spacing between itrn registers of 2 consecutive vectors */
 	__le32 itrn_reg_spacing;
-	/* register spacing between itrn registers of the same vector
-	 * where n=0..2
-	 */
 	__le32 itrn_index_spacing;
 	u8 pad1[4];
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
-/* Structure to specify several chunks of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunks - Chunks of contiguous interrupt vectors
+ * @num_vchunks: number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends virtchnl2_vector_chunks struct to specify the vectors it is
+ * giving away. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
+ */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -983,12 +1195,19 @@ struct virtchnl2_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
 
-/* VIRTCHNL2_OP_ALLOC_VECTORS
- * PF sends this message to request additional interrupt vectors beyond the
+/**
+ * struct virtchnl2_alloc_vectors - Vector allocation info
+ * @num_vectors: Number of vectors
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends this message to request additional interrupt vectors beyond the
  * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
  * structure is used to specify the number of vectors requested. CP responds
  * with the same structure with the actual number of vectors assigned followed
  * by virtchnl2_vector_chunks structure identifying the vector ids.
+ *
+ * Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
@@ -999,46 +1218,46 @@ struct virtchnl2_alloc_vectors {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
 
-/* VIRTCHNL2_OP_DEALLOC_VECTORS
- * PF sends this message to release the vectors.
- * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
- * away. CP performs requested action and returns status.
- */
-
-/* VIRTCHNL2_OP_GET_RSS_LUT
- * VIRTCHNL2_OP_SET_RSS_LUT
- * PF sends this message to get or set RSS lookup table. Only supported if
+/**
+ * struct virtchnl2_rss_lut - RSS LUT info
+ * @vport_id: Vport id
+ * @lut_entries_start: Start of LUT entries
+ * @lut_entries: Number of LUT entries
+ * @pad: Padding
+ * @lut: RSS lookup table
+ *
+ * PF/VF sends this message to get or set RSS lookup table. Only supported if
  * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_lut structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
  */
 struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	/* RSS lookup table */
 	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * PF sends this message to get RSS key. Only supported if both PF and CP
- * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
- * the virtchnl2_rss_key structure
- */
-
-/* VIRTCHNL2_OP_GET_RSS_HASH
- * VIRTCHNL2_OP_SET_RSS_HASH
- * PF sends these messages to get and set the hash filter enable bits for RSS.
- * By default, the CP sets these to all possible traffic types that the
+/**
+ * struct virtchnl2_rss_hash - RSS hash info
+ * @ptype_groups: Packet type groups bitmap
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends these messages to get and set the hash filter enable bits for
+ * RSS. By default, the CP sets these to all possible traffic types that the
  * hardware supports. The PF can query this value if it wants to change the
  * traffic types that are hashed by the hardware.
  * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
  * during configuration negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH.
  */
 struct virtchnl2_rss_hash {
-	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
 	u8 pad[4];
@@ -1046,12 +1265,18 @@ struct virtchnl2_rss_hash {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
 
-/* VIRTCHNL2_OP_SET_SRIOV_VFS
+/**
+ * struct virtchnl2_sriov_vfs_info - VFs info
+ * @num_vfs: Number of VFs
+ * @pad: Padding for future extensions
+ *
  * This message is used to set number of SRIOV VFs to be created. The actual
  * allocation of resources for the VFs in terms of vport, queues and interrupts
- * is done by CP. When this call completes, the APF driver calls
+ * is done by CP. When this call completes, the IDPF driver calls
  * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
  * The number of VFs set to 0 will destroy all the VFs of this function.
+ *
+ * Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
  */
 
 struct virtchnl2_sriov_vfs_info {
@@ -1061,8 +1286,14 @@ struct virtchnl2_sriov_vfs_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 
-/* structure to specify single chunk of queue */
-/* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_queue_reg_chunks - Specify several chunks of
+ *						contiguous queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info. 'chunks' is fixed size (not flexible) and
+ *	    will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1071,8 +1302,14 @@ struct virtchnl2_non_flex_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 
-/* structure to specify single chunk of interrupt vector */
-/* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_vector_chunks - Chunks of contiguous interrupt
+ *					     vectors.
+ * @num_vchunks: Number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info. 'vchunks' is fixed size
+ *	     (not flexible) and will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -1081,40 +1318,49 @@ struct virtchnl2_non_flex_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_non_flex_vector_chunks);
 
-/* VIRTCHNL2_OP_NON_FLEX_CREATE_ADI
+/**
+ * struct virtchnl2_non_flex_create_adi - Create ADI
+ * @pasid: PF sends PASID to CP
+ * @mbx_id: mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
+ *	    id else it is set to 0 by PF.
+ * @mbx_vec_id: PF sends mailbox vector id to CP
+ * @adi_index: PF populates this ADI index
+ * @adi_id: CP populates ADI id
+ * @pad: Padding
+ * @chunks: CP populates queue chunks
+ * @vchunks: PF sends vector chunks to CP
+ *
  * PF sends this message to CP to create ADI by filling in required
  * fields of virtchnl2_non_flex_create_adi structure.
- * CP responds with the updated virtchnl2_non_flex_create_adi structure containing
- * the necessary fields followed by chunks which in turn will have an array of
- * num_chunks entries of virtchnl2_queue_chunk structures.
+ * CP responds with the updated virtchnl2_non_flex_create_adi structure
+ * containing the necessary fields followed by chunks which in turn will have
+ * an array of num_chunks entries of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_CREATE_ADI.
  */
 struct virtchnl2_non_flex_create_adi {
-	/* PF sends PASID to CP */
 	__le32 pasid;
-	/*
-	 * mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
-	 * id else it is set to 0 by PF
-	 */
 	__le16 mbx_id;
-	/* PF sends mailbox vector id to CP */
 	__le16 mbx_vec_id;
-	/* PF populates this ADI index */
 	__le16 adi_index;
-	/* CP populates ADI id */
 	__le16 adi_id;
 	u8 pad[68];
-	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
-	/* PF sends vector chunks to CP */
 	struct virtchnl2_non_flex_vector_chunks vchunks;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
 
-/* VIRTCHNL2_OP_DESTROY_ADI
+/**
+ * struct virtchnl2_non_flex_destroy_adi - Destroy ADI
+ * @adi_id: ADI id to destroy
+ * @pad: Padding
+ *
  * PF sends this message to CP to destroy ADI by filling
  * in the adi_id in virtchnl2_non_flex_destroy_adi structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI.
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
@@ -1123,7 +1369,17 @@ struct virtchnl2_non_flex_destroy_adi {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 
-/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+/**
+ * struct virtchnl2_ptype - Packet type info
+ * @ptype_id_10: 10-bit packet type
+ * @ptype_id_8: 8-bit packet type
+ * @proto_id_count: Number of protocol ids the packet supports, maximum of 32
+ *		    protocol ids are supported.
+ * @pad: Padding
+ * @proto_id: proto_id_count decides the allocation of protocol id array.
+ *	      See enum virtchnl2_proto_hdr_type.
+ *
+ * Based on the descriptor type the PF supports, CP fills ptype_id_10 or
  * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
  * is set to 0xFFFF, PF should consider this ptype as dummy one and it is the
  * last ptype.
@@ -1131,32 +1387,42 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 struct virtchnl2_ptype {
 	__le16 ptype_id_10;
 	u8 ptype_id_8;
-	/* number of protocol ids the packet supports, maximum of 32
-	 * protocol ids are supported
-	 */
 	u8 proto_id_count;
 	__le16 pad;
-	/* proto_id_count decides the allocation of protocol id array */
-	/* see VIRTCHNL2_PROTO_HDR_TYPE */
 	__le16 proto_id[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
 
-/* VIRTCHNL2_OP_GET_PTYPE_INFO
- * PF sends this message to CP to get all supported packet types. It does by
- * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
- * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
- * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
- * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
- * doesn't fit into one mailbox buffer, CP splits ptype info into multiple
- * messages, where each message will have the start ptype id, number of ptypes
- * sent in that message and the ptype array itself. When CP is done updating
- * all ptype information it extracted from the package (number of ptypes
- * extracted might be less than what PF expects), it will append a dummy ptype
- * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
- * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
- * messages.
+/**
+ * struct virtchnl2_get_ptype_info - Packet type info
+ * @start_ptype_id: Starting ptype ID
+ * @num_ptypes: Number of packet types from start_ptype_id
+ * @pad: Padding for future extensions
+ * @ptype: Array of packet type info
+ *
+ * The total number of supported packet types is based on the descriptor type.
+ * For the flex descriptor, it is 1024 (10-bit ptype), and for the base
+ * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
+ * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
+ * 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
+ * are added at the end of the 'virtchnl2_get_ptype_info' message (Note: There
+ * is no specific field for the ptypes, but they are added at the end of the
+ * ptype info message. PF/VF is expected to extract the ptypes accordingly.
+ * Reason for doing this is because compiler doesn't allow nested flexible
+ * array fields).
+ *
+ * If all the ptypes don't fit into one mailbox buffer, CP splits the
+ * ptype info into multiple messages, where each message will have its own
+ * 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
+ * updating all the ptype information extracted from the package (the number of
+ * ptypes extracted might be less than what PF/VF expects), it will append a
+ * dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
+ * to the ptype array.
+ *
+ * PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
  */
 struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
@@ -1167,25 +1433,46 @@ struct virtchnl2_get_ptype_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
 
-/* VIRTCHNL2_OP_GET_STATS
+/**
+ * struct virtchnl2_vport_stats - Vport statistics
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @rx_bytes: Received bytes
+ * @rx_unicast: Received unicast packets
+ * @rx_multicast: Received multicast packets
+ * @rx_broadcast: Received broadcast packets
+ * @rx_discards: Discarded packets on receive
+ * @rx_errors: Receive errors
+ * @rx_unknown_protocol: Unknown protocol
+ * @tx_bytes: Transmitted bytes
+ * @tx_unicast: Transmitted unicast packets
+ * @tx_multicast: Transmitted multicast packets
+ * @tx_broadcast: Transmitted broadcast packets
+ * @tx_discards: Discarded packets on transmit
+ * @tx_errors: Transmit errors
+ * @rx_invalid_frame_length: Packets with invalid frame length
+ * @rx_overflow_drop: Packets dropped on buffer overflow
+ *
  * PF/VF sends this message to CP to get the update stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ *
+ * Associated with VIRTCHNL2_OP_GET_STATS.
  */
 struct virtchnl2_vport_stats {
 	__le32 vport_id;
 	u8 pad[4];
 
-	__le64 rx_bytes;		/* received bytes */
-	__le64 rx_unicast;		/* received unicast pkts */
-	__le64 rx_multicast;		/* received multicast pkts */
-	__le64 rx_broadcast;		/* received broadcast pkts */
+	__le64 rx_bytes;
+	__le64 rx_unicast;
+	__le64 rx_multicast;
+	__le64 rx_broadcast;
 	__le64 rx_discards;
 	__le64 rx_errors;
 	__le64 rx_unknown_protocol;
-	__le64 tx_bytes;		/* transmitted bytes */
-	__le64 tx_unicast;		/* transmitted unicast pkts */
-	__le64 tx_multicast;		/* transmitted multicast pkts */
-	__le64 tx_broadcast;		/* transmitted broadcast pkts */
+	__le64 tx_bytes;
+	__le64 tx_unicast;
+	__le64 tx_multicast;
+	__le64 tx_broadcast;
 	__le64 tx_discards;
 	__le64 tx_errors;
 	__le64 rx_invalid_frame_length;
@@ -1194,7 +1481,9 @@ struct virtchnl2_vport_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
 
-/* physical port statistics */
+/**
+ * struct virtchnl2_phy_port_stats - Physical port statistics
+ */
 struct virtchnl2_phy_port_stats {
 	__le64 rx_bytes;
 	__le64 rx_unicast_pkts;
@@ -1247,10 +1536,17 @@ struct virtchnl2_phy_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(600, virtchnl2_phy_port_stats);
 
-/* VIRTCHNL2_OP_GET_PORT_STATS
- * PF/VF sends this message to CP to get the updated stats by specifying the
+/**
+ * struct virtchnl2_port_stats - Port statistics
+ * @vport_id: Vport ID
+ * @pad: Padding
+ * @phy_port_stats: Physical port statistics
+ * @virt_port_stats: Vport statistics
+ *
+ * PF/VF sends this message to CP to get the updated stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_port_stats that
  * includes both physical port as well as vport statistics.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PORT_STATS.
  */
 struct virtchnl2_port_stats {
 	__le32 vport_id;
@@ -1262,44 +1558,61 @@ struct virtchnl2_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(736, virtchnl2_port_stats);
 
-/* VIRTCHNL2_OP_EVENT
+/**
+ * struct virtchnl2_event - Event info
+ * @event: Event opcode. See enum virtchnl2_event_codes
+ * @link_speed: Link speed provided in Mbps
+ * @vport_id: Vport ID
+ * @link_status: Link status
+ * @pad: Padding
+ * @adi_id: ADI id
+ *
  * CP sends this message to inform the PF/VF driver of events that may affect
  * it. No direct response is expected from the driver, though it may generate
  * other messages in response to this one.
+ *
+ * Associated with VIRTCHNL2_OP_EVENT.
  */
 struct virtchnl2_event {
-	/* see VIRTCHNL2_EVENT_CODES definitions */
 	__le32 event;
-	/* link_speed provided in Mbps */
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
 	u8 pad;
-
-	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * VIRTCHNL2_OP_SET_RSS_KEY
+/**
+ * struct virtchnl2_rss_key - RSS key info
+ * @vport_id: Vport id
+ * @key_len: Length of RSS key
+ * @pad: Padding
+ * @key: RSS hash key, packed bytes
+ *
  * PF/VF sends this message to get or set RSS key. Only supported if both
  * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_key structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
  */
 struct virtchnl2_rss_key {
 	__le32 vport_id;
 	__le16 key_len;
 	u8 pad;
-	u8 key[1];         /* RSS hash key, packed bytes */
+	u8 key[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
 
-/* structure to specify a chunk of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunk - Chunk of contiguous queues
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Starting queue id
+ * @num_queues: Number of queues
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
@@ -1308,7 +1621,11 @@ struct virtchnl2_queue_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunks - Chunks of contiguous queues
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
+ */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1317,14 +1634,19 @@ struct virtchnl2_queue_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
 
-/* VIRTCHNL2_OP_ENABLE_QUEUES
- * VIRTCHNL2_OP_DISABLE_QUEUES
- * VIRTCHNL2_OP_DEL_QUEUES
+/**
+ * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
  *
- * PF sends these messages to enable, disable or delete queues specified in
- * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * PF/VF sends these messages to enable, disable or delete queues specified in
+ * chunks. It sends virtchnl2_del_ena_dis_queues struct to specify the queues
  * to be enabled/disabled/deleted. Also applicable to single queue receive or
  * transmit. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
+ * VIRTCHNL2_OP_DEL_QUEUES.
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
@@ -1335,30 +1657,43 @@ struct virtchnl2_del_ena_dis_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
 
-/* Queue to vector mapping */
+/**
+ * struct virtchnl2_queue_vector - Queue to vector mapping
+ * @queue_id: Queue id
+ * @vector_id: Vector id
+ * @pad: Padding
+ * @itr_idx: See enum virtchnl2_itr_idx
+ * @queue_type: See enum virtchnl2_queue_type
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_vector {
 	__le32 queue_id;
 	__le16 vector_id;
 	u8 pad[2];
 
-	/* see VIRTCHNL2_ITR_IDX definitions */
 	__le32 itr_idx;
 
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
 	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
 
-/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
- * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+/**
+ * struct virtchnl2_queue_vector_maps - Map/unmap queues info
+ * @vport_id: Vport id
+ * @num_qv_maps: Number of queue vector maps
+ * @pad: Padding
+ * @qv_maps: Queue to vector maps
  *
- * PF sends this message to map or unmap queues to vectors and interrupt
+ * PF/VF sends this message to map or unmap queues to vectors and interrupt
  * throttling rate index registers. External data buffer contains
  * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
  * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
  * after validating the queue and vector ids and returns a status code.
+ *
+ * Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
  */
 struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
@@ -1369,11 +1704,17 @@ struct virtchnl2_queue_vector_maps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
 
-/* VIRTCHNL2_OP_LOOPBACK
+/**
+ * struct virtchnl2_loopback - Loopback info
+ * @vport_id: Vport id
+ * @enable: Enable/disable
+ * @pad: Padding for future extensions
  *
  * PF/VF sends this message to transition to/from the loopback state. Setting
  * the 'enable' to 1 enables the loopback state and setting 'enable' to 0
  * disables it. CP configures the state to loopback and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_LOOPBACK.
  */
 struct virtchnl2_loopback {
 	__le32 vport_id;
@@ -1383,22 +1724,31 @@ struct virtchnl2_loopback {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
 
-/* structure to specify each MAC address */
+/**
+ * struct virtchnl2_mac_addr - MAC address info
+ * @addr: MAC address
+ * @type: MAC type. See enum virtchnl2_mac_addr_type.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_mac_addr {
 	u8 addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_MAC_TYPE definitions */
 	u8 type;
 	u8 pad;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
 
-/* VIRTCHNL2_OP_ADD_MAC_ADDR
- * VIRTCHNL2_OP_DEL_MAC_ADDR
+/**
+ * struct virtchnl2_mac_addr_list - List of MAC addresses
+ * @vport_id: Vport id
+ * @num_mac_addr: Number of MAC addresses
+ * @pad: Padding
+ * @mac_addr_list: List with MAC address info
  *
  * PF/VF driver uses this structure to send a list of MAC addresses to be
  * added/deleted to the CP, whereas the CP performs the action and returns
  * the status.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
 struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
@@ -1409,30 +1759,40 @@ struct virtchnl2_mac_addr_list {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
 
-/* VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE
+/**
+ * struct virtchnl2_promisc_info - Promiscuous type information
+ * @vport_id: Vport id
+ * @flags: See enum virtchnl2_promisc_flags
+ * @pad: Padding for future extensions
  *
  * PF/VF sends vport id and flags to the CP, whereas the CP performs the action
  * and returns the status.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
  */
 struct virtchnl2_promisc_info {
 	__le32 vport_id;
-	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
 	__le16 flags;
 	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
 
-/* VIRTCHNL2_PTP_CAPS
- * PTP capabilities
+/**
+ * enum virtchnl2_ptp_caps - PTP capabilities
  */
-#define VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	BIT(0)
-#define VIRTCHNL2_PTP_CAP_PTM			BIT(1)
-#define VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	BIT(2)
-#define VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	BIT(3)
-#define	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	BIT(4)
+enum virtchnl2_ptp_caps {
+	VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	= BIT(0),
+	VIRTCHNL2_PTP_CAP_PTM			= BIT(1),
+	VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	= BIT(2),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	= BIT(3),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	= BIT(4),
+};
 
-/* Legacy cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_legacy_cross_time_reg - Legacy cross time registers
+ *						offsets.
+ */
 struct virtchnl2_ptp_legacy_cross_time_reg {
 	__le32 shadow_time_0;
 	__le32 shadow_time_l;
@@ -1442,7 +1802,9 @@ struct virtchnl2_ptp_legacy_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_legacy_cross_time_reg);
 
-/* PTM cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_ptm_cross_time_reg - PTM cross time registers offsets
+ */
 struct virtchnl2_ptp_ptm_cross_time_reg {
 	__le32 art_l;
 	__le32 art_h;
@@ -1452,7 +1814,10 @@ struct virtchnl2_ptp_ptm_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_ptm_cross_time_reg);
 
-/* Registers needed to control the main clock */
+/**
+ * struct virtchnl2_ptp_device_clock_control - Registers needed to control the
+ *					       main clock.
+ */
 struct virtchnl2_ptp_device_clock_control {
 	__le32 cmd;
 	__le32 incval_l;
@@ -1464,7 +1829,13 @@ struct virtchnl2_ptp_device_clock_control {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_device_clock_control);
 
-/* Structure that defines tx tstamp entry - index and register offset */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_entry - PTP TX timestamp entry
+ * @tx_latch_register_base: TX latch register base
+ * @tx_latch_register_offset: TX latch register offset
+ * @index: Index
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_entry {
 	__le32 tx_latch_register_base;
 	__le32 tx_latch_register_offset;
@@ -1474,12 +1845,15 @@ struct virtchnl2_ptp_tx_tstamp_entry {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_entry);
 
-/* Structure that defines tx tstamp entries - total number of latches
- * and the array of entries.
+/**
+ * struct virtchnl2_ptp_tx_tstamp - Structure that defines tx tstamp entries
+ * @num_latches: Total number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @ptp_tx_tstamp_entries: Array of TX timestamp entries
  */
 struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
 	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
@@ -1487,13 +1861,21 @@ struct virtchnl2_ptp_tx_tstamp {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
 
-/* VIRTCHNL2_OP_GET_PTP_CAPS
+/**
+ * struct virtchnl2_get_ptp_caps - Get PTP capabilities
+ * @ptp_caps: PTP capability bitmap. See enum virtchnl2_ptp_caps.
+ * @pad: Padding
+ * @legacy_cross_time_reg: Legacy cross time register
+ * @ptm_cross_time_reg: PTM cross time register
+ * @device_clock_control: Device clock control
+ * @tx_tstamp: TX timestamp
+ *
  * PF/VF sends this message to negotiate PTP capabilities. CP updates bitmap
  * with supported features and fulfills appropriate structures.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_CAPS.
  */
 struct virtchnl2_get_ptp_caps {
-	/* PTP capability bitmap */
-	/* see VIRTCHNL2_PTP_CAPS definitions */
 	__le32 ptp_caps;
 	u8 pad[4];
 
@@ -1505,7 +1887,15 @@ struct virtchnl2_get_ptp_caps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
 
-/* Structure that describes tx tstamp values, index and validity */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
+ *					  values, index and validity.
+ * @tstamp_h: Timestamp high
+ * @tstamp_l: Timestamp low
+ * @index: Index
+ * @valid: Timestamp validity
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_latch {
 	__le32 tstamp_h;
 	__le32 tstamp_l;
@@ -1516,9 +1906,17 @@ struct virtchnl2_ptp_tx_tstamp_latch {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
 
-/* VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latches - PTP TX timestamp latches
+ * @num_latches: Number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @tstamp_latches: PTP TX timestamp latch
+ *
  * PF/VF sends this message to receive a specified number of timestamps
  * entries.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES.
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
@@ -1613,7 +2011,7 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
  * @msg: pointer to the msg buffer
  * @msglen: msg length
  *
- * validate msg format against struct for each opcode
+ * Validate msg format against struct for each opcode.
  */
 static inline int
 virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
@@ -1622,7 +2020,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	bool err_msg_format = false;
 	__le32 valid_len = 0;
 
-	/* Validate message length. */
+	/* Validate message length */
 	switch (v_opcode) {
 	case VIRTCHNL2_OP_VERSION:
 		valid_len = sizeof(struct virtchnl2_version_info);
@@ -1637,7 +2035,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_create_vport *)msg;
 
 			if (cvport->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1652,7 +2050,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_non_flex_create_adi *)msg;
 
 			if (cadi->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1707,7 +2105,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_add_queues *)msg;
 
 			if (add_q->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1734,7 +2132,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
 		valid_len = sizeof(struct virtchnl2_add_queue_groups);
 		if (msglen != valid_len) {
-			__le32 i = 0, offset = 0;
+			__le64 offset;
+			__le32 i;
 			struct virtchnl2_add_queue_groups *add_queue_grp =
 				(struct virtchnl2_add_queue_groups *)msg;
 			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
@@ -1801,7 +2200,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_alloc_vectors *)msg;
 
 			if (v_av->vchunks.num_vchunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1830,7 +2229,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_key *)msg;
 
 			if (vrk->key_len == 0) {
-				/* zero length is allowed as input */
+				/* Zero length is allowed as input */
 				break;
 			}
 
@@ -1845,7 +2244,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_lut *)msg;
 
 			if (vrl->lut_entries == 0) {
-				/* zero entries is allowed as input */
+				/* Zero entries is allowed as input */
 				break;
 			}
 
@@ -1902,13 +2301,13 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
 		}
 		break;
-	/* These are always errors coming from the VF. */
+	/* These are always errors coming from the VF */
 	case VIRTCHNL2_OP_EVENT:
 	case VIRTCHNL2_OP_UNKNOWN:
 	default:
 		return VIRTCHNL2_STATUS_ERR_ESRCH;
 	}
-	/* few more checks */
+	/* A few more checks */
 	if (err_msg_format || valid_len != msglen)
 		return VIRTCHNL2_STATUS_ERR_EINVAL;
 
diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index 9e04cf8628..f7521d87a7 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 /*
  * Copyright (C) 2019 Intel Corporation
@@ -12,199 +12,220 @@
 /* VIRTCHNL2_TX_DESC_IDS
  * Transmit descriptor ID flags
  */
-#define VIRTCHNL2_TXDID_DATA				BIT(0)
-#define VIRTCHNL2_TXDID_CTX				BIT(1)
-#define VIRTCHNL2_TXDID_REINJECT_CTX			BIT(2)
-#define VIRTCHNL2_TXDID_FLEX_DATA			BIT(3)
-#define VIRTCHNL2_TXDID_FLEX_CTX			BIT(4)
-#define VIRTCHNL2_TXDID_FLEX_TSO_CTX			BIT(5)
-#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		BIT(6)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		BIT(7)
-#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	BIT(8)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	BIT(9)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		BIT(10)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			BIT(11)
-#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			BIT(12)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		BIT(13)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		BIT(14)
-#define VIRTCHNL2_TXDID_DESC_DONE			BIT(15)
-
-/* VIRTCHNL2_RX_DESC_IDS
+enum virtchnl2_tx_desc_ids {
+	VIRTCHNL2_TXDID_DATA				= BIT(0),
+	VIRTCHNL2_TXDID_CTX				= BIT(1),
+	VIRTCHNL2_TXDID_REINJECT_CTX			= BIT(2),
+	VIRTCHNL2_TXDID_FLEX_DATA			= BIT(3),
+	VIRTCHNL2_TXDID_FLEX_CTX			= BIT(4),
+	VIRTCHNL2_TXDID_FLEX_TSO_CTX			= BIT(5),
+	VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		= BIT(6),
+	VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		= BIT(7),
+	VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	= BIT(8),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	= BIT(9),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		= BIT(10),
+	VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			= BIT(11),
+	VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			= BIT(12),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		= BIT(13),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		= BIT(14),
+	VIRTCHNL2_TXDID_DESC_DONE			= BIT(15),
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_IDS
  * Receive descriptor IDs (range from 0 to 63)
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE			0
-#define VIRTCHNL2_RXDID_1_32B_BASE			1
-/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
- * differentiated based on queue model; e.g. single queue model can
- * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
- * for DID 2.
- */
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ			2
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC			2
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW			3
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB		4
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL		5
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2			6
-#define VIRTCHNL2_RXDID_7_HW_RSVD			7
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC		16
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN		17
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4		18
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6		19
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW		20
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP		21
-/* 22 through 63 are reserved */
-
-/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+enum virtchnl2_rx_desc_ids {
+	VIRTCHNL2_RXDID_0_16B_BASE,
+	VIRTCHNL2_RXDID_1_32B_BASE,
+	/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+	 * differentiated based on queue model; e.g. single queue model can
+	 * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+	 * for DID 2.
+	 */
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ		= 2,
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC		= VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW		= 3,
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB	= 4,
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL	= 5,
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2		= 6,
+	VIRTCHNL2_RXDID_7_HW_RSVD		= 7,
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC	= 16,
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN	= 17,
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4	= 18,
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6	= 19,
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW	= 20,
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP	= 21,
+	/* 22 through 63 are reserved */
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
-/* 22 through 63 are reserved */
-
-/* Rx */
+#define VIRTCHNL2_RXDID_M(bit)			BIT_ULL(VIRTCHNL2_RXDID_##bit)
+
+enum virtchnl2_rx_desc_id_bitmasks {
+	VIRTCHNL2_RXDID_0_16B_BASE_M		= VIRTCHNL2_RXDID_M(0_16B_BASE),
+	VIRTCHNL2_RXDID_1_32B_BASE_M		= VIRTCHNL2_RXDID_M(1_32B_BASE),
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		= VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		= VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		= VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW),
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	= VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB),
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	= VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL),
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	= VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2),
+	VIRTCHNL2_RXDID_7_HW_RSVD_M		= VIRTCHNL2_RXDID_M(7_HW_RSVD),
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	= VIRTCHNL2_RXDID_M(16_COMMS_GENERIC),
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	= VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN),
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	= VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4),
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	= VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6),
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	= VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW),
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	= VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP),
+	/* 22 through 63 are reserved */
+};
+
 /* For splitq virtchnl2_rx_flex_desc_adv desc members */
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		GENMASK(3, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		GENMASK(7, 6)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		\
-	IDPF_M(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M			\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M		GENMASK(15, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M	\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M		GENMASK(13, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M		GENMASK(14, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
 
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S		7
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST			8 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		0 /* 2 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			8 /* this entry must be last!!! */
-
-/* for singleq (flex) virtchnl2_rx_flex_desc fields */
-/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		= 0,
+	/* 2 bits */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		= 2,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		= 3,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	= 4,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	= 5,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	= 6,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	= 7,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			= 8,
+};
+
+/* For singleq (flex) virtchnl2_rx_flex_desc fields,
+ * for virtchnl2_rx_flex_desc.ptype_flex_flags0 member
+ */
 #define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			GENMASK(9, 0)
 
-/* for virtchnl2_rx_flex_desc.pkt_length member */
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M			\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+/* For virtchnl2_rx_flex_desc.pkt_len member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S		0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M		GENMASK(13, 0)
 
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S			1
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S			2
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S			3
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S		7
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S			8
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S		9
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S			10
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S			11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST			16 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+enum virtchnl2_rx_flex_desc_status_error_0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			0 /* 4 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			5
-/* [10:6] reserved */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			16 /* this entry must be last!!! */
-
-/* for virtchnl2_rx_flex_desc.ts_low member */
+enum virtchnl2_rx_flex_desc_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			= 0,
+	/* 4 bits */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			= 4,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			= 5,
+	/* [10:6] reserved */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		= 11,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		= 12,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		= 13,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		= 14,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		= 15,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			= 16,
+};
+
+/* For virtchnl2_rx_flex_desc.ts_low member */
 #define VIRTCHNL2_RX_FLEX_TSTAMP_VALID				BIT(0)
 
 /* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
@@ -212,72 +233,89 @@
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M	\
 	BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S	52
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	\
-	IDPF_M(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	GENMASK_ULL(62, 52)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S	38
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	\
-	IDPF_M(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	GENMASK_ULL(51, 38)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S	30
-#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	\
-	IDPF_M(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	GENMASK_ULL(37, 30)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S	19
-#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	\
-	IDPF_M(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	GENMASK_ULL(26, 19)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S	0
-#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	\
-	IDPF_M(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	GENMASK_ULL(18, 0)
 
-/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		0
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		1
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		2
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		3
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		4
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		5 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	8
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		9 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		11
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		12 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		14
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	15
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		16 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	18
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		19 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		= 1,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		= 5, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	= 8,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		= 9, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		= 11,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		= 12, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		= 14,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	= 15,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		= 16, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	= 18,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		= 19, /* this entry must be last!!! */
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S	0
+enum virtchnl2_rx_base_desc_ext_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S,
+};
 
-/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		0
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	1
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		2
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		3 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		3
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		4
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		5
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		6
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		7
-
-/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_error_bits {
+	VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	= 1,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		= 3, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		= 5,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		= 6,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		= 7,
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA		0
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID		1
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV		2
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH		3
+enum virtchnl2_rx_base_desc_fltstat_values {
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH,
+};
 
-/* Receive Descriptors */
-/* splitq buf
+/**
+ * struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
+ * @qword0: RX buffer struct
+ * @qword0.buf_id: Buffer identifier
+ * @qword0.rsvd0: Reserved
+ * @qword0.rsvd1: Reserved
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd2: Reserved
+ *
+ * Receive Descriptors
+ * SplitQ buffer
  * |                                       16|                   0|
  * ----------------------------------------------------------------
  * | RSV                                     | Buffer ID          |
@@ -292,16 +330,23 @@
  */
 struct virtchnl2_splitq_rx_buf_desc {
 	struct {
-		__le16  buf_id; /* Buffer Identifier */
+		__le16  buf_id;
 		__le16  rsvd0;
 		__le32  rsvd1;
 	} qword0;
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
-/* singleq buf
+/**
+ * struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd1: Reserved
+ * @rsvd2: Reserved
+ *
+ * SingleQ buffer
  * |                                                             0|
  * ----------------------------------------------------------------
  * | Rx packet buffer address                                     |
@@ -315,18 +360,44 @@ struct virtchnl2_splitq_rx_buf_desc {
  * |                                                             0|
  */
 struct virtchnl2_singleq_rx_buf_desc {
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd1;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
+/**
+ * union virtchnl2_rx_buf_desc - RX buffer descriptor
+ * @read: Singleq RX buffer descriptor format
+ * @split_rd: Splitq RX buffer descriptor format
+ */
 union virtchnl2_rx_buf_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
 	struct virtchnl2_splitq_rx_buf_desc		split_rd;
 };
 
-/* (0x00) singleq wb(compl) */
+/**
+ * struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format
+ * @qword0: First quad word struct
+ * @qword0.lo_dword: Lower dual word struct
+ * @qword0.lo_dword.mirroring_status: Mirrored packet status
+ * @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet
+ * @qword0.hi_dword: High dual word union
+ * @qword0.hi_dword.rss: RSS hash
+ * @qword0.hi_dword.fd_id: Flow director filter id
+ * @qword1: Second quad word struct
+ * @qword1.status_error_ptype_len: Status/error/PTYPE/length
+ * @qword2: Third quad word struct
+ * @qword2.ext_status: Extended status
+ * @qword2.rsvd: Reserved
+ * @qword2.l2tag2_1: Extracted L2 tag 2 from the packet
+ * @qword2.l2tag2_2: Reserved
+ * @qword3: Fourth quad word struct
+ * @qword3.reserved: Reserved
+ * @qword3.fd_id: Flow director filter id
+ *
+ * Profile ID 0x1, SingleQ, base writeback format.
+ */
 struct virtchnl2_singleq_base_rx_desc {
 	struct {
 		struct {
@@ -334,16 +405,15 @@ struct virtchnl2_singleq_base_rx_desc {
 			__le16 l2tag1;
 		} lo_dword;
 		union {
-			__le32 rss; /* RSS Hash */
-			__le32 fd_id; /* Flow Director filter id */
+			__le32 rss;
+			__le32 fd_id;
 		} hi_dword;
 	} qword0;
 	struct {
-		/* status/error/PTYPE/length */
 		__le64 status_error_ptype_len;
 	} qword1;
 	struct {
-		__le16 ext_status; /* extended status */
+		__le16 ext_status;
 		__le16 rsvd;
 		__le16 l2tag2_1;
 		__le16 l2tag2_2;
@@ -352,19 +422,40 @@ struct virtchnl2_singleq_base_rx_desc {
 		__le32 reserved;
 		__le32 fd_id;
 	} qword3;
-}; /* writeback */
+};
 
-/* (0x01) singleq flex compl */
+/**
+ * struct virtchnl2_rx_flex_desc - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @flex_meta0: Flexible metadata container 0
+ * @flex_meta1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @time_stamp_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flex_meta2: Flexible metadata container 2
+ * @flex_meta3: Flexible metadata container 3
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.flex_meta4: Flexible metadata container 4
+ * @flex_ts.flex.flex_meta5: Flexible metadata container 5
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x1, SingleQ, flex completion writeback format.
+ */
 struct virtchnl2_rx_flex_desc {
 	/* Qword 0 */
-	u8 rxdid; /* descriptor builder profile id */
-	u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
-	__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
-	__le16 pkt_len; /* [15:14] are reserved */
-	__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
-					/* sph=[11:11] */
-					/* ff1/ext=[15:12] */
-
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
@@ -390,7 +481,29 @@ struct virtchnl2_rx_flex_desc {
 	} flex_ts;
 };
 
-/* (0x02) */
+/**
+ * struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SingleQ, flex writeback format.
+ */
 struct virtchnl2_rx_flex_desc_nic {
 	/* Qword 0 */
 	u8 rxdid;
@@ -422,8 +535,27 @@ struct virtchnl2_rx_flex_desc_nic {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Switch Profile
- * RxDID Profile Id 3
+/**
+ * struct virtchnl2_rx_flex_desc_sw - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @src_vsi: Source VSI, [10:15] are reserved
+ * @flex_md1_rsvd: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor Switch Profile
+ * RxDID Profile ID 0x3, SingleQ
  * Flex-field 0: Source Vsi
  */
 struct virtchnl2_rx_flex_desc_sw {
@@ -437,9 +569,55 @@ struct virtchnl2_rx_flex_desc_sw {
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
-	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 src_vsi;
 	__le16 flex_md1_rsvd;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le32 rsvd;
+	__le32 ts_high;
+};
 
+#ifndef EXTERNAL_RELEASE
+/**
+ * struct virtchnl2_rx_flex_desc_nic_veb_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @dst_vsi: Destination VSI, [10:15] are reserved
+ * @flex_field_1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 0x4
+ * Flex-field 0: Destination Vsi
+ */
+struct virtchnl2_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi;
+	__le16 flex_field_1;
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flex_flags2;
@@ -448,13 +626,85 @@ struct virtchnl2_rx_flex_desc_sw {
 	__le16 l2tag2_2nd;
 
 	/* Qword 3 */
-	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 rsvd;
 	__le32 ts_high;
 };
 
-
-/* Rx Flex Descriptor NIC Profile
- * RxDID Profile Id 6
+/**
+ * struct virtchnl2_rx_flex_desc_nic_acl_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @acl_ctr0: ACL counter 0
+ * @acl_ctr1: ACL counter 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @acl_ctr2: ACL counter 2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile ID 0x5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct virtchnl2_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd;
+	__le32 ts_high;
+};
+#endif /* !EXTERNAL_RELEASE */
+
+/**
+ * struct virtchnl2_rx_flex_desc_nic_2 - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @src_vsi: Source VSI
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 0x6
  * Flex-field 0: RSS hash lower 16-bits
  * Flex-field 1: RSS hash upper 16-bits
  * Flex-field 2: Flow Id lower 16-bits
@@ -493,29 +743,43 @@ struct virtchnl2_rx_flex_desc_nic_2 {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Advanced (Split Queue Model)
- * RxDID Profile Id 7
+/**
+ * struct virtchnl2_rx_flex_desc_adv - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @fmd0: Flexible metadata container 0
+ * @fmd1: Flexible metadata container 1
+ * @fmd2: Flexible metadata container 2
+ * @fflags2: Flags
+ * @hash3: Upper bits of Rx hash value
+ * @fmd3: Flexible metadata container 3
+ * @fmd4: Flexible metadata container 4
+ * @fmd5: Flexible metadata container 5
+ * @fmd6: Flexible metadata container 6
+ * @fmd7_0: Flexible metadata container 7.0
+ * @fmd7_1: Flexible metadata container 7.1
+ *
+ * RX Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile ID 0x2
  */
 struct virtchnl2_rx_flex_desc_adv {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
@@ -534,10 +798,42 @@ struct virtchnl2_rx_flex_desc_adv {
 	__le16 fmd6;
 	__le16 fmd7_0;
 	__le16 fmd7_1;
-}; /* writeback */
+};
 
-/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
- * RxDID Profile Id 8
+/**
+ * struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @buf_id: Buffer identifier. Only in splitq mode.
+ * @misc: Union
+ * @misc.raw_cs: Raw checksum
+ * @misc.l2tag1: Stripped L2 tag from the received packet
+ * @misc.rscseglen: RSC segment length
+ * @hash1: Lower 16 bits of Rx hash value, hash[15:0]
+ * @ff2_mirrid_hash2: Union
+ * @ff2_mirrid_hash2.fflags2: Flexible flags section 2
+ * @ff2_mirrid_hash2.mirrorid: Mirror id
+ * @ff2_mirrid_hash2.hash2: 8 bits of Rx hash value, hash[23:16]
+ * @hash3: Upper 8 bits of Rx hash value, hash[31:24]
+ * @l2tag2: Extracted L2 tag 2 from the packet
+ * @fmd4: Flexible metadata container 4
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @fmd6: Flexible metadata container 6
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SplitQ, flex writeback format.
+ *
  * Flex-field 0: BufferID
  * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
  * Flex-field 2: Hash[15:0]
@@ -548,30 +844,17 @@ struct virtchnl2_rx_flex_desc_adv {
  */
 struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
 	u8 fflags1;
 	u8 ts_low;
-	__le16 buf_id; /* only in splitq */
+	__le16 buf_id;
 	union {
 		__le16 raw_cs;
 		__le16 l2tag1;
@@ -591,7 +874,7 @@ struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	__le16 l2tag1;
 	__le16 fmd6;
 	__le32 ts_high;
-}; /* writeback */
+};
 
 union virtchnl2_rx_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 13/22] common/idpf: avoid variable 0-init
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (11 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 12/22] common/idpf: move related defines into enums Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 14/22] common/idpf: update in PTP message validation Soumyadeep Hore
                       ` (9 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Don't initialize variables when they are unconditionally assigned
before use.

Also use 'err' consistently instead of the mix of 'status', 'ret_code',
'ret' etc., and rename the return label 'sq_send_command_out' to
'err_unlock' to reflect what it does.
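The convention above can be sketched outside the driver as follows; the
lock helpers and 'send_msgs' below are hypothetical stand-ins for
illustration, not idpf APIs:

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical lock helpers standing in for the control queue lock. */
static bool lock_held;

static void acquire_lock(void) { lock_held = true; }
static void release_lock(void) { lock_held = false; }

/* The convention: locals that are always assigned get no 0-init, a
 * single 'err' carries the return value, and the unlock label is named
 * for what it does rather than for its caller. */
static int send_msgs(int num_avail, int num_msg)
{
	int err = 0;

	acquire_lock();

	if (num_avail < num_msg) {
		err = -ENOSPC;
		goto err_unlock;
	}

	/* ... post descriptors here ... */

err_unlock:
	release_lock();
	return err;
}
```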

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c      | 63 +++++++++----------
 .../common/idpf/base/idpf_controlq_setup.c    | 18 +++---
 2 files changed, 39 insertions(+), 42 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index b5ba9c3bd0..bd23e54421 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -61,7 +61,7 @@ static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
  */
 static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	for (i = 0; i < cq->ring_size; i++) {
 		struct idpf_ctlq_desc *desc = IDPF_CTLQ_DESC(cq, i);
@@ -134,7 +134,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 {
 	struct idpf_ctlq_info *cq;
 	bool is_rxq = false;
-	int status = 0;
+	int err;
 
 	if (!qinfo->len || !qinfo->buf_size ||
 	    qinfo->len > IDPF_CTLQ_MAX_RING_SIZE ||
@@ -164,16 +164,16 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 		is_rxq = true;
 		/* fallthrough */
 	case IDPF_CTLQ_TYPE_MAILBOX_TX:
-		status = idpf_ctlq_alloc_ring_res(hw, cq);
+		err = idpf_ctlq_alloc_ring_res(hw, cq);
 		break;
 	default:
-		status = -EINVAL;
+		err = -EINVAL;
 		break;
 	}
 
-	if (status)
+	if (err)
 #ifdef NVME_CPF
-		return status;
+		return err;
 #else
 		goto init_free_q;
 #endif
@@ -187,7 +187,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			idpf_calloc(hw, qinfo->len,
 				    sizeof(struct idpf_ctlq_msg *));
 		if (!cq->bi.tx_msg) {
-			status = -ENOMEM;
+			err = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
 #endif
@@ -203,17 +203,16 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 
 #ifndef NVME_CPF
 	*cq_out = cq;
-	return status;
+	return 0;
 
 init_dealloc_q_mem:
 	/* free ring buffers and the ring itself */
 	idpf_ctlq_dealloc_ring_res(hw, cq);
 init_free_q:
 	idpf_free(hw, cq);
-	cq = NULL;
 #endif
 
-	return status;
+	return err;
 }
 
 /**
@@ -249,8 +248,8 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 #endif
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
-	int ret_code = 0;
-	int i = 0;
+	int err;
+	int i;
 
 	LIST_INIT(&hw->cq_list_head);
 
@@ -261,19 +260,19 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		cq = *(ctlq + i);
 #endif
 
-		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
-		if (ret_code)
+		err = idpf_ctlq_add(hw, qinfo, &cq);
+		if (err)
 			goto init_destroy_qs;
 	}
 
-	return ret_code;
+	return 0;
 
 init_destroy_qs:
 	LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
 				 idpf_ctlq_info, cq_list)
 		idpf_ctlq_remove(hw, cq);
 
-	return ret_code;
+	return err;
 }
 
 /**
@@ -307,9 +306,9 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 		   u16 num_q_msg, struct idpf_ctlq_msg q_msg[])
 {
 	struct idpf_ctlq_desc *desc;
-	int num_desc_avail = 0;
-	int status = 0;
-	int i = 0;
+	int num_desc_avail;
+	int err = 0;
+	int i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -319,8 +318,8 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* Ensure there are enough descriptors to send all messages */
 	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
 	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
-		status = -ENOSPC;
-		goto sq_send_command_out;
+		err = -ENOSPC;
+		goto err_unlock;
 	}
 
 	for (i = 0; i < num_q_msg; i++) {
@@ -391,10 +390,10 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 
 	wr32(hw, cq->reg.tail, cq->next_to_use);
 
-sq_send_command_out:
+err_unlock:
 	idpf_release_lock(&cq->cq_lock);
 
-	return status;
+	return err;
 }
 
 /**
@@ -418,9 +417,8 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 				struct idpf_ctlq_msg *msg_status[], bool force)
 {
 	struct idpf_ctlq_desc *desc;
-	u16 i = 0, num_to_clean;
+	u16 i, num_to_clean;
 	u16 ntc, desc_err;
-	int ret = 0;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -467,7 +465,7 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 	/* Return number of descriptors actually cleaned */
 	*clean_count = i;
 
-	return ret;
+	return 0;
 }
 
 /**
@@ -534,7 +532,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	u16 ntp = cq->next_to_post;
 	bool buffs_avail = false;
 	u16 tbp = ntp + 1;
-	int status = 0;
 	int i = 0;
 
 	if (*buff_count > cq->ring_size)
@@ -635,7 +632,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* return the number of buffers that were not posted */
 	*buff_count = *buff_count - i;
 
-	return status;
+	return 0;
 }
 
 /**
@@ -654,8 +651,8 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 {
 	u16 num_to_clean, ntc, ret_val, flags;
 	struct idpf_ctlq_desc *desc;
-	int ret_code = 0;
-	u16 i = 0;
+	int err = 0;
+	u16 i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -688,7 +685,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 				      IDPF_CTLQ_FLAG_FTYPE_S;
 
 		if (flags & IDPF_CTLQ_FLAG_ERR)
-			ret_code = -EBADMSG;
+			err = -EBADMSG;
 
 		q_msg[i].cookie.mbx.chnl_opcode = LE32_TO_CPU(desc->cookie_high);
 		q_msg[i].cookie.mbx.chnl_retval = LE32_TO_CPU(desc->cookie_low);
@@ -734,7 +731,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 
 	*num_q_msg = i;
 	if (*num_q_msg == 0)
-		ret_code = -ENOMSG;
+		err = -ENOMSG;
 
-	return ret_code;
+	return err;
 }
diff --git a/drivers/common/idpf/base/idpf_controlq_setup.c b/drivers/common/idpf/base/idpf_controlq_setup.c
index 21f43c74f5..cd6bcb1cf0 100644
--- a/drivers/common/idpf/base/idpf_controlq_setup.c
+++ b/drivers/common/idpf/base/idpf_controlq_setup.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 
@@ -34,7 +34,7 @@ static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
 static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
 				struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	/* Do not allocate DMA buffers for transmit queues */
 	if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
@@ -153,20 +153,20 @@ void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
  */
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
-	int ret_code;
+	int err;
 
 	/* verify input for valid configuration */
 	if (!cq->ring_size || !cq->buf_size)
 		return -EINVAL;
 
 	/* allocate the ring memory */
-	ret_code = idpf_ctlq_alloc_desc_ring(hw, cq);
-	if (ret_code)
-		return ret_code;
+	err = idpf_ctlq_alloc_desc_ring(hw, cq);
+	if (err)
+		return err;
 
 	/* allocate buffers in the rings */
-	ret_code = idpf_ctlq_alloc_bufs(hw, cq);
-	if (ret_code)
+	err = idpf_ctlq_alloc_bufs(hw, cq);
+	if (err)
 		goto idpf_init_cq_free_ring;
 
 	/* success! */
@@ -174,5 +174,5 @@ int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 
 idpf_init_cq_free_ring:
 	idpf_free_dma_mem(hw, &cq->desc_ring);
-	return ret_code;
+	return err;
 }
-- 
2.43.0



* [PATCH v3 14/22] common/idpf: update in PTP message validation
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (12 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 13/22] common/idpf: avoid variable 0-init Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 15/22] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
                       ` (8 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

When the driver sends the message for getting timestamp latches, the
number of latches is equal to 0. The current implementation of the
message validation function incorrectly reports the length of this kind
of message as invalid.
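A minimal sketch of the fixed check, using hypothetical stand-ins for
the virtchnl2 latch structures (the real layouts and sizes in
virtchnl2.h differ):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the virtchnl2 latch structures. */
struct tstamp_latch {
	uint64_t ts;
};

struct tx_tstamp_latches {
	uint16_t num_latches;
	uint8_t pad[6];
	struct tstamp_latch latches[1];	/* 1-sized trailing array */
};

/* Sketch of the fix: valid_len already accounts for one latch, so a
 * zero-latch GET request is exactly valid_len bytes long.  Using '>'
 * instead of '>=' keeps such a message from entering the adjustment
 * below, which would shrink valid_len by one element and wrongly
 * reject the message. */
static bool msg_len_valid(size_t msglen, uint16_t num_latches)
{
	size_t valid_len = sizeof(struct tx_tstamp_latches);

	if (msglen > valid_len)
		valid_len += ((size_t)num_latches - 1) *
			     sizeof(struct tstamp_latch);

	return msglen == valid_len;
}
```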

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index e76ccbd46f..24a8b37876 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -2272,7 +2272,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_CAPS:
 		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_get_ptp_caps *ptp_caps =
 			(struct virtchnl2_get_ptp_caps *)msg;
 
@@ -2288,7 +2288,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
 		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
 			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
 
-- 
2.43.0



* [PATCH v3 15/22] common/idpf: rename INLINE FLOW STEER to FLOW STEER
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (13 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 14/22] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 16/22] common/idpf: add wmb before tail Soumyadeep Hore
                       ` (7 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

This capability bit indicates both inline and sideband flow steering
capability.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 24a8b37876..9dd5191c0e 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -243,7 +243,7 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
 	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
 	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
-	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_FLOW_STEER		= BIT_ULL(6),
 	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
 	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
 	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
-- 
2.43.0



* [PATCH v3 16/22] common/idpf: add wmb before tail
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (14 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 15/22] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 17/22] drivers: add flex array support and fix issues Soumyadeep Hore
                       ` (6 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on customer feedback received while addressing some bugs,
introduce a memory barrier before posting the ctlq tail. This makes
sure memory writes have completed before HW starts processing the
descriptors.
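The ordering requirement can be sketched as below; the ring structure
and 'post_desc' are hypothetical illustrations (the real driver writes
a device tail register via wr32()), with a C11 release fence standing
in for idpf_wmb():

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical descriptor ring; the device tail register is modeled as
 * a plain struct member here. */
struct ring {
	uint64_t desc[8];
	uint32_t tail;
};

/* Sketch of the fix: a write barrier sits between filling the
 * descriptor and bumping the tail doorbell, so the device can never
 * observe the new tail before the descriptor contents. */
static void post_desc(struct ring *r, uint32_t idx, uint64_t val)
{
	r->desc[idx] = val;

	/* make descriptor writes visible before the tail update */
	atomic_thread_fence(memory_order_release);

	r->tail = idx;
}
```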

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index bd23e54421..ba2e328122 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -624,6 +624,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			/* Wrap to end of end ring since current ntp is 0 */
 			cq->next_to_post = cq->ring_size - 1;
 
+		idpf_wmb();
+
 		wr32(hw, cq->reg.tail, cq->next_to_post);
 	}
 
-- 
2.43.0



* [PATCH v3 17/22] drivers: add flex array support and fix issues
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (15 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 16/22] common/idpf: add wmb before tail Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 18/22] common/idpf: enable flow steer capability for vports Soumyadeep Hore
                       ` (5 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on internal Linux upstream feedback received on the IDPF driver,
as well as some references available online, the use of 1-sized array
fields in structures is discouraged, especially in new Linux drivers
that are going to be upstreamed. Instead, flexible array fields are
recommended for dynamically sized structures.

Some fixes based on the code changes are introduced to compile DPDK.
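The difference between the two styles can be sketched as follows; the
structures are hypothetical mirrors of a chunks-style message, not the
real virtchnl2 layouts:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of a chunks-style structure; field sizes are
 * illustrative only. */
struct chunks_old {
	uint16_t num_chunks;
	uint8_t pad[6];
	uint32_t chunks[1];	/* 1-sized array: inflates sizeof() */
};

struct chunks_new {
	uint16_t num_chunks;
	uint8_t pad[6];
	uint32_t chunks[];	/* C99 flexible array member */
};

/* With a flexible array member, sizeof() covers only the fixed header,
 * so the allocation size is simply header + n trailing elements. */
static size_t chunks_new_size(uint16_t n)
{
	return sizeof(struct chunks_new) + (size_t)n * sizeof(uint32_t);
}
```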

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h       | 466 ++++-----------------
 drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
 drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
 3 files changed, 86 insertions(+), 410 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 9dd5191c0e..317bd06c0f 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -63,6 +63,10 @@ enum virtchnl2_status {
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
+
+#define STRUCT_VAR_LEN		1
 
 /**
  * New major set of opcodes introduced and so leaving room for
@@ -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_reg_chunk chunks[1];
+	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 
 /**
  * enum virtchnl2_vport_flags - Vport flags
@@ -773,7 +776,7 @@ struct virtchnl2_create_vport {
 	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(192, virtchnl2_create_vport, chunks.chunks);
 
 /**
  * struct virtchnl2_vport - Vport identifier information
@@ -860,10 +863,9 @@ struct virtchnl2_config_tx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[10];
-	struct virtchnl2_txq_info qinfo[1];
+	struct virtchnl2_txq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(72, virtchnl2_config_tx_queues, qinfo);
 
 /**
  * struct virtchnl2_rxq_info - Receive queue config info
@@ -942,10 +944,9 @@ struct virtchnl2_config_rx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[18];
-	struct virtchnl2_rxq_info qinfo[1];
+	struct virtchnl2_rxq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(112, virtchnl2_config_rx_queues, qinfo);
 
 /**
  * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
@@ -975,16 +976,15 @@ struct virtchnl2_add_queues {
 
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(56, virtchnl2_add_queues, chunks.chunks);
 
 /* Queue Groups Extension */
 /**
  * struct virtchnl2_rx_queue_group_info - RX queue group info
- * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
- *		  allocated by CreateVport command. New size will be returned
- *		  if allocation succeeded, otherwise original rss_size from
- *		  CreateVport will be returned.
+ * @rss_lut_size: User can ask to update rss_lut size originally allocated by
+ *		  CreateVport command. New size will be returned if allocation
+ *		  succeeded, otherwise original rss_size from CreateVport
+ *		  will be returned.
  * @pad: Padding for future extensions
  */
 struct virtchnl2_rx_queue_group_info {
@@ -1012,7 +1012,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
  * @cir_pad: Future extension purpose for CIR only
  * @pad2: Padding for future extensions
  */
-struct virtchnl2_tx_queue_group_info { /* IN */
+struct virtchnl2_tx_queue_group_info {
 	u8 tx_tc;
 	u8 priority;
 	u8 is_sp;
@@ -1045,19 +1045,17 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 /**
  * struct virtchnl2_queue_group_info - Queue group info
  * @qg_id: Queue group ID
- * @num_tx_q: Number of TX queues
- * @num_tx_complq: Number of completion queues
- * @num_rx_q: Number of RX queues
- * @num_rx_bufq: Number of RX buffer queues
+ * @num_tx_q: Number of TX queues requested
+ * @num_tx_complq: Number of completion queues requested
+ * @num_rx_q: Number of RX queues requested
+ * @num_rx_bufq: Number of RX buffer queues requested
  * @tx_q_grp_info: TX queue group info
  * @rx_q_grp_info: RX queue group info
  * @pad: Padding for future extensions
- * @chunks: Queue register chunks
+ * @chunks: Queue register chunks from CP
  */
 struct virtchnl2_queue_group_info {
-	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
-	/* IN, Number of queue of different types in the group. */
 	__le16 num_tx_q;
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
@@ -1066,56 +1064,52 @@ struct virtchnl2_queue_group_info {
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
 	u8 pad[40];
-	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
-};
-
-VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
-
-/**
- * struct virtchnl2_queue_groups - Queue groups list
- * @num_queue_groups: Total number of queue groups
- * @pad: Padding for future extensions
- * @groups: Array of queue group info
- */
-struct virtchnl2_queue_groups {
-	__le16 num_queue_groups;
-	u8 pad[6];
-	struct virtchnl2_queue_group_info groups[1];
+	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(120, virtchnl2_queue_group_info, chunks.chunks);
 
 /**
  * struct virtchnl2_add_queue_groups - Add queue groups
- * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ * @vport_id: Vport_id to add queue group to, same as allocated by
  *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @num_queue_groups: Total number of queue groups
  * @pad: Padding for future extensions
- * @qg_info: IN/OUT. List of all the queue groups
+#ifndef FLEX_ARRAY_SUPPORT
+ * @groups: List of all the queue group info structures
+#endif
  *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
- * groups and queues assigned followed by num_queue_groups and num_chunks of
- * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ * groups and queues assigned followed by num_queue_groups and groups of
+ * virtchnl2_queue_group_info and virtchnl2_queue_chunk structures.
+#ifdef FLEX_ARRAY_SUPPORT
+ * (Note: There is no specific field for the queue group info but are added at
+ * the end of the add queue groups message. Receiver of this message is expected
+ * to extract the queue group info accordingly. Reason for doing this is because
+ * compiler doesn't allow nested flexible array fields).
+#endif
  *
  * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
 	__le32 vport_id;
-	u8 pad[4];
-	struct virtchnl2_queue_groups qg_info;
+	__le16 num_queue_groups;
+	u8 pad[10];
+	struct virtchnl2_queue_group_info groups[STRUCT_VAR_LEN];
+
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
 /**
  * struct virtchnl2_delete_queue_groups - Delete queue groups
- * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ * @vport_id: Vport ID to delete queue group from, same as allocated by
  *	      CreateVport.
- * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @num_queue_groups: Defines number of groups provided
  * @pad: Padding
- * @qg_ids: IN, IDs & types of Queue Groups to delete
+ * @qg_ids: IDs & types of Queue Groups to delete
  *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
@@ -1129,10 +1123,9 @@ struct virtchnl2_delete_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	struct virtchnl2_queue_group_id qg_ids[1];
+	struct virtchnl2_queue_group_id qg_ids[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_delete_queue_groups, qg_ids);
 
 /**
  * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
@@ -1190,10 +1183,9 @@ struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
 
-	struct virtchnl2_vector_chunk vchunks[1];
+	struct virtchnl2_vector_chunk vchunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(48, virtchnl2_vector_chunks, vchunks);
 
 /**
  * struct virtchnl2_alloc_vectors - Vector allocation info
@@ -1215,8 +1207,7 @@ struct virtchnl2_alloc_vectors {
 
 	struct virtchnl2_vector_chunks vchunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(64, virtchnl2_alloc_vectors, vchunks.vchunks);
 
 /**
  * struct virtchnl2_rss_lut - RSS LUT info
@@ -1237,10 +1228,9 @@ struct virtchnl2_rss_lut {
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	__le32 lut[1];
+	__le32 lut[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_rss_lut, lut);
 
 /**
  * struct virtchnl2_rss_hash - RSS hash info
@@ -1389,10 +1379,9 @@ struct virtchnl2_ptype {
 	u8 ptype_id_8;
 	u8 proto_id_count;
 	__le16 pad;
-	__le16 proto_id[1];
+	__le16 proto_id[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(8, virtchnl2_ptype, proto_id);
 
 /**
  * struct virtchnl2_get_ptype_info - Packet type info
@@ -1428,7 +1417,7 @@ struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
-	struct virtchnl2_ptype ptype[1];
+	struct virtchnl2_ptype ptype[STRUCT_VAR_LEN];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
@@ -1629,10 +1618,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_chunk chunks[1];
+	struct virtchnl2_queue_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_queue_chunks, chunks);
 
 /**
  * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
@@ -1654,8 +1642,7 @@ struct virtchnl2_del_ena_dis_queues {
 
 	struct virtchnl2_queue_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(32, virtchnl2_del_ena_dis_queues, chunks.chunks);
 
 /**
  * struct virtchnl2_queue_vector - Queue to vector mapping
@@ -1699,10 +1686,10 @@ struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
 	__le16 num_qv_maps;
 	u8 pad[10];
-	struct virtchnl2_queue_vector qv_maps[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+	struct virtchnl2_queue_vector qv_maps[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_vector_maps, qv_maps);
 
 /**
  * struct virtchnl2_loopback - Loopback info
@@ -1754,10 +1741,10 @@ struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
 	__le16 num_mac_addr;
 	u8 pad[2];
-	struct virtchnl2_mac_addr mac_addr_list[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
+	struct virtchnl2_mac_addr mac_addr_list[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_mac_addr_list, mac_addr_list);
 
 /**
  * struct virtchnl2_promisc_info - Promiscuous type information
@@ -1856,10 +1843,10 @@ struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
+	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp,
+			       ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_get_ptp_caps - Get PTP capabilities
@@ -1884,8 +1871,8 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(88, virtchnl2_get_ptp_caps,
+			       tx_tstamp.ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
@@ -1920,13 +1907,12 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
+	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp_latches);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp_latches,
+			       tstamp_latches);
 
 static inline const char *virtchnl2_op_str(__le32 v_opcode)
 {
@@ -2004,314 +1990,4 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
 	}
 }
 
-/**
- * virtchnl2_vc_validate_vf_msg
- * @ver: Virtchnl2 version info
- * @v_opcode: Opcode for the message
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- *
- * Validate msg format against struct for each opcode.
- */
-static inline int
-virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
-			     u8 *msg, __le16 msglen)
-{
-	bool err_msg_format = false;
-	__le32 valid_len = 0;
-
-	/* Validate message length */
-	switch (v_opcode) {
-	case VIRTCHNL2_OP_VERSION:
-		valid_len = sizeof(struct virtchnl2_version_info);
-		break;
-	case VIRTCHNL2_OP_GET_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_capabilities);
-		break;
-	case VIRTCHNL2_OP_CREATE_VPORT:
-		valid_len = sizeof(struct virtchnl2_create_vport);
-		if (msglen >= valid_len) {
-			struct virtchnl2_create_vport *cvport =
-				(struct virtchnl2_create_vport *)msg;
-
-			if (cvport->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (cvport->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_CREATE_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_create_adi);
-		if (msglen >= valid_len) {
-			struct virtchnl2_non_flex_create_adi *cadi =
-				(struct virtchnl2_non_flex_create_adi *)msg;
-
-			if (cadi->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			if (cadi->vchunks.num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (cadi->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-			valid_len += (cadi->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_destroy_adi);
-		break;
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-		valid_len = sizeof(struct virtchnl2_vport);
-		break;
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_tx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_tx_queues *ctq =
-				(struct virtchnl2_config_tx_queues *)msg;
-			if (ctq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (ctq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_txq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_rx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_rx_queues *crq =
-				(struct virtchnl2_config_rx_queues *)msg;
-			if (crq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (crq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_rxq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUES:
-		valid_len = sizeof(struct virtchnl2_add_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_add_queues *add_q =
-				(struct virtchnl2_add_queues *)msg;
-
-			if (add_q->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (add_q->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_DEL_QUEUES:
-		valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_del_ena_dis_queues *qs =
-				(struct virtchnl2_del_ena_dis_queues *)msg;
-			if (qs->chunks.num_chunks == 0 ||
-			    qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (qs->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_add_queue_groups);
-		if (msglen != valid_len) {
-			__le64 offset;
-			__le32 i;
-			struct virtchnl2_add_queue_groups *add_queue_grp =
-				(struct virtchnl2_add_queue_groups *)msg;
-			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
-			struct virtchnl2_queue_group_info *grp_info;
-			__le32 chunk_size = sizeof(struct virtchnl2_queue_reg_chunk);
-			__le32 group_size = sizeof(struct virtchnl2_queue_group_info);
-			__le32 total_chunks_size;
-
-			if (groups->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (groups->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_info);
-			offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
-
-			for (i = 0; i < groups->num_queue_groups; i++) {
-				grp_info = (struct virtchnl2_queue_group_info *)
-						   ((u8 *)groups + offset);
-				if (grp_info->chunks.num_chunks == 0) {
-					offset += group_size;
-					continue;
-				}
-				total_chunks_size = (grp_info->chunks.num_chunks - 1) * chunk_size;
-				offset += group_size + total_chunks_size;
-				valid_len += total_chunks_size;
-			}
-		}
-		break;
-	case VIRTCHNL2_OP_DEL_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_delete_queue_groups);
-		if (msglen != valid_len) {
-			struct virtchnl2_delete_queue_groups *del_queue_grp =
-				(struct virtchnl2_delete_queue_groups *)msg;
-
-			if (del_queue_grp->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += (del_queue_grp->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_id);
-		}
-		break;
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-		valid_len = sizeof(struct virtchnl2_queue_vector_maps);
-		if (msglen >= valid_len) {
-			struct virtchnl2_queue_vector_maps *v_qp =
-				(struct virtchnl2_queue_vector_maps *)msg;
-			if (v_qp->num_qv_maps == 0 ||
-			    v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_qp->num_qv_maps - 1) *
-				      sizeof(struct virtchnl2_queue_vector);
-		}
-		break;
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_alloc_vectors);
-		if (msglen >= valid_len) {
-			struct virtchnl2_alloc_vectors *v_av =
-				(struct virtchnl2_alloc_vectors *)msg;
-
-			if (v_av->vchunks.num_vchunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (v_av->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_vector_chunks);
-		if (msglen >= valid_len) {
-			struct virtchnl2_vector_chunks *v_chunks =
-				(struct virtchnl2_vector_chunks *)msg;
-			if (v_chunks->num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_chunks->num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-		valid_len = sizeof(struct virtchnl2_rss_key);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_key *vrk =
-				(struct virtchnl2_rss_key *)msg;
-
-			if (vrk->key_len == 0) {
-				/* Zero length is allowed as input */
-				break;
-			}
-
-			valid_len += vrk->key_len - 1;
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-		valid_len = sizeof(struct virtchnl2_rss_lut);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_lut *vrl =
-				(struct virtchnl2_rss_lut *)msg;
-
-			if (vrl->lut_entries == 0) {
-				/* Zero entries is allowed as input */
-				break;
-			}
-
-			valid_len += (vrl->lut_entries - 1) * sizeof(vrl->lut);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-		valid_len = sizeof(struct virtchnl2_rss_hash);
-		break;
-	case VIRTCHNL2_OP_SET_SRIOV_VFS:
-		valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		valid_len = sizeof(struct virtchnl2_get_ptype_info);
-		break;
-	case VIRTCHNL2_OP_GET_STATS:
-		valid_len = sizeof(struct virtchnl2_vport_stats);
-		break;
-	case VIRTCHNL2_OP_GET_PORT_STATS:
-		valid_len = sizeof(struct virtchnl2_port_stats);
-		break;
-	case VIRTCHNL2_OP_RESET_VF:
-		break;
-	case VIRTCHNL2_OP_GET_PTP_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_get_ptp_caps *ptp_caps =
-			(struct virtchnl2_get_ptp_caps *)msg;
-
-			if (ptp_caps->tx_tstamp.num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_entry));
-		}
-		break;
-	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
-		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
-			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
-
-			if (tx_tstamp_latches->num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((tx_tstamp_latches->num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
-		}
-		break;
-	/* These are always errors coming from the VF */
-	case VIRTCHNL2_OP_EVENT:
-	case VIRTCHNL2_OP_UNKNOWN:
-	default:
-		return VIRTCHNL2_STATUS_ERR_ESRCH;
-	}
-	/* Few more checks */
-	if (err_msg_format || valid_len != msglen)
-		return VIRTCHNL2_STATUS_ERR_EINVAL;
-
-	return 0;
-}
-
 #endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index c46ed50eb5..f00202f43c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -366,7 +366,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	int err = -1;
 
 	size = sizeof(*p2p_queue_grps_info) +
-	       (p2p_queue_grps_info->qg_info.num_queue_groups - 1) *
+	       (p2p_queue_grps_info->num_queue_groups - 1) *
 		   sizeof(struct virtchnl2_queue_group_info);
 
 	memset(&args, 0, sizeof(args));
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e718e9e19..e707043bf7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2393,18 +2393,18 @@ cpfl_p2p_q_grps_add(struct idpf_vport *vport,
 	int ret;
 
 	p2p_queue_grps_info->vport_id = vport->vport_id;
-	p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
-	p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
+	p2p_queue_grps_info->num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
+	p2p_queue_grps_info->groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
+	p2p_queue_grps_info->groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
+	p2p_queue_grps_info->groups[0].rx_q_grp_info.rss_lut_size = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.tx_tc = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.priority = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.is_sp = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.pir_weight = 0;
 
 	ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info);
 	if (ret != 0) {
@@ -2423,13 +2423,13 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
 	struct virtchnl2_queue_reg_chunks *vc_chunks_out;
 	int i, type;
 
-	if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
+	if (p2p_q_vc_out_info->groups[0].qg_id.queue_group_type !=
 	    VIRTCHNL2_QUEUE_GROUP_P2P) {
 		PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
 		return -EINVAL;
 	}
 
-	vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
+	vc_chunks_out = &p2p_q_vc_out_info->groups[0].chunks;
 
 	for (i = 0; i < vc_chunks_out->num_chunks; i++) {
 		type = vc_chunks_out->chunks[i].type;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 18/22] common/idpf: enable flow steer capability for vports
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (16 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 17/22] drivers: add flex array support and fix issues Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 19/22] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
                       ` (4 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add virtchnl2_flow_types to be used for flow steering.

Add flow steering capability flags, flow types, and action types
to the vport create message.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 60 ++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 317bd06c0f..c14a4e2c7d 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -269,6 +269,43 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
 };
 
+/**
+ * enum virtchnl2_action_types - Available actions for sideband flow steering
+ * @VIRTCHNL2_ACTION_DROP: Drop the packet
+ * @VIRTCHNL2_ACTION_PASSTHRU: Forward the packet to the next classifier/stage
+ * @VIRTCHNL2_ACTION_QUEUE: Forward the packet to a receive queue
+ * @VIRTCHNL2_ACTION_Q_GROUP: Forward the packet to a receive queue group
+ * @VIRTCHNL2_ACTION_MARK: Mark the packet with specific marker value
+ * @VIRTCHNL2_ACTION_COUNT: Increment the corresponding counter
+ */
+
+enum virtchnl2_action_types {
+	VIRTCHNL2_ACTION_DROP		= BIT(0),
+	VIRTCHNL2_ACTION_PASSTHRU	= BIT(1),
+	VIRTCHNL2_ACTION_QUEUE		= BIT(2),
+	VIRTCHNL2_ACTION_Q_GROUP	= BIT(3),
+	VIRTCHNL2_ACTION_MARK		= BIT(4),
+	VIRTCHNL2_ACTION_COUNT		= BIT(5),
+};
+
+/* Flow type capabilities for Flow Steering and Receive-Side Scaling */
+enum virtchnl2_flow_types {
+	VIRTCHNL2_FLOW_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_FLOW_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_FLOW_IPV4_SCTP	= BIT(2),
+	VIRTCHNL2_FLOW_IPV4_OTHER	= BIT(3),
+	VIRTCHNL2_FLOW_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_FLOW_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_FLOW_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_FLOW_IPV6_OTHER	= BIT(7),
+	VIRTCHNL2_FLOW_IPV4_AH		= BIT(8),
+	VIRTCHNL2_FLOW_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_FLOW_IPV4_AH_ESP	= BIT(10),
+	VIRTCHNL2_FLOW_IPV6_AH		= BIT(11),
+	VIRTCHNL2_FLOW_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_FLOW_IPV6_AH_ESP	= BIT(13),
+};
+
 /**
  * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
  * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
@@ -707,11 +744,16 @@ VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 /**
  * enum virtchnl2_vport_flags - Vport flags
  * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
- * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER: Inline flow steering enabled
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ: Inline flow steering enabled
+ * with explicit Rx queue action
+ * @VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER: Sideband flow steering enabled
  */
 enum virtchnl2_vport_flags {
 	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
-	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ	= BIT(2),
+	VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	= BIT(3),
 };
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
@@ -739,6 +781,14 @@ enum virtchnl2_vport_flags {
  * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
  * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
  * @reserved: Reserved bytes and cannot be used
+ * @inline_flow_types: Bit mask of supported inline-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_types: Bit mask of supported sideband-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_actions: Bit mask of supported action types
+ *  for sideband flow steering (See enum virtchnl2_action_types)
+ * @flow_steer_max_rules: Max rules allowed for inline and sideband
+ *  flow steering combined
  * @rss_algorithm: RSS algorithm
  * @rss_key_size: RSS key size
  * @rss_lut_size: RSS LUT size
@@ -768,7 +818,11 @@ struct virtchnl2_create_vport {
 	__le16 vport_flags;
 	__le64 rx_desc_ids;
 	__le64 tx_desc_ids;
-	u8 reserved[72];
+	u8 reserved[48];
+	__le64 inline_flow_types;
+	__le64 sideband_flow_types;
+	__le32 sideband_flow_actions;
+	__le32 flow_steer_max_rules;
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 19/22] common/idpf: add a new Tx context descriptor structure
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (17 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 18/22] common/idpf: enable flow steer capability for vports Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 20/22] common/idpf: remove idpf common file Soumyadeep Hore
                       ` (3 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a new Tx context descriptor structure that supports timesync
packets; it carries the index used for timestamping.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_lan_txrx.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/idpf_lan_txrx.h b/drivers/common/idpf/base/idpf_lan_txrx.h
index c9eaeb5d3f..be27973a33 100644
--- a/drivers/common/idpf/base/idpf_lan_txrx.h
+++ b/drivers/common/idpf/base/idpf_lan_txrx.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_LAN_TXRX_H_
@@ -286,6 +286,24 @@ struct idpf_flex_tx_tso_ctx_qw {
 };
 
 union idpf_flex_tx_ctx_desc {
+		/* DTYPE = IDPF_TX_DESC_DTYPE_CTX (0x01) */
+	struct  {
+		struct {
+			u8 rsv[4];
+			__le16 l2tag2;
+			u8 rsv_2[2];
+		} qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 tsyn_reg_l;
+#define IDPF_TX_DESC_CTX_TSYN_L_M	GENMASK(15, 14)
+			__le16 tsyn_reg_h;
+#define IDPF_TX_DESC_CTX_TSYN_H_M	GENMASK(15, 0)
+			__le16 mss;
+#define IDPF_TX_DESC_CTX_MSS_M		GENMASK(14, 2)
+		} qw1;
+	} tsyn;
+
 	/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
 	struct {
 		struct idpf_flex_tx_tso_ctx_qw qw0;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 20/22] common/idpf: remove idpf common file
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (18 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 19/22] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 21/22] drivers: adding type to idpf vc queue switch Soumyadeep Hore
                       ` (2 subsequent siblings)
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The idpf_common.c file is redundant in this implementation and is
no longer required.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c | 382 -------------------------
 drivers/common/idpf/base/meson.build   |   1 -
 2 files changed, 383 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
deleted file mode 100644
index bb540345c2..0000000000
--- a/drivers/common/idpf/base/idpf_common.c
+++ /dev/null
@@ -1,382 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2024 Intel Corporation
- */
-
-#include "idpf_prototype.h"
-#include "idpf_type.h"
-#include <virtchnl.h>
-
-
-/**
- * idpf_set_mac_type - Sets MAC type
- * @hw: pointer to the HW structure
- *
- * This function sets the mac type of the adapter based on the
- * vendor ID and device ID stored in the hw structure.
- */
-int idpf_set_mac_type(struct idpf_hw *hw)
-{
-	int status = 0;
-
-	DEBUGFUNC("Set MAC type\n");
-
-	if (hw->vendor_id == IDPF_INTEL_VENDOR_ID) {
-		switch (hw->device_id) {
-		case IDPF_DEV_ID_PF:
-			hw->mac.type = IDPF_MAC_PF;
-			break;
-		case IDPF_DEV_ID_VF:
-			hw->mac.type = IDPF_MAC_VF;
-			break;
-		default:
-			hw->mac.type = IDPF_MAC_GENERIC;
-			break;
-		}
-	} else {
-		status = -ENODEV;
-	}
-
-	DEBUGOUT2("Setting MAC type found mac: %d, returns: %d\n",
-		  hw->mac.type, status);
-	return status;
-}
-
-/**
- *  idpf_init_hw - main initialization routine
- *  @hw: pointer to the hardware structure
- *  @ctlq_size: struct to pass ctlq size data
- */
-int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
-{
-	struct idpf_ctlq_create_info *q_info;
-	int status = 0;
-	struct idpf_ctlq_info *cq = NULL;
-
-	/* Setup initial control queues */
-	q_info = (struct idpf_ctlq_create_info *)
-		 idpf_calloc(hw, 2, sizeof(struct idpf_ctlq_create_info));
-	if (!q_info)
-		return -ENOMEM;
-
-	q_info[0].type             = IDPF_CTLQ_TYPE_MAILBOX_TX;
-	q_info[0].buf_size         = ctlq_size.asq_buf_size;
-	q_info[0].len              = ctlq_size.asq_ring_size;
-	q_info[0].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[0].reg.head         = PF_FW_ATQH;
-		q_info[0].reg.tail         = PF_FW_ATQT;
-		q_info[0].reg.len          = PF_FW_ATQLEN;
-		q_info[0].reg.bah          = PF_FW_ATQBAH;
-		q_info[0].reg.bal          = PF_FW_ATQBAL;
-		q_info[0].reg.len_mask     = PF_FW_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = PF_FW_ATQH_ATQH_M;
-	} else {
-		q_info[0].reg.head         = VF_ATQH;
-		q_info[0].reg.tail         = VF_ATQT;
-		q_info[0].reg.len          = VF_ATQLEN;
-		q_info[0].reg.bah          = VF_ATQBAH;
-		q_info[0].reg.bal          = VF_ATQBAL;
-		q_info[0].reg.len_mask     = VF_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = VF_ATQH_ATQH_M;
-	}
-
-	q_info[1].type             = IDPF_CTLQ_TYPE_MAILBOX_RX;
-	q_info[1].buf_size         = ctlq_size.arq_buf_size;
-	q_info[1].len              = ctlq_size.arq_ring_size;
-	q_info[1].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[1].reg.head         = PF_FW_ARQH;
-		q_info[1].reg.tail         = PF_FW_ARQT;
-		q_info[1].reg.len          = PF_FW_ARQLEN;
-		q_info[1].reg.bah          = PF_FW_ARQBAH;
-		q_info[1].reg.bal          = PF_FW_ARQBAL;
-		q_info[1].reg.len_mask     = PF_FW_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = PF_FW_ARQH_ARQH_M;
-	} else {
-		q_info[1].reg.head         = VF_ARQH;
-		q_info[1].reg.tail         = VF_ARQT;
-		q_info[1].reg.len          = VF_ARQLEN;
-		q_info[1].reg.bah          = VF_ARQBAH;
-		q_info[1].reg.bal          = VF_ARQBAL;
-		q_info[1].reg.len_mask     = VF_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = VF_ARQH_ARQH_M;
-	}
-
-	status = idpf_ctlq_init(hw, 2, q_info);
-	if (status) {
-		/* TODO return error */
-		idpf_free(hw, q_info);
-		return status;
-	}
-
-	LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, idpf_ctlq_info, cq_list) {
-		if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = cq;
-		else if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = cq;
-	}
-
-	/* TODO hardcode a mac addr for now */
-	hw->mac.addr[0] = 0x00;
-	hw->mac.addr[1] = 0x00;
-	hw->mac.addr[2] = 0x00;
-	hw->mac.addr[3] = 0x00;
-	hw->mac.addr[4] = 0x03;
-	hw->mac.addr[5] = 0x14;
-
-	idpf_free(hw, q_info);
-
-	return 0;
-}
-
-/**
- * idpf_send_msg_to_cp
- * @hw: pointer to the hardware structure
- * @v_opcode: opcodes for VF-PF communication
- * @v_retval: return error code
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- * @cmd_details: pointer to command details
- *
- * Send message to CP. By default, this message
- * is sent asynchronously, i.e. idpf_asq_send_command() does not wait for
- * completion before returning.
- */
-int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
-			int v_retval, u8 *msg, u16 msglen)
-{
-	struct idpf_ctlq_msg ctlq_msg = { 0 };
-	struct idpf_dma_mem dma_mem = { 0 };
-	int status;
-
-	ctlq_msg.opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg.func_id = 0;
-	ctlq_msg.data_len = msglen;
-	ctlq_msg.cookie.mbx.chnl_retval = v_retval;
-	ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
-
-	if (msglen > 0) {
-		dma_mem.va = (struct idpf_dma_mem *)
-			  idpf_alloc_dma_mem(hw, &dma_mem, msglen);
-		if (!dma_mem.va)
-			return -ENOMEM;
-
-		idpf_memcpy(dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
-		ctlq_msg.ctx.indirect.payload = &dma_mem;
-	}
-	status = idpf_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
-
-	if (dma_mem.va)
-		idpf_free_dma_mem(hw, &dma_mem);
-
-	return status;
-}
-
-/**
- *  idpf_asq_done - check if FW has processed the Admin Send Queue
- *  @hw: pointer to the hw struct
- *
- *  Returns true if the firmware has processed all descriptors on the
- *  admin send queue. Returns false if there are still requests pending.
- */
-bool idpf_asq_done(struct idpf_hw *hw)
-{
-	/* AQ designers suggest use of head for better
-	 * timing reliability than DD bit
-	 */
-	return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
-}
-
-/**
- * idpf_check_asq_alive
- * @hw: pointer to the hw struct
- *
- * Returns true if Queue is enabled else false.
- */
-bool idpf_check_asq_alive(struct idpf_hw *hw)
-{
-	if (hw->asq->reg.len)
-		return !!(rd32(hw, hw->asq->reg.len) &
-			  PF_FW_ATQLEN_ATQENABLE_M);
-
-	return false;
-}
-
-/**
- *  idpf_clean_arq_element
- *  @hw: pointer to the hw struct
- *  @e: event info from the receive descriptor, includes any buffers
- *  @pending: number of events that could be left to process
- *
- *  This function cleans one Admin Receive Queue element and returns
- *  the contents through e.  It can also return how many events are
- *  left to process through 'pending'
- */
-int idpf_clean_arq_element(struct idpf_hw *hw,
-			   struct idpf_arq_event_info *e, u16 *pending)
-{
-	struct idpf_dma_mem *dma_mem = NULL;
-	struct idpf_ctlq_msg msg = { 0 };
-	int status;
-	u16 msg_data_len;
-
-	*pending = 1;
-
-	status = idpf_ctlq_recv(hw->arq, pending, &msg);
-	if (status == -ENOMSG)
-		goto exit;
-
-	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
-	e->desc.opcode = msg.opcode;
-	e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
-	e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
-	e->desc.ret_val = msg.status;
-	e->desc.datalen = msg.data_len;
-	if (msg.data_len > 0) {
-		if (!msg.ctx.indirect.payload || !msg.ctx.indirect.payload->va ||
-		    !e->msg_buf) {
-			return -EFAULT;
-		}
-		e->buf_len = msg.data_len;
-		msg_data_len = msg.data_len;
-		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
-			    IDPF_DMA_TO_NONDMA);
-		dma_mem = msg.ctx.indirect.payload;
-	} else {
-		*pending = 0;
-	}
-
-	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
-
-exit:
-	return status;
-}
-
-/**
- *  idpf_deinit_hw - shutdown routine
- *  @hw: pointer to the hardware structure
- */
-void idpf_deinit_hw(struct idpf_hw *hw)
-{
-	hw->asq = NULL;
-	hw->arq = NULL;
-
-	idpf_ctlq_deinit(hw);
-}
-
-/**
- * idpf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a RESET message to the CPF. Does not wait for response from CPF
- * as none will be forthcoming. Immediately after calling this function,
- * the control queue should be shut down and (optionally) reinitialized.
- */
-int idpf_reset(struct idpf_hw *hw)
-{
-	return idpf_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
-				      0, NULL, 0);
-}
-
-/**
- * idpf_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- */
-STATIC int idpf_get_set_rss_lut(struct idpf_hw *hw, u16 vsi_id,
-				bool pf_lut, u8 *lut, u16 lut_size,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- */
-int idpf_get_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
-}
-
-/**
- * idpf_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- */
-int idpf_set_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * idpf_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- */
-STATIC int idpf_get_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-				struct idpf_get_set_rss_key_data *key,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- */
-int idpf_get_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * idpf_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-int idpf_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, true);
-}
-
-RTE_LOG_REGISTER_DEFAULT(idpf_common_logger, NOTICE);
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 96d7642209..649c44d0ae 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -2,7 +2,6 @@
 # Copyright(c) 2023 Intel Corporation
 
 sources += files(
-        'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
 )
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v3 21/22] drivers: adding type to idpf vc queue switch
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (19 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 20/22] common/idpf: remove idpf common file Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-12  3:52     ` [PATCH v3 22/22] doc: updated the documentation for cpfl PMD Soumyadeep Hore
  2024-06-14 12:48     ` [PATCH v3 00/22] Update MEV TS Base Driver Burakov, Anatoly
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add an argument named 'type' to define the queue type
in idpf_vc_queue_switch(). This fixes the issue of an
improper queue type being set in the virtchnl2 message.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/idpf_common_virtchnl.c |  8 ++------
 drivers/common/idpf/idpf_common_virtchnl.h |  2 +-
 drivers/net/cpfl/cpfl_ethdev.c             | 12 ++++++++----
 drivers/net/cpfl/cpfl_rxtx.c               | 12 ++++++++----
 drivers/net/idpf/idpf_rxtx.c               | 12 ++++++++----
 5 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f00202f43c..de511da788 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -769,15 +769,11 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
+		     bool rx, bool on, uint32_t type)
 {
-	uint32_t type;
 	int err, queue_id;
 
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+	if (rx)
 		queue_id = vport->chunks_info.rx_start_qid + qid;
 	else
 		queue_id = vport->chunks_info.tx_start_qid + qid;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 73446ded86..d6555978d5 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -31,7 +31,7 @@ int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
 int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-			 bool rx, bool on);
+			 bool rx, bool on, uint32_t type);
 __rte_internal
 int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e707043bf7..9e2a74371e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1907,7 +1907,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	int i, ret;
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
 			return ret;
@@ -1915,7 +1916,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
 			return ret;
@@ -1943,7 +1945,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
 			return ret;
@@ -1951,7 +1954,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
 			return ret;
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab8bec4645..47351ca102 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1200,7 +1200,8 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1252,7 +1253,8 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1283,7 +1285,8 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 						     rx_queue_id - cpfl_vport->nb_data_txq,
 						     true, false);
 	else
-		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+								VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1331,7 +1334,8 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 						     tx_queue_id - cpfl_vport->nb_data_txq,
 						     false, false);
 	else
-		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+								VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 64f2235580..858bbefe3b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -595,7 +595,8 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -646,7 +647,8 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -669,7 +671,8 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -701,7 +704,8 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.43.0



* [PATCH v3 22/22] doc: updated the documentation for cpfl PMD
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (20 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 21/22] drivers: adding type to idpf vc queue switch Soumyadeep Hore
@ 2024-06-12  3:52     ` Soumyadeep Hore
  2024-06-14 12:48     ` [PATCH v3 00/22] Update MEV TS Base Driver Burakov, Anatoly
  22 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-12  3:52 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Updated the documentation to reflect the latest CPFL PMD
support in MEV TS firmware version 1.4.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 doc/guides/nics/cpfl.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 9b7a99c894..528c809819 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -35,6 +35,8 @@ Here is the suggested matching list which has been tested and verified.
    +------------+------------------+
    |    23.11   |       1.0        |
    +------------+------------------+
+   |    24.07   |       1.4        |
+   +------------+------------------+
 
 
 Configuration
-- 
2.43.0



* Re: [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines
  2024-06-12  3:52     ` [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
@ 2024-06-14 10:33       ` Burakov, Anatoly
  0 siblings, 0 replies; 125+ messages in thread
From: Burakov, Anatoly @ 2024-06-14 10:33 UTC (permalink / raw)
  To: Soumyadeep Hore, bruce.richardson; +Cc: dev

On 6/12/2024 5:52 AM, Soumyadeep Hore wrote:
> Removes NVME dependency on memory allocations and
> uses a prepared buffer instead.
> 
> The changes do not affect other components.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---

<snip>

>   	return status;
>   }
> @@ -232,8 +244,13 @@ void idpf_ctlq_remove(struct idpf_hw *hw,
>    * destroyed. This must be called prior to using the individual add/remove
>    * APIs.
>    */
> +#ifdef NVME_CPF
> +int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
> +			struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq)
> +#else
>   int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
>   		   struct idpf_ctlq_create_info *q_info)
> +#endif

Nitpicking, but the added function's indentation seems different from 
the rest of the functions in this file. Is this how it is in the base code?

>   {
>   	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
>   	int ret_code = 0;
> @@ -244,6 +261,10 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
>   	for (i = 0; i < num_q; i++) {
>   		struct idpf_ctlq_create_info *qinfo = q_info + i;
>   
> +#ifdef NVME_CPF
> +		cq = *(ctlq + i);
> +#endif
> +
>   		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
>   		if (ret_code)
>   			goto init_destroy_qs;
> diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
> index 38f5d2df3c..6b6f3e84c2 100644
> --- a/drivers/common/idpf/base/idpf_controlq_api.h
> +++ b/drivers/common/idpf/base/idpf_controlq_api.h
> @@ -1,5 +1,5 @@
>   /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2001-2023 Intel Corporation
> + * Copyright(c) 2001-2024 Intel Corporation
>    */
>   
>   #ifndef _IDPF_CONTROLQ_API_H_
> @@ -158,8 +158,13 @@ enum idpf_mbx_opc {
>   /* Will init all required q including default mb.  "q_info" is an array of
>    * create_info structs equal to the number of control queues to be created.
>    */
> +#ifdef NVME_CPF
> +int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
> +			struct idpf_ctlq_create_info *q_info, struct idpf_ctlq_info **ctlq);
> +#else

Same question as above.

Also, a more general question on #ifdef - is it expected to be enabled 
somehow?

>   int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
>   		   struct idpf_ctlq_create_info *q_info);
> +#endif
>   
>   /* Allocate and initialize a single control queue, which will be added to the
>    * control queue list; returns a handle to the created control queue

-- 
Thanks,
Anatoly



* Re: [PATCH v3 02/22] common/idpf: updated IDPF VF device ID
  2024-06-12  3:52     ` [PATCH v3 02/22] common/idpf: updated IDPF VF device ID Soumyadeep Hore
@ 2024-06-14 10:36       ` Burakov, Anatoly
  0 siblings, 0 replies; 125+ messages in thread
From: Burakov, Anatoly @ 2024-06-14 10:36 UTC (permalink / raw)
  To: Soumyadeep Hore, bruce.richardson; +Cc: dev

On 6/12/2024 5:52 AM, Soumyadeep Hore wrote:
> Update IDPF VF device id to 145C.
> 
> Also added device ID for S-IOV device.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>   drivers/common/idpf/base/idpf_devids.h | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/idpf_devids.h b/drivers/common/idpf/base/idpf_devids.h
> index c47762d5b7..0eb2def264 100644
> --- a/drivers/common/idpf/base/idpf_devids.h
> +++ b/drivers/common/idpf/base/idpf_devids.h
> @@ -1,5 +1,5 @@
>   /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2001-2023 Intel Corporation
> + * Copyright(c) 2001-2024 Intel Corporation
>    */
>   
>   #ifndef _IDPF_DEVIDS_H_
> @@ -10,7 +10,10 @@
>   
>   /* Device IDs */
>   #define IDPF_DEV_ID_PF			0x1452
> -#define IDPF_DEV_ID_VF			0x1889
> +#define IDPF_DEV_ID_VF			0x145C
> +#ifdef SIOV_SUPPORT
> +#define IDPF_DEV_ID_VF_SIOV		0x0DD5
> +#endif /* SIOV_SUPPORT */
>   

Is there any reason why we need those ifdefs, and are they expected to 
be enabled/disabled in some way by the build process?

-- 
Thanks,
Anatoly



* Re: [PATCH v3 05/22] common/idpf: avoid defensive programming
  2024-06-12  3:52     ` [PATCH v3 05/22] common/idpf: avoid defensive programming Soumyadeep Hore
@ 2024-06-14 12:16       ` Burakov, Anatoly
  0 siblings, 0 replies; 125+ messages in thread
From: Burakov, Anatoly @ 2024-06-14 12:16 UTC (permalink / raw)
  To: Soumyadeep Hore, bruce.richardson; +Cc: dev

On 6/12/2024 5:52 AM, Soumyadeep Hore wrote:
> Based on the upstream feedback, driver should not use any
> defensive programming strategy by checking for NULL pointers
> and other conditional checks unnecessarily in the code flow
> to fall back, instead fail and fix the bug in a proper way.
> 
> Some of the checks are identified and removed/wrapped
> in this patch:
> - As the control queue is freed and deleted from the list after the
> idpf_ctlq_shutdown call, there is no need to have the ring_size
> check in idpf_ctlq_shutdown.
> - From the upstream perspective shared code is part of the Linux
> driver and it doesn't make sense to add zero 'len' and 'buf_size'
> check in idpf_ctlq_add as to start with, driver provides valid
> sizes, if not it is a bug.
> - Remove cq NULL and zero ring_size check wherever possible as
> the IDPF driver code flow does not pass any NULL cq pointer to
> the control queue callbacks. If it passes then it is a bug and
> should be fixed rather than checking for NULL pointer and falling
> back which is not the right way.

It seems that the commit log calls out changes that weren't made in this 
patch?

> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>   drivers/common/idpf/base/idpf_controlq.c | 4 ----
>   1 file changed, 4 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
> index bada75abfc..b5ba9c3bd0 100644
> --- a/drivers/common/idpf/base/idpf_controlq.c
> +++ b/drivers/common/idpf/base/idpf_controlq.c
> @@ -98,9 +98,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
>   {
>   	idpf_acquire_lock(&cq->cq_lock);
>   
> -	if (!cq->ring_size)
> -		goto shutdown_sq_out;
> -
>   #ifdef SIMICS_BUILD
>   	wr32(hw, cq->reg.head, 0);
>   	wr32(hw, cq->reg.tail, 0);
> @@ -115,7 +112,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
>   	/* Set ring_size to 0 to indicate uninitialized queue */
>   	cq->ring_size = 0;
>   
> -shutdown_sq_out:
>   	idpf_release_lock(&cq->cq_lock);
>   	idpf_destroy_lock(&cq->cq_lock);
>   }

-- 
Thanks,
Anatoly



* Re: [PATCH v3 07/22] common/idpf: convert data type to 'le'
  2024-06-12  3:52     ` [PATCH v3 07/22] common/idpf: convert data type to 'le' Soumyadeep Hore
@ 2024-06-14 12:19       ` Burakov, Anatoly
  0 siblings, 0 replies; 125+ messages in thread
From: Burakov, Anatoly @ 2024-06-14 12:19 UTC (permalink / raw)
  To: Soumyadeep Hore, bruce.richardson; +Cc: dev

On 6/12/2024 5:52 AM, Soumyadeep Hore wrote:
> 'u32' data type is used for the struct members in
> 'virtchnl2_version_info' which should be '__le32'.
> Make the change accordingly.

It is not stated why the data type "should" be __le32. Can the commit 
message be clarified to explain why this should be the case?

> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>   drivers/common/idpf/base/virtchnl2.h | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
> index 851c6629dd..1f59730297 100644
> --- a/drivers/common/idpf/base/virtchnl2.h
> +++ b/drivers/common/idpf/base/virtchnl2.h
> @@ -471,8 +471,8 @@
>    * error regardless of version mismatch.
>    */
>   struct virtchnl2_version_info {
> -	u32 major;
> -	u32 minor;
> +	__le32 major;
> +	__le32 minor;
>   };
>   
>   VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);

-- 
Thanks,
Anatoly



* Re: [PATCH v3 09/22] common/idpf: refactor size check macro
  2024-06-12  3:52     ` [PATCH v3 09/22] common/idpf: refactor size check macro Soumyadeep Hore
@ 2024-06-14 12:21       ` Burakov, Anatoly
  0 siblings, 0 replies; 125+ messages in thread
From: Burakov, Anatoly @ 2024-06-14 12:21 UTC (permalink / raw)
  To: Soumyadeep Hore, bruce.richardson; +Cc: dev

On 6/12/2024 5:52 AM, Soumyadeep Hore wrote:
Instead of using 'divide by 0' to check the struct length,
use the static_assert macro.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>   drivers/common/idpf/base/virtchnl2.h | 13 +++++--------
>   1 file changed, 5 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
> index 1f59730297..f8b97f2e06 100644
> --- a/drivers/common/idpf/base/virtchnl2.h
> +++ b/drivers/common/idpf/base/virtchnl2.h
> @@ -41,15 +41,12 @@
>   /* State Machine error - Command sequence problem */
>   #define	VIRTCHNL2_STATUS_ERR_ESM	201
>   
> -/* These macros are used to generate compilation errors if a structure/union
> - * is not exactly the correct length. It gives a divide by zero error if the
> - * structure/union is not of the correct size, otherwise it creates an enum
> - * that is never used.
> +/* This macro is used to generate compilation errors if a structure
> + * is not exactly the correct length.
>    */
> -#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
> -	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
> -#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
> -	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
> +#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
> +	static_assert((n) == sizeof(struct X),	\
> +		      "Structure length does not match with the expected value")
>   

It appears that the patch also removed CHECK_UNION_LEN macro (presumably 
because it is unused and no longer needed). Would be good to add this to 
commit message.

>   /* New major set of opcodes introduced and so leaving room for
>    * old misc opcodes to be added in future. Also these opcodes may only

-- 
Thanks,
Anatoly



* Re: [PATCH v3 00/22] Update MEV TS Base Driver
  2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
                       ` (21 preceding siblings ...)
  2024-06-12  3:52     ` [PATCH v3 22/22] doc: updated the documentation for cpfl PMD Soumyadeep Hore
@ 2024-06-14 12:48     ` Burakov, Anatoly
  22 siblings, 0 replies; 125+ messages in thread
From: Burakov, Anatoly @ 2024-06-14 12:48 UTC (permalink / raw)
  To: Soumyadeep Hore, bruce.richardson; +Cc: dev

On 6/12/2024 5:52 AM, Soumyadeep Hore wrote:
> These patches integrate the latest changes in MEV TS IDPF Base driver.
> 
> ---
> v3:
> - Removed additional whitespace changes
> - Fixed warnings of CI
> - Updated documentation relating to MEV TS FW release
> ---

Off list,

I don't know anything about idpf driver, but as far as I'm aware, 
usually when base code is updated, we resolve all #ifdef's by stripping 
these tags and assuming that they were (or were not) defined at compile 
time, so the resulting code is devoid of any compile-time switching. 
This is the biggest question I have for this review.

Is there any reason why you've kept the #ifdef's in this base code 
update? Should we perhaps strip them out and assume NVME_CPF is always 
defined? And if not, then perhaps we should at least provide feedback to 
base driver developers that we'd rather their code handle both cases 
when NVME_CPF is defined, rather than switching between two different 
code paths at compile time with no ability to combine them?

-- 
Thanks,
Anatoly



* [PATCH v4 00/21] Update MEV TS Base Driver
  2024-06-12  3:52     ` [PATCH v3 10/22] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
@ 2024-06-18 10:57       ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
                           ` (20 more replies)
  0 siblings, 21 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

These patches integrate the latest changes in MEV TS IDPF Base driver.

---
v4:
- Removed 1st patch as we are not using NVME_CPF flag
- Addressed comments 
---
v3:
- Removed additional whitespace changes
- Fixed warnings of CI
- Updated documentation relating to MEV TS FW release
---
v2:
- Changed implementation based on review comments
- Fixed compilation errors for Windows, Alpine and FreeBSD
---

Soumyadeep Hore (21):
  common/idpf: updated IDPF VF device ID
  common/idpf: added new virtchnl2 capability and vport flag
  common/idpf: moved the idpf HW into API header file
  common/idpf: avoid defensive programming
  common/idpf: use BIT ULL for large bitmaps
  common/idpf: convert data type to 'le'
  common/idpf: compress RXDID mask definitions
  common/idpf: refactor size check macro
  common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  common/idpf: use 'pad' and 'reserved' fields appropriately
  common/idpf: move related defines into enums
  common/idpf: avoid variable 0-init
  common/idpf: update in PTP message validation
  common/idpf: rename INLINE FLOW STEER to FLOW STEER
  common/idpf: add wmb before tail
  drivers: add flex array support and fix issues
  common/idpf: enable flow steer capability for vports
  common/idpf: add a new Tx context descriptor structure
  common/idpf: remove idpf common file
  drivers: adding type to idpf vc queue switch
  doc: updated the documentation for cpfl PMD

 doc/guides/nics/cpfl.rst                      |    2 +
 drivers/common/idpf/base/idpf_common.c        |  382 ---
 drivers/common/idpf/base/idpf_controlq.c      |   66 +-
 drivers/common/idpf/base/idpf_controlq.h      |  107 +-
 drivers/common/idpf/base/idpf_controlq_api.h  |   35 +
 .../common/idpf/base/idpf_controlq_setup.c    |   18 +-
 drivers/common/idpf/base/idpf_devids.h        |    5 +-
 drivers/common/idpf/base/idpf_lan_txrx.h      |   20 +-
 drivers/common/idpf/base/idpf_osdep.h         |   72 +-
 drivers/common/idpf/base/idpf_type.h          |    4 +-
 drivers/common/idpf/base/meson.build          |    1 -
 drivers/common/idpf/base/virtchnl2.h          | 2388 +++++++++--------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  842 ++++--
 drivers/common/idpf/idpf_common_virtchnl.c    |   10 +-
 drivers/common/idpf/idpf_common_virtchnl.h    |    2 +-
 drivers/net/cpfl/cpfl_ethdev.c                |   40 +-
 drivers/net/cpfl/cpfl_rxtx.c                  |   12 +-
 drivers/net/idpf/idpf_rxtx.c                  |   12 +-
 18 files changed, 2037 insertions(+), 1981 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

-- 
2.43.0



* [PATCH v4 01/21] common/idpf: updated IDPF VF device ID
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 02/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
                           ` (19 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Update IDPF VF device id to 145C.

Also added device ID for S-IOV device.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_devids.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_devids.h b/drivers/common/idpf/base/idpf_devids.h
index c47762d5b7..1ae99fcee1 100644
--- a/drivers/common/idpf/base/idpf_devids.h
+++ b/drivers/common/idpf/base/idpf_devids.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_DEVIDS_H_
@@ -10,7 +10,8 @@
 
 /* Device IDs */
 #define IDPF_DEV_ID_PF			0x1452
-#define IDPF_DEV_ID_VF			0x1889
+#define IDPF_DEV_ID_VF			0x145C
+#define IDPF_DEV_ID_VF_SIOV		0x0DD5
 
 
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 02/21] common/idpf: added new virtchnl2 capability and vport flag
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 03/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
                           ` (18 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Remove the unused VIRTCHNL2_CAP_ADQ capability and reuse that bit for
the VIRTCHNL2_CAP_INLINE_FLOW_STEER capability.

Add the VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA vport flag to allow
enabling/disabling the feature per vport.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 3900b784d0..6eff0f1ea1 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _VIRTCHNL2_H_
@@ -220,7 +220,7 @@
 #define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
 #define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
 #define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_ADQ			BIT(6)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
 #define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
 #define VIRTCHNL2_CAP_PROMISC			BIT(8)
 #define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
@@ -593,7 +593,8 @@ struct virtchnl2_queue_reg_chunks {
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
 /* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT	BIT(0)
+#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
+#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread
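As a sketch of how the new capability bit and vport flag above might be consumed, the following re-creates the two defines together with an assumed check; the helper name and usage pattern are illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1UL << (n))
/* Bit 6 previously carried the unused VIRTCHNL2_CAP_ADQ capability. */
#define VIRTCHNL2_CAP_INLINE_FLOW_STEER        BIT(6)
#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA  BIT(1)

/* Hypothetical helper: the device-level capability must be negotiated
 * AND the per-vport flag set before inline flow steering is used. */
static int inline_fs_enabled(uint64_t dev_caps, uint16_t vport_flags)
{
	return (dev_caps & VIRTCHNL2_CAP_INLINE_FLOW_STEER) &&
	       (vport_flags & VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA);
}
```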

* [PATCH v4 03/21] common/idpf: moved the idpf HW into API header file
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 02/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 04/21] common/idpf: avoid defensive programming Soumyadeep Hore
                           ` (17 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

There is a recursive header include issue in accessing the idpf_hw
structure: controlq.h holds the structure definition, which the osdep
header needs, while controlq.h in turn needs the contents of the osdep
header, so the two files depend on each other.

Moving the definition from controlq.h into api.h resolves the problem.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c       |   4 +-
 drivers/common/idpf/base/idpf_controlq.h     | 107 +------------------
 drivers/common/idpf/base/idpf_controlq_api.h |  35 ++++++
 drivers/common/idpf/base/idpf_osdep.h        |  72 ++++++++++++-
 drivers/common/idpf/base/idpf_type.h         |   4 +-
 5 files changed, 111 insertions(+), 111 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 7181a7f14c..bb540345c2 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
-#include "idpf_type.h"
 #include "idpf_prototype.h"
+#include "idpf_type.h"
 #include <virtchnl.h>
 
 
diff --git a/drivers/common/idpf/base/idpf_controlq.h b/drivers/common/idpf/base/idpf_controlq.h
index 80ca06e632..3f74b5a898 100644
--- a/drivers/common/idpf/base/idpf_controlq.h
+++ b/drivers/common/idpf/base/idpf_controlq.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_H_
@@ -96,111 +96,6 @@ struct idpf_mbxq_desc {
 	u32 pf_vf_id;		/* used by CP when sending to PF */
 };
 
-enum idpf_mac_type {
-	IDPF_MAC_UNKNOWN = 0,
-	IDPF_MAC_PF,
-	IDPF_MAC_VF,
-	IDPF_MAC_GENERIC
-};
-
-#define ETH_ALEN 6
-
-struct idpf_mac_info {
-	enum idpf_mac_type type;
-	u8 addr[ETH_ALEN];
-	u8 perm_addr[ETH_ALEN];
-};
-
-#define IDPF_AQ_LINK_UP 0x1
-
-/* PCI bus types */
-enum idpf_bus_type {
-	idpf_bus_type_unknown = 0,
-	idpf_bus_type_pci,
-	idpf_bus_type_pcix,
-	idpf_bus_type_pci_express,
-	idpf_bus_type_reserved
-};
-
-/* PCI bus speeds */
-enum idpf_bus_speed {
-	idpf_bus_speed_unknown	= 0,
-	idpf_bus_speed_33	= 33,
-	idpf_bus_speed_66	= 66,
-	idpf_bus_speed_100	= 100,
-	idpf_bus_speed_120	= 120,
-	idpf_bus_speed_133	= 133,
-	idpf_bus_speed_2500	= 2500,
-	idpf_bus_speed_5000	= 5000,
-	idpf_bus_speed_8000	= 8000,
-	idpf_bus_speed_reserved
-};
-
-/* PCI bus widths */
-enum idpf_bus_width {
-	idpf_bus_width_unknown	= 0,
-	idpf_bus_width_pcie_x1	= 1,
-	idpf_bus_width_pcie_x2	= 2,
-	idpf_bus_width_pcie_x4	= 4,
-	idpf_bus_width_pcie_x8	= 8,
-	idpf_bus_width_32	= 32,
-	idpf_bus_width_64	= 64,
-	idpf_bus_width_reserved
-};
-
-/* Bus parameters */
-struct idpf_bus_info {
-	enum idpf_bus_speed speed;
-	enum idpf_bus_width width;
-	enum idpf_bus_type type;
-
-	u16 func;
-	u16 device;
-	u16 lan_id;
-	u16 bus_id;
-};
-
-/* Function specific capabilities */
-struct idpf_hw_func_caps {
-	u32 num_alloc_vfs;
-	u32 vf_base_id;
-};
-
-/* Define the APF hardware struct to replace other control structs as needed
- * Align to ctlq_hw_info
- */
-struct idpf_hw {
-	/* Some part of BAR0 address space is not mapped by the LAN driver.
-	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
-	 * will have its own base hardware address when mapped.
-	 */
-	u8 *hw_addr;
-	u8 *hw_addr_region2;
-	u64 hw_addr_len;
-	u64 hw_addr_region2_len;
-
-	void *back;
-
-	/* control queue - send and receive */
-	struct idpf_ctlq_info *asq;
-	struct idpf_ctlq_info *arq;
-
-	/* subsystem structs */
-	struct idpf_mac_info mac;
-	struct idpf_bus_info bus;
-	struct idpf_hw_func_caps func_caps;
-
-	/* pci info */
-	u16 device_id;
-	u16 vendor_id;
-	u16 subsystem_device_id;
-	u16 subsystem_vendor_id;
-	u8 revision_id;
-	bool adapter_stopped;
-
-	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
-};
-
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
 			     struct idpf_ctlq_info *cq);
 
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 38f5d2df3c..8a90258099 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -154,6 +154,41 @@ enum idpf_mbx_opc {
 	idpf_mbq_opc_send_msg_to_peer_drv	= 0x0804,
 };
 
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct idpf_hw {
+	/* Some part of BAR0 address space is not mapped by the LAN driver.
+	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
+	 * will have its own base hardware address when mapped.
+	 */
+	u8 *hw_addr;
+	u8 *hw_addr_region2;
+	u64 hw_addr_len;
+	u64 hw_addr_region2_len;
+
+	void *back;
+
+	/* control queue - send and receive */
+	struct idpf_ctlq_info *asq;
+	struct idpf_ctlq_info *arq;
+
+	/* subsystem structs */
+	struct idpf_mac_info mac;
+	struct idpf_bus_info bus;
+	struct idpf_hw_func_caps func_caps;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	bool adapter_stopped;
+
+	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
+};
+
 /* API supported for control queue management */
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
diff --git a/drivers/common/idpf/base/idpf_osdep.h b/drivers/common/idpf/base/idpf_osdep.h
index 74a376cb13..b2af8f443d 100644
--- a/drivers/common/idpf/base/idpf_osdep.h
+++ b/drivers/common/idpf/base/idpf_osdep.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_OSDEP_H_
@@ -353,4 +353,74 @@ idpf_hweight32(u32 num)
 
 #endif
 
+enum idpf_mac_type {
+	IDPF_MAC_UNKNOWN = 0,
+	IDPF_MAC_PF,
+	IDPF_MAC_VF,
+	IDPF_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct idpf_mac_info {
+	enum idpf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+};
+
+#define IDPF_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum idpf_bus_type {
+	idpf_bus_type_unknown = 0,
+	idpf_bus_type_pci,
+	idpf_bus_type_pcix,
+	idpf_bus_type_pci_express,
+	idpf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum idpf_bus_speed {
+	idpf_bus_speed_unknown	= 0,
+	idpf_bus_speed_33	= 33,
+	idpf_bus_speed_66	= 66,
+	idpf_bus_speed_100	= 100,
+	idpf_bus_speed_120	= 120,
+	idpf_bus_speed_133	= 133,
+	idpf_bus_speed_2500	= 2500,
+	idpf_bus_speed_5000	= 5000,
+	idpf_bus_speed_8000	= 8000,
+	idpf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum idpf_bus_width {
+	idpf_bus_width_unknown	= 0,
+	idpf_bus_width_pcie_x1	= 1,
+	idpf_bus_width_pcie_x2	= 2,
+	idpf_bus_width_pcie_x4	= 4,
+	idpf_bus_width_pcie_x8	= 8,
+	idpf_bus_width_32	= 32,
+	idpf_bus_width_64	= 64,
+	idpf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct idpf_bus_info {
+	enum idpf_bus_speed speed;
+	enum idpf_bus_width width;
+	enum idpf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct idpf_hw_func_caps {
+	u32 num_alloc_vfs;
+	u32 vf_base_id;
+};
+
 #endif /* _IDPF_OSDEP_H_ */
diff --git a/drivers/common/idpf/base/idpf_type.h b/drivers/common/idpf/base/idpf_type.h
index a22d28f448..2ff818035b 100644
--- a/drivers/common/idpf/base/idpf_type.h
+++ b/drivers/common/idpf/base/idpf_type.h
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_TYPE_H_
 #define _IDPF_TYPE_H_
 
-#include "idpf_controlq.h"
+#include "idpf_osdep.h"
 
 #define UNREFERENCED_XPARAMETER
 #define UNREFERENCED_1PARAMETER(_p)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 04/21] common/idpf: avoid defensive programming
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (2 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 03/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 05/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
                           ` (16 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on upstream feedback, the driver should not use defensive
programming, i.e. unnecessary NULL-pointer and other conditional
checks in the code flow as a fallback; instead it should fail, and
the bug should be fixed properly.

As the control queue is freed and deleted from the list after the
idpf_ctlq_shutdown call, there is no need for the ring_size check
in idpf_ctlq_shutdown.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index a82ca628de..d9ca33cdb9 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -98,9 +98,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
 	idpf_acquire_lock(&cq->cq_lock);
 
-	if (!cq->ring_size)
-		goto shutdown_sq_out;
-
 #ifdef SIMICS_BUILD
 	wr32(hw, cq->reg.head, 0);
 	wr32(hw, cq->reg.tail, 0);
@@ -115,7 +112,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 	/* Set ring_size to 0 to indicate uninitialized queue */
 	cq->ring_size = 0;
 
-shutdown_sq_out:
 	idpf_release_lock(&cq->cq_lock);
 	idpf_destroy_lock(&cq->cq_lock);
 }
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 05/21] common/idpf: use BIT ULL for large bitmaps
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (3 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 04/21] common/idpf: avoid defensive programming Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 06/21] common/idpf: convert data type to 'le' Soumyadeep Hore
                           ` (15 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

For bitmaps wider than 32 bits, use the BIT_ULL macro instead of
BIT, as reported by the compiler.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 70 ++++++++++++++--------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 6eff0f1ea1..851c6629dd 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -175,20 +175,20 @@
 /* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
  * Receive Side Scaling Flow type capability flags
  */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT(13)
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
 
 /* VIRTCHNL2_HEADER_SPLIT_CAPS
  * Header split capability flags
@@ -214,32 +214,32 @@
  * TX_VLAN: VLAN tag insertion
  * RX_VLAN: VLAN tag stripping
  */
-#define VIRTCHNL2_CAP_RDMA			BIT(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
-#define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT(11)
+#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
+#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
+#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
+#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
+#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
+#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
 /* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT(12)
-#define VIRTCHNL2_CAP_PTP			BIT(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT(15)
-#define VIRTCHNL2_CAP_FDIR			BIT(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT(19)
+#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
+#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
+#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
+#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
+#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
+#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
 /* Enable miss completion types plus ability to detect a miss completion if a
  * reserved bit is set in a standared completion's tag.
  */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT(20)
+#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
 /* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT(63)
+#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
 
 /* VIRTCHNL2_TXQ_SCHED_MODE
  * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread
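The BIT vs BIT_ULL distinction that the patch above relies on can be sketched as follows; these are illustrative re-creations of the macros, not the definitions from idpf_osdep.h:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n)     (1U << (n))    /* 32-bit result; n >= 32 is undefined */
#define BIT_ULL(n) (1ULL << (n))  /* 64-bit result; valid for n up to 63 */

/* A capability such as VIRTCHNL2_CAP_OEM lives at bit 63, which only
 * the unsigned-long-long variant can represent. */
static uint64_t oem_cap(void)
{
	return BIT_ULL(63);
}
```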

* [PATCH v4 06/21] common/idpf: convert data type to 'le'
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (4 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 05/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 07/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
                           ` (14 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The 'u32' data type is used for the struct members in
'virtchnl2_version_info' where it should be '__le32', the
little-endian-specific type definition. Make the change accordingly.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 851c6629dd..1f59730297 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -471,8 +471,8 @@
  * error regardless of version mismatch.
  */
 struct virtchnl2_version_info {
-	u32 major;
-	u32 minor;
+	__le32 major;
+	__le32 minor;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread
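The '__le32' annotation above marks fields that are little-endian on the wire regardless of host byte order. A minimal, generic sketch of reading such a field byte by byte (not the driver's actual conversion helpers) looks like:

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a little-endian 32-bit wire value from its stored bytes,
 * independent of the host CPU's native byte order. */
static uint32_t le32_get(const uint8_t *b)
{
	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}
```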

* [PATCH v4 07/21] common/idpf: compress RXDID mask definitions
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (5 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 06/21] common/idpf: convert data type to 'le' Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 08/21] common/idpf: refactor size check macro Soumyadeep Hore
                           ` (13 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of repeating the long RXDID definitions, introduce a macro
that combines the common part of those definitions, i.e. the
VIRTCHNL2_RXDID_ prefix, with the bit name passed in to generate a mask.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 31 ++++++++++---------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index e6e782a219..f632271788 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -58,22 +58,23 @@
 /* VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		BIT(VIRTCHNL2_RXDID_0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		BIT(VIRTCHNL2_RXDID_1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
+#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
 /* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
 /* 22 through 63 are reserved */
 
 /* Rx */
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread
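The compression above works through preprocessor token pasting: the '##' operator glues the common VIRTCHNL2_RXDID_ prefix onto the suffix passed in, so each existing descriptor-ID define generates its own mask. A reduced sketch with only two of the IDs:

```c
#include <assert.h>

#define BIT(n) (1UL << (n))

/* Two of the descriptor-ID defines from virtchnl2_lan_desc.h. */
#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ  2
#define VIRTCHNL2_RXDID_7_HW_RSVD      7

/* VIRTCHNL2_RXDID_ ## bit pastes into the full define name, whose
 * value is then used as the bit position for the mask. */
#define VIRTCHNL2_RXDID_M(bit) BIT(VIRTCHNL2_RXDID_##bit)
```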

* [PATCH v4 08/21] common/idpf: refactor size check macro
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (6 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 07/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
                           ` (12 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of using 'divide by 0' to check a structure's length,
use the static_assert macro.

Also remove the redundant CHECK_UNION_LEN macro.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 1f59730297..f8b97f2e06 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -41,15 +41,12 @@
 /* State Machine error - Command sequence problem */
 #define	VIRTCHNL2_STATUS_ERR_ESM	201
 
-/* These macros are used to generate compilation errors if a structure/union
- * is not exactly the correct length. It gives a divide by zero error if the
- * structure/union is not of the correct size, otherwise it creates an enum
- * that is never used.
+/* This macro is used to generate compilation errors if a structure
+ * is not exactly the correct length.
  */
-#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
-#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
+	static_assert((n) == sizeof(struct X),	\
+		      "Structure length does not match with the expected value")
 
 /* New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread
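The old divide-by-zero trick and the new C11 static_assert both fail the build on a size mismatch, but the latter produces a readable diagnostic. A self-contained sketch using a stand-in structure (not an actual virtchnl2 structure):

```c
#include <assert.h>   /* static_assert (C11) */
#include <stdint.h>

#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
	static_assert((n) == sizeof(struct X),	\
		      "Structure length does not match with the expected value")

/* Stand-in wire structure: two 32-bit fields, expected to be 8 bytes. */
struct demo_version_info {
	uint32_t major;
	uint32_t minor;
};

/* Compiles cleanly; changing 8 to any other value produces the clear
 * compile-time message above instead of an obscure divide-by-zero. */
VIRTCHNL2_CHECK_STRUCT_LEN(8, demo_version_info);
```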

* [PATCH v4 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (7 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 08/21] common/idpf: refactor size check macro Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
                           ` (11 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The mask for VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M was defined incorrectly,
shifting by the _M macro itself instead of the _S shift define; this
patch fixes it.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index f632271788..9e04cf8628 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -111,7 +111,7 @@
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M)
+	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread
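The bug and fix above are easier to see with the shift-mask helper expanded. IDPF_M is assumed here to place a mask at the field's start bit (an illustrative re-creation); shifting by the self-referential _M name, as the old code did, could never produce the intended 3-bit field at bit 12:

```c
#include <assert.h>

/* Assumed shape of the helper: place mask 'm' at shift 's'. */
#define IDPF_M(m, s) ((m) << (s))

/* FF1 is a 3-bit field starting at bit 12. */
#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S  12
#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M		\
	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
```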

* [PATCH v4 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (8 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 11/21] common/idpf: move related defines into enums Soumyadeep Hore
                           ` (10 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The 'pad' name is used when a field is an actual padding byte, and
also for bytes set aside for the future addition of new fields,
whereas 'reserved' is used only when a field is reserved and cannot
be used for any other purpose.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 71 +++++++++++++++-------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index f8b97f2e06..d007c2f540 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -95,7 +95,7 @@
 #define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
 #define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
 #define		VIRTCHNL2_OP_GET_PORT_STATS		540
-	/* TimeSync opcodes */
+/* TimeSync opcodes */
 #define		VIRTCHNL2_OP_GET_PTP_CAPS		541
 #define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
 
@@ -559,7 +559,7 @@ struct virtchnl2_get_capabilities {
 	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
-	u8 reserved[10];
+	u8 pad1[10];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
@@ -575,7 +575,7 @@ struct virtchnl2_queue_reg_chunk {
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
-	u8 reserved[4];
+	u8 pad1[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
@@ -583,7 +583,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -648,7 +648,7 @@ struct virtchnl2_create_vport {
 	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
 
-	u8 reserved[20];
+	u8 pad2[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -663,7 +663,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
@@ -708,7 +708,7 @@ struct virtchnl2_txq_info {
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
 
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
@@ -724,7 +724,7 @@ struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 
-	u8 reserved[10];
+	u8 pad[10];
 	struct virtchnl2_txq_info qinfo[1];
 };
 
@@ -749,7 +749,7 @@ struct virtchnl2_rxq_info {
 
 	__le16 ring_len;
 	u8 buffer_notif_stride;
-	u8 pad[1];
+	u8 pad;
 
 	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
@@ -768,16 +768,15 @@ struct virtchnl2_rxq_info {
 	 * if this field is set
 	 */
 	u8 bufq2_ena;
-	u8 pad2[3];
+	u8 pad1[3];
 
 	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
 
-	u8 reserved[16];
+	u8 pad2[16];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
 /* VIRTCHNL2_OP_CONFIG_RX_QUEUES
@@ -791,7 +790,7 @@ struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 
-	u8 reserved[18];
+	u8 pad[18];
 	struct virtchnl2_rxq_info qinfo[1];
 };
 
@@ -810,7 +809,8 @@ struct virtchnl2_add_queues {
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
 	__le16 num_rx_bufq;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -948,7 +948,7 @@ struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
-	__le16 pad1;
+	__le16 pad;
 
 	/* Register offsets and spacing provided by CP.
 	 * dynamic control registers are used for enabling/disabling/re-enabling
@@ -969,15 +969,15 @@ struct virtchnl2_vector_chunk {
 	 * where n=0..2
 	 */
 	__le32 itrn_index_spacing;
-	u8 reserved[4];
+	u8 pad1[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
 /* Structure to specify several chunks of contiguous interrupt vectors */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -992,7 +992,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunks vchunks;
 };
 
@@ -1014,8 +1015,9 @@ struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
-	u8 reserved[4];
-	__le32 lut[1]; /* RSS lookup table */
+	u8 pad[4];
+	/* RSS lookup table */
+	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
@@ -1039,7 +1041,7 @@ struct virtchnl2_rss_hash {
 	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
@@ -1063,7 +1065,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 /* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -1073,7 +1075,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 /* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -1100,8 +1102,7 @@ struct virtchnl2_non_flex_create_adi {
 	__le16 adi_index;
 	/* CP populates ADI id */
 	__le16 adi_id;
-	u8 reserved[64];
-	u8 pad[4];
+	u8 pad[68];
 	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
 	/* PF sends vector chunks to CP */
@@ -1117,7 +1118,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
-	u8 reserved[2];
+	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
@@ -1220,7 +1221,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 rx_runt_errors;
 	__le64 rx_illegal_bytes;
 	__le64 rx_total_pkts;
-	u8 rx_reserved[128];
+	u8 rx_pad[128];
 
 	__le64 tx_bytes;
 	__le64 tx_unicast_pkts;
@@ -1239,7 +1240,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 tx_xoff_events;
 	__le64 tx_dropped_link_down_pkts;
 	__le64 tx_total_pkts;
-	u8 tx_reserved[128];
+	u8 tx_pad[128];
 	__le64 mac_local_faults;
 	__le64 mac_remote_faults;
 };
@@ -1273,7 +1274,8 @@ struct virtchnl2_event {
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
-	u8 pad[1];
+	u8 pad;
+
 	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
@@ -1301,7 +1303,7 @@ struct virtchnl2_queue_chunk {
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
@@ -1309,7 +1311,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_chunk chunks[1];
 };
 
@@ -1326,7 +1328,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_chunks chunks;
 };
 
@@ -1343,7 +1346,7 @@ struct virtchnl2_queue_vector {
 
 	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 11/21] common/idpf: move related defines into enums
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (9 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 12/21] common/idpf: avoid variable 0-init Soumyadeep Hore
                           ` (9 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Change all groups of related defines to enums. The enum names
are chosen to follow the common part of the naming pattern as
closely as possible.

Replace the common labels in the comments with the enum names.

While at it, update the header description based on upstream feedback.

Some variable names are also modified and comments reworded to be
more descriptive.
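
The conversion pattern can be illustrated with a small standalone
sketch (the BIT() helper below is a stand-in mirroring the driver's
usage; this is only an example of the transformation, not driver code):

```c
/* Stand-in for the BIT() helper used throughout the driver headers
 * (assumption: the real definition lives in a common osdep header). */
#define BIT(n) (1U << (n))

/* Before the change, related values were loose defines, e.g.:
 *   #define VIRTCHNL2_UNICAST_PROMISC   BIT(0)
 *   #define VIRTCHNL2_MULTICAST_PROMISC BIT(1)
 *
 * After the change, the same values are grouped into a named enum
 * with explicit initializers, so the on-wire values cannot drift
 * even if entries are reordered or new ones are inserted. */
enum virtchnl2_promisc_flags {
	VIRTCHNL2_UNICAST_PROMISC	= BIT(0),
	VIRTCHNL2_MULTICAST_PROMISC	= BIT(1),
};
```

Because the initializers are explicit, the numeric values seen by the
control plane are identical before and after the conversion.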

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h          | 1847 ++++++++++-------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  843 +++++---
 2 files changed, 1686 insertions(+), 1004 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index d007c2f540..e76ccbd46f 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -8,317 +8,396 @@
 /* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
  * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
  * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ *
+ * PF/VF uses the virtchnl interface defined in this header file to communicate
+ * with device Control Plane (CP). Driver and the CP may run on different
+ * platforms with different endianness. To avoid byte order discrepancies,
+ * all the structures in this header follow little-endian format.
+ *
+ * This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
  */
 
 #include "virtchnl2_lan_desc.h"
 
-/* VIRTCHNL2_ERROR_CODES */
-/* success */
-#define	VIRTCHNL2_STATUS_SUCCESS	0
-/* Operation not permitted, used in case of command not permitted for sender */
-#define	VIRTCHNL2_STATUS_ERR_EPERM	1
-/* Bad opcode - virtchnl interface problem */
-#define	VIRTCHNL2_STATUS_ERR_ESRCH	3
-/* I/O error - HW access error */
-#define	VIRTCHNL2_STATUS_ERR_EIO	5
-/* No such resource - Referenced resource is not allacated */
-#define	VIRTCHNL2_STATUS_ERR_ENXIO	6
-/* Permission denied - Resource is not permitted to caller */
-#define	VIRTCHNL2_STATUS_ERR_EACCES	13
-/* Device or resource busy - In case shared resource is in use by others */
-#define	VIRTCHNL2_STATUS_ERR_EBUSY	16
-/* Object already exists and not free */
-#define	VIRTCHNL2_STATUS_ERR_EEXIST	17
-/* Invalid input argument in command */
-#define	VIRTCHNL2_STATUS_ERR_EINVAL	22
-/* No space left or allocation failure */
-#define	VIRTCHNL2_STATUS_ERR_ENOSPC	28
-/* Parameter out of range */
-#define	VIRTCHNL2_STATUS_ERR_ERANGE	34
-
-/* Op not allowed in current dev mode */
-#define	VIRTCHNL2_STATUS_ERR_EMODE	200
-/* State Machine error - Command sequence problem */
-#define	VIRTCHNL2_STATUS_ERR_ESM	201
-
-/* This macro is used to generate compilation errors if a structure
+/**
+ * enum virtchnl2_status - Error codes.
+ * @VIRTCHNL2_STATUS_SUCCESS: Success
+ * @VIRTCHNL2_STATUS_ERR_EPERM: Operation not permitted, used in case of command
+ *				not permitted for sender
+ * @VIRTCHNL2_STATUS_ERR_ESRCH: Bad opcode - virtchnl interface problem
+ * @VIRTCHNL2_STATUS_ERR_EIO: I/O error - HW access error
+ * @VIRTCHNL2_STATUS_ERR_ENXIO: No such resource - Referenced resource is not
+ *				allocated
+ * @VIRTCHNL2_STATUS_ERR_EACCES: Permission denied - Resource is not permitted
+ *				 to caller
+ * @VIRTCHNL2_STATUS_ERR_EBUSY: Device or resource busy - In case shared
+ *				resource is in use by others
+ * @VIRTCHNL2_STATUS_ERR_EEXIST: Object already exists and not free
+ * @VIRTCHNL2_STATUS_ERR_EINVAL: Invalid input argument in command
+ * @VIRTCHNL2_STATUS_ERR_ENOSPC: No space left or allocation failure
+ * @VIRTCHNL2_STATUS_ERR_ERANGE: Parameter out of range
+ * @VIRTCHNL2_STATUS_ERR_EMODE: Operation not allowed in current dev mode
+ * @VIRTCHNL2_STATUS_ERR_ESM: State Machine error - Command sequence problem
+ */
+enum virtchnl2_status {
+	VIRTCHNL2_STATUS_SUCCESS	= 0,
+	VIRTCHNL2_STATUS_ERR_EPERM	= 1,
+	VIRTCHNL2_STATUS_ERR_ESRCH	= 3,
+	VIRTCHNL2_STATUS_ERR_EIO	= 5,
+	VIRTCHNL2_STATUS_ERR_ENXIO	= 6,
+	VIRTCHNL2_STATUS_ERR_EACCES	= 13,
+	VIRTCHNL2_STATUS_ERR_EBUSY	= 16,
+	VIRTCHNL2_STATUS_ERR_EEXIST	= 17,
+	VIRTCHNL2_STATUS_ERR_EINVAL	= 22,
+	VIRTCHNL2_STATUS_ERR_ENOSPC	= 28,
+	VIRTCHNL2_STATUS_ERR_ERANGE	= 34,
+	VIRTCHNL2_STATUS_ERR_EMODE	= 200,
+	VIRTCHNL2_STATUS_ERR_ESM	= 201,
+};
+
+/**
+ * This macro is used to generate compilation errors if a structure
  * is not exactly the correct length.
  */
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
 
-/* New major set of opcodes introduced and so leaving room for
+/**
+ * New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
  * be used if both the PF and VF have successfully negotiated the
- * VIRTCHNL version as 2.0 during VIRTCHNL22_OP_VERSION exchange.
- */
-#define		VIRTCHNL2_OP_UNKNOWN			0
-#define		VIRTCHNL2_OP_VERSION			1
-#define		VIRTCHNL2_OP_GET_CAPS			500
-#define		VIRTCHNL2_OP_CREATE_VPORT		501
-#define		VIRTCHNL2_OP_DESTROY_VPORT		502
-#define		VIRTCHNL2_OP_ENABLE_VPORT		503
-#define		VIRTCHNL2_OP_DISABLE_VPORT		504
-#define		VIRTCHNL2_OP_CONFIG_TX_QUEUES		505
-#define		VIRTCHNL2_OP_CONFIG_RX_QUEUES		506
-#define		VIRTCHNL2_OP_ENABLE_QUEUES		507
-#define		VIRTCHNL2_OP_DISABLE_QUEUES		508
-#define		VIRTCHNL2_OP_ADD_QUEUES			509
-#define		VIRTCHNL2_OP_DEL_QUEUES			510
-#define		VIRTCHNL2_OP_MAP_QUEUE_VECTOR		511
-#define		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		512
-#define		VIRTCHNL2_OP_GET_RSS_KEY		513
-#define		VIRTCHNL2_OP_SET_RSS_KEY		514
-#define		VIRTCHNL2_OP_GET_RSS_LUT		515
-#define		VIRTCHNL2_OP_SET_RSS_LUT		516
-#define		VIRTCHNL2_OP_GET_RSS_HASH		517
-#define		VIRTCHNL2_OP_SET_RSS_HASH		518
-#define		VIRTCHNL2_OP_SET_SRIOV_VFS		519
-#define		VIRTCHNL2_OP_ALLOC_VECTORS		520
-#define		VIRTCHNL2_OP_DEALLOC_VECTORS		521
-#define		VIRTCHNL2_OP_EVENT			522
-#define		VIRTCHNL2_OP_GET_STATS			523
-#define		VIRTCHNL2_OP_RESET_VF			524
-	/* opcode 525 is reserved */
-#define		VIRTCHNL2_OP_GET_PTYPE_INFO		526
-	/* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
-	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ * VIRTCHNL version as 2.0 during VIRTCHNL2_OP_VERSION exchange.
+ */
+enum virtchnl2_op {
+	VIRTCHNL2_OP_UNKNOWN			= 0,
+	VIRTCHNL2_OP_VERSION			= 1,
+	VIRTCHNL2_OP_GET_CAPS			= 500,
+	VIRTCHNL2_OP_CREATE_VPORT		= 501,
+	VIRTCHNL2_OP_DESTROY_VPORT		= 502,
+	VIRTCHNL2_OP_ENABLE_VPORT		= 503,
+	VIRTCHNL2_OP_DISABLE_VPORT		= 504,
+	VIRTCHNL2_OP_CONFIG_TX_QUEUES		= 505,
+	VIRTCHNL2_OP_CONFIG_RX_QUEUES		= 506,
+	VIRTCHNL2_OP_ENABLE_QUEUES		= 507,
+	VIRTCHNL2_OP_DISABLE_QUEUES		= 508,
+	VIRTCHNL2_OP_ADD_QUEUES			= 509,
+	VIRTCHNL2_OP_DEL_QUEUES			= 510,
+	VIRTCHNL2_OP_MAP_QUEUE_VECTOR		= 511,
+	VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		= 512,
+	VIRTCHNL2_OP_GET_RSS_KEY		= 513,
+	VIRTCHNL2_OP_SET_RSS_KEY		= 514,
+	VIRTCHNL2_OP_GET_RSS_LUT		= 515,
+	VIRTCHNL2_OP_SET_RSS_LUT		= 516,
+	VIRTCHNL2_OP_GET_RSS_HASH		= 517,
+	VIRTCHNL2_OP_SET_RSS_HASH		= 518,
+	VIRTCHNL2_OP_SET_SRIOV_VFS		= 519,
+	VIRTCHNL2_OP_ALLOC_VECTORS		= 520,
+	VIRTCHNL2_OP_DEALLOC_VECTORS		= 521,
+	VIRTCHNL2_OP_EVENT			= 522,
+	VIRTCHNL2_OP_GET_STATS			= 523,
+	VIRTCHNL2_OP_RESET_VF			= 524,
+	/* Opcode 525 is reserved */
+	VIRTCHNL2_OP_GET_PTYPE_INFO		= 526,
+	/* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
 	 */
-	/* opcodes 529, 530, and 531 are reserved */
-#define		VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	532
-#define		VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	533
-#define		VIRTCHNL2_OP_LOOPBACK			534
-#define		VIRTCHNL2_OP_ADD_MAC_ADDR		535
-#define		VIRTCHNL2_OP_DEL_MAC_ADDR		536
-#define		VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	537
-#define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
-#define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
-#define		VIRTCHNL2_OP_GET_PORT_STATS		540
-/* TimeSync opcodes */
-#define		VIRTCHNL2_OP_GET_PTP_CAPS		541
-#define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
+/* Opcodes 529, 530, and 531 are reserved */
+	VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	= 532,
+	VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	= 533,
+	VIRTCHNL2_OP_LOOPBACK			= 534,
+	VIRTCHNL2_OP_ADD_MAC_ADDR		= 535,
+	VIRTCHNL2_OP_DEL_MAC_ADDR		= 536,
+	VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	= 537,
+	VIRTCHNL2_OP_ADD_QUEUE_GROUPS		= 538,
+	VIRTCHNL2_OP_DEL_QUEUE_GROUPS		= 539,
+	VIRTCHNL2_OP_GET_PORT_STATS		= 540,
+	/* TimeSync opcodes */
+	VIRTCHNL2_OP_GET_PTP_CAPS		= 541,
+	VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	= 542,
+};
 
 #define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX	0xFFFF
 
-/* VIRTCHNL2_VPORT_TYPE
- * Type of virtual port
+/**
+ * enum virtchnl2_vport_type - Type of virtual port
+ * @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SRIOV: SRIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SIOV: SIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SUBDEV: Subdevice virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_MNG: Management virtual port type
  */
-#define VIRTCHNL2_VPORT_TYPE_DEFAULT		0
-#define VIRTCHNL2_VPORT_TYPE_SRIOV		1
-#define VIRTCHNL2_VPORT_TYPE_SIOV		2
-#define VIRTCHNL2_VPORT_TYPE_SUBDEV		3
-#define VIRTCHNL2_VPORT_TYPE_MNG		4
+enum virtchnl2_vport_type {
+	VIRTCHNL2_VPORT_TYPE_DEFAULT		= 0,
+	VIRTCHNL2_VPORT_TYPE_SRIOV		= 1,
+	VIRTCHNL2_VPORT_TYPE_SIOV		= 2,
+	VIRTCHNL2_VPORT_TYPE_SUBDEV		= 3,
+	VIRTCHNL2_VPORT_TYPE_MNG		= 4,
+};
 
-/* VIRTCHNL2_QUEUE_MODEL
- * Type of queue model
+/**
+ * enum virtchnl2_queue_model - Type of queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model
  *
  * In the single queue model, the same transmit descriptor queue is used by
  * software to post descriptors to hardware and by hardware to post completed
  * descriptors to software.
  * Likewise, the same receive descriptor queue is used by hardware to post
  * completions to software and by software to post buffers to hardware.
- */
-#define VIRTCHNL2_QUEUE_MODEL_SINGLE		0
-/* In the split queue model, hardware uses transmit completion queues to post
+ *
+ * In the split queue model, hardware uses transmit completion queues to post
  * descriptor/buffer completions to software, while software uses transmit
  * descriptor queues to post descriptors to hardware.
  * Likewise, hardware posts descriptor completions to the receive descriptor
  * queue, while software uses receive buffer queues to post buffers to hardware.
  */
-#define VIRTCHNL2_QUEUE_MODEL_SPLIT		1
-
-/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
- * Checksum offload capability flags
- */
-#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		BIT(0)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	BIT(1)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	BIT(2)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	BIT(3)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	BIT(4)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	BIT(5)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	BIT(6)
-#define VIRTCHNL2_CAP_TX_CSUM_GENERIC		BIT(7)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		BIT(8)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	BIT(9)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	BIT(10)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	BIT(11)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	BIT(12)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	BIT(13)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	BIT(14)
-#define VIRTCHNL2_CAP_RX_CSUM_GENERIC		BIT(15)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	BIT(16)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	BIT(17)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	BIT(18)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	BIT(19)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	BIT(20)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	BIT(21)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	BIT(22)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	BIT(23)
-
-/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
- * Segmentation offload capability flags
- */
-#define VIRTCHNL2_CAP_SEG_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_SEG_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_SEG_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_SEG_IPV6_TCP		BIT(3)
-#define VIRTCHNL2_CAP_SEG_IPV6_UDP		BIT(4)
-#define VIRTCHNL2_CAP_SEG_IPV6_SCTP		BIT(5)
-#define VIRTCHNL2_CAP_SEG_GENERIC		BIT(6)
-#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	BIT(7)
-#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	BIT(8)
-
-/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
- * Receive Side Scaling Flow type capability flags
- */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
-
-/* VIRTCHNL2_HEADER_SPLIT_CAPS
- * Header split capability flags
- */
-/* for prepended metadata  */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		BIT(0)
-/* all VLANs go into header buffer */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		BIT(1)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		BIT(2)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		BIT(3)
-
-/* VIRTCHNL2_RSC_OFFLOAD_CAPS
- * Receive Side Coalescing offload capability flags
- */
-#define VIRTCHNL2_CAP_RSC_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSC_IPV4_SCTP		BIT(1)
-#define VIRTCHNL2_CAP_RSC_IPV6_TCP		BIT(2)
-#define VIRTCHNL2_CAP_RSC_IPV6_SCTP		BIT(3)
-
-/* VIRTCHNL2_OTHER_CAPS
- * Other capability flags
- * SPLITQ_QSCHED: Queue based scheduling using split queue model
- * TX_VLAN: VLAN tag insertion
- * RX_VLAN: VLAN tag stripping
- */
-#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
-#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
-/* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
-#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
-#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
-/* Enable miss completion types plus ability to detect a miss completion if a
- * reserved bit is set in a standared completion's tag.
- */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
-/* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
-
-/* VIRTCHNL2_TXQ_SCHED_MODE
- * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
- * completions where descriptors and buffers are completed at the same time.
- * Flow scheduling mode allows for out of order packet processing where
- * descriptors are cleaned in order, but buffers can be completed out of order.
- */
-#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		0
-#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW		1
-
-/* VIRTCHNL2_TXQ_FLAGS
- * Transmit Queue feature flags
- *
- * Enable rule miss completion type; packet completion for a packet
- * sent on exception path; only relevant in flow scheduling mode
- */
-#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		BIT(0)
-
-/* VIRTCHNL2_PEER_TYPE
- * Transmit mailbox peer type
- */
-#define VIRTCHNL2_RDMA_CPF			0
-#define VIRTCHNL2_NVME_CPF			1
-#define VIRTCHNL2_ATE_CPF			2
-#define VIRTCHNL2_LCE_CPF			3
-
-/* VIRTCHNL2_RXQ_FLAGS
- * Receive Queue Feature flags
- */
-#define VIRTCHNL2_RXQ_RSC			BIT(0)
-#define VIRTCHNL2_RXQ_HDR_SPLIT			BIT(1)
-/* When set, packet descriptors are flushed by hardware immediately after
- * processing each packet.
- */
-#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	BIT(2)
-#define VIRTCHNL2_RX_DESC_SIZE_16BYTE		BIT(3)
-#define VIRTCHNL2_RX_DESC_SIZE_32BYTE		BIT(4)
-
-/* VIRTCHNL2_RSS_ALGORITHM
- * Type of RSS algorithm
- */
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC		0
-#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC			1
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC		2
-#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC			3
-
-/* VIRTCHNL2_EVENT_CODES
- * Type of event
- */
-#define VIRTCHNL2_EVENT_UNKNOWN			0
-#define VIRTCHNL2_EVENT_LINK_CHANGE		1
-/* These messages are only sent to PF from CP */
-#define VIRTCHNL2_EVENT_START_RESET_ADI		2
-#define VIRTCHNL2_EVENT_FINISH_RESET_ADI	3
-#define VIRTCHNL2_EVENT_ADI_ACTIVE		4
-
-/* VIRTCHNL2_QUEUE_TYPE
- * Transmit and Receive queue types are valid in legacy as well as split queue
- * models. With Split Queue model, 2 additional types are introduced -
- * TX_COMPLETION and RX_BUFFER. In split queue model, receive  corresponds to
+enum virtchnl2_queue_model {
+	VIRTCHNL2_QUEUE_MODEL_SINGLE		= 0,
+	VIRTCHNL2_QUEUE_MODEL_SPLIT		= 1,
+};
+
+/* Checksum offload capability flags */
+enum virtchnl2_cap_txrx_csum {
+	VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		= BIT(0),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	= BIT(1),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	= BIT(2),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	= BIT(3),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	= BIT(4),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	= BIT(5),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_CAP_TX_CSUM_GENERIC		= BIT(7),
+	VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		= BIT(8),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	= BIT(9),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	= BIT(10),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	= BIT(11),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	= BIT(12),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	= BIT(13),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	= BIT(14),
+	VIRTCHNL2_CAP_RX_CSUM_GENERIC		= BIT(15),
+	VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	= BIT(16),
+	VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	= BIT(17),
+	VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	= BIT(18),
+	VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	= BIT(19),
+	VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	= BIT(20),
+	VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	= BIT(21),
+	VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	= BIT(22),
+	VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	= BIT(23),
+};
+
+/* Segmentation offload capability flags */
+enum virtchnl2_cap_seg {
+	VIRTCHNL2_CAP_SEG_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_SEG_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_SEG_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_SEG_IPV6_TCP		= BIT(3),
+	VIRTCHNL2_CAP_SEG_IPV6_UDP		= BIT(4),
+	VIRTCHNL2_CAP_SEG_IPV6_SCTP		= BIT(5),
+	VIRTCHNL2_CAP_SEG_GENERIC		= BIT(6),
+	VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	= BIT(7),
+	VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	= BIT(8),
+};
+
+/* Receive Side Scaling Flow type capability flags */
+enum virtchnl2_cap_rss {
+	VIRTCHNL2_CAP_RSS_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSS_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_RSS_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_RSS_IPV4_OTHER		= BIT(3),
+	VIRTCHNL2_CAP_RSS_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_CAP_RSS_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_CAP_RSS_IPV6_SCTP		= BIT(6),
+	VIRTCHNL2_CAP_RSS_IPV6_OTHER		= BIT(7),
+	VIRTCHNL2_CAP_RSS_IPV4_AH		= BIT(8),
+	VIRTCHNL2_CAP_RSS_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		= BIT(10),
+	VIRTCHNL2_CAP_RSS_IPV6_AH		= BIT(11),
+	VIRTCHNL2_CAP_RSS_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		= BIT(13),
+};
+
+/* Header split capability flags */
+enum virtchnl2_cap_rx_hsplit_at {
+	/* For prepended metadata  */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		= BIT(0),
+	/* All VLANs go into header buffer */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		= BIT(1),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		= BIT(2),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		= BIT(3),
+};
+
+/* Receive Side Coalescing offload capability flags */
+enum virtchnl2_cap_rsc {
+	VIRTCHNL2_CAP_RSC_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSC_IPV4_SCTP		= BIT(1),
+	VIRTCHNL2_CAP_RSC_IPV6_TCP		= BIT(2),
+	VIRTCHNL2_CAP_RSC_IPV6_SCTP		= BIT(3),
+};
+
+/* Other capability flags */
+enum virtchnl2_cap_other {
+	VIRTCHNL2_CAP_RDMA			= BIT_ULL(0),
+	VIRTCHNL2_CAP_SRIOV			= BIT_ULL(1),
+	VIRTCHNL2_CAP_MACFILTER			= BIT_ULL(2),
+	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
+	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
+	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
+	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
+	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
+	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
+	VIRTCHNL2_CAP_INLINE_IPSEC		= BIT_ULL(10),
+	VIRTCHNL2_CAP_LARGE_NUM_QUEUES		= BIT_ULL(11),
+	/* Require additional info */
+	VIRTCHNL2_CAP_VLAN			= BIT_ULL(12),
+	VIRTCHNL2_CAP_PTP			= BIT_ULL(13),
+	VIRTCHNL2_CAP_ADV_RSS			= BIT_ULL(15),
+	VIRTCHNL2_CAP_FDIR			= BIT_ULL(16),
+	VIRTCHNL2_CAP_RX_FLEX_DESC		= BIT_ULL(17),
+	VIRTCHNL2_CAP_PTYPE			= BIT_ULL(18),
+	VIRTCHNL2_CAP_LOOPBACK			= BIT_ULL(19),
+	/* Enable miss completion types plus ability to detect a miss completion
+	 * if a reserved bit is set in a standard completion's tag.
+	 */
+	VIRTCHNL2_CAP_MISS_COMPL_TAG		= BIT_ULL(20),
+	/* This must be the last capability */
+	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
+};
+
+/**
+ * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
+ * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
+ *				    completions where descriptors and buffers
+ *				    are completed at the same time.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
+ *				   packet processing where descriptors are
+ *				   cleaned in order, but buffers can be
+ *				   completed out of order.
+ */
+enum virtchnl2_txq_sched_mode {
+	VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		= 0,
+	VIRTCHNL2_TXQ_SCHED_MODE_FLOW		= 1,
+};
+
+/**
+ * enum virtchnl2_txq_flags - Transmit Queue feature flags
+ * @VIRTCHNL2_TXQ_ENABLE_MISS_COMPL: Enable rule miss completion type. Packet
+ *				     completion for a packet sent on exception
+ *				     path and only relevant in flow scheduling
+ *				     mode.
+ */
+enum virtchnl2_txq_flags {
+	VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		= BIT(0),
+};
+
+/**
+ * enum virtchnl2_peer_type - Transmit mailbox peer type
+ * @VIRTCHNL2_RDMA_CPF: RDMA peer type
+ * @VIRTCHNL2_NVME_CPF: NVME peer type
+ * @VIRTCHNL2_ATE_CPF: ATE peer type
+ * @VIRTCHNL2_LCE_CPF: LCE peer type
+ */
+enum virtchnl2_peer_type {
+	VIRTCHNL2_RDMA_CPF			= 0,
+	VIRTCHNL2_NVME_CPF			= 1,
+	VIRTCHNL2_ATE_CPF			= 2,
+	VIRTCHNL2_LCE_CPF			= 3,
+};
+
+/**
+ * enum virtchnl2_rxq_flags - Receive Queue Feature flags
+ * @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag
+ * @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag
+ * @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
+ *					by hardware immediately after processing
+ *					each packet.
+ * @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size
+ * @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size
+ */
+enum virtchnl2_rxq_flags {
+	VIRTCHNL2_RXQ_RSC			= BIT(0),
+	VIRTCHNL2_RXQ_HDR_SPLIT			= BIT(1),
+	VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	= BIT(2),
+	VIRTCHNL2_RX_DESC_SIZE_16BYTE		= BIT(3),
+	VIRTCHNL2_RX_DESC_SIZE_32BYTE		= BIT(4),
+};
+
+/**
+ * enum virtchnl2_rss_alg - Type of RSS algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC: TOEPLITZ_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_R_ASYMMETRIC: R_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC: TOEPLITZ_SYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC: XOR_SYMMETRIC algorithm
+ */
+enum virtchnl2_rss_alg {
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC	= 0,
+	VIRTCHNL2_RSS_ALG_R_ASYMMETRIC		= 1,
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC	= 2,
+	VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC		= 3,
+};
+
+/**
+ * enum virtchnl2_event_codes - Type of event
+ * @VIRTCHNL2_EVENT_UNKNOWN: Unknown event type
+ * @VIRTCHNL2_EVENT_LINK_CHANGE: Link change event type
+ * @VIRTCHNL2_EVENT_START_RESET_ADI: Start reset ADI event type
+ * @VIRTCHNL2_EVENT_FINISH_RESET_ADI: Finish reset ADI event type
+ * @VIRTCHNL2_EVENT_ADI_ACTIVE: Event type to indicate 'function active' state
+ *				of ADI.
+ */
+enum virtchnl2_event_codes {
+	VIRTCHNL2_EVENT_UNKNOWN			= 0,
+	VIRTCHNL2_EVENT_LINK_CHANGE		= 1,
+	/* These messages are only sent to PF from CP */
+	VIRTCHNL2_EVENT_START_RESET_ADI		= 2,
+	VIRTCHNL2_EVENT_FINISH_RESET_ADI	= 3,
+	VIRTCHNL2_EVENT_ADI_ACTIVE		= 4,
+};
+
+/**
+ * enum virtchnl2_queue_type - Various queue types
+ * @VIRTCHNL2_QUEUE_TYPE_TX: TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX: RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: TX completion queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: RX buffer queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_TX: Config TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_RX: Config RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_TX: TX mailbox queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_RX: RX mailbox queue type
+ *
+ * Transmit and Receive queue types are valid in single as well as split queue
+ * models. With Split Queue model, 2 additional types are introduced which are
+ * TX_COMPLETION and RX_BUFFER. In split queue model, receive corresponds to
  * the queue where hardware posts completions.
  */
-#define VIRTCHNL2_QUEUE_TYPE_TX			0
-#define VIRTCHNL2_QUEUE_TYPE_RX			1
-#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	2
-#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		3
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		4
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		5
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX		6
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX		7
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	8
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	9
-#define VIRTCHNL2_QUEUE_TYPE_MBX_TX		10
-#define VIRTCHNL2_QUEUE_TYPE_MBX_RX		11
-
-/* VIRTCHNL2_ITR_IDX
- * Virtchannel interrupt throttling rate index
- */
-#define VIRTCHNL2_ITR_IDX_0			0
-#define VIRTCHNL2_ITR_IDX_1			1
-
-/* VIRTCHNL2_VECTOR_LIMITS
+enum virtchnl2_queue_type {
+	VIRTCHNL2_QUEUE_TYPE_TX			= 0,
+	VIRTCHNL2_QUEUE_TYPE_RX			= 1,
+	VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	= 2,
+	VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		= 3,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		= 4,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		= 5,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX		= 6,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX		= 7,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	= 8,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	= 9,
+	VIRTCHNL2_QUEUE_TYPE_MBX_TX		= 10,
+	VIRTCHNL2_QUEUE_TYPE_MBX_RX		= 11,
+};
+
+/**
+ * enum virtchnl2_itr_idx - Interrupt throttling rate index
+ * @VIRTCHNL2_ITR_IDX_0: ITR index 0
+ * @VIRTCHNL2_ITR_IDX_1: ITR index 1
+ */
+enum virtchnl2_itr_idx {
+	VIRTCHNL2_ITR_IDX_0			= 0,
+	VIRTCHNL2_ITR_IDX_1			= 1,
+};
+
+/**
+ * VIRTCHNL2_VECTOR_LIMITS
  * Since PF/VF messages are limited by __le16 size, precalculate the maximum
  * possible values of nested elements in virtchnl structures that virtual
  * channel can possibly handle in a single message.
@@ -332,131 +411,150 @@
 		((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
 		sizeof(struct virtchnl2_queue_vector))
 
-/* VIRTCHNL2_MAC_TYPE
- * VIRTCHNL2_MAC_ADDR_PRIMARY
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_PRIMARY for the
- * primary/device unicast MAC address filter for VIRTCHNL2_OP_ADD_MAC_ADDR and
- * VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the underlying control plane
- * function to accurately track the MAC address and for VM/function reset.
- *
- * VIRTCHNL2_MAC_ADDR_EXTRA
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_EXTRA for any extra
- * unicast and/or multicast filters that are being added/deleted via
- * VIRTCHNL2_OP_ADD_MAC_ADDR/VIRTCHNL2_OP_DEL_MAC_ADDR respectively.
+/**
+ * enum virtchnl2_mac_addr_type - MAC address types
+ * @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
+ *				primary/device unicast MAC address filter for
+ *				VIRTCHNL2_OP_ADD_MAC_ADDR and
+ *				VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
+ *				underlying control plane function to accurately
+ *				track the MAC address and for VM/function reset.
+ * @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
+ *			      unicast and/or multicast filters that are being
+ *			      added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
+ *			      VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
-#define VIRTCHNL2_MAC_ADDR_PRIMARY		1
-#define VIRTCHNL2_MAC_ADDR_EXTRA		2
+enum virtchnl2_mac_addr_type {
+	VIRTCHNL2_MAC_ADDR_PRIMARY		= 1,
+	VIRTCHNL2_MAC_ADDR_EXTRA		= 2,
+};
 
-/* VIRTCHNL2_PROMISC_FLAGS
- * Flags used for promiscuous mode
+/**
+ * enum virtchnl2_promisc_flags - Flags used for promiscuous mode
+ * @VIRTCHNL2_UNICAST_PROMISC: Unicast promiscuous mode
+ * @VIRTCHNL2_MULTICAST_PROMISC: Multicast promiscuous mode
  */
-#define VIRTCHNL2_UNICAST_PROMISC		BIT(0)
-#define VIRTCHNL2_MULTICAST_PROMISC		BIT(1)
+enum virtchnl2_promisc_flags {
+	VIRTCHNL2_UNICAST_PROMISC		= BIT(0),
+	VIRTCHNL2_MULTICAST_PROMISC		= BIT(1),
+};
 
-/* VIRTCHNL2_QUEUE_GROUP_TYPE
- * Type of queue groups
+/**
+ * enum virtchnl2_queue_group_type - Type of queue groups
+ * @VIRTCHNL2_QUEUE_GROUP_DATA: Data queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_MBX: Mailbox queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_CONFIG: Config queue group type
+ *
  * 0 till 0xFF is for general use
  */
-#define VIRTCHNL2_QUEUE_GROUP_DATA		1
-#define VIRTCHNL2_QUEUE_GROUP_MBX		2
-#define VIRTCHNL2_QUEUE_GROUP_CONFIG		3
+enum virtchnl2_queue_group_type {
+	VIRTCHNL2_QUEUE_GROUP_DATA		= 1,
+	VIRTCHNL2_QUEUE_GROUP_MBX		= 2,
+	VIRTCHNL2_QUEUE_GROUP_CONFIG		= 3,
+};
 
-/* VIRTCHNL2_PROTO_HDR_TYPE
- * Protocol header type within a packet segment. A segment consists of one or
+/* Protocol header type within a packet segment. A segment consists of one or
  * more protocol headers that make up a logical group of protocol headers. Each
  * logical group of protocol headers encapsulates or is encapsulated using/by
  * tunneling or encapsulation protocols for network virtualization.
  */
-/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ANY			0
-#define VIRTCHNL2_PROTO_HDR_PRE_MAC		1
-/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_MAC			2
-#define VIRTCHNL2_PROTO_HDR_POST_MAC		3
-#define VIRTCHNL2_PROTO_HDR_ETHERTYPE		4
-#define VIRTCHNL2_PROTO_HDR_VLAN		5
-#define VIRTCHNL2_PROTO_HDR_SVLAN		6
-#define VIRTCHNL2_PROTO_HDR_CVLAN		7
-#define VIRTCHNL2_PROTO_HDR_MPLS		8
-#define VIRTCHNL2_PROTO_HDR_UMPLS		9
-#define VIRTCHNL2_PROTO_HDR_MMPLS		10
-#define VIRTCHNL2_PROTO_HDR_PTP			11
-#define VIRTCHNL2_PROTO_HDR_CTRL		12
-#define VIRTCHNL2_PROTO_HDR_LLDP		13
-#define VIRTCHNL2_PROTO_HDR_ARP			14
-#define VIRTCHNL2_PROTO_HDR_ECP			15
-#define VIRTCHNL2_PROTO_HDR_EAPOL		16
-#define VIRTCHNL2_PROTO_HDR_PPPOD		17
-#define VIRTCHNL2_PROTO_HDR_PPPOE		18
-/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4		19
-/* IPv4 and IPv6 Fragment header types are only associated to
- * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
- * cannot be used independently.
- */
-/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG		20
-/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6		21
-/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG		22
-#define VIRTCHNL2_PROTO_HDR_IPV6_EH		23
-/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_UDP			24
-/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TCP			25
-/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_SCTP		26
-/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMP		27
-/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMPV6		28
-#define VIRTCHNL2_PROTO_HDR_IGMP		29
-#define VIRTCHNL2_PROTO_HDR_AH			30
-#define VIRTCHNL2_PROTO_HDR_ESP			31
-#define VIRTCHNL2_PROTO_HDR_IKE			32
-#define VIRTCHNL2_PROTO_HDR_NATT_KEEP		33
-/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_PAY			34
-#define VIRTCHNL2_PROTO_HDR_L2TPV2		35
-#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	36
-#define VIRTCHNL2_PROTO_HDR_L2TPV3		37
-#define VIRTCHNL2_PROTO_HDR_GTP			38
-#define VIRTCHNL2_PROTO_HDR_GTP_EH		39
-#define VIRTCHNL2_PROTO_HDR_GTPCV2		40
-#define VIRTCHNL2_PROTO_HDR_GTPC_TEID		41
-#define VIRTCHNL2_PROTO_HDR_GTPU		42
-#define VIRTCHNL2_PROTO_HDR_GTPU_UL		43
-#define VIRTCHNL2_PROTO_HDR_GTPU_DL		44
-#define VIRTCHNL2_PROTO_HDR_ECPRI		45
-#define VIRTCHNL2_PROTO_HDR_VRRP		46
-#define VIRTCHNL2_PROTO_HDR_OSPF		47
-/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TUN			48
-#define VIRTCHNL2_PROTO_HDR_GRE			49
-#define VIRTCHNL2_PROTO_HDR_NVGRE		50
-#define VIRTCHNL2_PROTO_HDR_VXLAN		51
-#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE		52
-#define VIRTCHNL2_PROTO_HDR_GENEVE		53
-#define VIRTCHNL2_PROTO_HDR_NSH			54
-#define VIRTCHNL2_PROTO_HDR_QUIC		55
-#define VIRTCHNL2_PROTO_HDR_PFCP		56
-#define VIRTCHNL2_PROTO_HDR_PFCP_NODE		57
-#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION	58
-#define VIRTCHNL2_PROTO_HDR_RTP			59
-#define VIRTCHNL2_PROTO_HDR_ROCE		60
-#define VIRTCHNL2_PROTO_HDR_ROCEV1		61
-#define VIRTCHNL2_PROTO_HDR_ROCEV2		62
-/* protocol ids up to 32767 are reserved for AVF use */
-/* 32768 - 65534 are used for user defined protocol ids */
-/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_NO_PROTO		65535
-
-#define VIRTCHNL2_VERSION_MAJOR_2        2
-#define VIRTCHNL2_VERSION_MINOR_0        0
-
-
-/* VIRTCHNL2_OP_VERSION
+enum virtchnl2_proto_hdr_type {
+	/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ANY			= 0,
+	VIRTCHNL2_PROTO_HDR_PRE_MAC		= 1,
+	/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_MAC			= 2,
+	VIRTCHNL2_PROTO_HDR_POST_MAC		= 3,
+	VIRTCHNL2_PROTO_HDR_ETHERTYPE		= 4,
+	VIRTCHNL2_PROTO_HDR_VLAN		= 5,
+	VIRTCHNL2_PROTO_HDR_SVLAN		= 6,
+	VIRTCHNL2_PROTO_HDR_CVLAN		= 7,
+	VIRTCHNL2_PROTO_HDR_MPLS		= 8,
+	VIRTCHNL2_PROTO_HDR_UMPLS		= 9,
+	VIRTCHNL2_PROTO_HDR_MMPLS		= 10,
+	VIRTCHNL2_PROTO_HDR_PTP			= 11,
+	VIRTCHNL2_PROTO_HDR_CTRL		= 12,
+	VIRTCHNL2_PROTO_HDR_LLDP		= 13,
+	VIRTCHNL2_PROTO_HDR_ARP			= 14,
+	VIRTCHNL2_PROTO_HDR_ECP			= 15,
+	VIRTCHNL2_PROTO_HDR_EAPOL		= 16,
+	VIRTCHNL2_PROTO_HDR_PPPOD		= 17,
+	VIRTCHNL2_PROTO_HDR_PPPOE		= 18,
+	/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4		= 19,
+	/* IPv4 and IPv6 Fragment header types are only associated to
+	 * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+	 * cannot be used independently.
+	 */
+	/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4_FRAG		= 20,
+	/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6		= 21,
+	/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6_FRAG		= 22,
+	VIRTCHNL2_PROTO_HDR_IPV6_EH		= 23,
+	/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_UDP			= 24,
+	/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TCP			= 25,
+	/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_SCTP		= 26,
+	/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMP		= 27,
+	/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMPV6		= 28,
+	VIRTCHNL2_PROTO_HDR_IGMP		= 29,
+	VIRTCHNL2_PROTO_HDR_AH			= 30,
+	VIRTCHNL2_PROTO_HDR_ESP			= 31,
+	VIRTCHNL2_PROTO_HDR_IKE			= 32,
+	VIRTCHNL2_PROTO_HDR_NATT_KEEP		= 33,
+	/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_PAY			= 34,
+	VIRTCHNL2_PROTO_HDR_L2TPV2		= 35,
+	VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	= 36,
+	VIRTCHNL2_PROTO_HDR_L2TPV3		= 37,
+	VIRTCHNL2_PROTO_HDR_GTP			= 38,
+	VIRTCHNL2_PROTO_HDR_GTP_EH		= 39,
+	VIRTCHNL2_PROTO_HDR_GTPCV2		= 40,
+	VIRTCHNL2_PROTO_HDR_GTPC_TEID		= 41,
+	VIRTCHNL2_PROTO_HDR_GTPU		= 42,
+	VIRTCHNL2_PROTO_HDR_GTPU_UL		= 43,
+	VIRTCHNL2_PROTO_HDR_GTPU_DL		= 44,
+	VIRTCHNL2_PROTO_HDR_ECPRI		= 45,
+	VIRTCHNL2_PROTO_HDR_VRRP		= 46,
+	VIRTCHNL2_PROTO_HDR_OSPF		= 47,
+	/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TUN			= 48,
+	VIRTCHNL2_PROTO_HDR_GRE			= 49,
+	VIRTCHNL2_PROTO_HDR_NVGRE		= 50,
+	VIRTCHNL2_PROTO_HDR_VXLAN		= 51,
+	VIRTCHNL2_PROTO_HDR_VXLAN_GPE		= 52,
+	VIRTCHNL2_PROTO_HDR_GENEVE		= 53,
+	VIRTCHNL2_PROTO_HDR_NSH			= 54,
+	VIRTCHNL2_PROTO_HDR_QUIC		= 55,
+	VIRTCHNL2_PROTO_HDR_PFCP		= 56,
+	VIRTCHNL2_PROTO_HDR_PFCP_NODE		= 57,
+	VIRTCHNL2_PROTO_HDR_PFCP_SESSION	= 58,
+	VIRTCHNL2_PROTO_HDR_RTP			= 59,
+	VIRTCHNL2_PROTO_HDR_ROCE		= 60,
+	VIRTCHNL2_PROTO_HDR_ROCEV1		= 61,
+	VIRTCHNL2_PROTO_HDR_ROCEV2		= 62,
+	/* Protocol ids up to 32767 are reserved */
+	/* 32768 - 65534 are used for user defined protocol ids */
+	/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_NO_PROTO		= 65535,
+};
+
+enum virtchl2_version {
+	VIRTCHNL2_VERSION_MINOR_0		= 0,
+	VIRTCHNL2_VERSION_MAJOR_2		= 2,
+};
+
+/**
+ * struct virtchnl2_version_info - Version information
+ * @major: Major version
+ * @minor: Minor version
+ *
  * PF/VF posts its version number to the CP. CP responds with its version number
  * in the same format, along with a return code.
  * If there is a major version mismatch, then the PF/VF cannot operate.
@@ -466,6 +564,8 @@
  * This version opcode MUST always be specified as == 1, regardless of other
  * changes in the API. The CP must always respond to this message without
  * error regardless of version mismatch.
+ *
+ * Associated with VIRTCHNL2_OP_VERSION.
  */
 struct virtchnl2_version_info {
 	__le32 major;
@@ -474,7 +574,39 @@ struct virtchnl2_version_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
 
-/* VIRTCHNL2_OP_GET_CAPS
+/**
+ * struct virtchnl2_get_capabilities - Capabilities info
+ * @csum_caps: See enum virtchnl2_cap_txrx_csum
+ * @seg_caps: See enum virtchnl2_cap_seg
+ * @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at
+ * @rsc_caps: See enum virtchnl2_cap_rsc
+ * @rss_caps: See enum virtchnl2_cap_rss
+ * @other_caps: See enum virtchnl2_cap_other
+ * @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
+ *		     provided by CP.
+ * @mailbox_vector_id: Mailbox vector id
+ * @num_allocated_vectors: Maximum number of allocated vectors for the device
+ * @max_rx_q: Maximum number of supported Rx queues
+ * @max_tx_q: Maximum number of supported Tx queues
+ * @max_rx_bufq: Maximum number of supported buffer queues
+ * @max_tx_complq: Maximum number of supported completion queues
+ * @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
+ *		   responds with the maximum VFs granted.
+ * @max_vports: Maximum number of vports that can be supported
+ * @default_num_vports: Default number of vports driver should allocate on load
+ * @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes
+ * @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
+ *			    sent per transmit packet without needing to be
+ *			    linearized.
+ * @reserved: Reserved field
+ * @max_adis: Max number of ADIs
+ * @device_type: See enum virtchl2_device_type
+ * @min_sso_packet_len: Min packet length supported by device for single
+ *			segment offload
+ * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
+ *			 an LSO
+ * @pad1: Padding for future extensions
+ *
  * Dataplane driver sends this message to CP to negotiate capabilities and
  * provides a virtchnl2_get_capabilities structure with its desired
  * capabilities, max_sriov_vfs and num_allocated_vectors.
@@ -492,60 +624,30 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
  * mailbox_vector_id and the number of itr index registers in itr_idx_map.
  * It also responds with default number of vports that the dataplane driver
  * should comeup with in default_num_vports and maximum number of vports that
- * can be supported in max_vports
+ * can be supported in max_vports.
+ *
+ * Associated with VIRTCHNL2_OP_GET_CAPS.
  */
 struct virtchnl2_get_capabilities {
-	/* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
 	__le32 csum_caps;
-
-	/* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
 	__le32 seg_caps;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 hsplit_caps;
-
-	/* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
 	__le32 rsc_caps;
-
-	/* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions  */
 	__le64 rss_caps;
-
-
-	/* see VIRTCHNL2_OTHER_CAPS definitions  */
 	__le64 other_caps;
-
-	/* DYN_CTL register offset and vector id for mailbox provided by CP */
 	__le32 mailbox_dyn_ctl;
 	__le16 mailbox_vector_id;
-	/* Maximum number of allocated vectors for the device */
 	__le16 num_allocated_vectors;
-
-	/* Maximum number of queues that can be supported */
 	__le16 max_rx_q;
 	__le16 max_tx_q;
 	__le16 max_rx_bufq;
 	__le16 max_tx_complq;
-
-	/* The PF sends the maximum VFs it is requesting. The CP responds with
-	 * the maximum VFs granted.
-	 */
 	__le16 max_sriov_vfs;
-
-	/* maximum number of vports that can be supported */
 	__le16 max_vports;
-	/* default number of vports driver should allocate on load */
 	__le16 default_num_vports;
-
-	/* Max header length hardware can parse/checksum, in bytes */
 	__le16 max_tx_hdr_size;
-
-	/* Max number of scatter gather buffers that can be sent per transmit
-	 * packet without needing to be linearized
-	 */
 	u8 max_sg_bufs_per_tx_pkt;
-
-	u8 reserved1;
-	/* upper bound of number of ADIs supported */
+	u8 reserved;
 	__le16 max_adis;
 
 	/* version of Control Plane that is running */
@@ -553,10 +655,7 @@ struct virtchnl2_get_capabilities {
 	__le16 oem_cp_ver_minor;
 	/* see VIRTCHNL2_DEVICE_TYPE definitions */
 	__le32 device_type;
-
-	/* min packet length supported by device for single segment offload */
 	u8 min_sso_packet_len;
-	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
 	u8 pad1[10];
@@ -564,14 +663,21 @@ struct virtchnl2_get_capabilities {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
 
+/**
+ * struct virtchnl2_queue_reg_chunk - Single queue chunk
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Start Queue ID
+ * @num_queues: Number of queues in the chunk
+ * @pad: Padding
+ * @qtail_reg_start: Queue tail register offset
+ * @qtail_reg_spacing: Queue tail register spacing
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_reg_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
 	__le32 pad;
-
-	/* Queue tail register offset and spacing provided by CP */
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
@@ -580,7 +686,13 @@ struct virtchnl2_queue_reg_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
+ *				       queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info
+ */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -589,77 +701,91 @@ struct virtchnl2_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
-/* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
-#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
+/**
+ * enum virtchnl2_vport_flags - Vport flags
+ * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ */
+enum virtchnl2_vport_flags {
+	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+};
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-/* VIRTCHNL2_OP_CREATE_VPORT
- * PF sends this message to CP to create a vport by filling in required
+
+/**
+ * struct virtchnl2_create_vport - Create vport config info
+ * @vport_type: See enum virtchnl2_vport_type
+ * @txq_model: See enum virtchnl2_queue_model
+ * @rxq_model: See enum virtchnl2_queue_model
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Valid only if txq_model is split queue
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Valid only if rxq_model is split queue
+ * @default_rx_q: Relative receive queue index to be used as default
+ * @vport_index: Used to align PF and CP in case of default multiple vports,
+ *		 it is filled by the PF and CP returns the same value, to
+ *		 enable the driver to support multiple asynchronous parallel
+ *		 CREATE_VPORT requests and associate a response to a specific
+ *		 request.
+ * @max_mtu: Max MTU. CP populates this field on response
+ * @vport_id: Vport id. CP populates this field on response
+ * @default_mac_addr: Default MAC address
+ * @vport_flags: See enum virtchnl2_vport_flags
+ * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
+ * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
+ * @reserved: Reserved bytes and cannot be used
+ * @rss_algorithm: RSS algorithm
+ * @rss_key_size: RSS key size
+ * @rss_lut_size: RSS LUT size
+ * @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to CP to create a vport by filling in required
  * fields of virtchnl2_create_vport structure.
  * CP responds with the updated virtchnl2_create_vport structure containing the
  * necessary fields followed by chunks which in turn will have an array of
  * num_chunks entries of virtchnl2_queue_chunk structures.
  */
 struct virtchnl2_create_vport {
-	/* PF/VF populates the following fields on request */
-	/* see VIRTCHNL2_VPORT_TYPE definitions */
 	__le16 vport_type;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 txq_model;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 rxq_model;
 	__le16 num_tx_q;
-	/* valid only if txq_model is split queue */
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
-	/* valid only if rxq_model is split queue */
 	__le16 num_rx_bufq;
-	/* relative receive queue index to be used as default */
 	__le16 default_rx_q;
-	/* used to align PF and CP in case of default multiple vports, it is
-	 * filled by the PF and CP returns the same value, to enable the driver
-	 * to support multiple asynchronous parallel CREATE_VPORT requests and
-	 * associate a response to a specific request
-	 */
 	__le16 vport_index;
-
-	/* CP populates the following fields on response */
 	__le16 max_mtu;
 	__le32 vport_id;
 	u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_VPORT_FLAGS definitions */
 	__le16 vport_flags;
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 rx_desc_ids;
-	/* see VIRTCHNL2_TX_DESC_IDS definitions */
 	__le64 tx_desc_ids;
-
-	u8 reserved1[72];
-
-	/* see VIRTCHNL2_RSS_ALGORITHM definitions */
+	u8 reserved[72];
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
-
-	u8 pad2[20];
+	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
 
-/* VIRTCHNL2_OP_DESTROY_VPORT
- * VIRTCHNL2_OP_ENABLE_VPORT
- * VIRTCHNL2_OP_DISABLE_VPORT
- * PF sends this message to CP to destroy, enable or disable a vport by filling
- * in the vport_id in virtchnl2_vport structure.
+/**
+ * struct virtchnl2_vport - Vport identifier information
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends this message to CP to destroy, enable or disable a vport by
+ * filling in the vport_id in virtchnl2_vport structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
+ * VIRTCHNL2_OP_DISABLE_VPORT.
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
@@ -668,42 +794,43 @@ struct virtchnl2_vport {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
 
-/* Transmit queue config info */
+/**
+ * struct virtchnl2_txq_info - Transmit queue config info
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue ID
+ * @relative_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue. Used in many to one mapping of transmit queues to
+ *		       completion queue.
+ * @model: See enum virtchnl2_queue_model
+ * @sched_mode: See enum virtchnl2_txq_sched_mode
+ * @qflags: TX queue feature flags
+ * @ring_len: Ring length
+ * @tx_compl_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue.
+ * @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX
+ * @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
+ *		      messages for the respective CONFIG_TX queue.
+ * @pad: Padding
+ * @egress_pasid: Egress PASID info
+ * @egress_hdr_pasid: Egress header PASID
+ * @egress_buf_pasid: Egress buffer PASID
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_txq_info {
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
-
 	__le32 queue_id;
-	/* valid only if queue model is split and type is transmit queue. Used
-	 * in many to one mapping of transmit queues to completion queue
-	 */
 	__le16 relative_queue_id;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 model;
-
-	/* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
 	__le16 sched_mode;
-
-	/* see VIRTCHNL2_TXQ_FLAGS definitions */
 	__le16 qflags;
 	__le16 ring_len;
-
-	/* valid only if queue model is split and type is transmit queue */
 	__le16 tx_compl_queue_id;
-	/* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
-	/* see VIRTCHNL2_PEER_TYPE definitions */
 	__le16 peer_type;
-	/* valid only if queue type is CONFIG_TX and used to deliver messages
-	 * for the respective CONFIG_TX queue
-	 */
 	__le16 peer_rx_queue_id;
 
 	u8 pad[4];
-
-	/* Egress pasid is used for SIOV use case */
 	__le32 egress_pasid;
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
@@ -713,12 +840,20 @@ struct virtchnl2_txq_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
 
-/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
- * PF sends this message to set up parameters for one or more transmit queues.
- * This message contains an array of num_qinfo instances of virtchnl2_txq_info
- * structures. CP configures requested queues and returns a status code. If
- * num_qinfo specified is greater than the number of queues associated with the
- * vport, an error is returned and no queues are configured.
+/**
+ * struct virtchnl2_config_tx_queues - TX queue config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of virtchnl2_txq_info structs
+ * @pad: Padding for future extensions
+ * @qinfo: Tx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more transmit
+ * queues. This message contains an array of num_qinfo instances of
+ * virtchnl2_txq_info structures. CP configures requested queues and returns
+ * a status code. If num_qinfo specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
  */
 struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
@@ -730,47 +865,55 @@ struct virtchnl2_config_tx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
 
-/* Receive queue config info */
+/**
+ * struct virtchnl2_rxq_info - Receive queue config info
+ * @desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions
+ * @dma_ring_addr: DMA address of the receive queue ring
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue id
+ * @model: See enum virtchnl2_queue_model
+ * @hdr_buffer_size: Header buffer size
+ * @data_buffer_size: Data buffer size
+ * @max_pkt_size: Max packet size
+ * @ring_len: Ring length
+ * @buffer_notif_stride: Buffer notification stride in units of 32-descriptors.
+ *			 This field must be a power of 2.
+ * @pad: Padding
+ * @dma_head_wb_addr: Applicable only for receive buffer queues
+ * @qflags: Applicable only for receive completion queues.
+ *	    See enum virtchnl2_rxq_flags.
+ * @rx_buffer_low_watermark: Rx buffer low watermark
+ * @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @bufq2_ena: Indicates if there is a second buffer; rx_bufq2_id is valid
+ *	       only if this field is set.
+ * @pad1: Padding
+ * @ingress_pasid: Ingress PASID
+ * @ingress_hdr_pasid: Ingress header PASID
+ * @ingress_buf_pasid: Ingress buffer PASID
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_rxq_info {
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 desc_ids;
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 queue_id;
-
-	/* see QUEUE_MODEL definitions */
 	__le16 model;
-
 	__le16 hdr_buffer_size;
 	__le32 data_buffer_size;
 	__le32 max_pkt_size;
-
 	__le16 ring_len;
 	u8 buffer_notif_stride;
 	u8 pad;
-
-	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
-
-	/* Applicable only for receive completion queues */
-	/* see VIRTCHNL2_RXQ_FLAGS definitions */
 	__le16 qflags;
-
 	__le16 rx_buffer_low_watermark;
-
-	/* valid only in split queue model */
 	__le16 rx_bufq1_id;
-	/* valid only in split queue model */
 	__le16 rx_bufq2_id;
-	/* it indicates if there is a second buffer, rx_bufq2_id is valid only
-	 * if this field is set
-	 */
 	u8 bufq2_ena;
 	u8 pad1[3];
-
-	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
@@ -779,12 +922,20 @@ struct virtchnl2_rxq_info {
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
-/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
- * PF sends this message to set up parameters for one or more receive queues.
+/**
+ * struct virtchnl2_config_rx_queues - Rx queues config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of instances
+ * @pad: Padding for future extensions
+ * @qinfo: Rx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more receive queues.
  * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
  * structures. CP configures requested queues and returns a status code.
  * If the number of queues specified is greater than the number of queues
  * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
  */
 struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
@@ -796,12 +947,23 @@ struct virtchnl2_config_rx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
 
-/* VIRTCHNL2_OP_ADD_QUEUES
- * PF sends this message to request additional transmit/receive queues beyond
+/**
+ * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
+ * @vport_id: Vport id
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Number of Tx completion queues
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Number of Rx buffer queues
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to request additional transmit/receive queues beyond
  * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
  * structure is used to specify the number of each type of queues.
  * CP responds with the same structure with the actual number of queues assigned
  * followed by num_chunks of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUES.
  */
 struct virtchnl2_add_queues {
 	__le32 vport_id;
@@ -817,65 +979,81 @@ struct virtchnl2_add_queues {
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
 
 /* Queue Groups Extension */
-
+/**
+ * struct virtchnl2_rx_queue_group_info - RX queue group info
+ * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
+ *		  allocated by CreateVport command. New size will be returned
+ *		  if allocation succeeded, otherwise original rss_size from
+ *		  CreateVport will be returned.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_rx_queue_group_info {
-	/* IN/OUT, user can ask to update rss_lut size originally allocated
-	 * by CreateVport command. New size will be returned if allocation
-	 * succeeded, otherwise original rss_size from CreateVport will
-	 * be returned.
-	 */
 	__le16 rss_lut_size;
-	/* Future extension purpose */
 	u8 pad[6];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
 
+/**
+ * struct virtchnl2_tx_queue_group_info - TX queue group info
+ * @tx_tc: TX TC queue group will be connected to
+ * @priority: Each group can have its own priority, value 0-7; each group
+ *	      with a unique priority is strict priority. A set of queue groups
+ *	      configured with the same priority is assumed to be part of a WFQ
+ *	      arbitration group and is expected to be assigned a weight.
+ * @is_sp: Determines if queue group is expected to be Strict Priority according
+ *	   to its priority.
+ * @pad: Padding
+ * @pir_weight: Peak Info Rate Weight in case Queue Group is part of WFQ
+ *		arbitration set.
+ *		The weights of the groups are independent of each other.
+ *		Possible values: 1-200
+ * @cir_pad: Future extension purpose for CIR only
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_tx_queue_group_info { /* IN */
-	/* TX TC queue group will be connected to */
 	u8 tx_tc;
-	/* Each group can have its own priority, value 0-7, while each group
-	 * with unique priority is strict priority.
-	 * It can be single set of queue groups which configured with
-	 * same priority, then they are assumed part of WFQ arbitration
-	 * group and are expected to be assigned with weight.
-	 */
 	u8 priority;
-	/* Determines if queue group is expected to be Strict Priority
-	 * according to its priority
-	 */
 	u8 is_sp;
 	u8 pad;
-
-	/* Peak Info Rate Weight in case Queue Group is part of WFQ
-	 * arbitration set.
-	 * The weights of the groups are independent of each other.
-	 * Possible values: 1-200
-	 */
 	__le16 pir_weight;
-	/* Future extension purpose for CIR only */
 	u8 cir_pad[2];
-	/* Future extension purpose*/
 	u8 pad2[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_tx_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_group_id - Queue group ID
+ * @queue_group_id: Queue group ID - dependent on its type
+ *		    Data: an ID relative to the vport
+ *		    Config & Mailbox: an ID relative to the function
+ *		    This ID is used in future calls, e.g. delete.
+ *		    Requested by host and assigned by Control plane.
+ * @queue_group_type: Functional type: See enum virtchnl2_queue_group_type
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_group_id {
-	/* Queue group ID - depended on it's type
-	 * Data: is an ID which is relative to Vport
-	 * Config & Mailbox: is an ID which is relative to func.
-	 * This ID is use in future calls, i.e. delete.
-	 * Requested by host and assigned by Control plane.
-	 */
 	__le16 queue_group_id;
-	/* Functional type: see VIRTCHNL2_QUEUE_GROUP_TYPE definitions */
 	__le16 queue_group_type;
 	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 
+/**
+ * struct virtchnl2_queue_group_info - Queue group info
+ * @qg_id: Queue group ID
+ * @num_tx_q: Number of TX queues
+ * @num_tx_complq: Number of completion queues
+ * @num_rx_q: Number of RX queues
+ * @num_rx_bufq: Number of RX buffer queues
+ * @tx_q_grp_info: TX queue group info
+ * @rx_q_grp_info: RX queue group info
+ * @pad: Padding for future extensions
+ * @chunks: Queue register chunks
+ */
 struct virtchnl2_queue_group_info {
 	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
@@ -887,13 +1065,18 @@ struct virtchnl2_queue_group_info {
 
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
-	/* Future extension purpose */
 	u8 pad[40];
 	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_groups - Queue groups list
+ * @num_queue_groups: Total number of queue groups
+ * @pad: Padding for future extensions
+ * @groups: Array of queue group info
+ */
 struct virtchnl2_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[6];
@@ -902,78 +1085,107 @@ struct virtchnl2_queue_groups {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
 
-/* VIRTCHNL2_OP_ADD_QUEUE_GROUPS
+/**
+ * struct virtchnl2_add_queue_groups - Add queue groups
+ * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @pad: Padding for future extensions
+ * @qg_info: IN/OUT. List of all the queue groups
+ *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
  * groups and queues assigned followed by num_queue_groups and num_chunks of
  * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
-	/* IN, vport_id to add queue group to, same as allocated by CreateVport.
-	 * NA for mailbox and other types not assigned to vport
-	 */
 	__le32 vport_id;
 	u8 pad[4];
-	/* IN/OUT */
 	struct virtchnl2_queue_groups qg_info;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
-/* VIRTCHNL2_OP_DEL_QUEUE_GROUPS
+/**
+ * struct virtchnl2_delete_queue_groups - Delete queue groups
+ * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ *	      CreateVport.
+ * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @pad: Padding
+ * @qg_ids: IN, IDs & types of Queue Groups to delete
+ *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
  * to be deleted. CP performs requested action and returns status and update
  * num_queue_groups with number of successfully deleted queue groups.
+ *
+ * Associated with VIRTCHNL2_OP_DEL_QUEUE_GROUPS.
  */
 struct virtchnl2_delete_queue_groups {
-	/* IN, vport_id to delete queue group from, same as
-	 * allocated by CreateVport.
-	 */
 	__le32 vport_id;
-	/* IN/OUT, Defines number of groups provided below */
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	/* IN, IDs & types of Queue Groups to delete */
 	struct virtchnl2_queue_group_id qg_ids[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
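The `qg_ids[1]` trailing member above is a one-element placeholder for a variable-length array, so the sender has to size the mailbox buffer itself. As a user-space sketch only (not part of this patch: `del_queue_groups_size` and the simplified `u8`/`__le16`/`__le32` typedefs are illustrative assumptions, and little-endian conversion is omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the driver's fixed-width types; the real
 * definitions live in the IDPF base code. Layouts mirror the structs in
 * the patch, where qg_ids[1] is a placeholder for a trailing array.
 */
typedef uint8_t u8;
typedef uint16_t __le16;
typedef uint32_t __le32;

struct virtchnl2_queue_group_id {
	__le16 queue_group_id;
	__le16 queue_group_type;
	u8 pad[4];
};

struct virtchnl2_delete_queue_groups {
	__le32 vport_id;
	__le16 num_queue_groups;
	u8 pad[2];
	struct virtchnl2_queue_group_id qg_ids[1];
};

/* Buffer size for a DEL_QUEUE_GROUPS message carrying n group IDs (n >= 1).
 * sizeof() already accounts for one qg_ids element, so only n - 1 extra
 * entries are added on top of it.
 */
static size_t del_queue_groups_size(uint16_t n)
{
	return sizeof(struct virtchnl2_delete_queue_groups) +
	       (size_t)(n - 1) * sizeof(struct virtchnl2_queue_group_id);
}
```

With one ID the size is exactly the 16 bytes asserted by VIRTCHNL2_CHECK_STRUCT_LEN; each additional ID adds the 8 bytes of one virtchnl2_queue_group_id.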
 
-/* Structure to specify a chunk of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
+ *				   interrupt vectors.
+ * @start_vector_id: Start vector id
+ * @start_evv_id: Start EVV id
+ * @num_vectors: Number of vectors
+ * @pad: Padding
+ * @dynctl_reg_start: DYN_CTL register offset
+ * @dynctl_reg_spacing: Register spacing between DYN_CTL registers of 2
+ *			consecutive vectors.
+ * @itrn_reg_start: ITRN register offset
+ * @itrn_reg_spacing: Register spacing between itrn registers of 2
+ *		      consecutive vectors.
+ * @itrn_index_spacing: Register spacing between itrn registers of the same
+ *			vector where n=0..2.
+ * @pad1: Padding for future extensions
+ *
+ * Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
 struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
 	__le16 pad;
 
-	/* Register offsets and spacing provided by CP.
-	 * dynamic control registers are used for enabling/disabling/re-enabling
-	 * interrupts and updating interrupt rates in the hotpath. Any changes
-	 * to interrupt rates in the dynamic control registers will be reflected
-	 * in the interrupt throttling rate registers.
-	 * itrn registers are used to update interrupt rates for specific
-	 * interrupt indices without modifying the state of the interrupt.
-	 */
 	__le32 dynctl_reg_start;
-	/* register spacing between dynctl registers of 2 consecutive vectors */
 	__le32 dynctl_reg_spacing;
 
 	__le32 itrn_reg_start;
-	/* register spacing between itrn registers of 2 consecutive vectors */
 	__le32 itrn_reg_spacing;
-	/* register spacing between itrn registers of the same vector
-	 * where n=0..2
-	 */
 	__le32 itrn_index_spacing;
 	u8 pad1[4];
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
-/* Structure to specify several chunks of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunks - Chunks of contiguous interrupt vectors
+ * @num_vchunks: Number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends virtchnl2_vector_chunks struct to specify the vectors it is
+ * giving away. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
+ */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -983,12 +1195,19 @@ struct virtchnl2_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
 
-/* VIRTCHNL2_OP_ALLOC_VECTORS
- * PF sends this message to request additional interrupt vectors beyond the
+/**
+ * struct virtchnl2_alloc_vectors - Vector allocation info
+ * @num_vectors: Number of vectors
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends this message to request additional interrupt vectors beyond the
  * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
  * structure is used to specify the number of vectors requested. CP responds
  * with the same structure with the actual number of vectors assigned followed
  * by virtchnl2_vector_chunks structure identifying the vector ids.
+ *
+ * Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
@@ -999,46 +1218,46 @@ struct virtchnl2_alloc_vectors {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
 
-/* VIRTCHNL2_OP_DEALLOC_VECTORS
- * PF sends this message to release the vectors.
- * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
- * away. CP performs requested action and returns status.
- */
-
-/* VIRTCHNL2_OP_GET_RSS_LUT
- * VIRTCHNL2_OP_SET_RSS_LUT
- * PF sends this message to get or set RSS lookup table. Only supported if
+/**
+ * struct virtchnl2_rss_lut - RSS LUT info
+ * @vport_id: Vport id
+ * @lut_entries_start: Start of LUT entries
+ * @lut_entries: Number of LUT entries
+ * @pad: Padding
+ * @lut: RSS lookup table
+ *
+ * PF/VF sends this message to get or set RSS lookup table. Only supported if
  * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_lut structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
  */
 struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	/* RSS lookup table */
 	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * PF sends this message to get RSS key. Only supported if both PF and CP
- * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
- * the virtchnl2_rss_key structure
- */
-
-/* VIRTCHNL2_OP_GET_RSS_HASH
- * VIRTCHNL2_OP_SET_RSS_HASH
- * PF sends these messages to get and set the hash filter enable bits for RSS.
- * By default, the CP sets these to all possible traffic types that the
+/**
+ * struct virtchnl2_rss_hash - RSS hash info
+ * @ptype_groups: Packet type groups bitmap
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends these messages to get and set the hash filter enable bits for
+ * RSS. By default, the CP sets these to all possible traffic types that the
  * hardware supports. The PF can query this value if it wants to change the
  * traffic types that are hashed by the hardware.
  * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
  * during configuration negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH.
  */
 struct virtchnl2_rss_hash {
-	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
 	u8 pad[4];
@@ -1046,12 +1265,18 @@ struct virtchnl2_rss_hash {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
 
-/* VIRTCHNL2_OP_SET_SRIOV_VFS
+/**
+ * struct virtchnl2_sriov_vfs_info - VFs info
+ * @num_vfs: Number of VFs
+ * @pad: Padding for future extensions
+ *
  * This message is used to set number of SRIOV VFs to be created. The actual
  * allocation of resources for the VFs in terms of vport, queues and interrupts
- * is done by CP. When this call completes, the APF driver calls
+ * is done by CP. When this call completes, the IDPF driver calls
  * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
  * The number of VFs set to 0 will destroy all the VFs of this function.
+ *
+ * Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
  */
 
 struct virtchnl2_sriov_vfs_info {
@@ -1061,8 +1286,14 @@ struct virtchnl2_sriov_vfs_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 
-/* structure to specify single chunk of queue */
-/* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_queue_reg_chunks - Specify several chunks of
+ *						contiguous queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info. 'chunks' is fixed size (not flexible) and
+ *	    will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1071,8 +1302,14 @@ struct virtchnl2_non_flex_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 
-/* structure to specify single chunk of interrupt vector */
-/* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_vector_chunks - Chunks of contiguous interrupt
+ *					     vectors.
+ * @num_vchunks: Number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info. 'vchunks' is fixed size
+ *	     (not flexible) and will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -1081,40 +1318,49 @@ struct virtchnl2_non_flex_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_non_flex_vector_chunks);
 
-/* VIRTCHNL2_OP_NON_FLEX_CREATE_ADI
+/**
+ * struct virtchnl2_non_flex_create_adi - Create ADI
+ * @pasid: PF sends PASID to CP
+ * @mbx_id: mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
+ *	    id, else it is set to 0 by PF.
+ * @mbx_vec_id: PF sends mailbox vector id to CP
+ * @adi_index: PF populates this ADI index
+ * @adi_id: CP populates ADI id
+ * @pad: Padding
+ * @chunks: CP populates queue chunks
+ * @vchunks: PF sends vector chunks to CP
+ *
  * PF sends this message to CP to create ADI by filling in required
  * fields of virtchnl2_non_flex_create_adi structure.
- * CP responds with the updated virtchnl2_non_flex_create_adi structure containing
- * the necessary fields followed by chunks which in turn will have an array of
- * num_chunks entries of virtchnl2_queue_chunk structures.
+ * CP responds with the updated virtchnl2_non_flex_create_adi structure
+ * containing the necessary fields followed by chunks which in turn will have
+ * an array of num_chunks entries of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_CREATE_ADI.
  */
 struct virtchnl2_non_flex_create_adi {
-	/* PF sends PASID to CP */
 	__le32 pasid;
-	/*
-	 * mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
-	 * id else it is set to 0 by PF
-	 */
 	__le16 mbx_id;
-	/* PF sends mailbox vector id to CP */
 	__le16 mbx_vec_id;
-	/* PF populates this ADI index */
 	__le16 adi_index;
-	/* CP populates ADI id */
 	__le16 adi_id;
 	u8 pad[68];
-	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
-	/* PF sends vector chunks to CP */
 	struct virtchnl2_non_flex_vector_chunks vchunks;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
 
-/* VIRTCHNL2_OP_DESTROY_ADI
+/**
+ * struct virtchnl2_non_flex_destroy_adi - Destroy ADI
+ * @adi_id: ADI id to destroy
+ * @pad: Padding
+ *
  * PF sends this message to CP to destroy ADI by filling
  * in the adi_id in virtchnl2_destropy_adi structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI.
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
@@ -1123,7 +1369,17 @@ struct virtchnl2_non_flex_destroy_adi {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 
-/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+/**
+ * struct virtchnl2_ptype - Packet type info
+ * @ptype_id_10: 10-bit packet type
+ * @ptype_id_8: 8-bit packet type
+ * @proto_id_count: Number of protocol ids the packet supports, a maximum
+ *		    of 32 protocol ids is supported.
+ * @pad: Padding
+ * @proto_id: proto_id_count decides the allocation of protocol id array.
+ *	      See enum virtchnl2_proto_hdr_type.
+ *
+ * Based on the descriptor type the PF supports, CP fills ptype_id_10 or
  * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
  * is set to 0xFFFF, PF should consider this ptype as dummy one and it is the
  * last ptype.
@@ -1131,32 +1387,42 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 struct virtchnl2_ptype {
 	__le16 ptype_id_10;
 	u8 ptype_id_8;
-	/* number of protocol ids the packet supports, maximum of 32
-	 * protocol ids are supported
-	 */
 	u8 proto_id_count;
 	__le16 pad;
-	/* proto_id_count decides the allocation of protocol id array */
-	/* see VIRTCHNL2_PROTO_HDR_TYPE */
 	__le16 proto_id[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
 
-/* VIRTCHNL2_OP_GET_PTYPE_INFO
- * PF sends this message to CP to get all supported packet types. It does by
- * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
- * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
- * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
- * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
- * doesn't fit into one mailbox buffer, CP splits ptype info into multiple
- * messages, where each message will have the start ptype id, number of ptypes
- * sent in that message and the ptype array itself. When CP is done updating
- * all ptype information it extracted from the package (number of ptypes
- * extracted might be less than what PF expects), it will append a dummy ptype
- * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
- * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
- * messages.
+/**
+ * struct virtchnl2_get_ptype_info - Packet type info
+ * @start_ptype_id: Starting ptype ID
+ * @num_ptypes: Number of packet types from start_ptype_id
+ * @pad: Padding for future extensions
+ * @ptype: Array of packet type info
+ *
+ * The total number of supported packet types is based on the descriptor type.
+ * For the flex descriptor, it is 1024 (10-bit ptype), and for the base
+ * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
+ * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
+ * 'start_ptype_id', 'num_ptypes', and the array of ptypes (virtchnl2_ptype)
+ * that are added at the end of the 'virtchnl2_get_ptype_info' message (Note:
+ * there is no specific field for the ptypes; they are appended at the end of
+ * the ptype info message and PF/VF is expected to extract them accordingly.
+ * The reason is that the compiler doesn't allow nested flexible array
+ * fields).
+ *
+ * If all the ptypes don't fit into one mailbox buffer, CP splits the
+ * ptype info into multiple messages, where each message will have its own
+ * 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
+ * updating all the ptype information extracted from the package (the number of
+ * ptypes extracted might be less than what PF/VF expects), it will append a
+ * dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
+ * to the ptype array.
+ *
+ * PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
  */
 struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
@@ -1167,25 +1433,46 @@ struct virtchnl2_get_ptype_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
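Because each ptype record's length depends on its own proto_id_count, a receiver has to walk the payload record by record until it hits the 0xFFFF dummy entry. A hedged sketch of that walk (not part of the patch: `count_ptypes` and `count_ptypes_demo` are made-up helper names, the typedefs are simplified, and host endianness is assumed instead of proper __le16 conversion):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint8_t u8;
typedef uint16_t __le16;

/* Layout as in the patch; proto_id[1] is a placeholder for a trailing
 * array of proto_id_count entries.
 */
struct virtchnl2_ptype {
	__le16 ptype_id_10;
	u8 ptype_id_8;
	u8 proto_id_count;
	__le16 pad;
	__le16 proto_id[1];
};

/* Count the ptypes in one GET_PTYPE_INFO payload, stopping at the 0xFFFF
 * dummy. Each record's stride is the fixed header plus its own proto id
 * array, so the walk advances by a per-record amount.
 */
static int count_ptypes(const u8 *buf, size_t len)
{
	size_t off = 0;
	int n = 0;

	while (off + sizeof(struct virtchnl2_ptype) <= len) {
		struct virtchnl2_ptype p;

		memcpy(&p, buf + off, sizeof(p));
		if (p.ptype_id_10 == 0xFFFF)
			break;
		n++;
		off += offsetof(struct virtchnl2_ptype, proto_id) +
		       p.proto_id_count * sizeof(__le16);
	}
	return n;
}

/* Build a small example payload (two ptypes followed by the dummy) and
 * count it; exercises count_ptypes() on a hand-built buffer.
 */
static int count_ptypes_demo(void)
{
	u8 buf[64];
	size_t off = 0;
	struct virtchnl2_ptype p;

	memset(buf, 0, sizeof(buf));
	memset(&p, 0, sizeof(p));

	p.ptype_id_10 = 1;
	p.proto_id_count = 1;
	memcpy(buf + off, &p, sizeof(p));
	off += offsetof(struct virtchnl2_ptype, proto_id) + 1 * sizeof(__le16);

	p.ptype_id_10 = 2;
	p.proto_id_count = 2;
	memcpy(buf + off, &p, sizeof(p));
	off += offsetof(struct virtchnl2_ptype, proto_id) + 2 * sizeof(__le16);

	p.ptype_id_10 = 0xFFFF;	/* dummy terminator */
	memcpy(buf + off, &p, sizeof(p));
	off += sizeof(p);

	return count_ptypes(buf, off);
}
```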
 
-/* VIRTCHNL2_OP_GET_STATS
+/**
+ * struct virtchnl2_vport_stats - Vport statistics
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @rx_bytes: Received bytes
+ * @rx_unicast: Received unicast packets
+ * @rx_multicast: Received multicast packets
+ * @rx_broadcast: Received broadcast packets
+ * @rx_discards: Discarded packets on receive
+ * @rx_errors: Receive errors
+ * @rx_unknown_protocol: Unknown protocol
+ * @tx_bytes: Transmitted bytes
+ * @tx_unicast: Transmitted unicast packets
+ * @tx_multicast: Transmitted multicast packets
+ * @tx_broadcast: Transmitted broadcast packets
+ * @tx_discards: Discarded packets on transmit
+ * @tx_errors: Transmit errors
+ * @rx_invalid_frame_length: Packets with invalid frame length
+ * @rx_overflow_drop: Packets dropped on buffer overflow
+ *
  * PF/VF sends this message to CP to get the update stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ *
+ * Associated with VIRTCHNL2_OP_GET_STATS.
  */
 struct virtchnl2_vport_stats {
 	__le32 vport_id;
 	u8 pad[4];
 
-	__le64 rx_bytes;		/* received bytes */
-	__le64 rx_unicast;		/* received unicast pkts */
-	__le64 rx_multicast;		/* received multicast pkts */
-	__le64 rx_broadcast;		/* received broadcast pkts */
+	__le64 rx_bytes;
+	__le64 rx_unicast;
+	__le64 rx_multicast;
+	__le64 rx_broadcast;
 	__le64 rx_discards;
 	__le64 rx_errors;
 	__le64 rx_unknown_protocol;
-	__le64 tx_bytes;		/* transmitted bytes */
-	__le64 tx_unicast;		/* transmitted unicast pkts */
-	__le64 tx_multicast;		/* transmitted multicast pkts */
-	__le64 tx_broadcast;		/* transmitted broadcast pkts */
+	__le64 tx_bytes;
+	__le64 tx_unicast;
+	__le64 tx_multicast;
+	__le64 tx_broadcast;
 	__le64 tx_discards;
 	__le64 tx_errors;
 	__le64 rx_invalid_frame_length;
@@ -1194,7 +1481,9 @@ struct virtchnl2_vport_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
 
-/* physical port statistics */
+/**
+ * struct virtchnl2_phy_port_stats - Physical port statistics
+ */
 struct virtchnl2_phy_port_stats {
 	__le64 rx_bytes;
 	__le64 rx_unicast_pkts;
@@ -1247,10 +1536,17 @@ struct virtchnl2_phy_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(600, virtchnl2_phy_port_stats);
 
-/* VIRTCHNL2_OP_GET_PORT_STATS
- * PF/VF sends this message to CP to get the updated stats by specifying the
+/**
+ * struct virtchnl2_port_stats - Port statistics
+ * @vport_id: Vport ID
+ * @pad: Padding
+ * @phy_port_stats: Physical port statistics
+ * @virt_port_stats: Vport statistics
+ *
+ * PF/VF sends this message to CP to get the updated stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_port_stats that
  * includes both physical port as well as vport statistics.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PORT_STATS.
  */
 struct virtchnl2_port_stats {
 	__le32 vport_id;
@@ -1262,44 +1558,61 @@ struct virtchnl2_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(736, virtchnl2_port_stats);
 
-/* VIRTCHNL2_OP_EVENT
+/**
+ * struct virtchnl2_event - Event info
+ * @event: Event opcode. See enum virtchnl2_event_codes
+ * @link_speed: Link speed provided in Mbps
+ * @vport_id: Vport ID
+ * @link_status: Link status
+ * @pad: Padding
+ * @adi_id: ADI id
+ *
  * CP sends this message to inform the PF/VF driver of events that may affect
  * it. No direct response is expected from the driver, though it may generate
  * other messages in response to this one.
+ *
+ * Associated with VIRTCHNL2_OP_EVENT.
  */
 struct virtchnl2_event {
-	/* see VIRTCHNL2_EVENT_CODES definitions */
 	__le32 event;
-	/* link_speed provided in Mbps */
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
 	u8 pad;
-
-	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * VIRTCHNL2_OP_SET_RSS_KEY
+/**
+ * struct virtchnl2_rss_key - RSS key info
+ * @vport_id: Vport id
+ * @key_len: Length of RSS key
+ * @pad: Padding
+ * @key: RSS hash key, packed bytes
+ *
  * PF/VF sends this message to get or set RSS key. Only supported if both
  * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_key structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
  */
 struct virtchnl2_rss_key {
 	__le32 vport_id;
 	__le16 key_len;
 	u8 pad;
-	u8 key[1];         /* RSS hash key, packed bytes */
+	u8 key[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
 
-/* structure to specify a chunk of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunk - Chunk of contiguous queues
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Starting queue id
+ * @num_queues: Number of queues
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
@@ -1308,7 +1621,11 @@ struct virtchnl2_queue_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
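A chunk compactly encodes a run of contiguous queues as a start id plus a count. For illustration only (`expand_queue_chunk` and `expand_demo` are hypothetical helpers, not driver API, and the typedefs are simplified host-endian stand-ins), a sketch that expands one chunk into explicit queue IDs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint8_t u8;
typedef uint32_t __le32;

/* Mirror of the chunk layout above (host endianness assumed for brevity). */
struct virtchnl2_queue_chunk {
	__le32 type;
	__le32 start_queue_id;
	__le32 num_queues;
	u8 pad[4];
};

/* Expand one chunk of contiguous queues into explicit queue IDs, returning
 * how many IDs were written (capped at cap).
 */
static size_t expand_queue_chunk(const struct virtchnl2_queue_chunk *c,
				 uint32_t *ids, size_t cap)
{
	size_t n = 0;

	for (uint32_t q = 0; q < c->num_queues && n < cap; q++)
		ids[n++] = c->start_queue_id + q;
	return n;
}

/* Expand a sample chunk (4 queues starting at id 16) and report the first
 * and last IDs produced.
 */
static size_t expand_demo(uint32_t *first, uint32_t *last)
{
	struct virtchnl2_queue_chunk c = { .type = 0, .start_queue_id = 16,
					   .num_queues = 4 };
	uint32_t ids[8];
	size_t n = expand_queue_chunk(&c, ids, 8);

	*first = ids[0];
	*last = ids[n - 1];
	return n;
}
```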
 
-/* structure to specify several chunks of contiguous queues */
/**
 * struct virtchnl2_queue_chunks - Chunks of contiguous queues
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
+ */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1317,14 +1634,19 @@ struct virtchnl2_queue_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
 
-/* VIRTCHNL2_OP_ENABLE_QUEUES
- * VIRTCHNL2_OP_DISABLE_QUEUES
- * VIRTCHNL2_OP_DEL_QUEUES
+/**
+ * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
  *
- * PF sends these messages to enable, disable or delete queues specified in
- * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * PF/VF sends these messages to enable, disable or delete queues specified in
+ * chunks. It sends virtchnl2_del_ena_dis_queues struct to specify the queues
  * to be enabled/disabled/deleted. Also applicable to single queue receive or
  * transmit. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
+ * VIRTCHNL2_OP_DEL_QUEUES.
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
@@ -1335,30 +1657,43 @@ struct virtchnl2_del_ena_dis_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
 
-/* Queue to vector mapping */
+/**
+ * struct virtchnl2_queue_vector - Queue to vector mapping
+ * @queue_id: Queue id
+ * @vector_id: Vector id
+ * @pad: Padding
+ * @itr_idx: See enum virtchnl2_itr_idx
+ * @queue_type: See enum virtchnl2_queue_type
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_vector {
 	__le32 queue_id;
 	__le16 vector_id;
 	u8 pad[2];
 
-	/* see VIRTCHNL2_ITR_IDX definitions */
 	__le32 itr_idx;
 
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
 	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
 
-/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
- * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+/**
+ * struct virtchnl2_queue_vector_maps - Map/unmap queues info
+ * @vport_id: Vport id
+ * @num_qv_maps: Number of queue vector maps
+ * @pad: Padding
+ * @qv_maps: Queue to vector maps
  *
- * PF sends this message to map or unmap queues to vectors and interrupt
+ * PF/VF sends this message to map or unmap queues to vectors and interrupt
  * throttling rate index registers. External data buffer contains
  * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
  * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
  * after validating the queue and vector ids and returns a status code.
+ *
+ * Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
  */
 struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
@@ -1369,11 +1704,17 @@ struct virtchnl2_queue_vector_maps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
 
-/* VIRTCHNL2_OP_LOOPBACK
+/**
+ * struct virtchnl2_loopback - Loopback info
+ * @vport_id: Vport id
+ * @enable: Enable/disable
+ * @pad: Padding for future extensions
  *
  * PF/VF sends this message to transition to/from the loopback state. Setting
  * the 'enable' to 1 enables the loopback state and setting 'enable' to 0
  * disables it. CP configures the state to loopback and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_LOOPBACK.
  */
 struct virtchnl2_loopback {
 	__le32 vport_id;
@@ -1383,22 +1724,31 @@ struct virtchnl2_loopback {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
 
-/* structure to specify each MAC address */
/**
 * struct virtchnl2_mac_addr - MAC address info
+ * @addr: MAC address
+ * @type: MAC type. See enum virtchnl2_mac_addr_type.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_mac_addr {
 	u8 addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_MAC_TYPE definitions */
 	u8 type;
 	u8 pad;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
 
-/* VIRTCHNL2_OP_ADD_MAC_ADDR
- * VIRTCHNL2_OP_DEL_MAC_ADDR
+/**
+ * struct virtchnl2_mac_addr_list - List of MAC addresses
+ * @vport_id: Vport id
+ * @num_mac_addr: Number of MAC addresses
+ * @pad: Padding
+ * @mac_addr_list: List with MAC address info
  *
  * PF/VF driver uses this structure to send list of MAC addresses to be
  * added/deleted to the CP where as CP performs the action and returns the
  * status.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
 struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
@@ -1409,30 +1759,40 @@ struct virtchnl2_mac_addr_list {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
 
-/* VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE
+/**
+ * struct virtchnl2_promisc_info - Promiscuous type information
+ * @vport_id: Vport id
+ * @flags: See enum virtchnl2_promisc_flags
+ * @pad: Padding for future extensions
  *
  * PF/VF sends vport id and flags to the CP where as CP performs the action
  * and returns the status.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
  */
 struct virtchnl2_promisc_info {
 	__le32 vport_id;
-	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
 	__le16 flags;
 	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
 
-/* VIRTCHNL2_PTP_CAPS
- * PTP capabilities
+/**
+ * enum virtchnl2_ptp_caps - PTP capabilities
  */
-#define VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	BIT(0)
-#define VIRTCHNL2_PTP_CAP_PTM			BIT(1)
-#define VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	BIT(2)
-#define VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	BIT(3)
-#define	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	BIT(4)
+enum virtchnl2_ptp_caps {
+	VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	= BIT(0),
+	VIRTCHNL2_PTP_CAP_PTM			= BIT(1),
+	VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	= BIT(2),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	= BIT(3),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	= BIT(4),
+};
 
-/* Legacy cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_legacy_cross_time_reg - Legacy cross time registers
+ *						offsets.
+ */
 struct virtchnl2_ptp_legacy_cross_time_reg {
 	__le32 shadow_time_0;
 	__le32 shadow_time_l;
@@ -1442,7 +1802,9 @@ struct virtchnl2_ptp_legacy_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_legacy_cross_time_reg);
 
-/* PTM cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_ptm_cross_time_reg - PTM cross time registers offsets
+ */
 struct virtchnl2_ptp_ptm_cross_time_reg {
 	__le32 art_l;
 	__le32 art_h;
@@ -1452,7 +1814,10 @@ struct virtchnl2_ptp_ptm_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_ptm_cross_time_reg);
 
-/* Registers needed to control the main clock */
+/**
+ * struct virtchnl2_ptp_device_clock_control - Registers needed to control the
+ *					       main clock.
+ */
 struct virtchnl2_ptp_device_clock_control {
 	__le32 cmd;
 	__le32 incval_l;
@@ -1464,7 +1829,13 @@ struct virtchnl2_ptp_device_clock_control {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_device_clock_control);
 
-/* Structure that defines tx tstamp entry - index and register offset */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_entry - PTP TX timestamp entry
+ * @tx_latch_register_base: TX latch register base
+ * @tx_latch_register_offset: TX latch register offset
+ * @index: Index
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_entry {
 	__le32 tx_latch_register_base;
 	__le32 tx_latch_register_offset;
@@ -1474,12 +1845,15 @@ struct virtchnl2_ptp_tx_tstamp_entry {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_entry);
 
-/* Structure that defines tx tstamp entries - total number of latches
- * and the array of entries.
+/**
+ * struct virtchnl2_ptp_tx_tstamp - Structure that defines tx tstamp entries
+ * @num_latches: Total number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @ptp_tx_tstamp_entries: Array of TX timestamp entries
  */
 struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
 	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
@@ -1487,13 +1861,21 @@ struct virtchnl2_ptp_tx_tstamp {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
 
-/* VIRTCHNL2_OP_GET_PTP_CAPS
+/**
+ * struct virtchnl2_get_ptp_caps - Get PTP capabilities
+ * @ptp_caps: PTP capability bitmap. See enum virtchnl2_ptp_caps.
+ * @pad: Padding
+ * @legacy_cross_time_reg: Legacy cross time register
+ * @ptm_cross_time_reg: PTM cross time register
+ * @device_clock_control: Device clock control
+ * @tx_tstamp: TX timestamp
+ *
 * PF/VF sends this message to negotiate PTP capabilities. CP updates bitmap
  * with supported features and fulfills appropriate structures.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_CAPS.
  */
 struct virtchnl2_get_ptp_caps {
-	/* PTP capability bitmap */
-	/* see VIRTCHNL2_PTP_CAPS definitions */
 	__le32 ptp_caps;
 	u8 pad[4];
 
@@ -1505,7 +1887,15 @@ struct virtchnl2_get_ptp_caps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
 
-/* Structure that describes tx tstamp values, index and validity */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
+ *					  values, index and validity.
+ * @tstamp_h: Timestamp high
+ * @tstamp_l: Timestamp low
+ * @index: Index
+ * @valid: Timestamp validity
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_latch {
 	__le32 tstamp_h;
 	__le32 tstamp_l;
@@ -1516,9 +1906,17 @@ struct virtchnl2_ptp_tx_tstamp_latch {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
 
-/* VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latches - PTP TX timestamp latches
+ * @num_latches: Number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @tstamp_latches: PTP TX timestamp latches
+ *
  * PF/VF sends this message to receive a specified number of timestamps
  * entries.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES.
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
@@ -1613,7 +2011,7 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
  * @msg: pointer to the msg buffer
  * @msglen: msg length
  *
- * validate msg format against struct for each opcode
+ * Validate msg format against struct for each opcode.
  */
 static inline int
 virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
@@ -1622,7 +2020,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	bool err_msg_format = false;
 	__le32 valid_len = 0;
 
-	/* Validate message length. */
+	/* Validate message length */
 	switch (v_opcode) {
 	case VIRTCHNL2_OP_VERSION:
 		valid_len = sizeof(struct virtchnl2_version_info);
@@ -1637,7 +2035,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_create_vport *)msg;
 
 			if (cvport->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1652,7 +2050,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_non_flex_create_adi *)msg;
 
 			if (cadi->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1707,7 +2105,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_add_queues *)msg;
 
 			if (add_q->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1734,7 +2132,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
 		valid_len = sizeof(struct virtchnl2_add_queue_groups);
 		if (msglen != valid_len) {
-			__le32 i = 0, offset = 0;
+			__le64 offset;
+			__le32 i;
 			struct virtchnl2_add_queue_groups *add_queue_grp =
 				(struct virtchnl2_add_queue_groups *)msg;
 			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
@@ -1801,7 +2200,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_alloc_vectors *)msg;
 
 			if (v_av->vchunks.num_vchunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1830,7 +2229,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_key *)msg;
 
 			if (vrk->key_len == 0) {
-				/* zero length is allowed as input */
+				/* Zero length is allowed as input */
 				break;
 			}
 
@@ -1845,7 +2244,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_lut *)msg;
 
 			if (vrl->lut_entries == 0) {
-				/* zero entries is allowed as input */
+				/* Zero entries is allowed as input */
 				break;
 			}
 
@@ -1902,13 +2301,13 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
 		}
 		break;
-	/* These are always errors coming from the VF. */
+	/* These are always errors coming from the VF */
 	case VIRTCHNL2_OP_EVENT:
 	case VIRTCHNL2_OP_UNKNOWN:
 	default:
 		return VIRTCHNL2_STATUS_ERR_ESRCH;
 	}
-	/* few more checks */
+	/* Few more checks */
 	if (err_msg_format || valid_len != msglen)
 		return VIRTCHNL2_STATUS_ERR_EINVAL;
 
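
[Editor's note, not part of the patch: the message-length checks above all follow one
pattern -- take the base struct size, then add (num_chunks - 1) trailing elements,
since the 1-element flexible array is already counted by sizeof(). A minimal,
self-contained sketch of that pattern is below; the `demo_*` names and simplified
layout are illustrative only, not the driver's real structures.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative, simplified shape of a virtchnl2-style message that
 * carries a trailing array of chunks. Field names mirror the real
 * structures, but the layout here is only a sketch.
 */
struct demo_chunk {
	uint32_t start_queue_id;
	uint32_t num_queues;
};

struct demo_add_queues {
	uint16_t num_chunks;
	struct demo_chunk chunks[1]; /* 1-element array, as in the base code */
};

/* Validate that msglen covers the header plus (num_chunks - 1) extra
 * chunks; the first chunk is already included in sizeof(). Zero chunks
 * is allowed as input, matching the driver's checks.
 */
static int demo_validate_len(uint16_t num_chunks, size_t msglen)
{
	size_t valid_len = sizeof(struct demo_add_queues);

	if (num_chunks == 0)
		return msglen == valid_len;
	valid_len += (num_chunks - 1) * sizeof(struct demo_chunk);
	return msglen == valid_len;
}
```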
diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index 9e04cf8628..f7521d87a7 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 /*
  * Copyright (C) 2019 Intel Corporation
@@ -12,199 +12,220 @@
 /* VIRTCHNL2_TX_DESC_IDS
  * Transmit descriptor ID flags
  */
-#define VIRTCHNL2_TXDID_DATA				BIT(0)
-#define VIRTCHNL2_TXDID_CTX				BIT(1)
-#define VIRTCHNL2_TXDID_REINJECT_CTX			BIT(2)
-#define VIRTCHNL2_TXDID_FLEX_DATA			BIT(3)
-#define VIRTCHNL2_TXDID_FLEX_CTX			BIT(4)
-#define VIRTCHNL2_TXDID_FLEX_TSO_CTX			BIT(5)
-#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		BIT(6)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		BIT(7)
-#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	BIT(8)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	BIT(9)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		BIT(10)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			BIT(11)
-#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			BIT(12)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		BIT(13)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		BIT(14)
-#define VIRTCHNL2_TXDID_DESC_DONE			BIT(15)
-
-/* VIRTCHNL2_RX_DESC_IDS
+enum virtchnl2_tx_desc_ids {
+	VIRTCHNL2_TXDID_DATA				= BIT(0),
+	VIRTCHNL2_TXDID_CTX				= BIT(1),
+	VIRTCHNL2_TXDID_REINJECT_CTX			= BIT(2),
+	VIRTCHNL2_TXDID_FLEX_DATA			= BIT(3),
+	VIRTCHNL2_TXDID_FLEX_CTX			= BIT(4),
+	VIRTCHNL2_TXDID_FLEX_TSO_CTX			= BIT(5),
+	VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		= BIT(6),
+	VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		= BIT(7),
+	VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	= BIT(8),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	= BIT(9),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		= BIT(10),
+	VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			= BIT(11),
+	VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			= BIT(12),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		= BIT(13),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		= BIT(14),
+	VIRTCHNL2_TXDID_DESC_DONE			= BIT(15),
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_IDS
  * Receive descriptor IDs (range from 0 to 63)
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE			0
-#define VIRTCHNL2_RXDID_1_32B_BASE			1
-/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
- * differentiated based on queue model; e.g. single queue model can
- * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
- * for DID 2.
- */
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ			2
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC			2
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW			3
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB		4
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL		5
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2			6
-#define VIRTCHNL2_RXDID_7_HW_RSVD			7
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC		16
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN		17
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4		18
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6		19
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW		20
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP		21
-/* 22 through 63 are reserved */
-
-/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+enum virtchnl2_rx_desc_ids {
+	VIRTCHNL2_RXDID_0_16B_BASE,
+	VIRTCHNL2_RXDID_1_32B_BASE,
+	/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+	 * differentiated based on queue model; e.g. single queue model can
+	 * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+	 * for DID 2.
+	 */
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ		= 2,
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC		= VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW		= 3,
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB	= 4,
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL	= 5,
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2		= 6,
+	VIRTCHNL2_RXDID_7_HW_RSVD		= 7,
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC	= 16,
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN	= 17,
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4	= 18,
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6	= 19,
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW	= 20,
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP	= 21,
+	/* 22 through 63 are reserved */
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
-/* 22 through 63 are reserved */
-
-/* Rx */
+#define VIRTCHNL2_RXDID_M(bit)			BIT_ULL(VIRTCHNL2_RXDID_##bit)
+
+enum virtchnl2_rx_desc_id_bitmasks {
+	VIRTCHNL2_RXDID_0_16B_BASE_M		= VIRTCHNL2_RXDID_M(0_16B_BASE),
+	VIRTCHNL2_RXDID_1_32B_BASE_M		= VIRTCHNL2_RXDID_M(1_32B_BASE),
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		= VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		= VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		= VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW),
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	= VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB),
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	= VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL),
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	= VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2),
+	VIRTCHNL2_RXDID_7_HW_RSVD_M		= VIRTCHNL2_RXDID_M(7_HW_RSVD),
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	= VIRTCHNL2_RXDID_M(16_COMMS_GENERIC),
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	= VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN),
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	= VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4),
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	= VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6),
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	= VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW),
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	= VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP),
+	/* 22 through 63 are reserved */
+};
+
 /* For splitq virtchnl2_rx_flex_desc_adv desc members */
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		GENMASK(3, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		GENMASK(7, 6)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		\
-	IDPF_M(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M			\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M		GENMASK(15, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M	\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M		GENMASK(13, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M		GENMASK(14, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
 
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S		7
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST			8 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		0 /* 2 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			8 /* this entry must be last!!! */
-
-/* for singleq (flex) virtchnl2_rx_flex_desc fields */
-/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		= 0,
+	/* 2 bits */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		= 2,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		= 3,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	= 4,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	= 5,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	= 6,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	= 7,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			= 8,
+};
+
+/* For singleq (flex) virtchnl2_rx_flex_desc fields,
+ * for the virtchnl2_rx_flex_desc.ptype_flex_flags0 member
+ */
 #define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			GENMASK(9, 0)
 
-/* for virtchnl2_rx_flex_desc.pkt_length member */
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M			\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+/* For virtchnl2_rx_flex_desc.pkt_len member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S		0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M		GENMASK(13, 0)
 
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S			1
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S			2
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S			3
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S		7
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S			8
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S		9
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S			10
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S			11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST			16 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+enum virtchnl2_rx_flex_desc_status_error_0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			0 /* 4 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			5
-/* [10:6] reserved */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			16 /* this entry must be last!!! */
-
-/* for virtchnl2_rx_flex_desc.ts_low member */
+enum virtchnl2_rx_flex_desc_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			= 0,
+	/* 4 bits */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			= 4,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			= 5,
+	/* [10:6] reserved */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		= 11,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		= 12,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		= 13,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		= 14,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		= 15,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			= 16,
+};
+
+/* For virtchnl2_rx_flex_desc.ts_low member */
 #define VIRTCHNL2_RX_FLEX_TSTAMP_VALID				BIT(0)
 
 /* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
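
[Editor's note, not part of the patch: the mask conversions in this series replace the
open-coded IDPF_M(mask, shift) form with GENMASK(high, low). Assuming the usual
definitions (sketched below; the real ones live in the base driver's osdep headers
and assume 64-bit long here), the two forms are bit-for-bit identical.]

```c
#include <assert.h>
#include <stdint.h>

/* Assumed definitions, mirroring the common Linux/DPDK forms;
 * GENMASK as written assumes a 64-bit unsigned long.
 */
#define BIT(n)            (1UL << (n))
#define BIT_ULL(n)        (1ULL << (n))
#define IDPF_M(m, s)      ((m) << (s))
#define GENMASK(h, l)     (((~0UL) << (l)) & (~0UL >> (63 - (h))))
#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
```

For example, IDPF_M(0x3FFUL, 0) and GENMASK(9, 0) both produce 0x3FF, and
IDPF_M(0x3FFFULL, 38) equals GENMASK_ULL(51, 38).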
@@ -212,72 +233,89 @@
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M	\
 	BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S	52
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	\
-	IDPF_M(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	GENMASK_ULL(62, 52)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S	38
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	\
-	IDPF_M(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	GENMASK_ULL(51, 38)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S	30
-#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	\
-	IDPF_M(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	GENMASK_ULL(37, 30)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S	19
-#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	\
-	IDPF_M(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	GENMASK_ULL(26, 19)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S	0
-#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	\
-	IDPF_M(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	GENMASK_ULL(18, 0)
 
-/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		0
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		1
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		2
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		3
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		4
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		5 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	8
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		9 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		11
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		12 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		14
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	15
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		16 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	18
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		19 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		= 1,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		= 5, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	= 8,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		= 9, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		= 11,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		= 12, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		= 14,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	= 15,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		= 16, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	= 18,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		= 19, /* this entry must be last!!! */
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S	0
+enum virtchnl2_rx_base_desc_ext_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S,
+};
 
-/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		0
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	1
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		2
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		3 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		3
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		4
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		5
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		6
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		7
-
-/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_error_bits {
+	VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	= 1,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		= 3, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		= 5,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		= 6,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		= 7,
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA		0
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID		1
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV		2
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH		3
+enum virtchnl2_rx_base_desc_fltstat_values {
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH,
+};
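
[Editor's note, not part of the patch: the *_S enumerators above are bit offsets
into the base descriptor's 19-bit status section, so consumers extract fields by
shift-and-mask. A hypothetical helper for the 2-bit FLTSTAT field is sketched
below; the `demo_*` names are illustrative, not part of the driver.]

```c
#include <assert.h>
#include <stdint.h>

#define BIT_ULL(n)  (1ULL << (n))

/* Bit offsets mirroring the enums above (predefined, not free-running) */
#define DEMO_STATUS_DD_S       0
#define DEMO_STATUS_FLTSTAT_S  12 /* 2 bits */
#define DEMO_FLTSTAT_RSS_HASH  3

/* Extract the 2-bit FLTSTAT field from the status section */
static inline unsigned int demo_get_fltstat(uint64_t status)
{
	return (unsigned int)((status >> DEMO_STATUS_FLTSTAT_S) & 0x3);
}
```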
 
-/* Receive Descriptors */
-/* splitq buf
+/**
+ * struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
+ * @qword0: RX buffer struct
+ * @qword0.buf_id: Buffer identifier
+ * @qword0.rsvd0: Reserved
+ * @qword0.rsvd1: Reserved
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd2: Reserved
+ *
+ * Receive Descriptors
+ * SplitQ buffer
  * |                                       16|                   0|
  * ----------------------------------------------------------------
  * | RSV                                     | Buffer ID          |
@@ -292,16 +330,23 @@
  */
 struct virtchnl2_splitq_rx_buf_desc {
 	struct {
-		__le16  buf_id; /* Buffer Identifier */
+		__le16  buf_id;
 		__le16  rsvd0;
 		__le32  rsvd1;
 	} qword0;
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
-/* singleq buf
+/**
+ * struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd1: Reserved
+ * @rsvd2: Reserved
+ *
+ * SingleQ buffer
  * |                                                             0|
  * ----------------------------------------------------------------
  * | Rx packet buffer address                                     |
@@ -315,18 +360,44 @@ struct virtchnl2_splitq_rx_buf_desc {
  * |                                                             0|
  */
 struct virtchnl2_singleq_rx_buf_desc {
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd1;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
+/**
+ * union virtchnl2_rx_buf_desc - RX buffer descriptor
+ * @read: Singleq RX buffer descriptor format
+ * @split_rd: Splitq RX buffer descriptor format
+ */
 union virtchnl2_rx_buf_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
 	struct virtchnl2_splitq_rx_buf_desc		split_rd;
 };
 
-/* (0x00) singleq wb(compl) */
+/**
+ * struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format
+ * @qword0: First quad word struct
+ * @qword0.lo_dword: Lower dual word struct
+ * @qword0.lo_dword.mirroring_status: Mirrored packet status
+ * @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet
+ * @qword0.hi_dword: High dual word union
+ * @qword0.hi_dword.rss: RSS hash
+ * @qword0.hi_dword.fd_id: Flow director filter id
+ * @qword1: Second quad word struct
+ * @qword1.status_error_ptype_len: Status/error/PTYPE/length
+ * @qword2: Third quad word struct
+ * @qword2.ext_status: Extended status
+ * @qword2.rsvd: Reserved
+ * @qword2.l2tag2_1: Extracted L2 tag 2 from the packet
+ * @qword2.l2tag2_2: Reserved
+ * @qword3: Fourth quad word struct
+ * @qword3.reserved: Reserved
+ * @qword3.fd_id: Flow director filter id
+ *
+ * Profile ID 0x1, SingleQ, base writeback format.
+ */
 struct virtchnl2_singleq_base_rx_desc {
 	struct {
 		struct {
@@ -334,16 +405,15 @@ struct virtchnl2_singleq_base_rx_desc {
 			__le16 l2tag1;
 		} lo_dword;
 		union {
-			__le32 rss; /* RSS Hash */
-			__le32 fd_id; /* Flow Director filter id */
+			__le32 rss;
+			__le32 fd_id;
 		} hi_dword;
 	} qword0;
 	struct {
-		/* status/error/PTYPE/length */
 		__le64 status_error_ptype_len;
 	} qword1;
 	struct {
-		__le16 ext_status; /* extended status */
+		__le16 ext_status;
 		__le16 rsvd;
 		__le16 l2tag2_1;
 		__le16 l2tag2_2;
@@ -352,19 +422,40 @@ struct virtchnl2_singleq_base_rx_desc {
 		__le32 reserved;
 		__le32 fd_id;
 	} qword3;
-}; /* writeback */
+};
 
-/* (0x01) singleq flex compl */
+/**
+ * struct virtchnl2_rx_flex_desc - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @flex_meta0: Flexible metadata container 0
+ * @flex_meta1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @time_stamp_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flex_meta2: Flexible metadata container 2
+ * @flex_meta3: Flexible metadata container 3
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.flex_meta4: Flexible metadata container 4
+ * @flex_ts.flex.flex_meta5: Flexible metadata container 5
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x1, SingleQ, flex completion writeback format.
+ */
 struct virtchnl2_rx_flex_desc {
 	/* Qword 0 */
-	u8 rxdid; /* descriptor builder profile id */
-	u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
-	__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
-	__le16 pkt_len; /* [15:14] are reserved */
-	__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
-					/* sph=[11:11] */
-					/* ff1/ext=[15:12] */
-
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
@@ -390,7 +481,29 @@ struct virtchnl2_rx_flex_desc {
 	} flex_ts;
 };
 
-/* (0x02) */
+/**
+ * struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SingleQ, flex writeback format.
+ */
 struct virtchnl2_rx_flex_desc_nic {
 	/* Qword 0 */
 	u8 rxdid;
@@ -422,8 +535,27 @@ struct virtchnl2_rx_flex_desc_nic {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Switch Profile
- * RxDID Profile Id 3
+/**
+ * struct virtchnl2_rx_flex_desc_sw - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @src_vsi: Source VSI, [10:15] are reserved
+ * @flex_md1_rsvd: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor Switch Profile
+ * RxDID Profile ID 0x3, SingleQ
  * Flex-field 0: Source Vsi
  */
 struct virtchnl2_rx_flex_desc_sw {
@@ -437,9 +569,55 @@ struct virtchnl2_rx_flex_desc_sw {
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
-	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 src_vsi;
 	__le16 flex_md1_rsvd;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le32 rsvd;
+	__le32 ts_high;
+};
 
+#ifndef EXTERNAL_RELEASE
+/**
+ * struct virtchnl2_rx_flex_desc_nic_veb_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @dst_vsi: Destination VSI, [10:15] are reserved
+ * @flex_field_1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile Id 0x4
+ * Flex-field 0: Destination Vsi
+ */
+struct virtchnl2_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi;
+	__le16 flex_field_1;
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flex_flags2;
@@ -448,13 +626,85 @@ struct virtchnl2_rx_flex_desc_sw {
 	__le16 l2tag2_2nd;
 
 	/* Qword 3 */
-	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 rsvd;
 	__le32 ts_high;
 };
 
-
-/* Rx Flex Descriptor NIC Profile
- * RxDID Profile Id 6
+/**
+ * struct virtchnl2_rx_flex_desc_nic_acl_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @acl_ctr0: ACL counter 0
+ * @acl_ctr1: ACL counter 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @acl_ctr2: ACL counter 2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile ID 0x5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct virtchnl2_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd;
+	__le32 ts_high;
+};
+#endif /* !EXTERNAL_RELEASE */
+
+/**
+ * struct virtchnl2_rx_flex_desc_nic_2 - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @src_vsi: Source VSI
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 0x6
  * Flex-field 0: RSS hash lower 16-bits
  * Flex-field 1: RSS hash upper 16-bits
  * Flex-field 2: Flow Id lower 16-bits
@@ -493,29 +743,43 @@ struct virtchnl2_rx_flex_desc_nic_2 {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Advanced (Split Queue Model)
- * RxDID Profile Id 7
+/**
+ * struct virtchnl2_rx_flex_desc_adv - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @fmd0: Flexible metadata container 0
+ * @fmd1: Flexible metadata container 1
+ * @fmd2: Flexible metadata container 2
+ * @fflags2: Flags
+ * @hash3: Upper bits of Rx hash value
+ * @fmd3: Flexible metadata container 3
+ * @fmd4: Flexible metadata container 4
+ * @fmd5: Flexible metadata container 5
+ * @fmd6: Flexible metadata container 6
+ * @fmd7_0: Flexible metadata container 7.0
+ * @fmd7_1: Flexible metadata container 7.1
+ *
+ * RX Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile ID 0x2
  */
 struct virtchnl2_rx_flex_desc_adv {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
@@ -534,10 +798,42 @@ struct virtchnl2_rx_flex_desc_adv {
 	__le16 fmd6;
 	__le16 fmd7_0;
 	__le16 fmd7_1;
-}; /* writeback */
+};
 
-/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
- * RxDID Profile Id 8
+/**
+ * struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @buf_id: Buffer identifier. Only in splitq mode.
+ * @misc: Union
+ * @misc.raw_cs: Raw checksum
+ * @misc.l2tag1: Stripped L2 tag from the received packet
+ * @misc.rscseglen: RSC segment length
+ * @hash1: Lower 16 bits of Rx hash value, hash[15:0]
+ * @ff2_mirrid_hash2: Union
+ * @ff2_mirrid_hash2.fflags2: Flexible flags section 2
+ * @ff2_mirrid_hash2.mirrorid: Mirror id
+ * @ff2_mirrid_hash2.hash2: 8 bits of Rx hash value, hash[23:16]
+ * @hash3: Upper 8 bits of Rx hash value, hash[31:24]
+ * @l2tag2: Extracted L2 tag 2 from the packet
+ * @fmd4: Flexible metadata container 4
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @fmd6: Flexible metadata container 6
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SplitQ, flex writeback format.
+ *
  * Flex-field 0: BufferID
  * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
  * Flex-field 2: Hash[15:0]
@@ -548,30 +844,17 @@ struct virtchnl2_rx_flex_desc_adv {
  */
 struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
 	u8 fflags1;
 	u8 ts_low;
-	__le16 buf_id; /* only in splitq */
+	__le16 buf_id;
 	union {
 		__le16 raw_cs;
 		__le16 l2tag1;
@@ -591,7 +874,7 @@ struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	__le16 l2tag1;
 	__le16 fmd6;
 	__le32 ts_high;
-}; /* writeback */
+};
 
 union virtchnl2_rx_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 12/21] common/idpf: avoid variable 0-init
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (10 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 11/21] common/idpf: move related defines into enums Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
                           ` (8 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Don't initialize variables when it is not needed.

Also, for consistency, use 'err' instead of 'status', 'ret_code',
'ret' etc., and rename the return label 'sq_send_command_out'
to 'err_unlock'.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c      | 60 +++++++++----------
 .../common/idpf/base/idpf_controlq_setup.c    | 18 +++---
 2 files changed, 38 insertions(+), 40 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index d9ca33cdb9..65e5599614 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -61,7 +61,7 @@ static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
  */
 static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	for (i = 0; i < cq->ring_size; i++) {
 		struct idpf_ctlq_desc *desc = IDPF_CTLQ_DESC(cq, i);
@@ -134,7 +134,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 {
 	struct idpf_ctlq_info *cq;
 	bool is_rxq = false;
-	int status = 0;
+	int err;
 
 	if (!qinfo->len || !qinfo->buf_size ||
 	    qinfo->len > IDPF_CTLQ_MAX_RING_SIZE ||
@@ -160,14 +160,14 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 		is_rxq = true;
 		/* fallthrough */
 	case IDPF_CTLQ_TYPE_MAILBOX_TX:
-		status = idpf_ctlq_alloc_ring_res(hw, cq);
+		err = idpf_ctlq_alloc_ring_res(hw, cq);
 		break;
 	default:
-		status = -EINVAL;
+		err = -EINVAL;
 		break;
 	}
 
-	if (status)
+	if (err)
 		goto init_free_q;
 
 	if (is_rxq) {
@@ -178,7 +178,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			idpf_calloc(hw, qinfo->len,
 				    sizeof(struct idpf_ctlq_msg *));
 		if (!cq->bi.tx_msg) {
-			status = -ENOMEM;
+			err = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
 	}
@@ -192,7 +192,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	LIST_INSERT_HEAD(&hw->cq_list_head, cq, cq_list);
 
 	*cq_out = cq;
-	return status;
+	return 0;
 
 init_dealloc_q_mem:
 	/* free ring buffers and the ring itself */
@@ -201,7 +201,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	idpf_free(hw, cq);
 	cq = NULL;
 
-	return status;
+	return err;
 }
 
 /**
@@ -232,27 +232,27 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info)
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
-	int ret_code = 0;
-	int i = 0;
+	int err;
+	int i;
 
 	LIST_INIT(&hw->cq_list_head);
 
 	for (i = 0; i < num_q; i++) {
 		struct idpf_ctlq_create_info *qinfo = q_info + i;
 
-		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
-		if (ret_code)
+		err = idpf_ctlq_add(hw, qinfo, &cq);
+		if (err)
 			goto init_destroy_qs;
 	}
 
-	return ret_code;
+	return 0;
 
 init_destroy_qs:
 	LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
 				 idpf_ctlq_info, cq_list)
 		idpf_ctlq_remove(hw, cq);
 
-	return ret_code;
+	return err;
 }
 
 /**
@@ -286,9 +286,9 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 		   u16 num_q_msg, struct idpf_ctlq_msg q_msg[])
 {
 	struct idpf_ctlq_desc *desc;
-	int num_desc_avail = 0;
-	int status = 0;
-	int i = 0;
+	int num_desc_avail;
+	int err = 0;
+	int i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -298,8 +298,8 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* Ensure there are enough descriptors to send all messages */
 	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
 	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
-		status = -ENOSPC;
-		goto sq_send_command_out;
+		err = -ENOSPC;
+		goto err_unlock;
 	}
 
 	for (i = 0; i < num_q_msg; i++) {
@@ -370,10 +370,10 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 
 	wr32(hw, cq->reg.tail, cq->next_to_use);
 
-sq_send_command_out:
+err_unlock:
 	idpf_release_lock(&cq->cq_lock);
 
-	return status;
+	return err;
 }
 
 /**
@@ -397,9 +397,8 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 				struct idpf_ctlq_msg *msg_status[], bool force)
 {
 	struct idpf_ctlq_desc *desc;
-	u16 i = 0, num_to_clean;
+	u16 i, num_to_clean;
 	u16 ntc, desc_err;
-	int ret = 0;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -446,7 +445,7 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 	/* Return number of descriptors actually cleaned */
 	*clean_count = i;
 
-	return ret;
+	return 0;
 }
 
 /**
@@ -513,7 +512,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	u16 ntp = cq->next_to_post;
 	bool buffs_avail = false;
 	u16 tbp = ntp + 1;
-	int status = 0;
 	int i = 0;
 
 	if (*buff_count > cq->ring_size)
@@ -614,7 +612,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* return the number of buffers that were not posted */
 	*buff_count = *buff_count - i;
 
-	return status;
+	return 0;
 }
 
 /**
@@ -633,8 +631,8 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 {
 	u16 num_to_clean, ntc, ret_val, flags;
 	struct idpf_ctlq_desc *desc;
-	int ret_code = 0;
-	u16 i = 0;
+	int err = 0;
+	u16 i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -667,7 +665,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 				      IDPF_CTLQ_FLAG_FTYPE_S;
 
 		if (flags & IDPF_CTLQ_FLAG_ERR)
-			ret_code = -EBADMSG;
+			err = -EBADMSG;
 
 		q_msg[i].cookie.mbx.chnl_opcode = LE32_TO_CPU(desc->cookie_high);
 		q_msg[i].cookie.mbx.chnl_retval = LE32_TO_CPU(desc->cookie_low);
@@ -713,7 +711,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 
 	*num_q_msg = i;
 	if (*num_q_msg == 0)
-		ret_code = -ENOMSG;
+		err = -ENOMSG;
 
-	return ret_code;
+	return err;
 }
diff --git a/drivers/common/idpf/base/idpf_controlq_setup.c b/drivers/common/idpf/base/idpf_controlq_setup.c
index 21f43c74f5..cd6bcb1cf0 100644
--- a/drivers/common/idpf/base/idpf_controlq_setup.c
+++ b/drivers/common/idpf/base/idpf_controlq_setup.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 
@@ -34,7 +34,7 @@ static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
 static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
 				struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	/* Do not allocate DMA buffers for transmit queues */
 	if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
@@ -153,20 +153,20 @@ void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
  */
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
-	int ret_code;
+	int err;
 
 	/* verify input for valid configuration */
 	if (!cq->ring_size || !cq->buf_size)
 		return -EINVAL;
 
 	/* allocate the ring memory */
-	ret_code = idpf_ctlq_alloc_desc_ring(hw, cq);
-	if (ret_code)
-		return ret_code;
+	err = idpf_ctlq_alloc_desc_ring(hw, cq);
+	if (err)
+		return err;
 
 	/* allocate buffers in the rings */
-	ret_code = idpf_ctlq_alloc_bufs(hw, cq);
-	if (ret_code)
+	err = idpf_ctlq_alloc_bufs(hw, cq);
+	if (err)
 		goto idpf_init_cq_free_ring;
 
 	/* success! */
@@ -174,5 +174,5 @@ int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 
 idpf_init_cq_free_ring:
 	idpf_free_dma_mem(hw, &cq->desc_ring);
-	return ret_code;
+	return err;
 }
-- 
2.43.0



* [PATCH v4 13/21] common/idpf: update in PTP message validation
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (11 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 12/21] common/idpf: avoid variable 0-init Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
                           ` (7 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

When the driver sends the message for getting timestamp latches, the
number of latches is equal to 0. The current implementation of the
message validation function incorrectly reports the length of this
kind of message as invalid.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index e76ccbd46f..24a8b37876 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -2272,7 +2272,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_CAPS:
 		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_get_ptp_caps *ptp_caps =
 			(struct virtchnl2_get_ptp_caps *)msg;
 
@@ -2288,7 +2288,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
 		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
 			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
 
-- 
2.43.0



* [PATCH v4 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (12 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 15/21] common/idpf: add wmb before tail Soumyadeep Hore
                           ` (6 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

This capability bit indicates both inline and sideband flow
steering capability.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 24a8b37876..9dd5191c0e 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -243,7 +243,7 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
 	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
 	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
-	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_FLOW_STEER		= BIT_ULL(6),
 	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
 	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
 	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
-- 
2.43.0



* [PATCH v4 15/21] common/idpf: add wmb before tail
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (13 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
                           ` (5 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Prompted by customer feedback while addressing some bugs, this
introduces a memory barrier before posting the ctlq tail. This
ensures memory writes have completed before the HW starts
processing the descriptors.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index 65e5599614..4f47759a4f 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -604,6 +604,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			/* Wrap to end of end ring since current ntp is 0 */
 			cq->next_to_post = cq->ring_size - 1;
 
+		idpf_wmb();
+
 		wr32(hw, cq->reg.tail, cq->next_to_post);
 	}
 
-- 
2.43.0



* [PATCH v4 16/21] drivers: add flex array support and fix issues
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (14 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 15/21] common/idpf: add wmb before tail Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 17/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
                           ` (4 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on internal Linux upstream feedback received on the IDPF
driver, and on references available online, the use of 1-sized
array fields in structures is discouraged, especially in new
Linux drivers that are going to be upstreamed. Flexible array
fields are recommended instead for dynamically sized structures.

Some fixes based on this code change are introduced to compile DPDK.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h       | 466 ++++-----------------
 drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
 drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
 3 files changed, 86 insertions(+), 410 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 9dd5191c0e..317bd06c0f 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -63,6 +63,10 @@ enum virtchnl2_status {
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
+
+#define STRUCT_VAR_LEN		1
 
 /**
  * New major set of opcodes introduced and so leaving room for
@@ -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_reg_chunk chunks[1];
+	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 
 /**
  * enum virtchnl2_vport_flags - Vport flags
@@ -773,7 +776,7 @@ struct virtchnl2_create_vport {
 	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(192, virtchnl2_create_vport, chunks.chunks);
 
 /**
  * struct virtchnl2_vport - Vport identifier information
@@ -860,10 +863,9 @@ struct virtchnl2_config_tx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[10];
-	struct virtchnl2_txq_info qinfo[1];
+	struct virtchnl2_txq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(72, virtchnl2_config_tx_queues, qinfo);
 
 /**
  * struct virtchnl2_rxq_info - Receive queue config info
@@ -942,10 +944,9 @@ struct virtchnl2_config_rx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[18];
-	struct virtchnl2_rxq_info qinfo[1];
+	struct virtchnl2_rxq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(112, virtchnl2_config_rx_queues, qinfo);
 
 /**
  * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
@@ -975,16 +976,15 @@ struct virtchnl2_add_queues {
 
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(56, virtchnl2_add_queues, chunks.chunks);
 
 /* Queue Groups Extension */
 /**
  * struct virtchnl2_rx_queue_group_info - RX queue group info
- * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
- *		  allocated by CreateVport command. New size will be returned
- *		  if allocation succeeded, otherwise original rss_size from
- *		  CreateVport will be returned.
+ * @rss_lut_size: User can ask to update rss_lut size originally allocated by
+ *		  CreateVport command. New size will be returned if allocation
+ *		  succeeded, otherwise original rss_size from CreateVport
+ *		  will be returned.
  * @pad: Padding for future extensions
  */
 struct virtchnl2_rx_queue_group_info {
@@ -1012,7 +1012,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
  * @cir_pad: Future extension purpose for CIR only
  * @pad2: Padding for future extensions
  */
-struct virtchnl2_tx_queue_group_info { /* IN */
+struct virtchnl2_tx_queue_group_info {
 	u8 tx_tc;
 	u8 priority;
 	u8 is_sp;
@@ -1045,19 +1045,17 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 /**
  * struct virtchnl2_queue_group_info - Queue group info
  * @qg_id: Queue group ID
- * @num_tx_q: Number of TX queues
- * @num_tx_complq: Number of completion queues
- * @num_rx_q: Number of RX queues
- * @num_rx_bufq: Number of RX buffer queues
+ * @num_tx_q: Number of TX queues requested
+ * @num_tx_complq: Number of completion queues requested
+ * @num_rx_q: Number of RX queues requested
+ * @num_rx_bufq: Number of RX buffer queues requested
  * @tx_q_grp_info: TX queue group info
  * @rx_q_grp_info: RX queue group info
  * @pad: Padding for future extensions
- * @chunks: Queue register chunks
+ * @chunks: Queue register chunks from CP
  */
 struct virtchnl2_queue_group_info {
-	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
-	/* IN, Number of queue of different types in the group. */
 	__le16 num_tx_q;
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
@@ -1066,56 +1064,52 @@ struct virtchnl2_queue_group_info {
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
 	u8 pad[40];
-	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
-};
-
-VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
-
-/**
- * struct virtchnl2_queue_groups - Queue groups list
- * @num_queue_groups: Total number of queue groups
- * @pad: Padding for future extensions
- * @groups: Array of queue group info
- */
-struct virtchnl2_queue_groups {
-	__le16 num_queue_groups;
-	u8 pad[6];
-	struct virtchnl2_queue_group_info groups[1];
+	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(120, virtchnl2_queue_group_info, chunks.chunks);
 
 /**
  * struct virtchnl2_add_queue_groups - Add queue groups
- * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ * @vport_id: Vport_id to add queue group to, same as allocated by
  *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @num_queue_groups: Total number of queue groups
  * @pad: Padding for future extensions
- * @qg_info: IN/OUT. List of all the queue groups
+#ifndef FLEX_ARRAY_SUPPORT
+ * @groups: List of all the queue group info structures
+#endif
  *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
- * groups and queues assigned followed by num_queue_groups and num_chunks of
- * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ * groups and queues assigned followed by num_queue_groups and groups of
+ * virtchnl2_queue_group_info and virtchnl2_queue_chunk structures.
+#ifdef FLEX_ARRAY_SUPPORT
+ * (Note: There is no specific field for the queue group info but are added at
+ * the end of the add queue groups message. Receiver of this message is expected
+ * to extract the queue group info accordingly. Reason for doing this is because
+ * compiler doesn't allow nested flexible array fields).
+#endif
  *
  * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
 	__le32 vport_id;
-	u8 pad[4];
-	struct virtchnl2_queue_groups qg_info;
+	__le16 num_queue_groups;
+	u8 pad[10];
+	struct virtchnl2_queue_group_info groups[STRUCT_VAR_LEN];
+
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
 /**
  * struct virtchnl2_delete_queue_groups - Delete queue groups
- * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ * @vport_id: Vport ID to delete queue group from, same as allocated by
  *	      CreateVport.
- * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @num_queue_groups: Defines number of groups provided
  * @pad: Padding
- * @qg_ids: IN, IDs & types of Queue Groups to delete
+ * @qg_ids: IDs & types of Queue Groups to delete
  *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
@@ -1129,10 +1123,9 @@ struct virtchnl2_delete_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	struct virtchnl2_queue_group_id qg_ids[1];
+	struct virtchnl2_queue_group_id qg_ids[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_delete_queue_groups, qg_ids);
 
 /**
  * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
@@ -1190,10 +1183,9 @@ struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
 
-	struct virtchnl2_vector_chunk vchunks[1];
+	struct virtchnl2_vector_chunk vchunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(48, virtchnl2_vector_chunks, vchunks);
 
 /**
  * struct virtchnl2_alloc_vectors - Vector allocation info
@@ -1215,8 +1207,7 @@ struct virtchnl2_alloc_vectors {
 
 	struct virtchnl2_vector_chunks vchunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(64, virtchnl2_alloc_vectors, vchunks.vchunks);
 
 /**
  * struct virtchnl2_rss_lut - RSS LUT info
@@ -1237,10 +1228,9 @@ struct virtchnl2_rss_lut {
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	__le32 lut[1];
+	__le32 lut[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_rss_lut, lut);
 
 /**
  * struct virtchnl2_rss_hash - RSS hash info
@@ -1389,10 +1379,9 @@ struct virtchnl2_ptype {
 	u8 ptype_id_8;
 	u8 proto_id_count;
 	__le16 pad;
-	__le16 proto_id[1];
+	__le16 proto_id[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(8, virtchnl2_ptype, proto_id);
 
 /**
  * struct virtchnl2_get_ptype_info - Packet type info
@@ -1428,7 +1417,7 @@ struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
-	struct virtchnl2_ptype ptype[1];
+	struct virtchnl2_ptype ptype[STRUCT_VAR_LEN];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
@@ -1629,10 +1618,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_chunk chunks[1];
+	struct virtchnl2_queue_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_queue_chunks, chunks);
 
 /**
  * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
@@ -1654,8 +1642,7 @@ struct virtchnl2_del_ena_dis_queues {
 
 	struct virtchnl2_queue_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(32, virtchnl2_del_ena_dis_queues, chunks.chunks);
 
 /**
  * struct virtchnl2_queue_vector - Queue to vector mapping
@@ -1699,10 +1686,10 @@ struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
 	__le16 num_qv_maps;
 	u8 pad[10];
-	struct virtchnl2_queue_vector qv_maps[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+	struct virtchnl2_queue_vector qv_maps[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_vector_maps, qv_maps);
 
 /**
  * struct virtchnl2_loopback - Loopback info
@@ -1754,10 +1741,10 @@ struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
 	__le16 num_mac_addr;
 	u8 pad[2];
-	struct virtchnl2_mac_addr mac_addr_list[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
+	struct virtchnl2_mac_addr mac_addr_list[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_mac_addr_list, mac_addr_list);
 
 /**
  * struct virtchnl2_promisc_info - Promiscuous type information
@@ -1856,10 +1843,10 @@ struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
+	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp,
+			       ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_get_ptp_caps - Get PTP capabilities
@@ -1884,8 +1871,8 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(88, virtchnl2_get_ptp_caps,
+			       tx_tstamp.ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
@@ -1920,13 +1907,12 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
+	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp_latches);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp_latches,
+			       tstamp_latches);
 
 static inline const char *virtchnl2_op_str(__le32 v_opcode)
 {
@@ -2004,314 +1990,4 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
 	}
 }
 
-/**
- * virtchnl2_vc_validate_vf_msg
- * @ver: Virtchnl2 version info
- * @v_opcode: Opcode for the message
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- *
- * Validate msg format against struct for each opcode.
- */
-static inline int
-virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
-			     u8 *msg, __le16 msglen)
-{
-	bool err_msg_format = false;
-	__le32 valid_len = 0;
-
-	/* Validate message length */
-	switch (v_opcode) {
-	case VIRTCHNL2_OP_VERSION:
-		valid_len = sizeof(struct virtchnl2_version_info);
-		break;
-	case VIRTCHNL2_OP_GET_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_capabilities);
-		break;
-	case VIRTCHNL2_OP_CREATE_VPORT:
-		valid_len = sizeof(struct virtchnl2_create_vport);
-		if (msglen >= valid_len) {
-			struct virtchnl2_create_vport *cvport =
-				(struct virtchnl2_create_vport *)msg;
-
-			if (cvport->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (cvport->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_CREATE_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_create_adi);
-		if (msglen >= valid_len) {
-			struct virtchnl2_non_flex_create_adi *cadi =
-				(struct virtchnl2_non_flex_create_adi *)msg;
-
-			if (cadi->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			if (cadi->vchunks.num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (cadi->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-			valid_len += (cadi->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_destroy_adi);
-		break;
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-		valid_len = sizeof(struct virtchnl2_vport);
-		break;
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_tx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_tx_queues *ctq =
-				(struct virtchnl2_config_tx_queues *)msg;
-			if (ctq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (ctq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_txq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_rx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_rx_queues *crq =
-				(struct virtchnl2_config_rx_queues *)msg;
-			if (crq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (crq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_rxq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUES:
-		valid_len = sizeof(struct virtchnl2_add_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_add_queues *add_q =
-				(struct virtchnl2_add_queues *)msg;
-
-			if (add_q->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (add_q->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_DEL_QUEUES:
-		valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_del_ena_dis_queues *qs =
-				(struct virtchnl2_del_ena_dis_queues *)msg;
-			if (qs->chunks.num_chunks == 0 ||
-			    qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (qs->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_add_queue_groups);
-		if (msglen != valid_len) {
-			__le64 offset;
-			__le32 i;
-			struct virtchnl2_add_queue_groups *add_queue_grp =
-				(struct virtchnl2_add_queue_groups *)msg;
-			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
-			struct virtchnl2_queue_group_info *grp_info;
-			__le32 chunk_size = sizeof(struct virtchnl2_queue_reg_chunk);
-			__le32 group_size = sizeof(struct virtchnl2_queue_group_info);
-			__le32 total_chunks_size;
-
-			if (groups->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (groups->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_info);
-			offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
-
-			for (i = 0; i < groups->num_queue_groups; i++) {
-				grp_info = (struct virtchnl2_queue_group_info *)
-						   ((u8 *)groups + offset);
-				if (grp_info->chunks.num_chunks == 0) {
-					offset += group_size;
-					continue;
-				}
-				total_chunks_size = (grp_info->chunks.num_chunks - 1) * chunk_size;
-				offset += group_size + total_chunks_size;
-				valid_len += total_chunks_size;
-			}
-		}
-		break;
-	case VIRTCHNL2_OP_DEL_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_delete_queue_groups);
-		if (msglen != valid_len) {
-			struct virtchnl2_delete_queue_groups *del_queue_grp =
-				(struct virtchnl2_delete_queue_groups *)msg;
-
-			if (del_queue_grp->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += (del_queue_grp->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_id);
-		}
-		break;
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-		valid_len = sizeof(struct virtchnl2_queue_vector_maps);
-		if (msglen >= valid_len) {
-			struct virtchnl2_queue_vector_maps *v_qp =
-				(struct virtchnl2_queue_vector_maps *)msg;
-			if (v_qp->num_qv_maps == 0 ||
-			    v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_qp->num_qv_maps - 1) *
-				      sizeof(struct virtchnl2_queue_vector);
-		}
-		break;
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_alloc_vectors);
-		if (msglen >= valid_len) {
-			struct virtchnl2_alloc_vectors *v_av =
-				(struct virtchnl2_alloc_vectors *)msg;
-
-			if (v_av->vchunks.num_vchunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (v_av->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_vector_chunks);
-		if (msglen >= valid_len) {
-			struct virtchnl2_vector_chunks *v_chunks =
-				(struct virtchnl2_vector_chunks *)msg;
-			if (v_chunks->num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_chunks->num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-		valid_len = sizeof(struct virtchnl2_rss_key);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_key *vrk =
-				(struct virtchnl2_rss_key *)msg;
-
-			if (vrk->key_len == 0) {
-				/* Zero length is allowed as input */
-				break;
-			}
-
-			valid_len += vrk->key_len - 1;
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-		valid_len = sizeof(struct virtchnl2_rss_lut);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_lut *vrl =
-				(struct virtchnl2_rss_lut *)msg;
-
-			if (vrl->lut_entries == 0) {
-				/* Zero entries is allowed as input */
-				break;
-			}
-
-			valid_len += (vrl->lut_entries - 1) * sizeof(vrl->lut);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-		valid_len = sizeof(struct virtchnl2_rss_hash);
-		break;
-	case VIRTCHNL2_OP_SET_SRIOV_VFS:
-		valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		valid_len = sizeof(struct virtchnl2_get_ptype_info);
-		break;
-	case VIRTCHNL2_OP_GET_STATS:
-		valid_len = sizeof(struct virtchnl2_vport_stats);
-		break;
-	case VIRTCHNL2_OP_GET_PORT_STATS:
-		valid_len = sizeof(struct virtchnl2_port_stats);
-		break;
-	case VIRTCHNL2_OP_RESET_VF:
-		break;
-	case VIRTCHNL2_OP_GET_PTP_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_get_ptp_caps *ptp_caps =
-			(struct virtchnl2_get_ptp_caps *)msg;
-
-			if (ptp_caps->tx_tstamp.num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_entry));
-		}
-		break;
-	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
-		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
-			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
-
-			if (tx_tstamp_latches->num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((tx_tstamp_latches->num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
-		}
-		break;
-	/* These are always errors coming from the VF */
-	case VIRTCHNL2_OP_EVENT:
-	case VIRTCHNL2_OP_UNKNOWN:
-	default:
-		return VIRTCHNL2_STATUS_ERR_ESRCH;
-	}
-	/* Few more checks */
-	if (err_msg_format || valid_len != msglen)
-		return VIRTCHNL2_STATUS_ERR_EINVAL;
-
-	return 0;
-}
-
 #endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index c46ed50eb5..f00202f43c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -366,7 +366,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	int err = -1;
 
 	size = sizeof(*p2p_queue_grps_info) +
-	       (p2p_queue_grps_info->qg_info.num_queue_groups - 1) *
+	       (p2p_queue_grps_info->num_queue_groups - 1) *
 		   sizeof(struct virtchnl2_queue_group_info);
 
 	memset(&args, 0, sizeof(args));
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e718e9e19..e707043bf7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2393,18 +2393,18 @@ cpfl_p2p_q_grps_add(struct idpf_vport *vport,
 	int ret;
 
 	p2p_queue_grps_info->vport_id = vport->vport_id;
-	p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
-	p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
+	p2p_queue_grps_info->num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
+	p2p_queue_grps_info->groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
+	p2p_queue_grps_info->groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
+	p2p_queue_grps_info->groups[0].rx_q_grp_info.rss_lut_size = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.tx_tc = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.priority = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.is_sp = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.pir_weight = 0;
 
 	ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info);
 	if (ret != 0) {
@@ -2423,13 +2423,13 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
 	struct virtchnl2_queue_reg_chunks *vc_chunks_out;
 	int i, type;
 
-	if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
+	if (p2p_q_vc_out_info->groups[0].qg_id.queue_group_type !=
 	    VIRTCHNL2_QUEUE_GROUP_P2P) {
 		PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
 		return -EINVAL;
 	}
 
-	vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
+	vc_chunks_out = &p2p_q_vc_out_info->groups[0].chunks;
 
 	for (i = 0; i < vc_chunks_out->num_chunks; i++) {
 		type = vc_chunks_out->chunks[i].type;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 17/21] common/idpf: enable flow steer capability for vports
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (15 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 18/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
                           ` (3 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add virtchnl2_flow_types to be used for flow steering.

Add flow steering capability flags for vport create, along with
the supported flow types and action types.
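As a rough illustration (not part of the patch), a consumer of the new capability fields might combine the flow-type and action bitmasks like this. The reduced enums mirror the values added by the patch, but the helper name is hypothetical:

```c
#include <stdint.h>

#define BIT(n) (1ULL << (n))

/* Reduced mirrors of the bitmasks added by this patch */
enum virtchnl2_flow_types {
	VIRTCHNL2_FLOW_IPV4_TCP	= BIT(0),
	VIRTCHNL2_FLOW_IPV4_UDP	= BIT(1),
};

enum virtchnl2_action_types {
	VIRTCHNL2_ACTION_DROP	= BIT(0),
	VIRTCHNL2_ACTION_QUEUE	= BIT(2),
};

/* Hypothetical helper: returns 1 when IPv4/TCP flows can be steered
 * to an explicit Rx queue via sideband flow steering, 0 otherwise.
 */
static int can_steer_ipv4_tcp_to_queue(uint64_t sideband_flow_types,
				       uint32_t sideband_flow_actions)
{
	return ((sideband_flow_types & VIRTCHNL2_FLOW_IPV4_TCP) &&
		(sideband_flow_actions & VIRTCHNL2_ACTION_QUEUE)) ? 1 : 0;
}
```

Both masks come back from the CP in virtchnl2_create_vport, so a check like this would typically run once at vport init time.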

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 60 ++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 317bd06c0f..c14a4e2c7d 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -269,6 +269,43 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
 };
 
+/**
+ * enum virtchnl2_action_types - Available actions for sideband flow steering
+ * @VIRTCHNL2_ACTION_DROP: Drop the packet
+ * @VIRTCHNL2_ACTION_PASSTHRU: Forward the packet to the next classifier/stage
+ * @VIRTCHNL2_ACTION_QUEUE: Forward the packet to a receive queue
+ * @VIRTCHNL2_ACTION_Q_GROUP: Forward the packet to a receive queue group
+ * @VIRTCHNL2_ACTION_MARK: Mark the packet with specific marker value
+ * @VIRTCHNL2_ACTION_COUNT: Increment the corresponding counter
+ */
+
+enum virtchnl2_action_types {
+	VIRTCHNL2_ACTION_DROP		= BIT(0),
+	VIRTCHNL2_ACTION_PASSTHRU	= BIT(1),
+	VIRTCHNL2_ACTION_QUEUE		= BIT(2),
+	VIRTCHNL2_ACTION_Q_GROUP	= BIT(3),
+	VIRTCHNL2_ACTION_MARK		= BIT(4),
+	VIRTCHNL2_ACTION_COUNT		= BIT(5),
+};
+
+/* Flow type capabilities for Flow Steering and Receive-Side Scaling */
+enum virtchnl2_flow_types {
+	VIRTCHNL2_FLOW_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_FLOW_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_FLOW_IPV4_SCTP	= BIT(2),
+	VIRTCHNL2_FLOW_IPV4_OTHER	= BIT(3),
+	VIRTCHNL2_FLOW_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_FLOW_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_FLOW_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_FLOW_IPV6_OTHER	= BIT(7),
+	VIRTCHNL2_FLOW_IPV4_AH		= BIT(8),
+	VIRTCHNL2_FLOW_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_FLOW_IPV4_AH_ESP	= BIT(10),
+	VIRTCHNL2_FLOW_IPV6_AH		= BIT(11),
+	VIRTCHNL2_FLOW_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_FLOW_IPV6_AH_ESP	= BIT(13),
+};
+
 /**
  * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
  * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
@@ -707,11 +744,16 @@ VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 /**
  * enum virtchnl2_vport_flags - Vport flags
  * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
- * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER: Inline flow steering enabled
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ: Inline flow steering enabled
+ * with explicit Rx queue action
+ * @VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER: Sideband flow steering enabled
  */
 enum virtchnl2_vport_flags {
 	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
-	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ	= BIT(2),
+	VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	= BIT(3),
 };
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
@@ -739,6 +781,14 @@ enum virtchnl2_vport_flags {
  * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
  * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
  * @reserved: Reserved bytes and cannot be used
+ * @inline_flow_types: Bit mask of supported inline-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_types: Bit mask of supported sideband-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_actions: Bit mask of supported action types
+ *  for sideband flow steering (See enum virtchnl2_action_types)
+ * @flow_steer_max_rules: Max rules allowed for inline and sideband
+ *  flow steering combined
  * @rss_algorithm: RSS algorithm
  * @rss_key_size: RSS key size
  * @rss_lut_size: RSS LUT size
@@ -768,7 +818,11 @@ struct virtchnl2_create_vport {
 	__le16 vport_flags;
 	__le64 rx_desc_ids;
 	__le64 tx_desc_ids;
-	u8 reserved[72];
+	u8 reserved[48];
+	__le64 inline_flow_types;
+	__le64 sideband_flow_types;
+	__le32 sideband_flow_actions;
+	__le32 flow_steer_max_rules;
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-- 
2.43.0



* [PATCH v4 18/21] common/idpf: add a new Tx context descriptor structure
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (16 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 17/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 19/21] common/idpf: remove idpf common file Soumyadeep Hore
                           ` (2 subsequent siblings)
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a new Tx context descriptor structure that supports timesync
packets, where the index for timestamping is set.
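To illustrate the field packing implied by the new descriptor (not part of the patch), the MSS encoding under IDPF_TX_DESC_CTX_MSS_M (bits 14:2 of qw1.mss) can be sketched as below; the GENMASK16 macro and the encoder name are illustrative only:

```c
#include <stdint.h>

/* GENMASK over a 16-bit field, mirroring the kernel-style masks
 * used by the new descriptor definitions.
 */
#define GENMASK16(h, l) \
	((uint16_t)((0xFFFFU << (l)) & (0xFFFFU >> (15 - (h)))))

#define IDPF_TX_DESC_CTX_MSS_M	GENMASK16(14, 2)

/* Hypothetical encoder: place an MSS value into bits 14:2,
 * discarding anything that does not fit in the field.
 */
static uint16_t idpf_ctx_encode_mss(uint16_t mss)
{
	return (uint16_t)((mss << 2) & IDPF_TX_DESC_CTX_MSS_M);
}
```

The other qw1 fields (tsyn_reg_l/tsyn_reg_h) would be packed the same way against their respective masks.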

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_lan_txrx.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/idpf_lan_txrx.h b/drivers/common/idpf/base/idpf_lan_txrx.h
index c9eaeb5d3f..be27973a33 100644
--- a/drivers/common/idpf/base/idpf_lan_txrx.h
+++ b/drivers/common/idpf/base/idpf_lan_txrx.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_LAN_TXRX_H_
@@ -286,6 +286,24 @@ struct idpf_flex_tx_tso_ctx_qw {
 };
 
 union idpf_flex_tx_ctx_desc {
+		/* DTYPE = IDPF_TX_DESC_DTYPE_CTX (0x01) */
+	struct  {
+		struct {
+			u8 rsv[4];
+			__le16 l2tag2;
+			u8 rsv_2[2];
+		} qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 tsyn_reg_l;
+#define IDPF_TX_DESC_CTX_TSYN_L_M	GENMASK(15, 14)
+			__le16 tsyn_reg_h;
+#define IDPF_TX_DESC_CTX_TSYN_H_M	GENMASK(15, 0)
+			__le16 mss;
+#define IDPF_TX_DESC_CTX_MSS_M		GENMASK(14, 2)
+		} qw1;
+	} tsyn;
+
 	/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
 	struct {
 		struct idpf_flex_tx_tso_ctx_qw qw0;
-- 
2.43.0



* [PATCH v4 19/21] common/idpf: remove idpf common file
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (17 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 18/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The idpf_common.c file is redundant in this implementation and is
no longer required.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c | 382 -------------------------
 drivers/common/idpf/base/meson.build   |   1 -
 2 files changed, 383 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
deleted file mode 100644
index bb540345c2..0000000000
--- a/drivers/common/idpf/base/idpf_common.c
+++ /dev/null
@@ -1,382 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2024 Intel Corporation
- */
-
-#include "idpf_prototype.h"
-#include "idpf_type.h"
-#include <virtchnl.h>
-
-
-/**
- * idpf_set_mac_type - Sets MAC type
- * @hw: pointer to the HW structure
- *
- * This function sets the mac type of the adapter based on the
- * vendor ID and device ID stored in the hw structure.
- */
-int idpf_set_mac_type(struct idpf_hw *hw)
-{
-	int status = 0;
-
-	DEBUGFUNC("Set MAC type\n");
-
-	if (hw->vendor_id == IDPF_INTEL_VENDOR_ID) {
-		switch (hw->device_id) {
-		case IDPF_DEV_ID_PF:
-			hw->mac.type = IDPF_MAC_PF;
-			break;
-		case IDPF_DEV_ID_VF:
-			hw->mac.type = IDPF_MAC_VF;
-			break;
-		default:
-			hw->mac.type = IDPF_MAC_GENERIC;
-			break;
-		}
-	} else {
-		status = -ENODEV;
-	}
-
-	DEBUGOUT2("Setting MAC type found mac: %d, returns: %d\n",
-		  hw->mac.type, status);
-	return status;
-}
-
-/**
- *  idpf_init_hw - main initialization routine
- *  @hw: pointer to the hardware structure
- *  @ctlq_size: struct to pass ctlq size data
- */
-int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
-{
-	struct idpf_ctlq_create_info *q_info;
-	int status = 0;
-	struct idpf_ctlq_info *cq = NULL;
-
-	/* Setup initial control queues */
-	q_info = (struct idpf_ctlq_create_info *)
-		 idpf_calloc(hw, 2, sizeof(struct idpf_ctlq_create_info));
-	if (!q_info)
-		return -ENOMEM;
-
-	q_info[0].type             = IDPF_CTLQ_TYPE_MAILBOX_TX;
-	q_info[0].buf_size         = ctlq_size.asq_buf_size;
-	q_info[0].len              = ctlq_size.asq_ring_size;
-	q_info[0].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[0].reg.head         = PF_FW_ATQH;
-		q_info[0].reg.tail         = PF_FW_ATQT;
-		q_info[0].reg.len          = PF_FW_ATQLEN;
-		q_info[0].reg.bah          = PF_FW_ATQBAH;
-		q_info[0].reg.bal          = PF_FW_ATQBAL;
-		q_info[0].reg.len_mask     = PF_FW_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = PF_FW_ATQH_ATQH_M;
-	} else {
-		q_info[0].reg.head         = VF_ATQH;
-		q_info[0].reg.tail         = VF_ATQT;
-		q_info[0].reg.len          = VF_ATQLEN;
-		q_info[0].reg.bah          = VF_ATQBAH;
-		q_info[0].reg.bal          = VF_ATQBAL;
-		q_info[0].reg.len_mask     = VF_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = VF_ATQH_ATQH_M;
-	}
-
-	q_info[1].type             = IDPF_CTLQ_TYPE_MAILBOX_RX;
-	q_info[1].buf_size         = ctlq_size.arq_buf_size;
-	q_info[1].len              = ctlq_size.arq_ring_size;
-	q_info[1].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[1].reg.head         = PF_FW_ARQH;
-		q_info[1].reg.tail         = PF_FW_ARQT;
-		q_info[1].reg.len          = PF_FW_ARQLEN;
-		q_info[1].reg.bah          = PF_FW_ARQBAH;
-		q_info[1].reg.bal          = PF_FW_ARQBAL;
-		q_info[1].reg.len_mask     = PF_FW_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = PF_FW_ARQH_ARQH_M;
-	} else {
-		q_info[1].reg.head         = VF_ARQH;
-		q_info[1].reg.tail         = VF_ARQT;
-		q_info[1].reg.len          = VF_ARQLEN;
-		q_info[1].reg.bah          = VF_ARQBAH;
-		q_info[1].reg.bal          = VF_ARQBAL;
-		q_info[1].reg.len_mask     = VF_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = VF_ARQH_ARQH_M;
-	}
-
-	status = idpf_ctlq_init(hw, 2, q_info);
-	if (status) {
-		/* TODO return error */
-		idpf_free(hw, q_info);
-		return status;
-	}
-
-	LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, idpf_ctlq_info, cq_list) {
-		if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = cq;
-		else if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = cq;
-	}
-
-	/* TODO hardcode a mac addr for now */
-	hw->mac.addr[0] = 0x00;
-	hw->mac.addr[1] = 0x00;
-	hw->mac.addr[2] = 0x00;
-	hw->mac.addr[3] = 0x00;
-	hw->mac.addr[4] = 0x03;
-	hw->mac.addr[5] = 0x14;
-
-	idpf_free(hw, q_info);
-
-	return 0;
-}
-
-/**
- * idpf_send_msg_to_cp
- * @hw: pointer to the hardware structure
- * @v_opcode: opcodes for VF-PF communication
- * @v_retval: return error code
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- * @cmd_details: pointer to command details
- *
- * Send message to CP. By default, this message
- * is sent asynchronously, i.e. idpf_asq_send_command() does not wait for
- * completion before returning.
- */
-int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
-			int v_retval, u8 *msg, u16 msglen)
-{
-	struct idpf_ctlq_msg ctlq_msg = { 0 };
-	struct idpf_dma_mem dma_mem = { 0 };
-	int status;
-
-	ctlq_msg.opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg.func_id = 0;
-	ctlq_msg.data_len = msglen;
-	ctlq_msg.cookie.mbx.chnl_retval = v_retval;
-	ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
-
-	if (msglen > 0) {
-		dma_mem.va = (struct idpf_dma_mem *)
-			  idpf_alloc_dma_mem(hw, &dma_mem, msglen);
-		if (!dma_mem.va)
-			return -ENOMEM;
-
-		idpf_memcpy(dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
-		ctlq_msg.ctx.indirect.payload = &dma_mem;
-	}
-	status = idpf_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
-
-	if (dma_mem.va)
-		idpf_free_dma_mem(hw, &dma_mem);
-
-	return status;
-}
-
-/**
- *  idpf_asq_done - check if FW has processed the Admin Send Queue
- *  @hw: pointer to the hw struct
- *
- *  Returns true if the firmware has processed all descriptors on the
- *  admin send queue. Returns false if there are still requests pending.
- */
-bool idpf_asq_done(struct idpf_hw *hw)
-{
-	/* AQ designers suggest use of head for better
-	 * timing reliability than DD bit
-	 */
-	return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
-}
-
-/**
- * idpf_check_asq_alive
- * @hw: pointer to the hw struct
- *
- * Returns true if Queue is enabled else false.
- */
-bool idpf_check_asq_alive(struct idpf_hw *hw)
-{
-	if (hw->asq->reg.len)
-		return !!(rd32(hw, hw->asq->reg.len) &
-			  PF_FW_ATQLEN_ATQENABLE_M);
-
-	return false;
-}
-
-/**
- *  idpf_clean_arq_element
- *  @hw: pointer to the hw struct
- *  @e: event info from the receive descriptor, includes any buffers
- *  @pending: number of events that could be left to process
- *
- *  This function cleans one Admin Receive Queue element and returns
- *  the contents through e.  It can also return how many events are
- *  left to process through 'pending'
- */
-int idpf_clean_arq_element(struct idpf_hw *hw,
-			   struct idpf_arq_event_info *e, u16 *pending)
-{
-	struct idpf_dma_mem *dma_mem = NULL;
-	struct idpf_ctlq_msg msg = { 0 };
-	int status;
-	u16 msg_data_len;
-
-	*pending = 1;
-
-	status = idpf_ctlq_recv(hw->arq, pending, &msg);
-	if (status == -ENOMSG)
-		goto exit;
-
-	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
-	e->desc.opcode = msg.opcode;
-	e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
-	e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
-	e->desc.ret_val = msg.status;
-	e->desc.datalen = msg.data_len;
-	if (msg.data_len > 0) {
-		if (!msg.ctx.indirect.payload || !msg.ctx.indirect.payload->va ||
-		    !e->msg_buf) {
-			return -EFAULT;
-		}
-		e->buf_len = msg.data_len;
-		msg_data_len = msg.data_len;
-		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
-			    IDPF_DMA_TO_NONDMA);
-		dma_mem = msg.ctx.indirect.payload;
-	} else {
-		*pending = 0;
-	}
-
-	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
-
-exit:
-	return status;
-}
-
-/**
- *  idpf_deinit_hw - shutdown routine
- *  @hw: pointer to the hardware structure
- */
-void idpf_deinit_hw(struct idpf_hw *hw)
-{
-	hw->asq = NULL;
-	hw->arq = NULL;
-
-	idpf_ctlq_deinit(hw);
-}
-
-/**
- * idpf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a RESET message to the CPF. Does not wait for response from CPF
- * as none will be forthcoming. Immediately after calling this function,
- * the control queue should be shut down and (optionally) reinitialized.
- */
-int idpf_reset(struct idpf_hw *hw)
-{
-	return idpf_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
-				      0, NULL, 0);
-}
-
-/**
- * idpf_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- */
-STATIC int idpf_get_set_rss_lut(struct idpf_hw *hw, u16 vsi_id,
-				bool pf_lut, u8 *lut, u16 lut_size,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- */
-int idpf_get_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
-}
-
-/**
- * idpf_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- */
-int idpf_set_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * idpf_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- */
-STATIC int idpf_get_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-				struct idpf_get_set_rss_key_data *key,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- */
-int idpf_get_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * idpf_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-int idpf_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, true);
-}
-
-RTE_LOG_REGISTER_DEFAULT(idpf_common_logger, NOTICE);
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 96d7642209..649c44d0ae 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -2,7 +2,6 @@
 # Copyright(c) 2023 Intel Corporation
 
 sources += files(
-        'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
 )
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v4 20/21] drivers: adding type to idpf vc queue switch
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (18 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 19/21] common/idpf: remove idpf common file Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-18 10:57         ` [PATCH v4 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
  20 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add an argument named 'type' to idpf_vc_queue_switch() so the
caller specifies the queue type explicitly. This fixes the use of
an improper queue type in the virtchnl2 message.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/idpf_common_virtchnl.c |  8 ++------
 drivers/common/idpf/idpf_common_virtchnl.h |  2 +-
 drivers/net/cpfl/cpfl_ethdev.c             | 12 ++++++++----
 drivers/net/cpfl/cpfl_rxtx.c               | 12 ++++++++----
 drivers/net/idpf/idpf_rxtx.c               | 12 ++++++++----
 5 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f00202f43c..de511da788 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -769,15 +769,11 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
+		     bool rx, bool on, uint32_t type)
 {
-	uint32_t type;
 	int err, queue_id;
 
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+	if (rx)
 		queue_id = vport->chunks_info.rx_start_qid + qid;
 	else
 		queue_id = vport->chunks_info.tx_start_qid + qid;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 73446ded86..d6555978d5 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -31,7 +31,7 @@ int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
 int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-			 bool rx, bool on);
+			 bool rx, bool on, uint32_t type);
 __rte_internal
 int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e707043bf7..9e2a74371e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1907,7 +1907,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	int i, ret;
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
 			return ret;
@@ -1915,7 +1916,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
 			return ret;
@@ -1943,7 +1945,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
 			return ret;
@@ -1951,7 +1954,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
 			return ret;
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab8bec4645..47351ca102 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1200,7 +1200,8 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1252,7 +1253,8 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1283,7 +1285,8 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 						     rx_queue_id - cpfl_vport->nb_data_txq,
 						     true, false);
 	else
-		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+								VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1331,7 +1334,8 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 						     tx_queue_id - cpfl_vport->nb_data_txq,
 						     false, false);
 	else
-		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+								VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 64f2235580..858bbefe3b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -595,7 +595,8 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -646,7 +647,8 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -669,7 +671,8 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -701,7 +704,8 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.43.0



* [PATCH v4 21/21] doc: updated the documentation for cpfl PMD
  2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
                           ` (19 preceding siblings ...)
  2024-06-18 10:57         ` [PATCH v4 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
@ 2024-06-18 10:57         ` Soumyadeep Hore
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
  20 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-18 10:57 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Update the cpfl PMD documentation with the latest supported
MEV TS firmware version, which is 1.4.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 doc/guides/nics/cpfl.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 9b7a99c894..528c809819 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -35,6 +35,8 @@ Here is the suggested matching list which has been tested and verified.
    +------------+------------------+
    |    23.11   |       1.0        |
    +------------+------------------+
+   |    24.07   |       1.4        |
+   +------------+------------------+
 
 
 Configuration
-- 
2.43.0



* [PATCH v5 00/21] Update MEV TS Base Driver
  2024-06-18 10:57         ` [PATCH v4 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
@ 2024-06-24  9:16           ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
                               ` (21 more replies)
  0 siblings, 22 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

---
v5:
- Removed warning from patch 6
---
v4:
- Removed 1st patch as we are not using NVME_CPF flag
- Addressed comments
---
v3:
- Removed additional whitespace changes
- Fixed warnings of CI
- Updated documentation relating to MEV TS FW release
---
v2:
- Changed implementation based on review comments
- Fixed compilation errors for Windows, Alpine and FreeBSD
---

Soumyadeep Hore (21):
  common/idpf: updated IDPF VF device ID
  common/idpf: added new virtchnl2 capability and vport flag
  common/idpf: moved the idpf HW into API header file
  common/idpf: avoid defensive programming
  common/idpf: use BIT ULL for large bitmaps
  common/idpf: convert data type to 'le'
  common/idpf: compress RXDID mask definitions
  common/idpf: refactor size check macro
  common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  common/idpf: use 'pad' and 'reserved' fields appropriately
  common/idpf: move related defines into enums
  common/idpf: avoid variable 0-init
  common/idpf: update in PTP message validation
  common/idpf: rename INLINE FLOW STEER to FLOW STEER
  common/idpf: add wmb before tail
  drivers: add flex array support and fix issues
  common/idpf: enable flow steer capability for vports
  common/idpf: add a new Tx context descriptor structure
  common/idpf: remove idpf common file
  drivers: adding type to idpf vc queue switch
  doc: updated the documentation for cpfl PMD

 doc/guides/nics/cpfl.rst                      |    2 +
 drivers/common/idpf/base/idpf_common.c        |  382 ---
 drivers/common/idpf/base/idpf_controlq.c      |   66 +-
 drivers/common/idpf/base/idpf_controlq.h      |  107 +-
 drivers/common/idpf/base/idpf_controlq_api.h  |   35 +
 .../common/idpf/base/idpf_controlq_setup.c    |   18 +-
 drivers/common/idpf/base/idpf_devids.h        |    5 +-
 drivers/common/idpf/base/idpf_lan_txrx.h      |   20 +-
 drivers/common/idpf/base/idpf_osdep.h         |   72 +-
 drivers/common/idpf/base/idpf_type.h          |    4 +-
 drivers/common/idpf/base/meson.build          |    1 -
 drivers/common/idpf/base/virtchnl2.h          | 2388 +++++++++--------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  842 ++++--
 drivers/common/idpf/idpf_common_virtchnl.c    |   10 +-
 drivers/common/idpf/idpf_common_virtchnl.h    |    2 +-
 drivers/net/cpfl/cpfl_ethdev.c                |   40 +-
 drivers/net/cpfl/cpfl_rxtx.c                  |   12 +-
 drivers/net/idpf/idpf_rxtx.c                  |   12 +-
 18 files changed, 2037 insertions(+), 1981 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

-- 
2.43.0



* [PATCH v5 01/21] common/idpf: updated IDPF VF device ID
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 02/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
                               ` (20 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Update the IDPF VF device ID to 0x145C.

Also add a device ID for the S-IOV device.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_devids.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_devids.h b/drivers/common/idpf/base/idpf_devids.h
index c47762d5b7..1ae99fcee1 100644
--- a/drivers/common/idpf/base/idpf_devids.h
+++ b/drivers/common/idpf/base/idpf_devids.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_DEVIDS_H_
@@ -10,7 +10,8 @@
 
 /* Device IDs */
 #define IDPF_DEV_ID_PF			0x1452
-#define IDPF_DEV_ID_VF			0x1889
+#define IDPF_DEV_ID_VF			0x145C
+#define IDPF_DEV_ID_VF_SIOV		0x0DD5
 
 
 
-- 
2.43.0



* [PATCH v5 02/21] common/idpf: added new virtchnl2 capability and vport flag
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 03/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
                               ` (19 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Remove the unused VIRTCHNL2_CAP_ADQ capability and reuse that bit
for the VIRTCHNL2_CAP_INLINE_FLOW_STEER capability.

Add the VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA vport flag to allow
enabling/disabling inline flow steering per vport.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 3900b784d0..6eff0f1ea1 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _VIRTCHNL2_H_
@@ -220,7 +220,7 @@
 #define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
 #define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
 #define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_ADQ			BIT(6)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
 #define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
 #define VIRTCHNL2_CAP_PROMISC			BIT(8)
 #define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
@@ -593,7 +593,8 @@ struct virtchnl2_queue_reg_chunks {
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
 /* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT	BIT(0)
+#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
+#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-- 
2.43.0



* [PATCH v5 03/21] common/idpf: moved the idpf HW into API header file
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 02/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 04/21] common/idpf: avoid defensive programming Soumyadeep Hore
                               ` (18 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

There is a recursive header include problem around the idpf_hw
structure: controlq.h holds the structure definition, which the osdep
header needs, but controlq.h in turn needs the osdep header contents,
so the two headers depend on each other.

Moving the definition from controlq.h into api.h resolves the problem.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c       |   4 +-
 drivers/common/idpf/base/idpf_controlq.h     | 107 +------------------
 drivers/common/idpf/base/idpf_controlq_api.h |  35 ++++++
 drivers/common/idpf/base/idpf_osdep.h        |  72 ++++++++++++-
 drivers/common/idpf/base/idpf_type.h         |   4 +-
 5 files changed, 111 insertions(+), 111 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 7181a7f14c..bb540345c2 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -1,9 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
-#include "idpf_type.h"
 #include "idpf_prototype.h"
+#include "idpf_type.h"
 #include <virtchnl.h>
 
 
diff --git a/drivers/common/idpf/base/idpf_controlq.h b/drivers/common/idpf/base/idpf_controlq.h
index 80ca06e632..3f74b5a898 100644
--- a/drivers/common/idpf/base/idpf_controlq.h
+++ b/drivers/common/idpf/base/idpf_controlq.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_CONTROLQ_H_
@@ -96,111 +96,6 @@ struct idpf_mbxq_desc {
 	u32 pf_vf_id;		/* used by CP when sending to PF */
 };
 
-enum idpf_mac_type {
-	IDPF_MAC_UNKNOWN = 0,
-	IDPF_MAC_PF,
-	IDPF_MAC_VF,
-	IDPF_MAC_GENERIC
-};
-
-#define ETH_ALEN 6
-
-struct idpf_mac_info {
-	enum idpf_mac_type type;
-	u8 addr[ETH_ALEN];
-	u8 perm_addr[ETH_ALEN];
-};
-
-#define IDPF_AQ_LINK_UP 0x1
-
-/* PCI bus types */
-enum idpf_bus_type {
-	idpf_bus_type_unknown = 0,
-	idpf_bus_type_pci,
-	idpf_bus_type_pcix,
-	idpf_bus_type_pci_express,
-	idpf_bus_type_reserved
-};
-
-/* PCI bus speeds */
-enum idpf_bus_speed {
-	idpf_bus_speed_unknown	= 0,
-	idpf_bus_speed_33	= 33,
-	idpf_bus_speed_66	= 66,
-	idpf_bus_speed_100	= 100,
-	idpf_bus_speed_120	= 120,
-	idpf_bus_speed_133	= 133,
-	idpf_bus_speed_2500	= 2500,
-	idpf_bus_speed_5000	= 5000,
-	idpf_bus_speed_8000	= 8000,
-	idpf_bus_speed_reserved
-};
-
-/* PCI bus widths */
-enum idpf_bus_width {
-	idpf_bus_width_unknown	= 0,
-	idpf_bus_width_pcie_x1	= 1,
-	idpf_bus_width_pcie_x2	= 2,
-	idpf_bus_width_pcie_x4	= 4,
-	idpf_bus_width_pcie_x8	= 8,
-	idpf_bus_width_32	= 32,
-	idpf_bus_width_64	= 64,
-	idpf_bus_width_reserved
-};
-
-/* Bus parameters */
-struct idpf_bus_info {
-	enum idpf_bus_speed speed;
-	enum idpf_bus_width width;
-	enum idpf_bus_type type;
-
-	u16 func;
-	u16 device;
-	u16 lan_id;
-	u16 bus_id;
-};
-
-/* Function specific capabilities */
-struct idpf_hw_func_caps {
-	u32 num_alloc_vfs;
-	u32 vf_base_id;
-};
-
-/* Define the APF hardware struct to replace other control structs as needed
- * Align to ctlq_hw_info
- */
-struct idpf_hw {
-	/* Some part of BAR0 address space is not mapped by the LAN driver.
-	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
-	 * will have its own base hardware address when mapped.
-	 */
-	u8 *hw_addr;
-	u8 *hw_addr_region2;
-	u64 hw_addr_len;
-	u64 hw_addr_region2_len;
-
-	void *back;
-
-	/* control queue - send and receive */
-	struct idpf_ctlq_info *asq;
-	struct idpf_ctlq_info *arq;
-
-	/* subsystem structs */
-	struct idpf_mac_info mac;
-	struct idpf_bus_info bus;
-	struct idpf_hw_func_caps func_caps;
-
-	/* pci info */
-	u16 device_id;
-	u16 vendor_id;
-	u16 subsystem_device_id;
-	u16 subsystem_vendor_id;
-	u8 revision_id;
-	bool adapter_stopped;
-
-	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
-};
-
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
 			     struct idpf_ctlq_info *cq);
 
diff --git a/drivers/common/idpf/base/idpf_controlq_api.h b/drivers/common/idpf/base/idpf_controlq_api.h
index 38f5d2df3c..8a90258099 100644
--- a/drivers/common/idpf/base/idpf_controlq_api.h
+++ b/drivers/common/idpf/base/idpf_controlq_api.h
@@ -154,6 +154,41 @@ enum idpf_mbx_opc {
 	idpf_mbq_opc_send_msg_to_peer_drv	= 0x0804,
 };
 
+/* Define the APF hardware struct to replace other control structs as needed
+ * Align to ctlq_hw_info
+ */
+struct idpf_hw {
+	/* Some part of BAR0 address space is not mapped by the LAN driver.
+	 * This results in 2 regions of BAR0 to be mapped by LAN driver which
+	 * will have its own base hardware address when mapped.
+	 */
+	u8 *hw_addr;
+	u8 *hw_addr_region2;
+	u64 hw_addr_len;
+	u64 hw_addr_region2_len;
+
+	void *back;
+
+	/* control queue - send and receive */
+	struct idpf_ctlq_info *asq;
+	struct idpf_ctlq_info *arq;
+
+	/* subsystem structs */
+	struct idpf_mac_info mac;
+	struct idpf_bus_info bus;
+	struct idpf_hw_func_caps func_caps;
+
+	/* pci info */
+	u16 device_id;
+	u16 vendor_id;
+	u16 subsystem_device_id;
+	u16 subsystem_vendor_id;
+	u8 revision_id;
+	bool adapter_stopped;
+
+	LIST_HEAD_TYPE(list_head, idpf_ctlq_info) cq_list_head;
+};
+
 /* API supported for control queue management */
 /* Will init all required q including default mb.  "q_info" is an array of
  * create_info structs equal to the number of control queues to be created.
diff --git a/drivers/common/idpf/base/idpf_osdep.h b/drivers/common/idpf/base/idpf_osdep.h
index 74a376cb13..b2af8f443d 100644
--- a/drivers/common/idpf/base/idpf_osdep.h
+++ b/drivers/common/idpf/base/idpf_osdep.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_OSDEP_H_
@@ -353,4 +353,74 @@ idpf_hweight32(u32 num)
 
 #endif
 
+enum idpf_mac_type {
+	IDPF_MAC_UNKNOWN = 0,
+	IDPF_MAC_PF,
+	IDPF_MAC_VF,
+	IDPF_MAC_GENERIC
+};
+
+#define ETH_ALEN 6
+
+struct idpf_mac_info {
+	enum idpf_mac_type type;
+	u8 addr[ETH_ALEN];
+	u8 perm_addr[ETH_ALEN];
+};
+
+#define IDPF_AQ_LINK_UP 0x1
+
+/* PCI bus types */
+enum idpf_bus_type {
+	idpf_bus_type_unknown = 0,
+	idpf_bus_type_pci,
+	idpf_bus_type_pcix,
+	idpf_bus_type_pci_express,
+	idpf_bus_type_reserved
+};
+
+/* PCI bus speeds */
+enum idpf_bus_speed {
+	idpf_bus_speed_unknown	= 0,
+	idpf_bus_speed_33	= 33,
+	idpf_bus_speed_66	= 66,
+	idpf_bus_speed_100	= 100,
+	idpf_bus_speed_120	= 120,
+	idpf_bus_speed_133	= 133,
+	idpf_bus_speed_2500	= 2500,
+	idpf_bus_speed_5000	= 5000,
+	idpf_bus_speed_8000	= 8000,
+	idpf_bus_speed_reserved
+};
+
+/* PCI bus widths */
+enum idpf_bus_width {
+	idpf_bus_width_unknown	= 0,
+	idpf_bus_width_pcie_x1	= 1,
+	idpf_bus_width_pcie_x2	= 2,
+	idpf_bus_width_pcie_x4	= 4,
+	idpf_bus_width_pcie_x8	= 8,
+	idpf_bus_width_32	= 32,
+	idpf_bus_width_64	= 64,
+	idpf_bus_width_reserved
+};
+
+/* Bus parameters */
+struct idpf_bus_info {
+	enum idpf_bus_speed speed;
+	enum idpf_bus_width width;
+	enum idpf_bus_type type;
+
+	u16 func;
+	u16 device;
+	u16 lan_id;
+	u16 bus_id;
+};
+
+/* Function specific capabilities */
+struct idpf_hw_func_caps {
+	u32 num_alloc_vfs;
+	u32 vf_base_id;
+};
+
 #endif /* _IDPF_OSDEP_H_ */
diff --git a/drivers/common/idpf/base/idpf_type.h b/drivers/common/idpf/base/idpf_type.h
index a22d28f448..2ff818035b 100644
--- a/drivers/common/idpf/base/idpf_type.h
+++ b/drivers/common/idpf/base/idpf_type.h
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_TYPE_H_
 #define _IDPF_TYPE_H_
 
-#include "idpf_controlq.h"
+#include "idpf_osdep.h"
 
 #define UNREFERENCED_XPARAMETER
 #define UNREFERENCED_1PARAMETER(_p)
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 04/21] common/idpf: avoid defensive programming
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (2 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 03/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 05/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
                               ` (17 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on upstream feedback, the driver should not follow a
defensive programming strategy of checking for NULL pointers
and adding other unnecessary conditional checks in the code flow
just to fall back; instead it should fail fast, so the bug can be
found and fixed properly.

As the control queue is freed and deleted from the list after the
idpf_ctlq_shutdown call, there is no need to have the ring_size
check in idpf_ctlq_shutdown.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index a82ca628de..d9ca33cdb9 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -98,9 +98,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
 	idpf_acquire_lock(&cq->cq_lock);
 
-	if (!cq->ring_size)
-		goto shutdown_sq_out;
-
 #ifdef SIMICS_BUILD
 	wr32(hw, cq->reg.head, 0);
 	wr32(hw, cq->reg.tail, 0);
@@ -115,7 +112,6 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 	/* Set ring_size to 0 to indicate uninitialized queue */
 	cq->ring_size = 0;
 
-shutdown_sq_out:
 	idpf_release_lock(&cq->cq_lock);
 	idpf_destroy_lock(&cq->cq_lock);
 }
-- 
2.43.0



* [PATCH v5 05/21] common/idpf: use BIT ULL for large bitmaps
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (3 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 04/21] common/idpf: avoid defensive programming Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 06/21] common/idpf: convert data type to 'le' Soumyadeep Hore
                               ` (16 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

For bitmaps wider than 32 bits, use the BIT_ULL macro instead of
the BIT macro, as reported by the compiler.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 70 ++++++++++++++--------------
 1 file changed, 35 insertions(+), 35 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 6eff0f1ea1..851c6629dd 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -175,20 +175,20 @@
 /* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
  * Receive Side Scaling Flow type capability flags
  */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT(13)
+#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
+#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
+#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
+#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
+#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
+#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
+#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
+#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
+#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
+#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
+#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
+#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
 
 /* VIRTCHNL2_HEADER_SPLIT_CAPS
  * Header split capability flags
@@ -214,32 +214,32 @@
  * TX_VLAN: VLAN tag insertion
  * RX_VLAN: VLAN tag stripping
  */
-#define VIRTCHNL2_CAP_RDMA			BIT(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT(4)
-#define VIRTCHNL2_CAP_CRC			BIT(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT(11)
+#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
+#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
+#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
+#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
+#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
+#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
+#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
+#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
+#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
+#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
+#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
+#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
 /* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT(12)
-#define VIRTCHNL2_CAP_PTP			BIT(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT(15)
-#define VIRTCHNL2_CAP_FDIR			BIT(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT(19)
+#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
+#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
+#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
+#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
+#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
+#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
+#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
 /* Enable miss completion types plus ability to detect a miss completion if a
  * reserved bit is set in a standared completion's tag.
  */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT(20)
+#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
 /* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT(63)
+#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
 
 /* VIRTCHNL2_TXQ_SCHED_MODE
  * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
-- 
2.43.0



* [PATCH v5 06/21] common/idpf: convert data type to 'le'
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (4 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 05/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 07/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
                               ` (15 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The struct members in 'virtchnl2_version_info' use the 'u32'
data type, but they should use '__le32', the Little Endian
specific type definition, since the structure crosses the
driver/control-plane boundary. Make the change accordingly.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 851c6629dd..1f59730297 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -471,8 +471,8 @@
  * error regardless of version mismatch.
  */
 struct virtchnl2_version_info {
-	u32 major;
-	u32 minor;
+	__le32 major;
+	__le32 minor;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
-- 
2.43.0



* [PATCH v5 07/21] common/idpf: compress RXDID mask definitions
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (5 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 06/21] common/idpf: convert data type to 'le' Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 08/21] common/idpf: refactor size check macro Soumyadeep Hore
                               ` (14 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of spelling out the long RXDID mask definitions, introduce
a macro which concatenates the common prefix of the RXDID
definitions, i.e. VIRTCHNL2_RXDID_, with the descriptor ID passed
in, to generate the mask.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 31 ++++++++++---------
 1 file changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index e6e782a219..f632271788 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -58,22 +58,23 @@
 /* VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		BIT(VIRTCHNL2_RXDID_0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		BIT(VIRTCHNL2_RXDID_1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		BIT(VIRTCHNL2_RXDID_2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		BIT(VIRTCHNL2_RXDID_3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	BIT(VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	BIT(VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	BIT(VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		BIT(VIRTCHNL2_RXDID_7_HW_RSVD)
+#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
+#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
+#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
+#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
+#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
+#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
+#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
+#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
+#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
+#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
 /* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	BIT(VIRTCHNL2_RXDID_16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	BIT(VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	BIT(VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	BIT(VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	BIT(VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	BIT(VIRTCHNL2_RXDID_21_COMMS_AUX_TCP)
+#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
+#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
+#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
+#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
+#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
+#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
 /* 22 through 63 are reserved */
 
 /* Rx */
-- 
2.43.0



* [PATCH v5 08/21] common/idpf: refactor size check macro
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (6 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 07/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
                               ` (13 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Instead of using a 'divide by 0' trick to generate a compile-time
error when checking the struct length, use the static_assert macro.

Also remove the redundant CHECK_UNION_LEN macro.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 1f59730297..f8b97f2e06 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -41,15 +41,12 @@
 /* State Machine error - Command sequence problem */
 #define	VIRTCHNL2_STATUS_ERR_ESM	201
 
-/* These macros are used to generate compilation errors if a structure/union
- * is not exactly the correct length. It gives a divide by zero error if the
- * structure/union is not of the correct size, otherwise it creates an enum
- * that is never used.
+/* This macro is used to generate compilation errors if a structure
+ * is not exactly the correct length.
  */
-#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) enum virtchnl2_static_assert_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(struct X) == (n)) ? 1 : 0) }
-#define VIRTCHNL2_CHECK_UNION_LEN(n, X) enum virtchnl2_static_asset_enum_##X \
-	{ virtchnl2_static_assert_##X = (n)/((sizeof(union X) == (n)) ? 1 : 0) }
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
+	static_assert((n) == sizeof(struct X),	\
+		      "Structure length does not match with the expected value")
 
 /* New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
-- 
2.43.0



* [PATCH v5 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (7 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 08/21] common/idpf: refactor size check macro Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-28 14:16               ` Bruce Richardson
  2024-06-24  9:16             ` [PATCH v5 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
                               ` (12 subsequent siblings)
  21 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The mask for VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M was defined
incorrectly, shifting by the mask macro itself instead of by the
shift value; this patch fixes it.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2_lan_desc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index f632271788..9e04cf8628 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -111,7 +111,7 @@
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M)
+	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
-- 
2.43.0



* [PATCH v5 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (8 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 11/21] common/idpf: move related defines into enums Soumyadeep Hore
                               ` (11 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The 'pad' name is used when the field is actually a padding byte
or is set aside for future addition of new fields, whereas
'reserved' is used only when the field is reserved and cannot be
used for any other purpose.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 71 +++++++++++++++-------------
 1 file changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index f8b97f2e06..d007c2f540 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -95,7 +95,7 @@
 #define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
 #define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
 #define		VIRTCHNL2_OP_GET_PORT_STATS		540
-	/* TimeSync opcodes */
+/* TimeSync opcodes */
 #define		VIRTCHNL2_OP_GET_PTP_CAPS		541
 #define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
 
@@ -559,7 +559,7 @@ struct virtchnl2_get_capabilities {
 	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
-	u8 reserved[10];
+	u8 pad1[10];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
@@ -575,7 +575,7 @@ struct virtchnl2_queue_reg_chunk {
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
-	u8 reserved[4];
+	u8 pad1[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
@@ -583,7 +583,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -648,7 +648,7 @@ struct virtchnl2_create_vport {
 	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
 
-	u8 reserved[20];
+	u8 pad2[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -663,7 +663,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
@@ -708,7 +708,7 @@ struct virtchnl2_txq_info {
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
 
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
@@ -724,7 +724,7 @@ struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 
-	u8 reserved[10];
+	u8 pad[10];
 	struct virtchnl2_txq_info qinfo[1];
 };
 
@@ -749,7 +749,7 @@ struct virtchnl2_rxq_info {
 
 	__le16 ring_len;
 	u8 buffer_notif_stride;
-	u8 pad[1];
+	u8 pad;
 
 	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
@@ -768,16 +768,15 @@ struct virtchnl2_rxq_info {
 	 * if this field is set
 	 */
 	u8 bufq2_ena;
-	u8 pad2[3];
+	u8 pad1[3];
 
 	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
 
-	u8 reserved[16];
+	u8 pad2[16];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
 /* VIRTCHNL2_OP_CONFIG_RX_QUEUES
@@ -791,7 +790,7 @@ struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
 	__le16 num_qinfo;
 
-	u8 reserved[18];
+	u8 pad[18];
 	struct virtchnl2_rxq_info qinfo[1];
 };
 
@@ -810,7 +809,8 @@ struct virtchnl2_add_queues {
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
 	__le16 num_rx_bufq;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_reg_chunks chunks;
 };
 
@@ -948,7 +948,7 @@ struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
-	__le16 pad1;
+	__le16 pad;
 
 	/* Register offsets and spacing provided by CP.
 	 * dynamic control registers are used for enabling/disabling/re-enabling
@@ -969,15 +969,15 @@ struct virtchnl2_vector_chunk {
 	 * where n=0..2
 	 */
 	__le32 itrn_index_spacing;
-	u8 reserved[4];
+	u8 pad1[4];
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
 
 /* Structure to specify several chunks of contiguous interrupt vectors */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -992,7 +992,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
-	u8 reserved[14];
+	u8 pad[14];
+
 	struct virtchnl2_vector_chunks vchunks;
 };
 
@@ -1014,8 +1015,9 @@ struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
-	u8 reserved[4];
-	__le32 lut[1]; /* RSS lookup table */
+	u8 pad[4];
+	/* RSS lookup table */
+	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
@@ -1039,7 +1041,7 @@ struct virtchnl2_rss_hash {
 	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
@@ -1063,7 +1065,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 /* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_reg_chunk chunks[1];
 };
 
@@ -1073,7 +1075,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 /* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
-	u8 reserved[14];
+	u8 pad[14];
 	struct virtchnl2_vector_chunk vchunks[1];
 };
 
@@ -1100,8 +1102,7 @@ struct virtchnl2_non_flex_create_adi {
 	__le16 adi_index;
 	/* CP populates ADI id */
 	__le16 adi_id;
-	u8 reserved[64];
-	u8 pad[4];
+	u8 pad[68];
 	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
 	/* PF sends vector chunks to CP */
@@ -1117,7 +1118,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
-	u8 reserved[2];
+	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
@@ -1220,7 +1221,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 rx_runt_errors;
 	__le64 rx_illegal_bytes;
 	__le64 rx_total_pkts;
-	u8 rx_reserved[128];
+	u8 rx_pad[128];
 
 	__le64 tx_bytes;
 	__le64 tx_unicast_pkts;
@@ -1239,7 +1240,7 @@ struct virtchnl2_phy_port_stats {
 	__le64 tx_xoff_events;
 	__le64 tx_dropped_link_down_pkts;
 	__le64 tx_total_pkts;
-	u8 tx_reserved[128];
+	u8 tx_pad[128];
 	__le64 mac_local_faults;
 	__le64 mac_remote_faults;
 };
@@ -1273,7 +1274,8 @@ struct virtchnl2_event {
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
-	u8 pad[1];
+	u8 pad;
+
 	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
@@ -1301,7 +1303,7 @@ struct virtchnl2_queue_chunk {
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
-	u8 reserved[4];
+	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
@@ -1309,7 +1311,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 /* structure to specify several chunks of contiguous queues */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
-	u8 reserved[6];
+	u8 pad[6];
 	struct virtchnl2_queue_chunk chunks[1];
 };
 
@@ -1326,7 +1328,8 @@ VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
-	u8 reserved[4];
+	u8 pad[4];
+
 	struct virtchnl2_queue_chunks chunks;
 };
 
@@ -1343,7 +1346,7 @@ struct virtchnl2_queue_vector {
 
 	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
-	u8 reserved[8];
+	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
-- 
2.43.0



* [PATCH v5 11/21] common/idpf: move related defines into enums
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (9 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 12/21] common/idpf: avoid variable 0-init Soumyadeep Hore
                               ` (10 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Change all groups of related defines to enums. The names of
the enums are chosen to follow the common part of the naming
pattern as much as possible.

Replace the common labels in the comments with the enum names.

While at it, modify the header description based on upstream feedback.

Some variable names are also modified and comments are updated to be
more descriptive.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h          | 1847 ++++++++++-------
 drivers/common/idpf/base/virtchnl2_lan_desc.h |  843 +++++---
 2 files changed, 1686 insertions(+), 1004 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index d007c2f540..e76ccbd46f 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -8,317 +8,396 @@
 /* All opcodes associated with virtchnl 2 are prefixed with virtchnl2 or
  * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
  * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ *
+ * PF/VF uses the virtchnl interface defined in this header file to communicate
+ * with device Control Plane (CP). Driver and the CP may run on different
+ * platforms with different endianness. To avoid byte order discrepancies,
+ * all the structures in this header follow little-endian format.
+ *
+ * This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
  */
 
 #include "virtchnl2_lan_desc.h"
 
-/* VIRTCHNL2_ERROR_CODES */
-/* success */
-#define	VIRTCHNL2_STATUS_SUCCESS	0
-/* Operation not permitted, used in case of command not permitted for sender */
-#define	VIRTCHNL2_STATUS_ERR_EPERM	1
-/* Bad opcode - virtchnl interface problem */
-#define	VIRTCHNL2_STATUS_ERR_ESRCH	3
-/* I/O error - HW access error */
-#define	VIRTCHNL2_STATUS_ERR_EIO	5
-/* No such resource - Referenced resource is not allacated */
-#define	VIRTCHNL2_STATUS_ERR_ENXIO	6
-/* Permission denied - Resource is not permitted to caller */
-#define	VIRTCHNL2_STATUS_ERR_EACCES	13
-/* Device or resource busy - In case shared resource is in use by others */
-#define	VIRTCHNL2_STATUS_ERR_EBUSY	16
-/* Object already exists and not free */
-#define	VIRTCHNL2_STATUS_ERR_EEXIST	17
-/* Invalid input argument in command */
-#define	VIRTCHNL2_STATUS_ERR_EINVAL	22
-/* No space left or allocation failure */
-#define	VIRTCHNL2_STATUS_ERR_ENOSPC	28
-/* Parameter out of range */
-#define	VIRTCHNL2_STATUS_ERR_ERANGE	34
-
-/* Op not allowed in current dev mode */
-#define	VIRTCHNL2_STATUS_ERR_EMODE	200
-/* State Machine error - Command sequence problem */
-#define	VIRTCHNL2_STATUS_ERR_ESM	201
-
-/* This macro is used to generate compilation errors if a structure
+/**
+ * enum virtchnl2_status - Error codes.
+ * @VIRTCHNL2_STATUS_SUCCESS: Success
+ * @VIRTCHNL2_STATUS_ERR_EPERM: Operation not permitted, used in case of command
+ *				not permitted for sender
+ * @VIRTCHNL2_STATUS_ERR_ESRCH: Bad opcode - virtchnl interface problem
+ * @VIRTCHNL2_STATUS_ERR_EIO: I/O error - HW access error
+ * @VIRTCHNL2_STATUS_ERR_ENXIO: No such resource - Referenced resource is not
+ *				allocated
+ * @VIRTCHNL2_STATUS_ERR_EACCES: Permission denied - Resource is not permitted
+ *				 to caller
+ * @VIRTCHNL2_STATUS_ERR_EBUSY: Device or resource busy - In case shared
+ *				resource is in use by others
+ * @VIRTCHNL2_STATUS_ERR_EEXIST: Object already exists and not free
+ * @VIRTCHNL2_STATUS_ERR_EINVAL: Invalid input argument in command
+ * @VIRTCHNL2_STATUS_ERR_ENOSPC: No space left or allocation failure
+ * @VIRTCHNL2_STATUS_ERR_ERANGE: Parameter out of range
+ * @VIRTCHNL2_STATUS_ERR_EMODE: Operation not allowed in current dev mode
+ * @VIRTCHNL2_STATUS_ERR_ESM: State Machine error - Command sequence problem
+ */
+enum virtchnl2_status {
+	VIRTCHNL2_STATUS_SUCCESS	= 0,
+	VIRTCHNL2_STATUS_ERR_EPERM	= 1,
+	VIRTCHNL2_STATUS_ERR_ESRCH	= 3,
+	VIRTCHNL2_STATUS_ERR_EIO	= 5,
+	VIRTCHNL2_STATUS_ERR_ENXIO	= 6,
+	VIRTCHNL2_STATUS_ERR_EACCES	= 13,
+	VIRTCHNL2_STATUS_ERR_EBUSY	= 16,
+	VIRTCHNL2_STATUS_ERR_EEXIST	= 17,
+	VIRTCHNL2_STATUS_ERR_EINVAL	= 22,
+	VIRTCHNL2_STATUS_ERR_ENOSPC	= 28,
+	VIRTCHNL2_STATUS_ERR_ERANGE	= 34,
+	VIRTCHNL2_STATUS_ERR_EMODE	= 200,
+	VIRTCHNL2_STATUS_ERR_ESM	= 201,
+};
+
+/**
+ * This macro is used to generate compilation errors if a structure
  * is not exactly the correct length.
  */
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
 
-/* New major set of opcodes introduced and so leaving room for
+/**
+ * New major set of opcodes introduced and so leaving room for
  * old misc opcodes to be added in future. Also these opcodes may only
  * be used if both the PF and VF have successfully negotiated the
- * VIRTCHNL version as 2.0 during VIRTCHNL22_OP_VERSION exchange.
- */
-#define		VIRTCHNL2_OP_UNKNOWN			0
-#define		VIRTCHNL2_OP_VERSION			1
-#define		VIRTCHNL2_OP_GET_CAPS			500
-#define		VIRTCHNL2_OP_CREATE_VPORT		501
-#define		VIRTCHNL2_OP_DESTROY_VPORT		502
-#define		VIRTCHNL2_OP_ENABLE_VPORT		503
-#define		VIRTCHNL2_OP_DISABLE_VPORT		504
-#define		VIRTCHNL2_OP_CONFIG_TX_QUEUES		505
-#define		VIRTCHNL2_OP_CONFIG_RX_QUEUES		506
-#define		VIRTCHNL2_OP_ENABLE_QUEUES		507
-#define		VIRTCHNL2_OP_DISABLE_QUEUES		508
-#define		VIRTCHNL2_OP_ADD_QUEUES			509
-#define		VIRTCHNL2_OP_DEL_QUEUES			510
-#define		VIRTCHNL2_OP_MAP_QUEUE_VECTOR		511
-#define		VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		512
-#define		VIRTCHNL2_OP_GET_RSS_KEY		513
-#define		VIRTCHNL2_OP_SET_RSS_KEY		514
-#define		VIRTCHNL2_OP_GET_RSS_LUT		515
-#define		VIRTCHNL2_OP_SET_RSS_LUT		516
-#define		VIRTCHNL2_OP_GET_RSS_HASH		517
-#define		VIRTCHNL2_OP_SET_RSS_HASH		518
-#define		VIRTCHNL2_OP_SET_SRIOV_VFS		519
-#define		VIRTCHNL2_OP_ALLOC_VECTORS		520
-#define		VIRTCHNL2_OP_DEALLOC_VECTORS		521
-#define		VIRTCHNL2_OP_EVENT			522
-#define		VIRTCHNL2_OP_GET_STATS			523
-#define		VIRTCHNL2_OP_RESET_VF			524
-	/* opcode 525 is reserved */
-#define		VIRTCHNL2_OP_GET_PTYPE_INFO		526
-	/* opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
-	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW
+ * VIRTCHNL version as 2.0 during VIRTCHNL2_OP_VERSION exchange.
+ */
+enum virtchnl2_op {
+	VIRTCHNL2_OP_UNKNOWN			= 0,
+	VIRTCHNL2_OP_VERSION			= 1,
+	VIRTCHNL2_OP_GET_CAPS			= 500,
+	VIRTCHNL2_OP_CREATE_VPORT		= 501,
+	VIRTCHNL2_OP_DESTROY_VPORT		= 502,
+	VIRTCHNL2_OP_ENABLE_VPORT		= 503,
+	VIRTCHNL2_OP_DISABLE_VPORT		= 504,
+	VIRTCHNL2_OP_CONFIG_TX_QUEUES		= 505,
+	VIRTCHNL2_OP_CONFIG_RX_QUEUES		= 506,
+	VIRTCHNL2_OP_ENABLE_QUEUES		= 507,
+	VIRTCHNL2_OP_DISABLE_QUEUES		= 508,
+	VIRTCHNL2_OP_ADD_QUEUES			= 509,
+	VIRTCHNL2_OP_DEL_QUEUES			= 510,
+	VIRTCHNL2_OP_MAP_QUEUE_VECTOR		= 511,
+	VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR		= 512,
+	VIRTCHNL2_OP_GET_RSS_KEY		= 513,
+	VIRTCHNL2_OP_SET_RSS_KEY		= 514,
+	VIRTCHNL2_OP_GET_RSS_LUT		= 515,
+	VIRTCHNL2_OP_SET_RSS_LUT		= 516,
+	VIRTCHNL2_OP_GET_RSS_HASH		= 517,
+	VIRTCHNL2_OP_SET_RSS_HASH		= 518,
+	VIRTCHNL2_OP_SET_SRIOV_VFS		= 519,
+	VIRTCHNL2_OP_ALLOC_VECTORS		= 520,
+	VIRTCHNL2_OP_DEALLOC_VECTORS		= 521,
+	VIRTCHNL2_OP_EVENT			= 522,
+	VIRTCHNL2_OP_GET_STATS			= 523,
+	VIRTCHNL2_OP_RESET_VF			= 524,
+	/* Opcode 525 is reserved */
+	VIRTCHNL2_OP_GET_PTYPE_INFO		= 526,
+	/* Opcodes 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+	 * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
 	 */
-	/* opcodes 529, 530, and 531 are reserved */
-#define		VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	532
-#define		VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	533
-#define		VIRTCHNL2_OP_LOOPBACK			534
-#define		VIRTCHNL2_OP_ADD_MAC_ADDR		535
-#define		VIRTCHNL2_OP_DEL_MAC_ADDR		536
-#define		VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	537
-#define		VIRTCHNL2_OP_ADD_QUEUE_GROUPS		538
-#define		VIRTCHNL2_OP_DEL_QUEUE_GROUPS		539
-#define		VIRTCHNL2_OP_GET_PORT_STATS		540
-/* TimeSync opcodes */
-#define		VIRTCHNL2_OP_GET_PTP_CAPS		541
-#define		VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	542
+	/* Opcodes 529, 530, and 531 are reserved */
+	VIRTCHNL2_OP_NON_FLEX_CREATE_ADI	= 532,
+	VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI	= 533,
+	VIRTCHNL2_OP_LOOPBACK			= 534,
+	VIRTCHNL2_OP_ADD_MAC_ADDR		= 535,
+	VIRTCHNL2_OP_DEL_MAC_ADDR		= 536,
+	VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE	= 537,
+	VIRTCHNL2_OP_ADD_QUEUE_GROUPS		= 538,
+	VIRTCHNL2_OP_DEL_QUEUE_GROUPS		= 539,
+	VIRTCHNL2_OP_GET_PORT_STATS		= 540,
+	/* TimeSync opcodes */
+	VIRTCHNL2_OP_GET_PTP_CAPS		= 541,
+	VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES	= 542,
+};
 
 #define VIRTCHNL2_RDMA_INVALID_QUEUE_IDX	0xFFFF
 
-/* VIRTCHNL2_VPORT_TYPE
- * Type of virtual port
+/**
+ * enum virtchnl2_vport_type - Type of virtual port
+ * @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SRIOV: SRIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SIOV: SIOV virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_SUBDEV: Subdevice virtual port type
+ * @VIRTCHNL2_VPORT_TYPE_MNG: Management virtual port type
  */
-#define VIRTCHNL2_VPORT_TYPE_DEFAULT		0
-#define VIRTCHNL2_VPORT_TYPE_SRIOV		1
-#define VIRTCHNL2_VPORT_TYPE_SIOV		2
-#define VIRTCHNL2_VPORT_TYPE_SUBDEV		3
-#define VIRTCHNL2_VPORT_TYPE_MNG		4
+enum virtchnl2_vport_type {
+	VIRTCHNL2_VPORT_TYPE_DEFAULT		= 0,
+	VIRTCHNL2_VPORT_TYPE_SRIOV		= 1,
+	VIRTCHNL2_VPORT_TYPE_SIOV		= 2,
+	VIRTCHNL2_VPORT_TYPE_SUBDEV		= 3,
+	VIRTCHNL2_VPORT_TYPE_MNG		= 4,
+};
 
-/* VIRTCHNL2_QUEUE_MODEL
- * Type of queue model
+/**
+ * enum virtchnl2_queue_model - Type of queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model
+ * @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model
  *
  * In the single queue model, the same transmit descriptor queue is used by
  * software to post descriptors to hardware and by hardware to post completed
  * descriptors to software.
  * Likewise, the same receive descriptor queue is used by hardware to post
  * completions to software and by software to post buffers to hardware.
- */
-#define VIRTCHNL2_QUEUE_MODEL_SINGLE		0
-/* In the split queue model, hardware uses transmit completion queues to post
+ *
+ * In the split queue model, hardware uses transmit completion queues to post
  * descriptor/buffer completions to software, while software uses transmit
  * descriptor queues to post descriptors to hardware.
  * Likewise, hardware posts descriptor completions to the receive descriptor
  * queue, while software uses receive buffer queues to post buffers to hardware.
  */
-#define VIRTCHNL2_QUEUE_MODEL_SPLIT		1
-
-/* VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS
- * Checksum offload capability flags
- */
-#define VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		BIT(0)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	BIT(1)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	BIT(2)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	BIT(3)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	BIT(4)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	BIT(5)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	BIT(6)
-#define VIRTCHNL2_CAP_TX_CSUM_GENERIC		BIT(7)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		BIT(8)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	BIT(9)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	BIT(10)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	BIT(11)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	BIT(12)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	BIT(13)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	BIT(14)
-#define VIRTCHNL2_CAP_RX_CSUM_GENERIC		BIT(15)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	BIT(16)
-#define VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	BIT(17)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	BIT(18)
-#define VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	BIT(19)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	BIT(20)
-#define VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	BIT(21)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	BIT(22)
-#define VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	BIT(23)
-
-/* VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS
- * Segmentation offload capability flags
- */
-#define VIRTCHNL2_CAP_SEG_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_SEG_IPV4_UDP		BIT(1)
-#define VIRTCHNL2_CAP_SEG_IPV4_SCTP		BIT(2)
-#define VIRTCHNL2_CAP_SEG_IPV6_TCP		BIT(3)
-#define VIRTCHNL2_CAP_SEG_IPV6_UDP		BIT(4)
-#define VIRTCHNL2_CAP_SEG_IPV6_SCTP		BIT(5)
-#define VIRTCHNL2_CAP_SEG_GENERIC		BIT(6)
-#define VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	BIT(7)
-#define VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	BIT(8)
-
-/* VIRTCHNL2_RSS_FLOW_TYPE_CAPS
- * Receive Side Scaling Flow type capability flags
- */
-#define VIRTCHNL2_CAP_RSS_IPV4_TCP		BIT_ULL(0)
-#define VIRTCHNL2_CAP_RSS_IPV4_UDP		BIT_ULL(1)
-#define VIRTCHNL2_CAP_RSS_IPV4_SCTP		BIT_ULL(2)
-#define VIRTCHNL2_CAP_RSS_IPV4_OTHER		BIT_ULL(3)
-#define VIRTCHNL2_CAP_RSS_IPV6_TCP		BIT_ULL(4)
-#define VIRTCHNL2_CAP_RSS_IPV6_UDP		BIT_ULL(5)
-#define VIRTCHNL2_CAP_RSS_IPV6_SCTP		BIT_ULL(6)
-#define VIRTCHNL2_CAP_RSS_IPV6_OTHER		BIT_ULL(7)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH		BIT_ULL(8)
-#define VIRTCHNL2_CAP_RSS_IPV4_ESP		BIT_ULL(9)
-#define VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		BIT_ULL(10)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH		BIT_ULL(11)
-#define VIRTCHNL2_CAP_RSS_IPV6_ESP		BIT_ULL(12)
-#define VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		BIT_ULL(13)
-
-/* VIRTCHNL2_HEADER_SPLIT_CAPS
- * Header split capability flags
- */
-/* for prepended metadata  */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		BIT(0)
-/* all VLANs go into header buffer */
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		BIT(1)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		BIT(2)
-#define VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		BIT(3)
-
-/* VIRTCHNL2_RSC_OFFLOAD_CAPS
- * Receive Side Coalescing offload capability flags
- */
-#define VIRTCHNL2_CAP_RSC_IPV4_TCP		BIT(0)
-#define VIRTCHNL2_CAP_RSC_IPV4_SCTP		BIT(1)
-#define VIRTCHNL2_CAP_RSC_IPV6_TCP		BIT(2)
-#define VIRTCHNL2_CAP_RSC_IPV6_SCTP		BIT(3)
-
-/* VIRTCHNL2_OTHER_CAPS
- * Other capability flags
- * SPLITQ_QSCHED: Queue based scheduling using split queue model
- * TX_VLAN: VLAN tag insertion
- * RX_VLAN: VLAN tag stripping
- */
-#define VIRTCHNL2_CAP_RDMA			BIT_ULL(0)
-#define VIRTCHNL2_CAP_SRIOV			BIT_ULL(1)
-#define VIRTCHNL2_CAP_MACFILTER			BIT_ULL(2)
-#define VIRTCHNL2_CAP_FLOW_DIRECTOR		BIT_ULL(3)
-#define VIRTCHNL2_CAP_SPLITQ_QSCHED		BIT_ULL(4)
-#define VIRTCHNL2_CAP_CRC			BIT_ULL(5)
-#define VIRTCHNL2_CAP_INLINE_FLOW_STEER		BIT_ULL(6)
-#define VIRTCHNL2_CAP_WB_ON_ITR			BIT_ULL(7)
-#define VIRTCHNL2_CAP_PROMISC			BIT_ULL(8)
-#define VIRTCHNL2_CAP_LINK_SPEED		BIT_ULL(9)
-#define VIRTCHNL2_CAP_INLINE_IPSEC		BIT_ULL(10)
-#define VIRTCHNL2_CAP_LARGE_NUM_QUEUES		BIT_ULL(11)
-/* require additional info */
-#define VIRTCHNL2_CAP_VLAN			BIT_ULL(12)
-#define VIRTCHNL2_CAP_PTP			BIT_ULL(13)
-#define VIRTCHNL2_CAP_ADV_RSS			BIT_ULL(15)
-#define VIRTCHNL2_CAP_FDIR			BIT_ULL(16)
-#define VIRTCHNL2_CAP_RX_FLEX_DESC		BIT_ULL(17)
-#define VIRTCHNL2_CAP_PTYPE			BIT_ULL(18)
-#define VIRTCHNL2_CAP_LOOPBACK			BIT_ULL(19)
-/* Enable miss completion types plus ability to detect a miss completion if a
- * reserved bit is set in a standared completion's tag.
- */
-#define VIRTCHNL2_CAP_MISS_COMPL_TAG		BIT_ULL(20)
-/* this must be the last capability */
-#define VIRTCHNL2_CAP_OEM			BIT_ULL(63)
-
-/* VIRTCHNL2_TXQ_SCHED_MODE
- * Transmit Queue Scheduling Modes - Queue mode is the legacy mode i.e. inorder
- * completions where descriptors and buffers are completed at the same time.
- * Flow scheduling mode allows for out of order packet processing where
- * descriptors are cleaned in order, but buffers can be completed out of order.
- */
-#define VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		0
-#define VIRTCHNL2_TXQ_SCHED_MODE_FLOW		1
-
-/* VIRTCHNL2_TXQ_FLAGS
- * Transmit Queue feature flags
- *
- * Enable rule miss completion type; packet completion for a packet
- * sent on exception path; only relevant in flow scheduling mode
- */
-#define VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		BIT(0)
-
-/* VIRTCHNL2_PEER_TYPE
- * Transmit mailbox peer type
- */
-#define VIRTCHNL2_RDMA_CPF			0
-#define VIRTCHNL2_NVME_CPF			1
-#define VIRTCHNL2_ATE_CPF			2
-#define VIRTCHNL2_LCE_CPF			3
-
-/* VIRTCHNL2_RXQ_FLAGS
- * Receive Queue Feature flags
- */
-#define VIRTCHNL2_RXQ_RSC			BIT(0)
-#define VIRTCHNL2_RXQ_HDR_SPLIT			BIT(1)
-/* When set, packet descriptors are flushed by hardware immediately after
- * processing each packet.
- */
-#define VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	BIT(2)
-#define VIRTCHNL2_RX_DESC_SIZE_16BYTE		BIT(3)
-#define VIRTCHNL2_RX_DESC_SIZE_32BYTE		BIT(4)
-
-/* VIRTCHNL2_RSS_ALGORITHM
- * Type of RSS algorithm
- */
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC		0
-#define VIRTCHNL2_RSS_ALG_R_ASYMMETRIC			1
-#define VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC		2
-#define VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC			3
-
-/* VIRTCHNL2_EVENT_CODES
- * Type of event
- */
-#define VIRTCHNL2_EVENT_UNKNOWN			0
-#define VIRTCHNL2_EVENT_LINK_CHANGE		1
-/* These messages are only sent to PF from CP */
-#define VIRTCHNL2_EVENT_START_RESET_ADI		2
-#define VIRTCHNL2_EVENT_FINISH_RESET_ADI	3
-#define VIRTCHNL2_EVENT_ADI_ACTIVE		4
-
-/* VIRTCHNL2_QUEUE_TYPE
- * Transmit and Receive queue types are valid in legacy as well as split queue
- * models. With Split Queue model, 2 additional types are introduced -
- * TX_COMPLETION and RX_BUFFER. In split queue model, receive  corresponds to
+enum virtchnl2_queue_model {
+	VIRTCHNL2_QUEUE_MODEL_SINGLE		= 0,
+	VIRTCHNL2_QUEUE_MODEL_SPLIT		= 1,
+};
+
+/* Checksum offload capability flags */
+enum virtchnl2_cap_txrx_csum {
+	VIRTCHNL2_CAP_TX_CSUM_L3_IPV4		= BIT(0),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP	= BIT(1),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP	= BIT(2),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP	= BIT(3),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP	= BIT(4),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP	= BIT(5),
+	VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_CAP_TX_CSUM_GENERIC		= BIT(7),
+	VIRTCHNL2_CAP_RX_CSUM_L3_IPV4		= BIT(8),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP	= BIT(9),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP	= BIT(10),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP	= BIT(11),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP	= BIT(12),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP	= BIT(13),
+	VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP	= BIT(14),
+	VIRTCHNL2_CAP_RX_CSUM_GENERIC		= BIT(15),
+	VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL	= BIT(16),
+	VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL	= BIT(17),
+	VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL	= BIT(18),
+	VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL	= BIT(19),
+	VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL	= BIT(20),
+	VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL	= BIT(21),
+	VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL	= BIT(22),
+	VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL	= BIT(23),
+};
+
+/* Segmentation offload capability flags */
+enum virtchnl2_cap_seg {
+	VIRTCHNL2_CAP_SEG_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_SEG_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_SEG_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_SEG_IPV6_TCP		= BIT(3),
+	VIRTCHNL2_CAP_SEG_IPV6_UDP		= BIT(4),
+	VIRTCHNL2_CAP_SEG_IPV6_SCTP		= BIT(5),
+	VIRTCHNL2_CAP_SEG_GENERIC		= BIT(6),
+	VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL	= BIT(7),
+	VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL	= BIT(8),
+};
+
+/* Receive Side Scaling Flow type capability flags */
+enum virtchnl2_cap_rss {
+	VIRTCHNL2_CAP_RSS_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSS_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_CAP_RSS_IPV4_SCTP		= BIT(2),
+	VIRTCHNL2_CAP_RSS_IPV4_OTHER		= BIT(3),
+	VIRTCHNL2_CAP_RSS_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_CAP_RSS_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_CAP_RSS_IPV6_SCTP		= BIT(6),
+	VIRTCHNL2_CAP_RSS_IPV6_OTHER		= BIT(7),
+	VIRTCHNL2_CAP_RSS_IPV4_AH		= BIT(8),
+	VIRTCHNL2_CAP_RSS_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_CAP_RSS_IPV4_AH_ESP		= BIT(10),
+	VIRTCHNL2_CAP_RSS_IPV6_AH		= BIT(11),
+	VIRTCHNL2_CAP_RSS_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_CAP_RSS_IPV6_AH_ESP		= BIT(13),
+};
+
+/* Header split capability flags */
+enum virtchnl2_cap_rx_hsplit_at {
+	/* For prepended metadata */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L2		= BIT(0),
+	/* All VLANs go into header buffer */
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L3		= BIT(1),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4		= BIT(2),
+	VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6		= BIT(3),
+};
+
+/* Receive Side Coalescing offload capability flags */
+enum virtchnl2_cap_rsc {
+	VIRTCHNL2_CAP_RSC_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_CAP_RSC_IPV4_SCTP		= BIT(1),
+	VIRTCHNL2_CAP_RSC_IPV6_TCP		= BIT(2),
+	VIRTCHNL2_CAP_RSC_IPV6_SCTP		= BIT(3),
+};
+
+/* Other capability flags */
+enum virtchnl2_cap_other {
+	VIRTCHNL2_CAP_RDMA			= BIT_ULL(0),
+	VIRTCHNL2_CAP_SRIOV			= BIT_ULL(1),
+	VIRTCHNL2_CAP_MACFILTER			= BIT_ULL(2),
+	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
+	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
+	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
+	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
+	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
+	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
+	VIRTCHNL2_CAP_INLINE_IPSEC		= BIT_ULL(10),
+	VIRTCHNL2_CAP_LARGE_NUM_QUEUES		= BIT_ULL(11),
+	/* Require additional info */
+	VIRTCHNL2_CAP_VLAN			= BIT_ULL(12),
+	VIRTCHNL2_CAP_PTP			= BIT_ULL(13),
+	VIRTCHNL2_CAP_ADV_RSS			= BIT_ULL(15),
+	VIRTCHNL2_CAP_FDIR			= BIT_ULL(16),
+	VIRTCHNL2_CAP_RX_FLEX_DESC		= BIT_ULL(17),
+	VIRTCHNL2_CAP_PTYPE			= BIT_ULL(18),
+	VIRTCHNL2_CAP_LOOPBACK			= BIT_ULL(19),
+	/* Enable miss completion types plus ability to detect a miss completion
+	 * if a reserved bit is set in a standard completion's tag.
+	 */
+	VIRTCHNL2_CAP_MISS_COMPL_TAG		= BIT_ULL(20),
+	/* This must be the last capability */
+	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
+};
+
+/**
+ * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
+ * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. in-order
+ *				    completions where descriptors and buffers
+ *				    are completed at the same time.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
+ *				   packet processing where descriptors are
+ *				   cleaned in order, but buffers can be
+ *				   completed out of order.
+ */
+enum virtchnl2_txq_sched_mode {
+	VIRTCHNL2_TXQ_SCHED_MODE_QUEUE		= 0,
+	VIRTCHNL2_TXQ_SCHED_MODE_FLOW		= 1,
+};
+
+/**
+ * enum virtchnl2_txq_flags - Transmit Queue feature flags
+ * @VIRTCHNL2_TXQ_ENABLE_MISS_COMPL: Enable rule miss completion type. Packet
+ *				     completion for a packet sent on exception
+ *				     path and only relevant in flow scheduling
+ *				     mode.
+ */
+enum virtchnl2_txq_flags {
+	VIRTCHNL2_TXQ_ENABLE_MISS_COMPL		= BIT(0),
+};
+
+/**
+ * enum virtchnl2_peer_type - Transmit mailbox peer type
+ * @VIRTCHNL2_RDMA_CPF: RDMA peer type
+ * @VIRTCHNL2_NVME_CPF: NVME peer type
+ * @VIRTCHNL2_ATE_CPF: ATE peer type
+ * @VIRTCHNL2_LCE_CPF: LCE peer type
+ */
+enum virtchnl2_peer_type {
+	VIRTCHNL2_RDMA_CPF			= 0,
+	VIRTCHNL2_NVME_CPF			= 1,
+	VIRTCHNL2_ATE_CPF			= 2,
+	VIRTCHNL2_LCE_CPF			= 3,
+};
+
+/**
+ * enum virtchnl2_rxq_flags - Receive Queue Feature flags
+ * @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag
+ * @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag
+ * @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
+ *					by hardware immediately after processing
+ *					each packet.
+ * @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size
+ * @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size
+ */
+enum virtchnl2_rxq_flags {
+	VIRTCHNL2_RXQ_RSC			= BIT(0),
+	VIRTCHNL2_RXQ_HDR_SPLIT			= BIT(1),
+	VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK	= BIT(2),
+	VIRTCHNL2_RX_DESC_SIZE_16BYTE		= BIT(3),
+	VIRTCHNL2_RX_DESC_SIZE_32BYTE		= BIT(4),
+};
+
+/**
+ * enum virtchnl2_rss_alg - Type of RSS algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC: TOEPLITZ_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_R_ASYMMETRIC: R_ASYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC: TOEPLITZ_SYMMETRIC algorithm
+ * @VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC: XOR_SYMMETRIC algorithm
+ */
+enum virtchnl2_rss_alg {
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC	= 0,
+	VIRTCHNL2_RSS_ALG_R_ASYMMETRIC		= 1,
+	VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC	= 2,
+	VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC		= 3,
+};
+
+/**
+ * enum virtchnl2_event_codes - Type of event
+ * @VIRTCHNL2_EVENT_UNKNOWN: Unknown event type
+ * @VIRTCHNL2_EVENT_LINK_CHANGE: Link change event type
+ * @VIRTCHNL2_EVENT_START_RESET_ADI: Start reset ADI event type
+ * @VIRTCHNL2_EVENT_FINISH_RESET_ADI: Finish reset ADI event type
+ * @VIRTCHNL2_EVENT_ADI_ACTIVE: Event type to indicate 'function active' state
+ *				of ADI.
+ */
+enum virtchnl2_event_codes {
+	VIRTCHNL2_EVENT_UNKNOWN			= 0,
+	VIRTCHNL2_EVENT_LINK_CHANGE		= 1,
+	/* These messages are only sent to PF from CP */
+	VIRTCHNL2_EVENT_START_RESET_ADI		= 2,
+	VIRTCHNL2_EVENT_FINISH_RESET_ADI	= 3,
+	VIRTCHNL2_EVENT_ADI_ACTIVE		= 4,
+};
+
+/**
+ * enum virtchnl2_queue_type - Various queue types
+ * @VIRTCHNL2_QUEUE_TYPE_TX: TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX: RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: TX completion queue type
+ * @VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: RX buffer queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_TX: Config TX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_CONFIG_RX: Config RX queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_TX: TX mailbox queue type
+ * @VIRTCHNL2_QUEUE_TYPE_MBX_RX: RX mailbox queue type
+ *
+ * Transmit and Receive queue types are valid in single as well as split queue
+ * models. With the split queue model, two additional types are introduced:
+ * TX_COMPLETION and RX_BUFFER. In split queue model, receive corresponds to
  * the queue where hardware posts completions.
  */
-#define VIRTCHNL2_QUEUE_TYPE_TX			0
-#define VIRTCHNL2_QUEUE_TYPE_RX			1
-#define VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	2
-#define VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		3
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		4
-#define VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		5
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX		6
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX		7
-#define VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	8
-#define VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	9
-#define VIRTCHNL2_QUEUE_TYPE_MBX_TX		10
-#define VIRTCHNL2_QUEUE_TYPE_MBX_RX		11
-
-/* VIRTCHNL2_ITR_IDX
- * Virtchannel interrupt throttling rate index
- */
-#define VIRTCHNL2_ITR_IDX_0			0
-#define VIRTCHNL2_ITR_IDX_1			1
-
-/* VIRTCHNL2_VECTOR_LIMITS
+enum virtchnl2_queue_type {
+	VIRTCHNL2_QUEUE_TYPE_TX			= 0,
+	VIRTCHNL2_QUEUE_TYPE_RX			= 1,
+	VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION	= 2,
+	VIRTCHNL2_QUEUE_TYPE_RX_BUFFER		= 3,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_TX		= 4,
+	VIRTCHNL2_QUEUE_TYPE_CONFIG_RX		= 5,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX		= 6,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX		= 7,
+	VIRTCHNL2_QUEUE_TYPE_P2P_TX_COMPLETION	= 8,
+	VIRTCHNL2_QUEUE_TYPE_P2P_RX_BUFFER	= 9,
+	VIRTCHNL2_QUEUE_TYPE_MBX_TX		= 10,
+	VIRTCHNL2_QUEUE_TYPE_MBX_RX		= 11,
+};
+
+/**
+ * enum virtchnl2_itr_idx - Interrupt throttling rate index
+ * @VIRTCHNL2_ITR_IDX_0: ITR index 0
+ * @VIRTCHNL2_ITR_IDX_1: ITR index 1
+ */
+enum virtchnl2_itr_idx {
+	VIRTCHNL2_ITR_IDX_0			= 0,
+	VIRTCHNL2_ITR_IDX_1			= 1,
+};
+
+/**
+ * VIRTCHNL2_VECTOR_LIMITS
  * Since PF/VF messages are limited by __le16 size, precalculate the maximum
  * possible values of nested elements in virtchnl structures that virtual
  * channel can possibly handle in a single message.
@@ -332,131 +411,150 @@
 		((__le16)(~0) - sizeof(struct virtchnl2_queue_vector_maps)) / \
 		sizeof(struct virtchnl2_queue_vector))
 
-/* VIRTCHNL2_MAC_TYPE
- * VIRTCHNL2_MAC_ADDR_PRIMARY
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_PRIMARY for the
- * primary/device unicast MAC address filter for VIRTCHNL2_OP_ADD_MAC_ADDR and
- * VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the underlying control plane
- * function to accurately track the MAC address and for VM/function reset.
- *
- * VIRTCHNL2_MAC_ADDR_EXTRA
- * PF/VF driver should set @type to VIRTCHNL2_MAC_ADDR_EXTRA for any extra
- * unicast and/or multicast filters that are being added/deleted via
- * VIRTCHNL2_OP_ADD_MAC_ADDR/VIRTCHNL2_OP_DEL_MAC_ADDR respectively.
+/**
+ * enum virtchnl2_mac_addr_type - MAC address types
+ * @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
+ *				primary/device unicast MAC address filter for
+ *				VIRTCHNL2_OP_ADD_MAC_ADDR and
+ *				VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
+ *				underlying control plane function to accurately
+ *				track the MAC address and for VM/function reset.
+ * @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
+ *			      unicast and/or multicast filters that are being
+ *			      added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
+ *			      VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
-#define VIRTCHNL2_MAC_ADDR_PRIMARY		1
-#define VIRTCHNL2_MAC_ADDR_EXTRA		2
+enum virtchnl2_mac_addr_type {
+	VIRTCHNL2_MAC_ADDR_PRIMARY		= 1,
+	VIRTCHNL2_MAC_ADDR_EXTRA		= 2,
+};
 
-/* VIRTCHNL2_PROMISC_FLAGS
- * Flags used for promiscuous mode
+/**
+ * enum virtchnl2_promisc_flags - Flags used for promiscuous mode
+ * @VIRTCHNL2_UNICAST_PROMISC: Unicast promiscuous mode
+ * @VIRTCHNL2_MULTICAST_PROMISC: Multicast promiscuous mode
  */
-#define VIRTCHNL2_UNICAST_PROMISC		BIT(0)
-#define VIRTCHNL2_MULTICAST_PROMISC		BIT(1)
+enum virtchnl2_promisc_flags {
+	VIRTCHNL2_UNICAST_PROMISC		= BIT(0),
+	VIRTCHNL2_MULTICAST_PROMISC		= BIT(1),
+};
 
-/* VIRTCHNL2_QUEUE_GROUP_TYPE
- * Type of queue groups
+/**
+ * enum virtchnl2_queue_group_type - Type of queue groups
+ * @VIRTCHNL2_QUEUE_GROUP_DATA: Data queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_MBX: Mailbox queue group type
+ * @VIRTCHNL2_QUEUE_GROUP_CONFIG: Config queue group type
+ *
  * 0 till 0xFF is for general use
  */
-#define VIRTCHNL2_QUEUE_GROUP_DATA		1
-#define VIRTCHNL2_QUEUE_GROUP_MBX		2
-#define VIRTCHNL2_QUEUE_GROUP_CONFIG		3
+enum virtchnl2_queue_group_type {
+	VIRTCHNL2_QUEUE_GROUP_DATA		= 1,
+	VIRTCHNL2_QUEUE_GROUP_MBX		= 2,
+	VIRTCHNL2_QUEUE_GROUP_CONFIG		= 3,
+};
 
-/* VIRTCHNL2_PROTO_HDR_TYPE
- * Protocol header type within a packet segment. A segment consists of one or
+/* Protocol header type within a packet segment. A segment consists of one or
  * more protocol headers that make up a logical group of protocol headers. Each
  * logical group of protocol headers encapsulates or is encapsulated using/by
  * tunneling or encapsulation protocols for network virtualization.
  */
-/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ANY			0
-#define VIRTCHNL2_PROTO_HDR_PRE_MAC		1
-/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_MAC			2
-#define VIRTCHNL2_PROTO_HDR_POST_MAC		3
-#define VIRTCHNL2_PROTO_HDR_ETHERTYPE		4
-#define VIRTCHNL2_PROTO_HDR_VLAN		5
-#define VIRTCHNL2_PROTO_HDR_SVLAN		6
-#define VIRTCHNL2_PROTO_HDR_CVLAN		7
-#define VIRTCHNL2_PROTO_HDR_MPLS		8
-#define VIRTCHNL2_PROTO_HDR_UMPLS		9
-#define VIRTCHNL2_PROTO_HDR_MMPLS		10
-#define VIRTCHNL2_PROTO_HDR_PTP			11
-#define VIRTCHNL2_PROTO_HDR_CTRL		12
-#define VIRTCHNL2_PROTO_HDR_LLDP		13
-#define VIRTCHNL2_PROTO_HDR_ARP			14
-#define VIRTCHNL2_PROTO_HDR_ECP			15
-#define VIRTCHNL2_PROTO_HDR_EAPOL		16
-#define VIRTCHNL2_PROTO_HDR_PPPOD		17
-#define VIRTCHNL2_PROTO_HDR_PPPOE		18
-/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4		19
-/* IPv4 and IPv6 Fragment header types are only associated to
- * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
- * cannot be used independently.
- */
-/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV4_FRAG		20
-/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6		21
-/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_IPV6_FRAG		22
-#define VIRTCHNL2_PROTO_HDR_IPV6_EH		23
-/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_UDP			24
-/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TCP			25
-/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_SCTP		26
-/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMP		27
-/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_ICMPV6		28
-#define VIRTCHNL2_PROTO_HDR_IGMP		29
-#define VIRTCHNL2_PROTO_HDR_AH			30
-#define VIRTCHNL2_PROTO_HDR_ESP			31
-#define VIRTCHNL2_PROTO_HDR_IKE			32
-#define VIRTCHNL2_PROTO_HDR_NATT_KEEP		33
-/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_PAY			34
-#define VIRTCHNL2_PROTO_HDR_L2TPV2		35
-#define VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	36
-#define VIRTCHNL2_PROTO_HDR_L2TPV3		37
-#define VIRTCHNL2_PROTO_HDR_GTP			38
-#define VIRTCHNL2_PROTO_HDR_GTP_EH		39
-#define VIRTCHNL2_PROTO_HDR_GTPCV2		40
-#define VIRTCHNL2_PROTO_HDR_GTPC_TEID		41
-#define VIRTCHNL2_PROTO_HDR_GTPU		42
-#define VIRTCHNL2_PROTO_HDR_GTPU_UL		43
-#define VIRTCHNL2_PROTO_HDR_GTPU_DL		44
-#define VIRTCHNL2_PROTO_HDR_ECPRI		45
-#define VIRTCHNL2_PROTO_HDR_VRRP		46
-#define VIRTCHNL2_PROTO_HDR_OSPF		47
-/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_TUN			48
-#define VIRTCHNL2_PROTO_HDR_GRE			49
-#define VIRTCHNL2_PROTO_HDR_NVGRE		50
-#define VIRTCHNL2_PROTO_HDR_VXLAN		51
-#define VIRTCHNL2_PROTO_HDR_VXLAN_GPE		52
-#define VIRTCHNL2_PROTO_HDR_GENEVE		53
-#define VIRTCHNL2_PROTO_HDR_NSH			54
-#define VIRTCHNL2_PROTO_HDR_QUIC		55
-#define VIRTCHNL2_PROTO_HDR_PFCP		56
-#define VIRTCHNL2_PROTO_HDR_PFCP_NODE		57
-#define VIRTCHNL2_PROTO_HDR_PFCP_SESSION	58
-#define VIRTCHNL2_PROTO_HDR_RTP			59
-#define VIRTCHNL2_PROTO_HDR_ROCE		60
-#define VIRTCHNL2_PROTO_HDR_ROCEV1		61
-#define VIRTCHNL2_PROTO_HDR_ROCEV2		62
-/* protocol ids up to 32767 are reserved for AVF use */
-/* 32768 - 65534 are used for user defined protocol ids */
-/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
-#define VIRTCHNL2_PROTO_HDR_NO_PROTO		65535
-
-#define VIRTCHNL2_VERSION_MAJOR_2        2
-#define VIRTCHNL2_VERSION_MINOR_0        0
-
-
-/* VIRTCHNL2_OP_VERSION
+enum virtchnl2_proto_hdr_type {
+	/* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ANY			= 0,
+	VIRTCHNL2_PROTO_HDR_PRE_MAC		= 1,
+	/* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_MAC			= 2,
+	VIRTCHNL2_PROTO_HDR_POST_MAC		= 3,
+	VIRTCHNL2_PROTO_HDR_ETHERTYPE		= 4,
+	VIRTCHNL2_PROTO_HDR_VLAN		= 5,
+	VIRTCHNL2_PROTO_HDR_SVLAN		= 6,
+	VIRTCHNL2_PROTO_HDR_CVLAN		= 7,
+	VIRTCHNL2_PROTO_HDR_MPLS		= 8,
+	VIRTCHNL2_PROTO_HDR_UMPLS		= 9,
+	VIRTCHNL2_PROTO_HDR_MMPLS		= 10,
+	VIRTCHNL2_PROTO_HDR_PTP			= 11,
+	VIRTCHNL2_PROTO_HDR_CTRL		= 12,
+	VIRTCHNL2_PROTO_HDR_LLDP		= 13,
+	VIRTCHNL2_PROTO_HDR_ARP			= 14,
+	VIRTCHNL2_PROTO_HDR_ECP			= 15,
+	VIRTCHNL2_PROTO_HDR_EAPOL		= 16,
+	VIRTCHNL2_PROTO_HDR_PPPOD		= 17,
+	VIRTCHNL2_PROTO_HDR_PPPOE		= 18,
+	/* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4		= 19,
+	/* IPv4 and IPv6 Fragment header types are only associated to
+	 * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+	 * cannot be used independently.
+	 */
+	/* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV4_FRAG		= 20,
+	/* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6		= 21,
+	/* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_IPV6_FRAG		= 22,
+	VIRTCHNL2_PROTO_HDR_IPV6_EH		= 23,
+	/* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_UDP			= 24,
+	/* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TCP			= 25,
+	/* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_SCTP		= 26,
+	/* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMP		= 27,
+	/* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_ICMPV6		= 28,
+	VIRTCHNL2_PROTO_HDR_IGMP		= 29,
+	VIRTCHNL2_PROTO_HDR_AH			= 30,
+	VIRTCHNL2_PROTO_HDR_ESP			= 31,
+	VIRTCHNL2_PROTO_HDR_IKE			= 32,
+	VIRTCHNL2_PROTO_HDR_NATT_KEEP		= 33,
+	/* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_PAY			= 34,
+	VIRTCHNL2_PROTO_HDR_L2TPV2		= 35,
+	VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL	= 36,
+	VIRTCHNL2_PROTO_HDR_L2TPV3		= 37,
+	VIRTCHNL2_PROTO_HDR_GTP			= 38,
+	VIRTCHNL2_PROTO_HDR_GTP_EH		= 39,
+	VIRTCHNL2_PROTO_HDR_GTPCV2		= 40,
+	VIRTCHNL2_PROTO_HDR_GTPC_TEID		= 41,
+	VIRTCHNL2_PROTO_HDR_GTPU		= 42,
+	VIRTCHNL2_PROTO_HDR_GTPU_UL		= 43,
+	VIRTCHNL2_PROTO_HDR_GTPU_DL		= 44,
+	VIRTCHNL2_PROTO_HDR_ECPRI		= 45,
+	VIRTCHNL2_PROTO_HDR_VRRP		= 46,
+	VIRTCHNL2_PROTO_HDR_OSPF		= 47,
+	/* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_TUN			= 48,
+	VIRTCHNL2_PROTO_HDR_GRE			= 49,
+	VIRTCHNL2_PROTO_HDR_NVGRE		= 50,
+	VIRTCHNL2_PROTO_HDR_VXLAN		= 51,
+	VIRTCHNL2_PROTO_HDR_VXLAN_GPE		= 52,
+	VIRTCHNL2_PROTO_HDR_GENEVE		= 53,
+	VIRTCHNL2_PROTO_HDR_NSH			= 54,
+	VIRTCHNL2_PROTO_HDR_QUIC		= 55,
+	VIRTCHNL2_PROTO_HDR_PFCP		= 56,
+	VIRTCHNL2_PROTO_HDR_PFCP_NODE		= 57,
+	VIRTCHNL2_PROTO_HDR_PFCP_SESSION	= 58,
+	VIRTCHNL2_PROTO_HDR_RTP			= 59,
+	VIRTCHNL2_PROTO_HDR_ROCE		= 60,
+	VIRTCHNL2_PROTO_HDR_ROCEV1		= 61,
+	VIRTCHNL2_PROTO_HDR_ROCEV2		= 62,
+	/* Protocol ids up to 32767 are reserved */
+	/* 32768 - 65534 are used for user defined protocol ids */
+	/* VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id */
+	VIRTCHNL2_PROTO_HDR_NO_PROTO		= 65535,
+};
+
+enum virtchl2_version {
+	VIRTCHNL2_VERSION_MINOR_0		= 0,
+	VIRTCHNL2_VERSION_MAJOR_2		= 2,
+};
+
+/**
+ * struct virtchnl2_version_info - Version information
+ * @major: Major version
+ * @minor: Minor version
+ *
  * PF/VF posts its version number to the CP. CP responds with its version number
  * in the same format, along with a return code.
  * If there is a major version mismatch, then the PF/VF cannot operate.
@@ -466,6 +564,8 @@
  * This version opcode MUST always be specified as == 1, regardless of other
  * changes in the API. The CP must always respond to this message without
  * error regardless of version mismatch.
+ *
+ * Associated with VIRTCHNL2_OP_VERSION.
  */
 struct virtchnl2_version_info {
 	__le32 major;
@@ -474,7 +574,39 @@ struct virtchnl2_version_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
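
[Editor's note, not part of the patch: the version-negotiation rule documented
above — major mismatch is fatal, minor mismatch resolves to the lower value —
can be sketched as below. The struct mirror and helper name are illustrative,
not the driver's actual code; wire fields are really __le32.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical host-order mirror of virtchnl2_version_info. */
struct version_info {
	uint32_t major;
	uint32_t minor;
};

/* Sketch of the negotiation rule described in the comment above:
 * a major-version mismatch means the PF/VF cannot operate; on a
 * minor mismatch both sides operate at the lower minor version. */
static bool negotiate(const struct version_info *ours,
		      const struct version_info *theirs,
		      struct version_info *agreed)
{
	if (ours->major != theirs->major)
		return false;
	agreed->major = ours->major;
	agreed->minor = ours->minor < theirs->minor ? ours->minor
						    : theirs->minor;
	return true;
}
```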
 
-/* VIRTCHNL2_OP_GET_CAPS
+/**
+ * struct virtchnl2_get_capabilities - Capabilities info
+ * @csum_caps: See enum virtchnl2_cap_txrx_csum
+ * @seg_caps: See enum virtchnl2_cap_seg
+ * @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at
+ * @rsc_caps: See enum virtchnl2_cap_rsc
+ * @rss_caps: See enum virtchnl2_cap_rss
+ * @other_caps: See enum virtchnl2_cap_other
+ * @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
+ *		     provided by CP.
+ * @mailbox_vector_id: Mailbox vector id
+ * @num_allocated_vectors: Maximum number of allocated vectors for the device
+ * @max_rx_q: Maximum number of supported Rx queues
+ * @max_tx_q: Maximum number of supported Tx queues
+ * @max_rx_bufq: Maximum number of supported buffer queues
+ * @max_tx_complq: Maximum number of supported completion queues
+ * @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
+ *		   responds with the maximum VFs granted.
+ * @max_vports: Maximum number of vports that can be supported
+ * @default_num_vports: Default number of vports driver should allocate on load
+ * @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes
+ * @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
+ *			    sent per transmit packet without needing to be
+ *			    linearized.
+ * @reserved: Reserved field
+ * @max_adis: Max number of ADIs
+ * @device_type: See enum virtchnl2_device_type
+ * @min_sso_packet_len: Min packet length supported by device for single
+ *			segment offload
+ * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
+ *			 an LSO
+ * @pad1: Padding for future extensions
+ *
  * Dataplane driver sends this message to CP to negotiate capabilities and
  * provides a virtchnl2_get_capabilities structure with its desired
  * capabilities, max_sriov_vfs and num_allocated_vectors.
@@ -492,60 +624,30 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
  * mailbox_vector_id and the number of itr index registers in itr_idx_map.
  * It also responds with default number of vports that the dataplane driver
 * should come up with in default_num_vports and maximum number of vports that
- * can be supported in max_vports
+ * can be supported in max_vports.
+ *
+ * Associated with VIRTCHNL2_OP_GET_CAPS.
  */
 struct virtchnl2_get_capabilities {
-	/* see VIRTCHNL2_CHECKSUM_OFFLOAD_CAPS definitions */
 	__le32 csum_caps;
-
-	/* see VIRTCHNL2_SEGMENTATION_OFFLOAD_CAPS definitions */
 	__le32 seg_caps;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 hsplit_caps;
-
-	/* see VIRTCHNL2_RSC_OFFLOAD_CAPS definitions */
 	__le32 rsc_caps;
-
-	/* see VIRTCHNL2_RSS_FLOW_TYPE_CAPS definitions  */
 	__le64 rss_caps;
-
-
-	/* see VIRTCHNL2_OTHER_CAPS definitions  */
 	__le64 other_caps;
-
-	/* DYN_CTL register offset and vector id for mailbox provided by CP */
 	__le32 mailbox_dyn_ctl;
 	__le16 mailbox_vector_id;
-	/* Maximum number of allocated vectors for the device */
 	__le16 num_allocated_vectors;
-
-	/* Maximum number of queues that can be supported */
 	__le16 max_rx_q;
 	__le16 max_tx_q;
 	__le16 max_rx_bufq;
 	__le16 max_tx_complq;
-
-	/* The PF sends the maximum VFs it is requesting. The CP responds with
-	 * the maximum VFs granted.
-	 */
 	__le16 max_sriov_vfs;
-
-	/* maximum number of vports that can be supported */
 	__le16 max_vports;
-	/* default number of vports driver should allocate on load */
 	__le16 default_num_vports;
-
-	/* Max header length hardware can parse/checksum, in bytes */
 	__le16 max_tx_hdr_size;
-
-	/* Max number of scatter gather buffers that can be sent per transmit
-	 * packet without needing to be linearized
-	 */
 	u8 max_sg_bufs_per_tx_pkt;
-
-	u8 reserved1;
-	/* upper bound of number of ADIs supported */
+	u8 reserved;
 	__le16 max_adis;
 
 	/* version of Control Plane that is running */
@@ -553,10 +655,7 @@ struct virtchnl2_get_capabilities {
 	__le16 oem_cp_ver_minor;
 	/* see VIRTCHNL2_DEVICE_TYPE definitions */
 	__le32 device_type;
-
-	/* min packet length supported by device for single segment offload */
 	u8 min_sso_packet_len;
-	/* max number of header buffers that can be used for an LSO */
 	u8 max_hdr_buf_per_lso;
 
 	u8 pad1[10];
@@ -564,14 +663,21 @@ struct virtchnl2_get_capabilities {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
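
[Editor's note, not part of the patch: the GET_CAPS handshake above is
request/grant — the driver asks for a capability set and may only use what the
CP echoes back. A minimal sketch follows; the bit values are placeholders, not
the real enum virtchnl2_cap_other values.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CAP_RSS		(1ULL << 0)	/* placeholder bit, not the real value */
#define CAP_SPLITQ	(1ULL << 1)	/* placeholder bit, not the real value */

/* A capability is usable only if the driver requested it AND the CP
 * granted it in its response. */
static bool cap_granted(uint64_t requested, uint64_t granted, uint64_t cap)
{
	return (requested & cap) && (granted & cap);
}
```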
 
+/**
+ * struct virtchnl2_queue_reg_chunk - Single queue chunk
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Start Queue ID
+ * @num_queues: Number of queues in the chunk
+ * @pad: Padding
+ * @qtail_reg_start: Queue tail register offset
+ * @qtail_reg_spacing: Queue tail register spacing
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_reg_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
 	__le32 pad;
-
-	/* Queue tail register offset and spacing provided by CP */
 	__le64 qtail_reg_start;
 	__le32 qtail_reg_spacing;
 
@@ -580,7 +686,13 @@ struct virtchnl2_queue_reg_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
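
[Editor's note, not part of the patch: a queue register chunk gives a tail
register base plus a spacing, so a queue's tail offset is base plus index
times spacing. The helper below is a sketch of that arithmetic, assuming the
spacing applies per queue index relative to start_queue_id; the function name
is hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

/* Compute the tail-register offset for queue_id inside a chunk, using
 * the qtail_reg_start/qtail_reg_spacing fields provided by the CP. */
static uint64_t qtail_offset(uint64_t qtail_reg_start,
			     uint32_t qtail_reg_spacing,
			     uint32_t start_queue_id, uint32_t queue_id)
{
	return qtail_reg_start +
	       (uint64_t)(queue_id - start_queue_id) * qtail_reg_spacing;
}
```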
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
+ *				       queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info
+ */
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -589,77 +701,91 @@ struct virtchnl2_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
 
-/* VIRTCHNL2_VPORT_FLAGS */
-#define VIRTCHNL2_VPORT_UPLINK_PORT		BIT(0)
-#define VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	BIT(1)
+/**
+ * enum virtchnl2_vport_flags - Vport flags
+ * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ */
+enum virtchnl2_vport_flags {
+	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+};
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
 
-/* VIRTCHNL2_OP_CREATE_VPORT
- * PF sends this message to CP to create a vport by filling in required
+
+/**
+ * struct virtchnl2_create_vport - Create vport config info
+ * @vport_type: See enum virtchnl2_vport_type
+ * @txq_model: See enum virtchnl2_queue_model
+ * @rxq_model: See enum virtchnl2_queue_model
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Valid only if txq_model is split queue
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Valid only if rxq_model is split queue
+ * @default_rx_q: Relative receive queue index to be used as default
+ * @vport_index: Used to align PF and CP in case of default multiple vports,
+ *		 it is filled by the PF and CP returns the same value, to
+ *		 enable the driver to support multiple asynchronous parallel
+ *		 CREATE_VPORT requests and associate a response to a specific
+ *		 request.
+ * @max_mtu: Max MTU. CP populates this field on response
+ * @vport_id: Vport id. CP populates this field on response
+ * @default_mac_addr: Default MAC address
+ * @vport_flags: See enum virtchnl2_vport_flags
+ * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
+ * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
+ * @reserved: Reserved bytes and cannot be used
+ * @rss_algorithm: RSS algorithm
+ * @rss_key_size: RSS key size
+ * @rss_lut_size: RSS LUT size
+ * @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to CP to create a vport by filling in required
  * fields of virtchnl2_create_vport structure.
  * CP responds with the updated virtchnl2_create_vport structure containing the
  * necessary fields followed by chunks which in turn will have an array of
  * num_chunks entries of virtchnl2_queue_chunk structures.
  */
 struct virtchnl2_create_vport {
-	/* PF/VF populates the following fields on request */
-	/* see VIRTCHNL2_VPORT_TYPE definitions */
 	__le16 vport_type;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 txq_model;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 rxq_model;
 	__le16 num_tx_q;
-	/* valid only if txq_model is split queue */
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
-	/* valid only if rxq_model is split queue */
 	__le16 num_rx_bufq;
-	/* relative receive queue index to be used as default */
 	__le16 default_rx_q;
-	/* used to align PF and CP in case of default multiple vports, it is
-	 * filled by the PF and CP returns the same value, to enable the driver
-	 * to support multiple asynchronous parallel CREATE_VPORT requests and
-	 * associate a response to a specific request
-	 */
 	__le16 vport_index;
-
-	/* CP populates the following fields on response */
 	__le16 max_mtu;
 	__le32 vport_id;
 	u8 default_mac_addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_VPORT_FLAGS definitions */
 	__le16 vport_flags;
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 rx_desc_ids;
-	/* see VIRTCHNL2_TX_DESC_IDS definitions */
 	__le64 tx_desc_ids;
-
-	u8 reserved1[72];
-
-	/* see VIRTCHNL2_RSS_ALGORITHM definitions */
+	u8 reserved[72];
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-
-	/* see VIRTCHNL2_HEADER_SPLIT_CAPS definitions */
 	__le32 rx_split_pos;
-
-	u8 pad2[20];
+	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
 VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
 
-/* VIRTCHNL2_OP_DESTROY_VPORT
- * VIRTCHNL2_OP_ENABLE_VPORT
- * VIRTCHNL2_OP_DISABLE_VPORT
- * PF sends this message to CP to destroy, enable or disable a vport by filling
- * in the vport_id in virtchnl2_vport structure.
+/**
+ * struct virtchnl2_vport - Vport identifier information
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends this message to CP to destroy, enable or disable a vport by
+ * filling in the vport_id in virtchnl2_vport structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
+ * VIRTCHNL2_OP_DISABLE_VPORT.
  */
 struct virtchnl2_vport {
 	__le32 vport_id;
@@ -668,42 +794,43 @@ struct virtchnl2_vport {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
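
[Editor's note, not part of the patch: the VIRTCHNL2_CHECK_STRUCT_LEN
invocations throughout this header enforce a fixed, padding-free wire layout
at compile time. A sketch of such a macro — not the driver's actual
definition — using the negative-array-size trick:]

```c
#include <assert.h>
#include <stdint.h>

/* Compile-time size check in the spirit of VIRTCHNL2_CHECK_STRUCT_LEN:
 * the typedef fails to compile if sizeof(struct X) != n. */
#define CHECK_STRUCT_LEN(n, X) \
	typedef char check_##X[(sizeof(struct X) == (n)) ? 1 : -1]

/* Demo struct mirroring virtchnl2_vport's layout: 4-byte id + 4 pad. */
struct demo_vport {
	uint32_t vport_id;
	uint8_t pad[4];
};
CHECK_STRUCT_LEN(8, demo_vport);
```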
 
-/* Transmit queue config info */
+/**
+ * struct virtchnl2_txq_info - Transmit queue config info
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue ID
+ * @relative_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue. Used in many to one mapping of transmit queues to
+ *		       completion queue.
+ * @model: See enum virtchnl2_queue_model
+ * @sched_mode: See enum virtchnl2_txq_sched_mode
+ * @qflags: TX queue feature flags
+ * @ring_len: Ring length
+ * @tx_compl_queue_id: Valid only if queue model is split and type is transmit
+ *		       queue.
+ * @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX
+ * @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
+ *		      messages for the respective CONFIG_TX queue.
+ * @pad: Padding
+ * @egress_pasid: Egress PASID info
+ * @egress_hdr_pasid: Egress header PASID
+ * @egress_buf_pasid: Egress buffer PASID
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_txq_info {
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
-
 	__le32 queue_id;
-	/* valid only if queue model is split and type is transmit queue. Used
-	 * in many to one mapping of transmit queues to completion queue
-	 */
 	__le16 relative_queue_id;
-
-	/* see VIRTCHNL2_QUEUE_MODEL definitions */
 	__le16 model;
-
-	/* see VIRTCHNL2_TXQ_SCHED_MODE definitions */
 	__le16 sched_mode;
-
-	/* see VIRTCHNL2_TXQ_FLAGS definitions */
 	__le16 qflags;
 	__le16 ring_len;
-
-	/* valid only if queue model is split and type is transmit queue */
 	__le16 tx_compl_queue_id;
-	/* valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MAILBOX_TX */
-	/* see VIRTCHNL2_PEER_TYPE definitions */
 	__le16 peer_type;
-	/* valid only if queue type is CONFIG_TX and used to deliver messages
-	 * for the respective CONFIG_TX queue
-	 */
 	__le16 peer_rx_queue_id;
 
 	u8 pad[4];
-
-	/* Egress pasid is used for SIOV use case */
 	__le32 egress_pasid;
 	__le32 egress_hdr_pasid;
 	__le32 egress_buf_pasid;
@@ -713,12 +840,20 @@ struct virtchnl2_txq_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
 
-/* VIRTCHNL2_OP_CONFIG_TX_QUEUES
- * PF sends this message to set up parameters for one or more transmit queues.
- * This message contains an array of num_qinfo instances of virtchnl2_txq_info
- * structures. CP configures requested queues and returns a status code. If
- * num_qinfo specified is greater than the number of queues associated with the
- * vport, an error is returned and no queues are configured.
+/**
+ * struct virtchnl2_config_tx_queues - TX queue config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of virtchnl2_txq_info structs
+ * @pad: Padding for future extensions
+ * @qinfo: Tx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more transmit
+ * queues. This message contains an array of num_qinfo instances of
+ * virtchnl2_txq_info structures. CP configures requested queues and returns
+ * a status code. If num_qinfo specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
  */
 struct virtchnl2_config_tx_queues {
 	__le32 vport_id;
@@ -730,47 +865,55 @@ struct virtchnl2_config_tx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
 
-/* Receive queue config info */
+/**
+ * struct virtchnl2_rxq_info - Receive queue config info
+ * @desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
+ * @dma_ring_addr: DMA address
+ * @type: See enum virtchnl2_queue_type
+ * @queue_id: Queue id
+ * @model: See enum virtchnl2_queue_model
+ * @hdr_buffer_size: Header buffer size
+ * @data_buffer_size: Data buffer size
+ * @max_pkt_size: Max packet size
+ * @ring_len: Ring length
+ * @buffer_notif_stride: Buffer notification stride in units of 32-descriptors.
+ *			 This field must be a power of 2.
+ * @pad: Padding
+ * @dma_head_wb_addr: Applicable only for receive buffer queues
+ * @qflags: Applicable only for receive completion queues.
+ *	    See enum virtchnl2_rxq_flags.
+ * @rx_buffer_low_watermark: Rx buffer low watermark
+ * @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
+ *		 the Rx queue. Valid only in split queue model.
+ * @bufq2_ena: Indicates whether there is a second buffer queue; rx_bufq2_id is
+ *	       valid only if this field is set.
+ * @pad1: Padding
+ * @ingress_pasid: Ingress PASID
+ * @ingress_hdr_pasid: Ingress header PASID
+ * @ingress_buf_pasid: Ingress buffer PASID
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_rxq_info {
-	/* see VIRTCHNL2_RX_DESC_IDS definitions */
 	__le64 desc_ids;
 	__le64 dma_ring_addr;
-
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 queue_id;
-
-	/* see QUEUE_MODEL definitions */
 	__le16 model;
-
 	__le16 hdr_buffer_size;
 	__le32 data_buffer_size;
 	__le32 max_pkt_size;
-
 	__le16 ring_len;
 	u8 buffer_notif_stride;
 	u8 pad;
-
-	/* Applicable only for receive buffer queues */
 	__le64 dma_head_wb_addr;
-
-	/* Applicable only for receive completion queues */
-	/* see VIRTCHNL2_RXQ_FLAGS definitions */
 	__le16 qflags;
-
 	__le16 rx_buffer_low_watermark;
-
-	/* valid only in split queue model */
 	__le16 rx_bufq1_id;
-	/* valid only in split queue model */
 	__le16 rx_bufq2_id;
-	/* it indicates if there is a second buffer, rx_bufq2_id is valid only
-	 * if this field is set
-	 */
 	u8 bufq2_ena;
 	u8 pad1[3];
-
-	/* Ingress pasid is used for SIOV use case */
 	__le32 ingress_pasid;
 	__le32 ingress_hdr_pasid;
 	__le32 ingress_buf_pasid;
@@ -779,12 +922,20 @@ struct virtchnl2_rxq_info {
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
 
-/* VIRTCHNL2_OP_CONFIG_RX_QUEUES
- * PF sends this message to set up parameters for one or more receive queues.
+/**
+ * struct virtchnl2_config_rx_queues - Rx queues config
+ * @vport_id: Vport id
+ * @num_qinfo: Number of instances
+ * @pad: Padding for future extensions
+ * @qinfo: Rx queues config info
+ *
+ * PF/VF sends this message to set up parameters for one or more receive queues.
  * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
  * structures. CP configures requested queues and returns a status code.
  * If the number of queues specified is greater than the number of queues
  * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
  */
 struct virtchnl2_config_rx_queues {
 	__le32 vport_id;
@@ -796,12 +947,23 @@ struct virtchnl2_config_rx_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
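
[Editor's note, not part of the patch: the CONFIG_RX/TX_QUEUES messages end in
a trailing array sized by num_qinfo, declared with the legacy one-element-array
idiom, so the mailbox buffer is the base struct plus (n - 1) extra elements.
A sketch of that sizing; the helper is hypothetical, and the 112/88 figures
come from the VIRTCHNL2_CHECK_STRUCT_LEN assertions for
virtchnl2_config_rx_queues and virtchnl2_rxq_info.]

```c
#include <assert.h>
#include <stddef.h>

/* Message size for a struct whose last member is a one-element array:
 * base already contains one element, so add (n - 1) more. */
static size_t msg_size(size_t base, size_t elem, size_t n)
{
	return base + (n ? n - 1 : 0) * elem;
}
```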
 
-/* VIRTCHNL2_OP_ADD_QUEUES
- * PF sends this message to request additional transmit/receive queues beyond
+/**
+ * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
+ * @vport_id: Vport id
+ * @num_tx_q: Number of Tx queues
+ * @num_tx_complq: Number of Tx completion queues
+ * @num_rx_q: Number of Rx queues
+ * @num_rx_bufq: Number of Rx buffer queues
+ * @pad: Padding for future extensions
+ * @chunks: Chunks of contiguous queues
+ *
+ * PF/VF sends this message to request additional transmit/receive queues beyond
  * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
  * structure is used to specify the number of each type of queues.
  * CP responds with the same structure with the actual number of queues assigned
  * followed by num_chunks of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUES.
  */
 struct virtchnl2_add_queues {
 	__le32 vport_id;
@@ -817,65 +979,81 @@ struct virtchnl2_add_queues {
 VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
 
 /* Queue Groups Extension */
-
+/**
+ * struct virtchnl2_rx_queue_group_info - RX queue group info
+ * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
+ *		  allocated by CreateVport command. New size will be returned
+ *		  if allocation succeeded, otherwise original rss_size from
+ *		  CreateVport will be returned.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_rx_queue_group_info {
-	/* IN/OUT, user can ask to update rss_lut size originally allocated
-	 * by CreateVport command. New size will be returned if allocation
-	 * succeeded, otherwise original rss_size from CreateVport will
-	 * be returned.
-	 */
 	__le16 rss_lut_size;
-	/* Future extension purpose */
 	u8 pad[6];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
 
+/**
+ * struct virtchnl2_tx_queue_group_info - TX queue group info
+ * @tx_tc: TX TC queue group will be connected to
+ * @priority: Each group can have its own priority, value 0-7, while each group
+ *	      with a unique priority is strict priority. A set of queue groups
+ *	      configured with the same priority is assumed to be part of a WFQ
+ *	      arbitration group and is expected to be assigned a weight.
+ * @is_sp: Determines if queue group is expected to be Strict Priority according
+ *	   to its priority.
+ * @pad: Padding
+ * @pir_weight: Peak Info Rate Weight in case Queue Group is part of WFQ
+ *		arbitration set.
+ *		The weights of the groups are independent of each other.
+ *		Possible values: 1-200
+ * @cir_pad: Future extension purpose for CIR only
+ * @pad2: Padding for future extensions
+ */
 struct virtchnl2_tx_queue_group_info { /* IN */
-	/* TX TC queue group will be connected to */
 	u8 tx_tc;
-	/* Each group can have its own priority, value 0-7, while each group
-	 * with unique priority is strict priority.
-	 * It can be single set of queue groups which configured with
-	 * same priority, then they are assumed part of WFQ arbitration
-	 * group and are expected to be assigned with weight.
-	 */
 	u8 priority;
-	/* Determines if queue group is expected to be Strict Priority
-	 * according to its priority
-	 */
 	u8 is_sp;
 	u8 pad;
-
-	/* Peak Info Rate Weight in case Queue Group is part of WFQ
-	 * arbitration set.
-	 * The weights of the groups are independent of each other.
-	 * Possible values: 1-200
-	 */
 	__le16 pir_weight;
-	/* Future extension purpose for CIR only */
 	u8 cir_pad[2];
-	/* Future extension purpose*/
 	u8 pad2[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_tx_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_group_id - Queue group ID
+ * @queue_group_id: Queue group ID, dependent on its type.
+ *		    Data: an ID relative to the vport.
+ *		    Config & Mailbox: an ID relative to the function.
+ *		    This ID is used in future calls, e.g. delete.
+ *		    Requested by the host and assigned by the Control Plane.
+ * @queue_group_type: Functional type: See enum virtchnl2_queue_group_type
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_group_id {
-	/* Queue group ID - depended on it's type
-	 * Data: is an ID which is relative to Vport
-	 * Config & Mailbox: is an ID which is relative to func.
-	 * This ID is use in future calls, i.e. delete.
-	 * Requested by host and assigned by Control plane.
-	 */
 	__le16 queue_group_id;
-	/* Functional type: see VIRTCHNL2_QUEUE_GROUP_TYPE definitions */
 	__le16 queue_group_type;
 	u8 pad[4];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 
+/**
+ * struct virtchnl2_queue_group_info - Queue group info
+ * @qg_id: Queue group ID
+ * @num_tx_q: Number of TX queues
+ * @num_tx_complq: Number of completion queues
+ * @num_rx_q: Number of RX queues
+ * @num_rx_bufq: Number of RX buffer queues
+ * @tx_q_grp_info: TX queue group info
+ * @rx_q_grp_info: RX queue group info
+ * @pad: Padding for future extensions
+ * @chunks: Queue register chunks
+ */
 struct virtchnl2_queue_group_info {
 	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
@@ -887,13 +1065,18 @@ struct virtchnl2_queue_group_info {
 
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
-	/* Future extension purpose */
 	u8 pad[40];
 	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
 
+/**
+ * struct virtchnl2_queue_groups - Queue groups list
+ * @num_queue_groups: Total number of queue groups
+ * @pad: Padding for future extensions
+ * @groups: Array of queue group info
+ */
 struct virtchnl2_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[6];
@@ -902,78 +1085,107 @@ struct virtchnl2_queue_groups {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
 
-/* VIRTCHNL2_OP_ADD_QUEUE_GROUPS
+/**
+ * struct virtchnl2_add_queue_groups - Add queue groups
+ * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @pad: Padding for future extensions
+ * @qg_info: IN/OUT. List of all the queue groups
+ *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
  * groups and queues assigned followed by num_queue_groups and num_chunks of
  * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
-	/* IN, vport_id to add queue group to, same as allocated by CreateVport.
-	 * NA for mailbox and other types not assigned to vport
-	 */
 	__le32 vport_id;
 	u8 pad[4];
-	/* IN/OUT */
 	struct virtchnl2_queue_groups qg_info;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
-/* VIRTCHNL2_OP_DEL_QUEUE_GROUPS
+/**
+ * struct virtchnl2_delete_queue_groups - Delete queue groups
+ * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ *	      CreateVport.
+ * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @pad: Padding
+ * @qg_ids: IN, IDs & types of Queue Groups to delete
+ *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
  * to be deleted. CP performs requested action and returns status and update
  * num_queue_groups with number of successfully deleted queue groups.
+ *
+ * Associated with VIRTCHNL2_OP_DEL_QUEUE_GROUPS.
  */
 struct virtchnl2_delete_queue_groups {
-	/* IN, vport_id to delete queue group from, same as
-	 * allocated by CreateVport.
-	 */
 	__le32 vport_id;
-	/* IN/OUT, Defines number of groups provided below */
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	/* IN, IDs & types of Queue Groups to delete */
 	struct virtchnl2_queue_group_id qg_ids[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
 
-/* Structure to specify a chunk of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
+ *				   interrupt vectors.
+ * @start_vector_id: Start vector id
+ * @start_evv_id: Start EVV id
+ * @num_vectors: Number of vectors
+ * @pad: Padding
+ * @dynctl_reg_start: DYN_CTL register offset
+ * @dynctl_reg_spacing: Register spacing between DYN_CTL registers of 2
+ *			consecutive vectors.
+ * @itrn_reg_start: ITRN register offset
+ * @itrn_reg_spacing: Register spacing between dynctl registers of 2
+ *		      consecutive vectors.
+ * @itrn_index_spacing: Register spacing between itrn registers of the same
+ *			vector where n=0..2.
+ * @pad1: Padding for future extensions
+ *
+ * Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
 struct virtchnl2_vector_chunk {
 	__le16 start_vector_id;
 	__le16 start_evv_id;
 	__le16 num_vectors;
 	__le16 pad;
 
-	/* Register offsets and spacing provided by CP.
-	 * dynamic control registers are used for enabling/disabling/re-enabling
-	 * interrupts and updating interrupt rates in the hotpath. Any changes
-	 * to interrupt rates in the dynamic control registers will be reflected
-	 * in the interrupt throttling rate registers.
-	 * itrn registers are used to update interrupt rates for specific
-	 * interrupt indices without modifying the state of the interrupt.
-	 */
 	__le32 dynctl_reg_start;
-	/* register spacing between dynctl registers of 2 consecutive vectors */
 	__le32 dynctl_reg_spacing;
 
 	__le32 itrn_reg_start;
-	/* register spacing between itrn registers of 2 consecutive vectors */
 	__le32 itrn_reg_spacing;
-	/* register spacing between itrn registers of the same vector
-	 * where n=0..2
-	 */
 	__le32 itrn_index_spacing;
 	u8 pad1[4];
 };
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
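
[Editor's note, not part of the patch: the comment above describes a
two-level stride — per-vector spacing for DYN_CTL/ITRN registers, plus a
per-index spacing for a vector's ITRn registers (n = 0..2). Illustrative
address arithmetic below; the helper is hypothetical.]

```c
#include <assert.h>
#include <stdint.h>

/* Offset of ITRn register itr_idx (0..2) of vector vec_idx, per the
 * itrn_reg_start/itrn_reg_spacing/itrn_index_spacing fields the CP
 * provides in virtchnl2_vector_chunk. */
static uint32_t itrn_offset(uint32_t itrn_reg_start,
			    uint32_t itrn_reg_spacing,
			    uint32_t itrn_index_spacing,
			    uint16_t vec_idx, uint8_t itr_idx)
{
	return itrn_reg_start + vec_idx * itrn_reg_spacing +
	       itr_idx * itrn_index_spacing;
}
```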
 
-/* Structure to specify several chunks of contiguous interrupt vectors */
+/**
+ * struct virtchnl2_vector_chunks - Chunks of contiguous interrupt vectors
+ * @num_vchunks: number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends virtchnl2_vector_chunks struct to specify the vectors it is
+ * giving away. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
+ */
 struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -983,12 +1195,19 @@ struct virtchnl2_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
 
-/* VIRTCHNL2_OP_ALLOC_VECTORS
- * PF sends this message to request additional interrupt vectors beyond the
+/**
+ * struct virtchnl2_alloc_vectors - Vector allocation info
+ * @num_vectors: Number of vectors
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info
+ *
+ * PF/VF sends this message to request additional interrupt vectors beyond the
  * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
  * structure is used to specify the number of vectors requested. CP responds
  * with the same structure with the actual number of vectors assigned followed
  * by virtchnl2_vector_chunks structure identifying the vector ids.
+ *
+ * Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
  */
 struct virtchnl2_alloc_vectors {
 	__le16 num_vectors;
@@ -999,46 +1218,46 @@ struct virtchnl2_alloc_vectors {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
 
-/* VIRTCHNL2_OP_DEALLOC_VECTORS
- * PF sends this message to release the vectors.
- * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
- * away. CP performs requested action and returns status.
- */
-
-/* VIRTCHNL2_OP_GET_RSS_LUT
- * VIRTCHNL2_OP_SET_RSS_LUT
- * PF sends this message to get or set RSS lookup table. Only supported if
+/**
+ * struct virtchnl2_rss_lut - RSS LUT info
+ * @vport_id: Vport id
+ * @lut_entries_start: Start of LUT entries
+ * @lut_entries: Number of LUT entries
+ * @pad: Padding
+ * @lut: RSS lookup table
+ *
+ * PF/VF sends this message to get or set RSS lookup table. Only supported if
  * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_lut structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
  */
 struct virtchnl2_rss_lut {
 	__le32 vport_id;
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	/* RSS lookup table */
 	__le32 lut[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
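
[Editor's note, not part of the patch: virtchnl2_rss_lut carries a trailing
__le32 lut[] table indexed by hash. A common default, sketched below, is to
fill it round-robin across the vport's Rx queues before SET_RSS_LUT; the fill
policy is an assumption for illustration, not mandated by the spec.]

```c
#include <assert.h>
#include <stdint.h>

/* Fill an RSS lookup table round-robin across num_queues queues. */
static void rss_lut_fill(uint32_t *lut, uint16_t lut_entries,
			 uint16_t num_queues)
{
	for (uint16_t i = 0; i < lut_entries; i++)
		lut[i] = i % num_queues;
}
```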
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * PF sends this message to get RSS key. Only supported if both PF and CP
- * drivers set the VIRTCHNL2_CAP_RSS bit during configuration negotiation. Uses
- * the virtchnl2_rss_key structure
- */
-
-/* VIRTCHNL2_OP_GET_RSS_HASH
- * VIRTCHNL2_OP_SET_RSS_HASH
- * PF sends these messages to get and set the hash filter enable bits for RSS.
- * By default, the CP sets these to all possible traffic types that the
+/**
+ * struct virtchnl2_rss_hash - RSS hash info
+ * @ptype_groups: Packet type groups bitmap
+ * @vport_id: Vport id
+ * @pad: Padding for future extensions
+ *
+ * PF/VF sends these messages to get and set the hash filter enable bits for
+ * RSS. By default, the CP sets these to all possible traffic types that the
  * hardware supports. The PF can query this value if it wants to change the
  * traffic types that are hashed by the hardware.
  * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
  * during configuration negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH.
  */
 struct virtchnl2_rss_hash {
-	/* Packet Type Groups bitmap */
 	__le64 ptype_groups;
 	__le32 vport_id;
 	u8 pad[4];
@@ -1046,12 +1265,18 @@ struct virtchnl2_rss_hash {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
 
-/* VIRTCHNL2_OP_SET_SRIOV_VFS
+/**
+ * struct virtchnl2_sriov_vfs_info - VFs info
+ * @num_vfs: Number of VFs
+ * @pad: Padding for future extensions
+ *
  * This message is used to set number of SRIOV VFs to be created. The actual
  * allocation of resources for the VFs in terms of vport, queues and interrupts
- * is done by CP. When this call completes, the APF driver calls
+ * is done by CP. When this call completes, the IDPF driver calls
  * pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
  * The number of VFs set to 0 will destroy all the VFs of this function.
+ *
+ * Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
  */
 
 struct virtchnl2_sriov_vfs_info {
@@ -1061,8 +1286,14 @@ struct virtchnl2_sriov_vfs_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
 
-/* structure to specify single chunk of queue */
-/* 'chunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_queue_reg_chunks - Specify several chunks of
+ *						contiguous queues.
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of queue info. 'chunks' is fixed size(not flexible) and
+ *	    will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1071,8 +1302,14 @@ struct virtchnl2_non_flex_queue_reg_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_non_flex_queue_reg_chunks);
 
-/* structure to specify single chunk of interrupt vector */
-/* 'vchunks' is fixed size(not flexible) and will be deprecated at some point */
+/**
+ * struct virtchnl2_non_flex_vector_chunks - Chunks of contiguous interrupt
+ *					     vectors.
+ * @num_vchunks: Number of vector chunks
+ * @pad: Padding for future extensions
+ * @vchunks: Chunks of contiguous vector info. 'vchunks' is fixed size
+ *	     (not flexible) and will be deprecated at some point.
+ */
 struct virtchnl2_non_flex_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
@@ -1081,40 +1318,49 @@ struct virtchnl2_non_flex_vector_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_non_flex_vector_chunks);
 
-/* VIRTCHNL2_OP_NON_FLEX_CREATE_ADI
+/**
+ * struct virtchnl2_non_flex_create_adi - Create ADI
+ * @pasid: PF sends PASID to CP
+ * @mbx_id: mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
+ *	    id else it is set to 0 by PF.
+ * @mbx_vec_id: PF sends mailbox vector id to CP
+ * @adi_index: PF populates this ADI index
+ * @adi_id: CP populates ADI id
+ * @pad: Padding
+ * @chunks: CP populates queue chunks
+ * @vchunks: PF sends vector chunks to CP
+ *
  * PF sends this message to CP to create ADI by filling in required
  * fields of virtchnl2_non_flex_create_adi structure.
- * CP responds with the updated virtchnl2_non_flex_create_adi structure containing
- * the necessary fields followed by chunks which in turn will have an array of
- * num_chunks entries of virtchnl2_queue_chunk structures.
+ * CP responds with the updated virtchnl2_non_flex_create_adi structure
+ * containing the necessary fields followed by chunks which in turn will have
+ * an array of num_chunks entries of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_CREATE_ADI.
  */
 struct virtchnl2_non_flex_create_adi {
-	/* PF sends PASID to CP */
 	__le32 pasid;
-	/*
-	 * mbx_id is set to 1 by PF when requesting CP to provide HW mailbox
-	 * id else it is set to 0 by PF
-	 */
 	__le16 mbx_id;
-	/* PF sends mailbox vector id to CP */
 	__le16 mbx_vec_id;
-	/* PF populates this ADI index */
 	__le16 adi_index;
-	/* CP populates ADI id */
 	__le16 adi_id;
 	u8 pad[68];
-	/* CP populates queue chunks */
 	struct virtchnl2_non_flex_queue_reg_chunks chunks;
-	/* PF sends vector chunks to CP */
 	struct virtchnl2_non_flex_vector_chunks vchunks;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(168, virtchnl2_non_flex_create_adi);
 
-/* VIRTCHNL2_OP_DESTROY_ADI
+/**
+ * struct virtchnl2_non_flex_destroy_adi - Destroy ADI
+ * @adi_id: ADI id to destroy
+ * @pad: Padding
+ *
  * PF sends this message to CP to destroy ADI by filling
  * in the adi_id in virtchnl2_destropy_adi structure.
  * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI.
  */
 struct virtchnl2_non_flex_destroy_adi {
 	__le16 adi_id;
@@ -1123,7 +1369,17 @@ struct virtchnl2_non_flex_destroy_adi {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 
-/* Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+/**
+ * struct virtchnl2_ptype - Packet type info
+ * @ptype_id_10: 10-bit packet type
+ * @ptype_id_8: 8-bit packet type
+ * @proto_id_count: Number of protocol ids the packet supports, maximum of 32
+ *		    protocol ids are supported.
+ * @pad: Padding
+ * @proto_id: proto_id_count decides the allocation of protocol id array.
+ *	      See enum virtchnl2_proto_hdr_type.
+ *
+ * Based on the descriptor type the PF supports, CP fills ptype_id_10 or
  * ptype_id_8 for flex and base descriptor respectively. If ptype_id_10 value
  * is set to 0xFFFF, PF should consider this ptype as dummy one and it is the
  * last ptype.
@@ -1131,32 +1387,42 @@ VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_non_flex_destroy_adi);
 struct virtchnl2_ptype {
 	__le16 ptype_id_10;
 	u8 ptype_id_8;
-	/* number of protocol ids the packet supports, maximum of 32
-	 * protocol ids are supported
-	 */
 	u8 proto_id_count;
 	__le16 pad;
-	/* proto_id_count decides the allocation of protocol id array */
-	/* see VIRTCHNL2_PROTO_HDR_TYPE */
 	__le16 proto_id[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
 
-/* VIRTCHNL2_OP_GET_PTYPE_INFO
- * PF sends this message to CP to get all supported packet types. It does by
- * filling in start_ptype_id and num_ptypes. Depending on descriptor type the
- * PF supports, it sets num_ptypes to 1024 (10-bit ptype) for flex descriptor
- * and 256 (8-bit ptype) for base descriptor support. CP responds back to PF by
- * populating start_ptype_id, num_ptypes and array of ptypes. If all ptypes
- * doesn't fit into one mailbox buffer, CP splits ptype info into multiple
- * messages, where each message will have the start ptype id, number of ptypes
- * sent in that message and the ptype array itself. When CP is done updating
- * all ptype information it extracted from the package (number of ptypes
- * extracted might be less than what PF expects), it will append a dummy ptype
- * (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF) to the ptype
- * array. PF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO
- * messages.
+/**
+ * struct virtchnl2_get_ptype_info - Packet type info
+ * @start_ptype_id: Starting ptype ID
+ * @num_ptypes: Number of packet types from start_ptype_id
+ * @pad: Padding for future extensions
+ * @ptype: Array of packet type info
+ *
+ * The total number of supported packet types is based on the descriptor type.
+ * For the flex descriptor, it is 1024 (10-bit ptype), and for the base
+ * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
+ * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
+ * 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
+ * are added at the end of the 'virtchnl2_get_ptype_info' message (Note: There
+ * is no specific field for the ptypes but are added at the end of the
+ * ptype info message. PF/VF is expected to extract the ptypes accordingly.
+ * Reason for doing this is because compiler doesn't allow nested flexible
+ * array fields).
+ *
+ * If all the ptypes don't fit into one mailbox buffer, CP splits the
+ * ptype info into multiple messages, where each message will have its own
+ * 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
+ * updating all the ptype information extracted from the package (the number of
+ * ptypes extracted might be less than what PF/VF expects), it will append a
+ * dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
+ * to the ptype array.
+ *
+ * PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
  */
 struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
@@ -1167,25 +1433,46 @@ struct virtchnl2_get_ptype_info {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
 
-/* VIRTCHNL2_OP_GET_STATS
+/**
+ * struct virtchnl2_vport_stats - Vport statistics
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @rx_bytes: Received bytes
+ * @rx_unicast: Received unicast packets
+ * @rx_multicast: Received multicast packets
+ * @rx_broadcast: Received broadcast packets
+ * @rx_discards: Discarded packets on receive
+ * @rx_errors: Receive errors
+ * @rx_unknown_protocol: Unknown protocol
+ * @tx_bytes: Transmitted bytes
+ * @tx_unicast: Transmitted unicast packets
+ * @tx_multicast: Transmitted multicast packets
+ * @tx_broadcast: Transmitted broadcast packets
+ * @tx_discards: Discarded packets on transmit
+ * @tx_errors: Transmit errors
+ * @rx_invalid_frame_length: Packets with invalid frame length
+ * @rx_overflow_drop: Packets dropped on buffer overflow
+ *
  * PF/VF sends this message to CP to get the update stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ *
+ * Associated with VIRTCHNL2_OP_GET_STATS.
  */
 struct virtchnl2_vport_stats {
 	__le32 vport_id;
 	u8 pad[4];
 
-	__le64 rx_bytes;		/* received bytes */
-	__le64 rx_unicast;		/* received unicast pkts */
-	__le64 rx_multicast;		/* received multicast pkts */
-	__le64 rx_broadcast;		/* received broadcast pkts */
+	__le64 rx_bytes;
+	__le64 rx_unicast;
+	__le64 rx_multicast;
+	__le64 rx_broadcast;
 	__le64 rx_discards;
 	__le64 rx_errors;
 	__le64 rx_unknown_protocol;
-	__le64 tx_bytes;		/* transmitted bytes */
-	__le64 tx_unicast;		/* transmitted unicast pkts */
-	__le64 tx_multicast;		/* transmitted multicast pkts */
-	__le64 tx_broadcast;		/* transmitted broadcast pkts */
+	__le64 tx_bytes;
+	__le64 tx_unicast;
+	__le64 tx_multicast;
+	__le64 tx_broadcast;
 	__le64 tx_discards;
 	__le64 tx_errors;
 	__le64 rx_invalid_frame_length;
@@ -1194,7 +1481,9 @@ struct virtchnl2_vport_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
 
-/* physical port statistics */
+/**
+ * struct virtchnl2_phy_port_stats - Physical port statistics
+ */
 struct virtchnl2_phy_port_stats {
 	__le64 rx_bytes;
 	__le64 rx_unicast_pkts;
@@ -1247,10 +1536,17 @@ struct virtchnl2_phy_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(600, virtchnl2_phy_port_stats);
 
-/* VIRTCHNL2_OP_GET_PORT_STATS
- * PF/VF sends this message to CP to get the updated stats by specifying the
+/**
+ * struct virtchnl2_port_stats - Port statistics
+ * @vport_id: Vport ID
+ * @pad: Padding
+ * @phy_port_stats: Physical port statistics
+ * @virt_port_stats: Vport statistics
+ *
+ * PF/VF sends this message to CP to get the updated stats by specifying the
  * vport_id. CP responds with stats in struct virtchnl2_port_stats that
  * includes both physical port as well as vport statistics.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PORT_STATS.
  */
 struct virtchnl2_port_stats {
 	__le32 vport_id;
@@ -1262,44 +1558,61 @@ struct virtchnl2_port_stats {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(736, virtchnl2_port_stats);
 
-/* VIRTCHNL2_OP_EVENT
+/**
+ * struct virtchnl2_event - Event info
+ * @event: Event opcode. See enum virtchnl2_event_codes
+ * @link_speed: Link_speed provided in Mbps
+ * @vport_id: Vport ID
+ * @link_status: Link status
+ * @pad: Padding
+ * @adi_id: ADI id
+ *
  * CP sends this message to inform the PF/VF driver of events that may affect
  * it. No direct response is expected from the driver, though it may generate
  * other messages in response to this one.
+ *
+ * Associated with VIRTCHNL2_OP_EVENT.
  */
 struct virtchnl2_event {
-	/* see VIRTCHNL2_EVENT_CODES definitions */
 	__le32 event;
-	/* link_speed provided in Mbps */
 	__le32 link_speed;
 	__le32 vport_id;
 	u8 link_status;
 	u8 pad;
-
-	/* CP sends reset notification to PF with corresponding ADI ID */
 	__le16 adi_id;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
 
-/* VIRTCHNL2_OP_GET_RSS_KEY
- * VIRTCHNL2_OP_SET_RSS_KEY
+/**
+ * struct virtchnl2_rss_key - RSS key info
+ * @vport_id: Vport id
+ * @key_len: Length of RSS key
+ * @pad: Padding
+ * @key: RSS hash key, packed bytes
+ *
  * PF/VF sends this message to get or set RSS key. Only supported if both
  * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
- * negotiation. Uses the virtchnl2_rss_key structure
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
  */
 struct virtchnl2_rss_key {
 	__le32 vport_id;
 	__le16 key_len;
 	u8 pad;
-	u8 key[1];         /* RSS hash key, packed bytes */
+	u8 key[1];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
 
-/* structure to specify a chunk of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunk - Chunk of contiguous queues
+ * @type: See enum virtchnl2_queue_type
+ * @start_queue_id: Starting queue id
+ * @num_queues: Number of queues
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_queue_chunk {
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 type;
 	__le32 start_queue_id;
 	__le32 num_queues;
@@ -1308,7 +1621,11 @@ struct virtchnl2_queue_chunk {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 
-/* structure to specify several chunks of contiguous queues */
+/**
+ * struct virtchnl2_queue_chunks - Chunks of contiguous queues
+ * @num_chunks: Number of chunks
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
+ */
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
@@ -1317,14 +1634,19 @@ struct virtchnl2_queue_chunks {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
 
-/* VIRTCHNL2_OP_ENABLE_QUEUES
- * VIRTCHNL2_OP_DISABLE_QUEUES
- * VIRTCHNL2_OP_DEL_QUEUES
+/**
+ * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
+ * @vport_id: Vport id
+ * @pad: Padding
+ * @chunks: Chunks of contiguous queues info
  *
- * PF sends these messages to enable, disable or delete queues specified in
- * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * PF/VF sends these messages to enable, disable or delete queues specified in
+ * chunks. It sends virtchnl2_del_ena_dis_queues struct to specify the queues
  * to be enabled/disabled/deleted. Also applicable to single queue receive or
  * transmit. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
+ * VIRTCHNL2_OP_DEL_QUEUES.
  */
 struct virtchnl2_del_ena_dis_queues {
 	__le32 vport_id;
@@ -1335,30 +1657,43 @@ struct virtchnl2_del_ena_dis_queues {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
 
-/* Queue to vector mapping */
+/**
+ * struct virtchnl2_queue_vector - Queue to vector mapping
+ * @queue_id: Queue id
+ * @vector_id: Vector id
+ * @pad: Padding
+ * @itr_idx: See enum virtchnl2_itr_idx
+ * @queue_type: See enum virtchnl2_queue_type
+ * @pad1: Padding for future extensions
+ */
 struct virtchnl2_queue_vector {
 	__le32 queue_id;
 	__le16 vector_id;
 	u8 pad[2];
 
-	/* see VIRTCHNL2_ITR_IDX definitions */
 	__le32 itr_idx;
 
-	/* see VIRTCHNL2_QUEUE_TYPE definitions */
 	__le32 queue_type;
 	u8 pad1[8];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
 
-/* VIRTCHNL2_OP_MAP_QUEUE_VECTOR
- * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR
+/**
+ * struct virtchnl2_queue_vector_maps - Map/unmap queues info
+ * @vport_id: Vport id
+ * @num_qv_maps: Number of queue vector maps
+ * @pad: Padding
+ * @qv_maps: Queue to vector maps
  *
- * PF sends this message to map or unmap queues to vectors and interrupt
+ * PF/VF sends this message to map or unmap queues to vectors and interrupt
  * throttling rate index registers. External data buffer contains
  * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
  * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
  * after validating the queue and vector ids and returns a status code.
+ *
+ * Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
  */
 struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
@@ -1369,11 +1704,17 @@ struct virtchnl2_queue_vector_maps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
 
-/* VIRTCHNL2_OP_LOOPBACK
+/**
+ * struct virtchnl2_loopback - Loopback info
+ * @vport_id: Vport id
+ * @enable: Enable/disable
+ * @pad: Padding for future extensions
  *
  * PF/VF sends this message to transition to/from the loopback state. Setting
  * the 'enable' to 1 enables the loopback state and setting 'enable' to 0
  * disables it. CP configures the state to loopback and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_LOOPBACK.
  */
 struct virtchnl2_loopback {
 	__le32 vport_id;
@@ -1383,22 +1724,31 @@ struct virtchnl2_loopback {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
 
-/* structure to specify each MAC address */
+/**
+ * struct virtchnl2_mac_addr - MAC address info
+ * @addr: MAC address
+ * @type: MAC type. See enum virtchnl2_mac_addr_type.
+ * @pad: Padding for future extensions
+ */
 struct virtchnl2_mac_addr {
 	u8 addr[VIRTCHNL2_ETH_LENGTH_OF_ADDRESS];
-	/* see VIRTCHNL2_MAC_TYPE definitions */
 	u8 type;
 	u8 pad;
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
 
-/* VIRTCHNL2_OP_ADD_MAC_ADDR
- * VIRTCHNL2_OP_DEL_MAC_ADDR
+/**
+ * struct virtchnl2_mac_addr_list - List of MAC addresses
+ * @vport_id: Vport id
+ * @num_mac_addr: Number of MAC addresses
+ * @pad: Padding
+ * @mac_addr_list: List with MAC address info
  *
  * PF/VF driver uses this structure to send list of MAC addresses to be
  * added/deleted to the CP where as CP performs the action and returns the
  * status.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
  */
 struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
@@ -1409,30 +1759,40 @@ struct virtchnl2_mac_addr_list {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
 
-/* VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE
+/**
+ * struct virtchnl2_promisc_info - Promiscuous type information
+ * @vport_id: Vport id
+ * @flags: See enum virtchnl2_promisc_flags
+ * @pad: Padding for future extensions
  *
  * PF/VF sends vport id and flags to the CP where as CP performs the action
  * and returns the status.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
  */
 struct virtchnl2_promisc_info {
 	__le32 vport_id;
-	/* see VIRTCHNL2_PROMISC_FLAGS definitions */
 	__le16 flags;
 	u8 pad[2];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
 
-/* VIRTCHNL2_PTP_CAPS
- * PTP capabilities
+/**
+ * enum virtchnl2_ptp_caps - PTP capabilities
  */
-#define VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	BIT(0)
-#define VIRTCHNL2_PTP_CAP_PTM			BIT(1)
-#define VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	BIT(2)
-#define VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	BIT(3)
-#define	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	BIT(4)
+enum virtchnl2_ptp_caps {
+	VIRTCHNL2_PTP_CAP_LEGACY_CROSS_TIME	= BIT(0),
+	VIRTCHNL2_PTP_CAP_PTM			= BIT(1),
+	VIRTCHNL2_PTP_CAP_DEVICE_CLOCK_CONTROL	= BIT(2),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_DIRECT	= BIT(3),
+	VIRTCHNL2_PTP_CAP_TX_TSTAMPS_VIRTCHNL	= BIT(4),
+};
 
-/* Legacy cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_legacy_cross_time_reg - Legacy cross time registers
+ *						offsets.
+ */
 struct virtchnl2_ptp_legacy_cross_time_reg {
 	__le32 shadow_time_0;
 	__le32 shadow_time_l;
@@ -1442,7 +1802,9 @@ struct virtchnl2_ptp_legacy_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_legacy_cross_time_reg);
 
-/* PTM cross time registers offsets */
+/**
+ * struct virtchnl2_ptp_ptm_cross_time_reg - PTM cross time registers offsets
+ */
 struct virtchnl2_ptp_ptm_cross_time_reg {
 	__le32 art_l;
 	__le32 art_h;
@@ -1452,7 +1814,10 @@ struct virtchnl2_ptp_ptm_cross_time_reg {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_ptm_cross_time_reg);
 
-/* Registers needed to control the main clock */
+/**
+ * struct virtchnl2_ptp_device_clock_control - Registers needed to control the
+ *					       main clock.
+ */
 struct virtchnl2_ptp_device_clock_control {
 	__le32 cmd;
 	__le32 incval_l;
@@ -1464,7 +1829,13 @@ struct virtchnl2_ptp_device_clock_control {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_device_clock_control);
 
-/* Structure that defines tx tstamp entry - index and register offset */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_entry - PTP TX timestamp entry
+ * @tx_latch_register_base: TX latch register base
+ * @tx_latch_register_offset: TX latch register offset
+ * @index: Index
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_entry {
 	__le32 tx_latch_register_base;
 	__le32 tx_latch_register_offset;
@@ -1474,12 +1845,15 @@ struct virtchnl2_ptp_tx_tstamp_entry {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_entry);
 
-/* Structure that defines tx tstamp entries - total number of latches
- * and the array of entries.
+/**
+ * struct virtchnl2_ptp_tx_tstamp - Structure that defines tx tstamp entries
+ * @num_latches: Total number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @ptp_tx_tstamp_entries: Array of TX timestamp entries
  */
 struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
 	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
@@ -1487,13 +1861,21 @@ struct virtchnl2_ptp_tx_tstamp {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
 
-/* VIRTCHNL2_OP_GET_PTP_CAPS
+/**
+ * struct virtchnl2_get_ptp_caps - Get PTP capabilities
+ * @ptp_caps: PTP capability bitmap. See enum virtchnl2_ptp_caps.
+ * @pad: Padding
+ * @legacy_cross_time_reg: Legacy cross time register
+ * @ptm_cross_time_reg: PTM cross time register
+ * @device_clock_control: Device clock control
+ * @tx_tstamp: TX timestamp
+ *
 * PF/VF sends this message to negotiate PTP capabilities. CP updates bitmap
  * with supported features and fulfills appropriate structures.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_CAPS.
  */
 struct virtchnl2_get_ptp_caps {
-	/* PTP capability bitmap */
-	/* see VIRTCHNL2_PTP_CAPS definitions */
 	__le32 ptp_caps;
 	u8 pad[4];
 
@@ -1505,7 +1887,15 @@ struct virtchnl2_get_ptp_caps {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
 
-/* Structure that describes tx tstamp values, index and validity */
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
+ *					  values, index and validity.
+ * @tstamp_h: Timestamp high
+ * @tstamp_l: Timestamp low
+ * @index: Index
+ * @valid: Timestamp validity
+ * @pad: Padding
+ */
 struct virtchnl2_ptp_tx_tstamp_latch {
 	__le32 tstamp_h;
 	__le32 tstamp_l;
@@ -1516,9 +1906,17 @@ struct virtchnl2_ptp_tx_tstamp_latch {
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
 
-/* VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES
+/**
+ * struct virtchnl2_ptp_tx_tstamp_latches - PTP TX timestamp latches
+ * @num_latches: Number of latches
+ * @latch_size: Latch size expressed in bits
+ * @pad: Padding
+ * @tstamp_latches: PTP TX timestamp latch
+ *
  * PF/VF sends this message to receive a specified number of timestamps
  * entries.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES.
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
@@ -1613,7 +2011,7 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
  * @msg: pointer to the msg buffer
  * @msglen: msg length
  *
- * validate msg format against struct for each opcode
+ * Validate msg format against struct for each opcode.
  */
 static inline int
 virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
@@ -1622,7 +2020,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	bool err_msg_format = false;
 	__le32 valid_len = 0;
 
-	/* Validate message length. */
+	/* Validate message length */
 	switch (v_opcode) {
 	case VIRTCHNL2_OP_VERSION:
 		valid_len = sizeof(struct virtchnl2_version_info);
@@ -1637,7 +2035,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_create_vport *)msg;
 
 			if (cvport->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1652,7 +2050,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_non_flex_create_adi *)msg;
 
 			if (cadi->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1707,7 +2105,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_add_queues *)msg;
 
 			if (add_q->chunks.num_chunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1734,7 +2132,8 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
 		valid_len = sizeof(struct virtchnl2_add_queue_groups);
 		if (msglen != valid_len) {
-			__le32 i = 0, offset = 0;
+			__le64 offset;
+			__le32 i;
 			struct virtchnl2_add_queue_groups *add_queue_grp =
 				(struct virtchnl2_add_queue_groups *)msg;
 			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
@@ -1801,7 +2200,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_alloc_vectors *)msg;
 
 			if (v_av->vchunks.num_vchunks == 0) {
-				/* zero chunks is allowed as input */
+				/* Zero chunks is allowed as input */
 				break;
 			}
 
@@ -1830,7 +2229,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_key *)msg;
 
 			if (vrk->key_len == 0) {
-				/* zero length is allowed as input */
+				/* Zero length is allowed as input */
 				break;
 			}
 
@@ -1845,7 +2244,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				(struct virtchnl2_rss_lut *)msg;
 
 			if (vrl->lut_entries == 0) {
-				/* zero entries is allowed as input */
+				/* Zero entries is allowed as input */
 				break;
 			}
 
@@ -1902,13 +2301,13 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
 		}
 		break;
-	/* These are always errors coming from the VF. */
+	/* These are always errors coming from the VF */
 	case VIRTCHNL2_OP_EVENT:
 	case VIRTCHNL2_OP_UNKNOWN:
 	default:
 		return VIRTCHNL2_STATUS_ERR_ESRCH;
 	}
-	/* few more checks */
+	/* Few more checks */
 	if (err_msg_format || valid_len != msglen)
 		return VIRTCHNL2_STATUS_ERR_EINVAL;
 
diff --git a/drivers/common/idpf/base/virtchnl2_lan_desc.h b/drivers/common/idpf/base/virtchnl2_lan_desc.h
index 9e04cf8628..f7521d87a7 100644
--- a/drivers/common/idpf/base/virtchnl2_lan_desc.h
+++ b/drivers/common/idpf/base/virtchnl2_lan_desc.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 /*
  * Copyright (C) 2019 Intel Corporation
@@ -12,199 +12,220 @@
 /* VIRTCHNL2_TX_DESC_IDS
  * Transmit descriptor ID flags
  */
-#define VIRTCHNL2_TXDID_DATA				BIT(0)
-#define VIRTCHNL2_TXDID_CTX				BIT(1)
-#define VIRTCHNL2_TXDID_REINJECT_CTX			BIT(2)
-#define VIRTCHNL2_TXDID_FLEX_DATA			BIT(3)
-#define VIRTCHNL2_TXDID_FLEX_CTX			BIT(4)
-#define VIRTCHNL2_TXDID_FLEX_TSO_CTX			BIT(5)
-#define VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		BIT(6)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		BIT(7)
-#define VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	BIT(8)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	BIT(9)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		BIT(10)
-#define VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			BIT(11)
-#define VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			BIT(12)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		BIT(13)
-#define VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		BIT(14)
-#define VIRTCHNL2_TXDID_DESC_DONE			BIT(15)
-
-/* VIRTCHNL2_RX_DESC_IDS
+enum virtchnl2_tx_desc_ids {
+	VIRTCHNL2_TXDID_DATA				= BIT(0),
+	VIRTCHNL2_TXDID_CTX				= BIT(1),
+	VIRTCHNL2_TXDID_REINJECT_CTX			= BIT(2),
+	VIRTCHNL2_TXDID_FLEX_DATA			= BIT(3),
+	VIRTCHNL2_TXDID_FLEX_CTX			= BIT(4),
+	VIRTCHNL2_TXDID_FLEX_TSO_CTX			= BIT(5),
+	VIRTCHNL2_TXDID_FLEX_TSYN_L2TAG1		= BIT(6),
+	VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2		= BIT(7),
+	VIRTCHNL2_TXDID_FLEX_TSO_L2TAG2_PARSTAG_CTX	= BIT(8),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_TSO_CTX	= BIT(9),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_SA_CTX		= BIT(10),
+	VIRTCHNL2_TXDID_FLEX_L2TAG2_CTX			= BIT(11),
+	VIRTCHNL2_TXDID_FLEX_FLOW_SCHED			= BIT(12),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_TSO_CTX		= BIT(13),
+	VIRTCHNL2_TXDID_FLEX_HOSTSPLIT_CTX		= BIT(14),
+	VIRTCHNL2_TXDID_DESC_DONE			= BIT(15),
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_IDS
  * Receive descriptor IDs (range from 0 to 63)
  */
-#define VIRTCHNL2_RXDID_0_16B_BASE			0
-#define VIRTCHNL2_RXDID_1_32B_BASE			1
-/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
- * differentiated based on queue model; e.g. single queue model can
- * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
- * for DID 2.
- */
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ			2
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC			2
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW			3
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB		4
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL		5
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2			6
-#define VIRTCHNL2_RXDID_7_HW_RSVD			7
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC		16
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN		17
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4		18
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6		19
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW		20
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP		21
-/* 22 through 63 are reserved */
-
-/* VIRTCHNL2_RX_DESC_ID_BITMASKS
+enum virtchnl2_rx_desc_ids {
+	VIRTCHNL2_RXDID_0_16B_BASE,
+	VIRTCHNL2_RXDID_1_32B_BASE,
+	/* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+	 * differentiated based on queue model; e.g. single queue model can
+	 * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+	 * for DID 2.
+	 */
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ		= 2,
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC		= VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW		= 3,
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB	= 4,
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL	= 5,
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2		= 6,
+	VIRTCHNL2_RXDID_7_HW_RSVD		= 7,
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC	= 16,
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN	= 17,
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4	= 18,
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6	= 19,
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW	= 20,
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP	= 21,
+	/* 22 through 63 are reserved */
+};
+
+/**
+ * VIRTCHNL2_RX_DESC_ID_BITMASKS
  * Receive descriptor ID bitmasks
  */
-#define VIRTCHNL2_RXDID_M(bit)			BIT(VIRTCHNL2_RXDID_##bit)
-#define VIRTCHNL2_RXDID_0_16B_BASE_M		VIRTCHNL2_RXDID_M(0_16B_BASE)
-#define VIRTCHNL2_RXDID_1_32B_BASE_M		VIRTCHNL2_RXDID_M(1_32B_BASE)
-#define VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ)
-#define VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC)
-#define VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW)
-#define VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB)
-#define VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL)
-#define VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2)
-#define VIRTCHNL2_RXDID_7_HW_RSVD_M		VIRTCHNL2_RXDID_M(7_HW_RSVD)
-/* 9 through 15 are reserved */
-#define VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	VIRTCHNL2_RXDID_M(16_COMMS_GENERIC)
-#define VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN)
-#define VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4)
-#define VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6)
-#define VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW)
-#define VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP)
-/* 22 through 63 are reserved */
-
-/* Rx */
+#define VIRTCHNL2_RXDID_M(bit)			BIT_ULL(VIRTCHNL2_RXDID_##bit)
+
+enum virtchnl2_rx_desc_id_bitmasks {
+	VIRTCHNL2_RXDID_0_16B_BASE_M		= VIRTCHNL2_RXDID_M(0_16B_BASE),
+	VIRTCHNL2_RXDID_1_32B_BASE_M		= VIRTCHNL2_RXDID_M(1_32B_BASE),
+	VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M		= VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
+	VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M		= VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
+	VIRTCHNL2_RXDID_3_FLEX_SQ_SW_M		= VIRTCHNL2_RXDID_M(3_FLEX_SQ_SW),
+	VIRTCHNL2_RXDID_4_FLEX_SQ_NIC_VEB_M	= VIRTCHNL2_RXDID_M(4_FLEX_SQ_NIC_VEB),
+	VIRTCHNL2_RXDID_5_FLEX_SQ_NIC_ACL_M	= VIRTCHNL2_RXDID_M(5_FLEX_SQ_NIC_ACL),
+	VIRTCHNL2_RXDID_6_FLEX_SQ_NIC_2_M	= VIRTCHNL2_RXDID_M(6_FLEX_SQ_NIC_2),
+	VIRTCHNL2_RXDID_7_HW_RSVD_M		= VIRTCHNL2_RXDID_M(7_HW_RSVD),
+	/* 9 through 15 are reserved */
+	VIRTCHNL2_RXDID_16_COMMS_GENERIC_M	= VIRTCHNL2_RXDID_M(16_COMMS_GENERIC),
+	VIRTCHNL2_RXDID_17_COMMS_AUX_VLAN_M	= VIRTCHNL2_RXDID_M(17_COMMS_AUX_VLAN),
+	VIRTCHNL2_RXDID_18_COMMS_AUX_IPV4_M	= VIRTCHNL2_RXDID_M(18_COMMS_AUX_IPV4),
+	VIRTCHNL2_RXDID_19_COMMS_AUX_IPV6_M	= VIRTCHNL2_RXDID_M(19_COMMS_AUX_IPV6),
+	VIRTCHNL2_RXDID_20_COMMS_AUX_FLOW_M	= VIRTCHNL2_RXDID_M(20_COMMS_AUX_FLOW),
+	VIRTCHNL2_RXDID_21_COMMS_AUX_TCP_M	= VIRTCHNL2_RXDID_M(21_COMMS_AUX_TCP),
+	/* 22 through 63 are reserved */
+};
+
 /* For splitq virtchnl2_rx_flex_desc_adv desc members */
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M		GENMASK(3, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		6
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		GENMASK(7, 6)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_S)
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M		\
-	IDPF_M(0x3UL, VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M			\
-	IDPF_M(0xFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M		GENMASK(15, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M	\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M		GENMASK(13, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M		GENMASK(9, 0)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S		10
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M			\
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M			\
-	IDPF_M(0x7UL, VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M		GENMASK(14, 12)
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S		15
 #define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M		\
 	BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
 
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S		7
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_S,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_0_QW0_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S		0
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S		1
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST			8 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
- * for splitq virtchnl2_rx_flex_desc_adv
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS_ERROR_1_BITS
+ * For splitq virtchnl2_rx_flex_desc_adv
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		0 /* 2 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		2
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		3
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	4
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	5
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	6
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	7
-#define VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			8 /* this entry must be last!!! */
-
-/* for singleq (flex) virtchnl2_rx_flex_desc fields */
-/* for virtchnl2_rx_flex_desc.ptype_flex_flags0 member */
+enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_S		= 0,
+	/* 2 bits */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_S		= 2,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_S		= 3,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_S	= 4,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_S	= 5,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_S	= 6,
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_S	= 7,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_LAST			= 8,
+};
+
+/* For singleq (flex) virtchnl2_rx_flex_desc fields
+ * For virtchnl2_rx_flex_desc.ptype_flex_flags0 member
+ */
 #define VIRTCHNL2_RX_FLEX_DESC_PTYPE_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			\
-	IDPF_M(0x3FFUL, VIRTCHNL2_RX_FLEX_DESC_PTYPE_S) /* 10 bits */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M			GENMASK(9, 0)
 
-/* for virtchnl2_rx_flex_desc.pkt_length member */
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M			\
-	IDPF_M(0x3FFFUL, VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S) /* 14 bits */
+/* For virtchnl2_rx_flex_desc.pkt_len member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_S		0
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M		GENMASK(13, 0)
 
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_0_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S			0
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S			1
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S			2
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S			3
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S		4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S		5
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S		6
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S		7
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S			8
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S		9
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S			10
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S			11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST			16 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
- * for singleq (flex) virtchnl2_rx_flex_desc
+enum virtchnl2_rx_flex_desc_status_error_0_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_S,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_S,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS0_LAST,
+};
+
+/**
+ * VIRTCHNL2_RX_FLEX_DESC_STATUS_ERROR_1_BITS
+ * For singleq (flex) virtchnl2_rx_flex_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			0 /* 4 bits */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			4
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			5
-/* [10:6] reserved */
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		11
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		12
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		13
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		14
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		15
-#define VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			16 /* this entry must be last!!! */
-
-/* for virtchnl2_rx_flex_desc.ts_low member */
+enum virtchnl2_rx_flex_desc_status_error_1_bits {
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_S			= 0,
+	/* 4 bits */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_S			= 4,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_S			= 5,
+	/* [10:6] reserved */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_S		= 11,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S		= 12,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S		= 13,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S		= 14,
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S		= 15,
+	/* this entry must be last!!! */
+	VIRTCHNL2_RX_FLEX_DESC_STATUS1_LAST			= 16,
+};
+
+/* For virtchnl2_rx_flex_desc.ts_low member */
 #define VIRTCHNL2_RX_FLEX_TSTAMP_VALID				BIT(0)
 
 /* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
@@ -212,72 +233,89 @@
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_M	\
 	BIT_ULL(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_SPH_S)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S	52
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	\
-	IDPF_M(0x7FFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_HBUF_M	GENMASK_ULL(62, 52)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S	38
-#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	\
-	IDPF_M(0x3FFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M	GENMASK_ULL(51, 38)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S	30
-#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	\
-	IDPF_M(0xFFULL, VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M	GENMASK_ULL(37, 30)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S	19
-#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	\
-	IDPF_M(0xFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M	GENMASK_ULL(26, 19)
 #define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S	0
-#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	\
-	IDPF_M(0x7FFFFUL, VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_S)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M	GENMASK_ULL(18, 0)
 
-/* VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		0
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		1
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		2
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		3
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		4
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		5 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	8
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		9 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		11
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		12 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		14
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	15
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		16 /* 2 bits */
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	18
-#define VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		19 /* this entry must be last!!! */
-
-/* VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_STATUS_DD_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_S		= 1,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_S		= 5, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_S	= 8,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_S		= 9, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_S		= 11,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_S		= 12, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_S		= 14,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_S	= 15,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_S		= 16, /* 2 bits */
+	VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_S	= 18,
+	VIRTCHNL2_RX_BASE_DESC_STATUS_LAST		= 19, /* this entry must be last!!! */
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S	0
+enum virtchnl2_rx_base_desc_ext_status_bits {
+	VIRTCHNL2_RX_BASE_DESC_EXT_STATUS_L2TAG2P_S,
+};
 
-/* VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
- * for singleq (base) virtchnl2_rx_base_desc
+/**
+ * VIRTCHNL2_RX_BASE_DESC_ERROR_BITS
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		0
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	1
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		2
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		3 /* 3 bits */
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		3
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		4
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		5
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		6
-#define VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		7
-
-/* VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
- * for singleq (base) virtchnl2_rx_base_desc
+enum virtchnl2_rx_base_desc_error_bits {
+	VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_S		= 0,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_S	= 1,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_S		= 2,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_S		= 3, /* 3 bits */
+	VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_S		= 3,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_S		= 4,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_S		= 5,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_S		= 6,
+	VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_S		= 7,
+};
+
+/**
+ * VIRTCHNL2_RX_BASE_DESC_FLTSTAT_VALUES
+ * For singleq (base) virtchnl2_rx_base_desc
  * Note: These are predefined bit offsets
  */
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA		0
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID		1
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV		2
-#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH		3
+enum virtchnl2_rx_base_desc_fltstat_values {
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_NO_DATA,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_FD_ID,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSV,
+	VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH,
+};
 
-/* Receive Descriptors */
-/* splitq buf
+/**
+ * struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
+ * @qword0: RX buffer struct
+ * @qword0.buf_id: Buffer identifier
+ * @qword0.rsvd0: Reserved
+ * @qword0.rsvd1: Reserved
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd2: Reserved
+ *
+ * Receive Descriptors
+ * SplitQ buffer
  * |                                       16|                   0|
  * ----------------------------------------------------------------
  * | RSV                                     | Buffer ID          |
@@ -292,16 +330,23 @@
  */
 struct virtchnl2_splitq_rx_buf_desc {
 	struct {
-		__le16  buf_id; /* Buffer Identifier */
+		__le16  buf_id;
 		__le16  rsvd0;
 		__le32  rsvd1;
 	} qword0;
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
-/* singleq buf
+/**
+ * struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format
+ * @pkt_addr: Packet buffer address
+ * @hdr_addr: Header buffer address
+ * @rsvd1: Reserved
+ * @rsvd2: Reserved
+ *
+ * SingleQ buffer
  * |                                                             0|
  * ----------------------------------------------------------------
  * | Rx packet buffer address                                     |
@@ -315,18 +360,44 @@ struct virtchnl2_splitq_rx_buf_desc {
  * |                                                             0|
  */
 struct virtchnl2_singleq_rx_buf_desc {
-	__le64  pkt_addr; /* Packet buffer address */
-	__le64  hdr_addr; /* Header buffer address */
+	__le64  pkt_addr;
+	__le64  hdr_addr;
 	__le64  rsvd1;
 	__le64  rsvd2;
-}; /* read used with buffer queues*/
+};
 
+/**
+ * union virtchnl2_rx_buf_desc - RX buffer descriptor
+ * @read: Singleq RX buffer descriptor format
+ * @split_rd: Splitq RX buffer descriptor format
+ */
 union virtchnl2_rx_buf_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
 	struct virtchnl2_splitq_rx_buf_desc		split_rd;
 };
 
-/* (0x00) singleq wb(compl) */
+/**
+ * struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format
+ * @qword0: First quad word struct
+ * @qword0.lo_dword: Lower dual word struct
+ * @qword0.lo_dword.mirroring_status: Mirrored packet status
+ * @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet
+ * @qword0.hi_dword: High dual word union
+ * @qword0.hi_dword.rss: RSS hash
+ * @qword0.hi_dword.fd_id: Flow director filter id
+ * @qword1: Second quad word struct
+ * @qword1.status_error_ptype_len: Status/error/PTYPE/length
+ * @qword2: Third quad word struct
+ * @qword2.ext_status: Extended status
+ * @qword2.rsvd: Reserved
+ * @qword2.l2tag2_1: Extracted L2 tag 2 from the packet
+ * @qword2.l2tag2_2: Reserved
+ * @qword3: Fourth quad word struct
+ * @qword3.reserved: Reserved
+ * @qword3.fd_id: Flow director filter id
+ *
+ * Profile ID 0x1, SingleQ, base writeback format.
+ */
 struct virtchnl2_singleq_base_rx_desc {
 	struct {
 		struct {
@@ -334,16 +405,15 @@ struct virtchnl2_singleq_base_rx_desc {
 			__le16 l2tag1;
 		} lo_dword;
 		union {
-			__le32 rss; /* RSS Hash */
-			__le32 fd_id; /* Flow Director filter id */
+			__le32 rss;
+			__le32 fd_id;
 		} hi_dword;
 	} qword0;
 	struct {
-		/* status/error/PTYPE/length */
 		__le64 status_error_ptype_len;
 	} qword1;
 	struct {
-		__le16 ext_status; /* extended status */
+		__le16 ext_status;
 		__le16 rsvd;
 		__le16 l2tag2_1;
 		__le16 l2tag2_2;
@@ -352,19 +422,40 @@ struct virtchnl2_singleq_base_rx_desc {
 		__le32 reserved;
 		__le32 fd_id;
 	} qword3;
-}; /* writeback */
+};
 
-/* (0x01) singleq flex compl */
+/**
+ * struct virtchnl2_rx_flex_desc - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @flex_meta0: Flexible metadata container 0
+ * @flex_meta1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @time_stamp_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flex_meta2: Flexible metadata container 2
+ * @flex_meta3: Flexible metadata container 3
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.flex_meta4: Flexible metadata container 4
+ * @flex_ts.flex.flex_meta5: Flexible metadata container 5
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x1, SingleQ, flex completion writeback format.
+ */
 struct virtchnl2_rx_flex_desc {
 	/* Qword 0 */
-	u8 rxdid; /* descriptor builder profile id */
-	u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
-	__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
-	__le16 pkt_len; /* [15:14] are reserved */
-	__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
-					/* sph=[11:11] */
-					/* ff1/ext=[15:12] */
-
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
@@ -390,7 +481,29 @@ struct virtchnl2_rx_flex_desc {
 	} flex_ts;
 };
 
-/* (0x02) */
+/**
+ * struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SingleQ, flex writeback format.
+ */
 struct virtchnl2_rx_flex_desc_nic {
 	/* Qword 0 */
 	u8 rxdid;
@@ -422,8 +535,27 @@ struct virtchnl2_rx_flex_desc_nic {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Switch Profile
- * RxDID Profile Id 3
+/**
+ * struct virtchnl2_rx_flex_desc_sw - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @src_vsi: Source VSI, [15:10] are reserved
+ * @flex_md1_rsvd: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor Switch Profile
+ * RxDID Profile ID 0x3, SingleQ
  * Flex-field 0: Source Vsi
  */
 struct virtchnl2_rx_flex_desc_sw {
@@ -437,9 +569,55 @@ struct virtchnl2_rx_flex_desc_sw {
 	/* Qword 1 */
 	__le16 status_error0;
 	__le16 l2tag1;
-	__le16 src_vsi; /* [10:15] are reserved */
+	__le16 src_vsi;
 	__le16 flex_md1_rsvd;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le32 rsvd;
+	__le32 ts_high;
+};
 
+#ifndef EXTERNAL_RELEASE
+/**
+ * struct virtchnl2_rx_flex_desc_nic_veb_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @dst_vsi: Destination VSI, [15:10] are reserved
+ * @flex_field_1: Flexible metadata container 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC VEB Profile
+ * RxDID Profile ID 0x4
+ * Flex-field 0: Destination VSI
+ */
+struct virtchnl2_rx_flex_desc_nic_veb_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 dst_vsi;
+	__le16 flex_field_1;
 	/* Qword 2 */
 	__le16 status_error1;
 	u8 flex_flags2;
@@ -448,13 +626,85 @@ struct virtchnl2_rx_flex_desc_sw {
 	__le16 l2tag2_2nd;
 
 	/* Qword 3 */
-	__le32 rsvd; /* flex words 2-3 are reserved */
+	__le32 rsvd;
 	__le32 ts_high;
 };
 
-
-/* Rx Flex Descriptor NIC Profile
- * RxDID Profile Id 6
+/**
+ * struct virtchnl2_rx_flex_desc_nic_acl_dbg - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @acl_ctr0: ACL counter 0
+ * @acl_ctr1: ACL counter 1
+ * @status_error1: Status/Error section 1
+ * @flex_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @acl_ctr2: ACL counter 2
+ * @rsvd: Flex words 2-3 are reserved
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC ACL Profile
+ * RxDID Profile ID 0x5
+ * Flex-field 0: ACL Counter 0
+ * Flex-field 1: ACL Counter 1
+ * Flex-field 2: ACL Counter 2
+ */
+struct virtchnl2_rx_flex_desc_nic_acl_dbg {
+	/* Qword 0 */
+	u8 rxdid;
+	u8 mir_id_umb_cast;
+	__le16 ptype_flex_flags0;
+	__le16 pkt_len;
+	__le16 hdr_len_sph_flex_flags1;
+	/* Qword 1 */
+	__le16 status_error0;
+	__le16 l2tag1;
+	__le16 acl_ctr0;
+	__le16 acl_ctr1;
+	/* Qword 2 */
+	__le16 status_error1;
+	u8 flex_flags2;
+	u8 ts_low;
+	__le16 l2tag2_1st;
+	__le16 l2tag2_2nd;
+	/* Qword 3 */
+	__le16 acl_ctr2;
+	__le16 rsvd;
+	__le32 ts_high;
+};
+#endif /* !EXTERNAL_RELEASE */
+
+/**
+ * struct virtchnl2_rx_flex_desc_nic_2 - RX descriptor writeback format
+ * @rxdid: Descriptor builder profile id
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0]
+ * @status_error0: Status/Error section 0
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash
+ * @status_error1: Status/Error section 1
+ * @flexi_flags2: Flexible flags section 2
+ * @ts_low: Lower word of timestamp value
+ * @l2tag2_1st: First L2TAG2
+ * @l2tag2_2nd: Second L2TAG2
+ * @flow_id: Flow id
+ * @src_vsi: Source VSI
+ * @flex_ts: Timestamp and flexible flow id union
+ * @flex_ts.flex.rsvd: Reserved
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value
+ *
+ * Rx Flex Descriptor NIC Profile
+ * RxDID Profile ID 0x6
  * Flex-field 0: RSS hash lower 16-bits
  * Flex-field 1: RSS hash upper 16-bits
  * Flex-field 2: Flow Id lower 16-bits
@@ -493,29 +743,43 @@ struct virtchnl2_rx_flex_desc_nic_2 {
 	} flex_ts;
 };
 
-/* Rx Flex Descriptor Advanced (Split Queue Model)
- * RxDID Profile Id 7
+/**
+ * struct virtchnl2_rx_flex_desc_adv - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @fmd0: Flexible metadata container 0
+ * @fmd1: Flexible metadata container 1
+ * @fmd2: Flexible metadata container 2
+ * @fflags2: Flags
+ * @hash3: Upper bits of Rx hash value
+ * @fmd3: Flexible metadata container 3
+ * @fmd4: Flexible metadata container 4
+ * @fmd5: Flexible metadata container 5
+ * @fmd6: Flexible metadata container 6
+ * @fmd7_0: Flexible metadata container 7.0
+ * @fmd7_1: Flexible metadata container 7.1
+ *
+ * RX Flex Descriptor Advanced (Split Queue Model)
+ * RxDID Profile ID 0x2
  */
 struct virtchnl2_rx_flex_desc_adv {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
@@ -534,10 +798,42 @@ struct virtchnl2_rx_flex_desc_adv {
 	__le16 fmd6;
 	__le16 fmd7_0;
 	__le16 fmd7_1;
-}; /* writeback */
+};
 
-/* Rx Flex Descriptor Advanced (Split Queue Model) NIC Profile
- * RxDID Profile Id 8
+/**
+ * struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0]
+ * @status_err0_qw0: Status/Error section 0 in quad word 0
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ *		       ptype=[9:0]
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ *			plen=[13:0]
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ *		  ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ *		  only in splitq, header=[9:0]
+ * @status_err0_qw1: Status/Error section 0 in quad word 1
+ * @status_err1: Status/Error section 1
+ * @fflags1: Flexible flags section 1
+ * @ts_low: Lower word of timestamp value
+ * @buf_id: Buffer identifier. Only in splitq mode.
+ * @misc: Union
+ * @misc.raw_cs: Raw checksum
+ * @misc.l2tag1: Stripped L2 tag from the received packet
+ * @misc.rscseglen: RSC segment length
+ * @hash1: Lower 16 bits of Rx hash value, hash[15:0]
+ * @ff2_mirrid_hash2: Union
+ * @ff2_mirrid_hash2.fflags2: Flexible flags section 2
+ * @ff2_mirrid_hash2.mirrorid: Mirror id
+ * @ff2_mirrid_hash2.hash2: 8 bits of Rx hash value, hash[23:16]
+ * @hash3: Upper 8 bits of Rx hash value, hash[31:24]
+ * @l2tag2: Extracted L2 tag 2 from the packet
+ * @fmd4: Flexible metadata container 4
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @fmd6: Flexible metadata container 6
+ * @ts_high: Timestamp higher word of the timestamp value
+ *
+ * Profile ID 0x2, SplitQ, flex writeback format.
+ *
  * Flex-field 0: BufferID
  * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
  * Flex-field 2: Hash[15:0]
@@ -548,30 +844,17 @@ struct virtchnl2_rx_flex_desc_adv {
  */
 struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	/* Qword 0 */
-	u8 rxdid_ucast; /* profile_id=[3:0] */
-			/* rsvd=[5:4] */
-			/* ucast=[7:6] */
+	u8 rxdid_ucast;
 	u8 status_err0_qw0;
-	__le16 ptype_err_fflags0;	/* ptype=[9:0] */
-					/* ip_hdr_err=[10:10] */
-					/* udp_len_err=[11:11] */
-					/* ff0=[15:12] */
-	__le16 pktlen_gen_bufq_id;	/* plen=[13:0] */
-					/* gen=[14:14]  only in splitq */
-					/* bufq_id=[15:15] only in splitq */
-	__le16 hdrlen_flags;		/* header=[9:0] */
-					/* rsc=[10:10] only in splitq */
-					/* sph=[11:11] only in splitq */
-					/* ext_udp_0=[12:12] */
-					/* int_udp_0=[13:13] */
-					/* trunc_mirr=[14:14] */
-					/* miss_prepend=[15:15] */
+	__le16 ptype_err_fflags0;
+	__le16 pktlen_gen_bufq_id;
+	__le16 hdrlen_flags;
 	/* Qword 1 */
 	u8 status_err0_qw1;
 	u8 status_err1;
 	u8 fflags1;
 	u8 ts_low;
-	__le16 buf_id; /* only in splitq */
+	__le16 buf_id;
 	union {
 		__le16 raw_cs;
 		__le16 l2tag1;
@@ -591,7 +874,7 @@ struct virtchnl2_rx_flex_desc_adv_nic_3 {
 	__le16 l2tag1;
 	__le16 fmd6;
 	__le32 ts_high;
-}; /* writeback */
+};
 
 union virtchnl2_rx_desc {
 	struct virtchnl2_singleq_rx_buf_desc		read;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 12/21] common/idpf: avoid variable 0-init
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (10 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 11/21] common/idpf: move related defines into enums Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
                               ` (9 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Avoid initializing variables where it is not needed.

Also, use 'err' instead of 'status', 'ret_code', 'ret', etc.
for consistency, and rename the return label 'sq_send_command_out'
to 'err_unlock'.
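
The convention adopted here can be sketched as follows. This is a minimal illustration of the pattern, not driver code: a single `err` variable assigned only on failure paths (so it never needs a zero-init), and a descriptive `err_*` unwind label. The function and names are hypothetical.

```c
#include <errno.h>
#include <stdlib.h>

/* Illustrative helper following the patch's convention: 'err' is only
 * assigned on failure paths, and the cleanup label is named after what
 * it unwinds rather than after the caller. */
static int alloc_two_buffers(void **a, void **b, size_t len)
{
	int err;	/* no zero-init: every exit path assigns a value */

	*a = malloc(len);
	if (!*a)
		return -ENOMEM;

	*b = malloc(len);
	if (!*b) {
		err = -ENOMEM;
		goto err_free_a;
	}

	return 0;

err_free_a:
	free(*a);
	*a = NULL;
	return err;
}
```

On success the function returns 0 directly, so the compiler can warn if any failure path forgets to set `err`, which a `int err = 0;` initialization would mask.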

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c      | 60 +++++++++----------
 .../common/idpf/base/idpf_controlq_setup.c    | 18 +++---
 2 files changed, 38 insertions(+), 40 deletions(-)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index d9ca33cdb9..65e5599614 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -61,7 +61,7 @@ static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
  */
 static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	for (i = 0; i < cq->ring_size; i++) {
 		struct idpf_ctlq_desc *desc = IDPF_CTLQ_DESC(cq, i);
@@ -134,7 +134,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 {
 	struct idpf_ctlq_info *cq;
 	bool is_rxq = false;
-	int status = 0;
+	int err;
 
 	if (!qinfo->len || !qinfo->buf_size ||
 	    qinfo->len > IDPF_CTLQ_MAX_RING_SIZE ||
@@ -160,14 +160,14 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 		is_rxq = true;
 		/* fallthrough */
 	case IDPF_CTLQ_TYPE_MAILBOX_TX:
-		status = idpf_ctlq_alloc_ring_res(hw, cq);
+		err = idpf_ctlq_alloc_ring_res(hw, cq);
 		break;
 	default:
-		status = -EINVAL;
+		err = -EINVAL;
 		break;
 	}
 
-	if (status)
+	if (err)
 		goto init_free_q;
 
 	if (is_rxq) {
@@ -178,7 +178,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 			idpf_calloc(hw, qinfo->len,
 				    sizeof(struct idpf_ctlq_msg *));
 		if (!cq->bi.tx_msg) {
-			status = -ENOMEM;
+			err = -ENOMEM;
 			goto init_dealloc_q_mem;
 		}
 	}
@@ -192,7 +192,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	LIST_INSERT_HEAD(&hw->cq_list_head, cq, cq_list);
 
 	*cq_out = cq;
-	return status;
+	return 0;
 
 init_dealloc_q_mem:
 	/* free ring buffers and the ring itself */
@@ -201,7 +201,7 @@ int idpf_ctlq_add(struct idpf_hw *hw,
 	idpf_free(hw, cq);
 	cq = NULL;
 
-	return status;
+	return err;
 }
 
 /**
@@ -232,27 +232,27 @@ int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
 		   struct idpf_ctlq_create_info *q_info)
 {
 	struct idpf_ctlq_info *cq = NULL, *tmp = NULL;
-	int ret_code = 0;
-	int i = 0;
+	int err;
+	int i;
 
 	LIST_INIT(&hw->cq_list_head);
 
 	for (i = 0; i < num_q; i++) {
 		struct idpf_ctlq_create_info *qinfo = q_info + i;
 
-		ret_code = idpf_ctlq_add(hw, qinfo, &cq);
-		if (ret_code)
+		err = idpf_ctlq_add(hw, qinfo, &cq);
+		if (err)
 			goto init_destroy_qs;
 	}
 
-	return ret_code;
+	return 0;
 
 init_destroy_qs:
 	LIST_FOR_EACH_ENTRY_SAFE(cq, tmp, &hw->cq_list_head,
 				 idpf_ctlq_info, cq_list)
 		idpf_ctlq_remove(hw, cq);
 
-	return ret_code;
+	return err;
 }
 
 /**
@@ -286,9 +286,9 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 		   u16 num_q_msg, struct idpf_ctlq_msg q_msg[])
 {
 	struct idpf_ctlq_desc *desc;
-	int num_desc_avail = 0;
-	int status = 0;
-	int i = 0;
+	int num_desc_avail;
+	int err = 0;
+	int i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -298,8 +298,8 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* Ensure there are enough descriptors to send all messages */
 	num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
 	if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
-		status = -ENOSPC;
-		goto sq_send_command_out;
+		err = -ENOSPC;
+		goto err_unlock;
 	}
 
 	for (i = 0; i < num_q_msg; i++) {
@@ -370,10 +370,10 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 
 	wr32(hw, cq->reg.tail, cq->next_to_use);
 
-sq_send_command_out:
+err_unlock:
 	idpf_release_lock(&cq->cq_lock);
 
-	return status;
+	return err;
 }
 
 /**
@@ -397,9 +397,8 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 				struct idpf_ctlq_msg *msg_status[], bool force)
 {
 	struct idpf_ctlq_desc *desc;
-	u16 i = 0, num_to_clean;
+	u16 i, num_to_clean;
 	u16 ntc, desc_err;
-	int ret = 0;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -446,7 +445,7 @@ static int __idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
 	/* Return number of descriptors actually cleaned */
 	*clean_count = i;
 
-	return ret;
+	return 0;
 }
 
 /**
@@ -513,7 +512,6 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	u16 ntp = cq->next_to_post;
 	bool buffs_avail = false;
 	u16 tbp = ntp + 1;
-	int status = 0;
 	int i = 0;
 
 	if (*buff_count > cq->ring_size)
@@ -614,7 +612,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 	/* return the number of buffers that were not posted */
 	*buff_count = *buff_count - i;
 
-	return status;
+	return 0;
 }
 
 /**
@@ -633,8 +631,8 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 {
 	u16 num_to_clean, ntc, ret_val, flags;
 	struct idpf_ctlq_desc *desc;
-	int ret_code = 0;
-	u16 i = 0;
+	int err = 0;
+	u16 i;
 
 	if (!cq || !cq->ring_size)
 		return -ENOBUFS;
@@ -667,7 +665,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 				      IDPF_CTLQ_FLAG_FTYPE_S;
 
 		if (flags & IDPF_CTLQ_FLAG_ERR)
-			ret_code = -EBADMSG;
+			err = -EBADMSG;
 
 		q_msg[i].cookie.mbx.chnl_opcode = LE32_TO_CPU(desc->cookie_high);
 		q_msg[i].cookie.mbx.chnl_retval = LE32_TO_CPU(desc->cookie_low);
@@ -713,7 +711,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
 
 	*num_q_msg = i;
 	if (*num_q_msg == 0)
-		ret_code = -ENOMSG;
+		err = -ENOMSG;
 
-	return ret_code;
+	return err;
 }
diff --git a/drivers/common/idpf/base/idpf_controlq_setup.c b/drivers/common/idpf/base/idpf_controlq_setup.c
index 21f43c74f5..cd6bcb1cf0 100644
--- a/drivers/common/idpf/base/idpf_controlq_setup.c
+++ b/drivers/common/idpf/base/idpf_controlq_setup.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 
@@ -34,7 +34,7 @@ static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
 static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
 				struct idpf_ctlq_info *cq)
 {
-	int i = 0;
+	int i;
 
 	/* Do not allocate DMA buffers for transmit queues */
 	if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
@@ -153,20 +153,20 @@ void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
  */
 int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 {
-	int ret_code;
+	int err;
 
 	/* verify input for valid configuration */
 	if (!cq->ring_size || !cq->buf_size)
 		return -EINVAL;
 
 	/* allocate the ring memory */
-	ret_code = idpf_ctlq_alloc_desc_ring(hw, cq);
-	if (ret_code)
-		return ret_code;
+	err = idpf_ctlq_alloc_desc_ring(hw, cq);
+	if (err)
+		return err;
 
 	/* allocate buffers in the rings */
-	ret_code = idpf_ctlq_alloc_bufs(hw, cq);
-	if (ret_code)
+	err = idpf_ctlq_alloc_bufs(hw, cq);
+	if (err)
 		goto idpf_init_cq_free_ring;
 
 	/* success! */
@@ -174,5 +174,5 @@ int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
 
 idpf_init_cq_free_ring:
 	idpf_free_dma_mem(hw, &cq->desc_ring);
-	return ret_code;
+	return err;
 }
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 13/21] common/idpf: update in PTP message validation
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (11 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 12/21] common/idpf: avoid variable 0-init Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-28 14:33               ` Bruce Richardson
  2024-06-24  9:16             ` [PATCH v5 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
                               ` (8 subsequent siblings)
  21 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

When the driver sends the message for getting timestamp latches,
the number of latches is 0. The current implementation of the
message validation function incorrectly flags the length of such
a message as invalid.
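
The effect of the `>=` to `>` change can be sketched as below. This is a simplified stand-in for the validation logic, with an assumed 16-byte latch entry and made-up structure names; it is not the virtchnl2 code itself. When `msglen` equals the base structure size (the zero-latch request), the strict `>` keeps the flexible-tail accounting from running, so the request validates cleanly.

```c
#include <stddef.h>
#include <stdint.h>

struct tstamp_latch { uint32_t lo, hi; uint64_t rsvd; };	/* illustrative, 16 B */

struct tstamp_latches {
	uint16_t num_latches;
	uint16_t latch_size;
	uint8_t pad[4];
	struct tstamp_latch latches[1];	/* 1-sized trailing array */
};

static int validate_latches_msg(const void *msg, size_t msglen)
{
	size_t valid_len = sizeof(struct tstamp_latches);

	/* Was 'msglen >= valid_len': a zero-latch request (msglen ==
	 * valid_len) then entered this branch and was rejected. */
	if (msglen > valid_len) {
		const struct tstamp_latches *m = msg;

		if (m->num_latches == 0)
			return -1;
		/* '- 1' accounts for the entry inside sizeof() above */
		valid_len += (m->num_latches - 1) * sizeof(struct tstamp_latch);
	}
	return msglen == valid_len ? 0 : -1;
}
```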

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index e76ccbd46f..24a8b37876 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -2272,7 +2272,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_CAPS:
 		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_get_ptp_caps *ptp_caps =
 			(struct virtchnl2_get_ptp_caps *)msg;
 
@@ -2288,7 +2288,7 @@ virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u3
 	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
 		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
 
-		if (msglen >= valid_len) {
+		if (msglen > valid_len) {
 			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
 			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (12 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 15/21] common/idpf: add wmb before tail Soumyadeep Hore
                               ` (7 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

This capability bit indicates both inline and side-band flow
steering capability.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 24a8b37876..9dd5191c0e 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -243,7 +243,7 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_FLOW_DIRECTOR		= BIT_ULL(3),
 	VIRTCHNL2_CAP_SPLITQ_QSCHED		= BIT_ULL(4),
 	VIRTCHNL2_CAP_CRC			= BIT_ULL(5),
-	VIRTCHNL2_CAP_INLINE_FLOW_STEER		= BIT_ULL(6),
+	VIRTCHNL2_CAP_FLOW_STEER		= BIT_ULL(6),
 	VIRTCHNL2_CAP_WB_ON_ITR			= BIT_ULL(7),
 	VIRTCHNL2_CAP_PROMISC			= BIT_ULL(8),
 	VIRTCHNL2_CAP_LINK_SPEED		= BIT_ULL(9),
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 15/21] common/idpf: add wmb before tail
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (13 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-28 14:45               ` Bruce Richardson
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
                               ` (6 subsequent siblings)
  21 siblings, 2 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Introduced through customer feedback while addressing some bugs,
this adds a memory barrier before posting the control queue tail.
This makes sure memory writes have completed before HW starts
processing the descriptors.
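
The ordering requirement can be sketched as follows. This is an illustration only: `desc_ring`, `tail_reg` and `post_buffer` are stand-ins (the driver uses `idpf_wmb()` and `wr32()` against real MMIO), and a C11 release fence substitutes for the platform write barrier.

```c
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 8u

static uint64_t desc_ring[RING_SIZE];	/* stand-in descriptor ring */
static volatile uint32_t tail_reg;	/* stand-in for the MMIO tail register */

static void post_buffer(unsigned int idx, uint64_t dma_addr)
{
	desc_ring[idx] = dma_addr;	/* 1) fill the descriptor */

	/* 2) write barrier: descriptor stores must be globally visible
	 *    before the tail write below hands ownership to the device */
	atomic_thread_fence(memory_order_release);

	tail_reg = idx;			/* 3) bump tail, HW may now fetch */
}
```

Without the barrier, a weakly ordered CPU may let the tail store become visible before the descriptor store, so the device could fetch a stale descriptor.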

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index 65e5599614..4f47759a4f 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -604,6 +604,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			/* Wrap to end of end ring since current ntp is 0 */
 			cq->next_to_post = cq->ring_size - 1;
 
+		idpf_wmb();
+
 		wr32(hw, cq->reg.tail, cq->next_to_post);
 	}
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 16/21] drivers: add flex array support and fix issues
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (14 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 15/21] common/idpf: add wmb before tail Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-28 14:50               ` Bruce Richardson
  2024-06-24  9:16             ` [PATCH v5 17/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
                               ` (5 subsequent siblings)
  21 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Based on internal Linux upstream feedback received on the IDPF
driver, as well as some references available online, using
1-sized array fields in structures is discouraged, especially
in new Linux drivers that are going to be upstreamed. Instead,
flexible array fields are recommended for dynamically sized
structures.

Some fixes based on the code change are introduced to compile DPDK.
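
The difference between the two styles, and the matching allocation-size math, can be sketched as below. Structure names here are made up for the illustration; they are not the virtchnl2 definitions.

```c
#include <stdint.h>
#include <stdlib.h>

struct chunk { uint32_t type, start_id; };

struct chunks_old {			/* pre-patch style */
	uint16_t num_chunks;
	uint8_t pad[6];
	struct chunk chunks[1];		/* 1-sized placeholder array */
};

struct chunks_new {			/* recommended style */
	uint16_t num_chunks;
	uint8_t pad[6];
	struct chunk chunks[];		/* C99 flexible array member */
};

static struct chunks_new *alloc_chunks(uint16_t n)
{
	/* No '- 1' correction needed: sizeof(*c) excludes the flexible
	 * tail, unlike sizeof(struct chunks_old), which already counts
	 * one chunk and forces '(n - 1)' arithmetic everywhere. */
	struct chunks_new *c = calloc(1, sizeof(*c) + n * sizeof(c->chunks[0]));

	if (c)
		c->num_chunks = n;
	return c;
}
```

Since DPDK still compiles these headers with a 1-sized array (`STRUCT_VAR_LEN`), the patch keeps the compile-time size asserts via `VIRTCHNL2_CHECK_STRUCT_VAR_LEN` rather than switching to true flexible array members.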

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h       | 466 ++++-----------------
 drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
 drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
 3 files changed, 86 insertions(+), 410 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 9dd5191c0e..317bd06c0f 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -63,6 +63,10 @@ enum virtchnl2_status {
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
+
+#define STRUCT_VAR_LEN		1
 
 /**
  * New major set of opcodes introduced and so leaving room for
@@ -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_reg_chunk chunks[1];
+	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 
 /**
  * enum virtchnl2_vport_flags - Vport flags
@@ -773,7 +776,7 @@ struct virtchnl2_create_vport {
 	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(192, virtchnl2_create_vport, chunks.chunks);
 
 /**
  * struct virtchnl2_vport - Vport identifier information
@@ -860,10 +863,9 @@ struct virtchnl2_config_tx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[10];
-	struct virtchnl2_txq_info qinfo[1];
+	struct virtchnl2_txq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(72, virtchnl2_config_tx_queues, qinfo);
 
 /**
  * struct virtchnl2_rxq_info - Receive queue config info
@@ -942,10 +944,9 @@ struct virtchnl2_config_rx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[18];
-	struct virtchnl2_rxq_info qinfo[1];
+	struct virtchnl2_rxq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(112, virtchnl2_config_rx_queues, qinfo);
 
 /**
  * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
@@ -975,16 +976,15 @@ struct virtchnl2_add_queues {
 
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(56, virtchnl2_add_queues, chunks.chunks);
 
 /* Queue Groups Extension */
 /**
  * struct virtchnl2_rx_queue_group_info - RX queue group info
- * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
- *		  allocated by CreateVport command. New size will be returned
- *		  if allocation succeeded, otherwise original rss_size from
- *		  CreateVport will be returned.
+ * @rss_lut_size: User can ask to update rss_lut size originally allocated by
+ *		  CreateVport command. New size will be returned if allocation
+ *		  succeeded, otherwise original rss_size from CreateVport
+ *		  will be returned.
  * @pad: Padding for future extensions
  */
 struct virtchnl2_rx_queue_group_info {
@@ -1012,7 +1012,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
  * @cir_pad: Future extension purpose for CIR only
  * @pad2: Padding for future extensions
  */
-struct virtchnl2_tx_queue_group_info { /* IN */
+struct virtchnl2_tx_queue_group_info {
 	u8 tx_tc;
 	u8 priority;
 	u8 is_sp;
@@ -1045,19 +1045,17 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 /**
  * struct virtchnl2_queue_group_info - Queue group info
  * @qg_id: Queue group ID
- * @num_tx_q: Number of TX queues
- * @num_tx_complq: Number of completion queues
- * @num_rx_q: Number of RX queues
- * @num_rx_bufq: Number of RX buffer queues
+ * @num_tx_q: Number of TX queues requested
+ * @num_tx_complq: Number of completion queues requested
+ * @num_rx_q: Number of RX queues requested
+ * @num_rx_bufq: Number of RX buffer queues requested
  * @tx_q_grp_info: TX queue group info
  * @rx_q_grp_info: RX queue group info
  * @pad: Padding for future extensions
- * @chunks: Queue register chunks
+ * @chunks: Queue register chunks from CP
  */
 struct virtchnl2_queue_group_info {
-	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
-	/* IN, Number of queue of different types in the group. */
 	__le16 num_tx_q;
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
@@ -1066,56 +1064,52 @@ struct virtchnl2_queue_group_info {
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
 	u8 pad[40];
-	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
-};
-
-VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
-
-/**
- * struct virtchnl2_queue_groups - Queue groups list
- * @num_queue_groups: Total number of queue groups
- * @pad: Padding for future extensions
- * @groups: Array of queue group info
- */
-struct virtchnl2_queue_groups {
-	__le16 num_queue_groups;
-	u8 pad[6];
-	struct virtchnl2_queue_group_info groups[1];
+	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(120, virtchnl2_queue_group_info, chunks.chunks);
 
 /**
  * struct virtchnl2_add_queue_groups - Add queue groups
- * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ * @vport_id: Vport_id to add queue group to, same as allocated by
  *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @num_queue_groups: Total number of queue groups
  * @pad: Padding for future extensions
- * @qg_info: IN/OUT. List of all the queue groups
+#ifndef FLEX_ARRAY_SUPPORT
+ * @groups: List of all the queue group info structures
+#endif
  *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
- * groups and queues assigned followed by num_queue_groups and num_chunks of
- * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ * groups and queues assigned followed by num_queue_groups and groups of
+ * virtchnl2_queue_group_info and virtchnl2_queue_chunk structures.
+#ifdef FLEX_ARRAY_SUPPORT
+ * (Note: There is no specific field for the queue group info but are added at
+ * the end of the add queue groups message. Receiver of this message is expected
+ * to extract the queue group info accordingly. Reason for doing this is because
+ * compiler doesn't allow nested flexible array fields).
+#endif
  *
  * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
 	__le32 vport_id;
-	u8 pad[4];
-	struct virtchnl2_queue_groups qg_info;
+	__le16 num_queue_groups;
+	u8 pad[10];
+	struct virtchnl2_queue_group_info groups[STRUCT_VAR_LEN];
+
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
 /**
  * struct virtchnl2_delete_queue_groups - Delete queue groups
- * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ * @vport_id: Vport ID to delete queue group from, same as allocated by
  *	      CreateVport.
- * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @num_queue_groups: Defines number of groups provided
  * @pad: Padding
- * @qg_ids: IN, IDs & types of Queue Groups to delete
+ * @qg_ids: IDs & types of Queue Groups to delete
  *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
@@ -1129,10 +1123,9 @@ struct virtchnl2_delete_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	struct virtchnl2_queue_group_id qg_ids[1];
+	struct virtchnl2_queue_group_id qg_ids[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_delete_queue_groups, qg_ids);
 
 /**
  * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
@@ -1190,10 +1183,9 @@ struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
 
-	struct virtchnl2_vector_chunk vchunks[1];
+	struct virtchnl2_vector_chunk vchunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(48, virtchnl2_vector_chunks, vchunks);
 
 /**
  * struct virtchnl2_alloc_vectors - Vector allocation info
@@ -1215,8 +1207,7 @@ struct virtchnl2_alloc_vectors {
 
 	struct virtchnl2_vector_chunks vchunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(64, virtchnl2_alloc_vectors, vchunks.vchunks);
 
 /**
  * struct virtchnl2_rss_lut - RSS LUT info
@@ -1237,10 +1228,9 @@ struct virtchnl2_rss_lut {
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	__le32 lut[1];
+	__le32 lut[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_rss_lut, lut);
 
 /**
  * struct virtchnl2_rss_hash - RSS hash info
@@ -1389,10 +1379,9 @@ struct virtchnl2_ptype {
 	u8 ptype_id_8;
 	u8 proto_id_count;
 	__le16 pad;
-	__le16 proto_id[1];
+	__le16 proto_id[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(8, virtchnl2_ptype, proto_id);
 
 /**
  * struct virtchnl2_get_ptype_info - Packet type info
@@ -1428,7 +1417,7 @@ struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
-	struct virtchnl2_ptype ptype[1];
+	struct virtchnl2_ptype ptype[STRUCT_VAR_LEN];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
@@ -1629,10 +1618,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_chunk chunks[1];
+	struct virtchnl2_queue_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_queue_chunks, chunks);
 
 /**
  * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
@@ -1654,8 +1642,7 @@ struct virtchnl2_del_ena_dis_queues {
 
 	struct virtchnl2_queue_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(32, virtchnl2_del_ena_dis_queues, chunks.chunks);
 
 /**
  * struct virtchnl2_queue_vector - Queue to vector mapping
@@ -1699,10 +1686,10 @@ struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
 	__le16 num_qv_maps;
 	u8 pad[10];
-	struct virtchnl2_queue_vector qv_maps[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+	struct virtchnl2_queue_vector qv_maps[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_vector_maps, qv_maps);
 
 /**
  * struct virtchnl2_loopback - Loopback info
@@ -1754,10 +1741,10 @@ struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
 	__le16 num_mac_addr;
 	u8 pad[2];
-	struct virtchnl2_mac_addr mac_addr_list[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
+	struct virtchnl2_mac_addr mac_addr_list[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_mac_addr_list, mac_addr_list);
 
 /**
  * struct virtchnl2_promisc_info - Promiscuous type information
@@ -1856,10 +1843,10 @@ struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
+	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp,
+			       ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_get_ptp_caps - Get PTP capabilities
@@ -1884,8 +1871,8 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(88, virtchnl2_get_ptp_caps,
+			       tx_tstamp.ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
@@ -1920,13 +1907,12 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
+	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp_latches);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp_latches,
+			       tstamp_latches);
 
 static inline const char *virtchnl2_op_str(__le32 v_opcode)
 {
@@ -2004,314 +1990,4 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
 	}
 }
 
-/**
- * virtchnl2_vc_validate_vf_msg
- * @ver: Virtchnl2 version info
- * @v_opcode: Opcode for the message
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- *
- * Validate msg format against struct for each opcode.
- */
-static inline int
-virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
-			     u8 *msg, __le16 msglen)
-{
-	bool err_msg_format = false;
-	__le32 valid_len = 0;
-
-	/* Validate message length */
-	switch (v_opcode) {
-	case VIRTCHNL2_OP_VERSION:
-		valid_len = sizeof(struct virtchnl2_version_info);
-		break;
-	case VIRTCHNL2_OP_GET_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_capabilities);
-		break;
-	case VIRTCHNL2_OP_CREATE_VPORT:
-		valid_len = sizeof(struct virtchnl2_create_vport);
-		if (msglen >= valid_len) {
-			struct virtchnl2_create_vport *cvport =
-				(struct virtchnl2_create_vport *)msg;
-
-			if (cvport->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (cvport->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_CREATE_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_create_adi);
-		if (msglen >= valid_len) {
-			struct virtchnl2_non_flex_create_adi *cadi =
-				(struct virtchnl2_non_flex_create_adi *)msg;
-
-			if (cadi->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			if (cadi->vchunks.num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (cadi->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-			valid_len += (cadi->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_destroy_adi);
-		break;
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-		valid_len = sizeof(struct virtchnl2_vport);
-		break;
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_tx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_tx_queues *ctq =
-				(struct virtchnl2_config_tx_queues *)msg;
-			if (ctq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (ctq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_txq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_rx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_rx_queues *crq =
-				(struct virtchnl2_config_rx_queues *)msg;
-			if (crq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (crq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_rxq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUES:
-		valid_len = sizeof(struct virtchnl2_add_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_add_queues *add_q =
-				(struct virtchnl2_add_queues *)msg;
-
-			if (add_q->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (add_q->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_DEL_QUEUES:
-		valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_del_ena_dis_queues *qs =
-				(struct virtchnl2_del_ena_dis_queues *)msg;
-			if (qs->chunks.num_chunks == 0 ||
-			    qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (qs->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_add_queue_groups);
-		if (msglen != valid_len) {
-			__le64 offset;
-			__le32 i;
-			struct virtchnl2_add_queue_groups *add_queue_grp =
-				(struct virtchnl2_add_queue_groups *)msg;
-			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
-			struct virtchnl2_queue_group_info *grp_info;
-			__le32 chunk_size = sizeof(struct virtchnl2_queue_reg_chunk);
-			__le32 group_size = sizeof(struct virtchnl2_queue_group_info);
-			__le32 total_chunks_size;
-
-			if (groups->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (groups->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_info);
-			offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
-
-			for (i = 0; i < groups->num_queue_groups; i++) {
-				grp_info = (struct virtchnl2_queue_group_info *)
-						   ((u8 *)groups + offset);
-				if (grp_info->chunks.num_chunks == 0) {
-					offset += group_size;
-					continue;
-				}
-				total_chunks_size = (grp_info->chunks.num_chunks - 1) * chunk_size;
-				offset += group_size + total_chunks_size;
-				valid_len += total_chunks_size;
-			}
-		}
-		break;
-	case VIRTCHNL2_OP_DEL_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_delete_queue_groups);
-		if (msglen != valid_len) {
-			struct virtchnl2_delete_queue_groups *del_queue_grp =
-				(struct virtchnl2_delete_queue_groups *)msg;
-
-			if (del_queue_grp->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += (del_queue_grp->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_id);
-		}
-		break;
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-		valid_len = sizeof(struct virtchnl2_queue_vector_maps);
-		if (msglen >= valid_len) {
-			struct virtchnl2_queue_vector_maps *v_qp =
-				(struct virtchnl2_queue_vector_maps *)msg;
-			if (v_qp->num_qv_maps == 0 ||
-			    v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_qp->num_qv_maps - 1) *
-				      sizeof(struct virtchnl2_queue_vector);
-		}
-		break;
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_alloc_vectors);
-		if (msglen >= valid_len) {
-			struct virtchnl2_alloc_vectors *v_av =
-				(struct virtchnl2_alloc_vectors *)msg;
-
-			if (v_av->vchunks.num_vchunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (v_av->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_vector_chunks);
-		if (msglen >= valid_len) {
-			struct virtchnl2_vector_chunks *v_chunks =
-				(struct virtchnl2_vector_chunks *)msg;
-			if (v_chunks->num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_chunks->num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-		valid_len = sizeof(struct virtchnl2_rss_key);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_key *vrk =
-				(struct virtchnl2_rss_key *)msg;
-
-			if (vrk->key_len == 0) {
-				/* Zero length is allowed as input */
-				break;
-			}
-
-			valid_len += vrk->key_len - 1;
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-		valid_len = sizeof(struct virtchnl2_rss_lut);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_lut *vrl =
-				(struct virtchnl2_rss_lut *)msg;
-
-			if (vrl->lut_entries == 0) {
-				/* Zero entries is allowed as input */
-				break;
-			}
-
-			valid_len += (vrl->lut_entries - 1) * sizeof(vrl->lut);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-		valid_len = sizeof(struct virtchnl2_rss_hash);
-		break;
-	case VIRTCHNL2_OP_SET_SRIOV_VFS:
-		valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		valid_len = sizeof(struct virtchnl2_get_ptype_info);
-		break;
-	case VIRTCHNL2_OP_GET_STATS:
-		valid_len = sizeof(struct virtchnl2_vport_stats);
-		break;
-	case VIRTCHNL2_OP_GET_PORT_STATS:
-		valid_len = sizeof(struct virtchnl2_port_stats);
-		break;
-	case VIRTCHNL2_OP_RESET_VF:
-		break;
-	case VIRTCHNL2_OP_GET_PTP_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_get_ptp_caps *ptp_caps =
-			(struct virtchnl2_get_ptp_caps *)msg;
-
-			if (ptp_caps->tx_tstamp.num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_entry));
-		}
-		break;
-	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
-		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
-			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
-
-			if (tx_tstamp_latches->num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((tx_tstamp_latches->num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
-		}
-		break;
-	/* These are always errors coming from the VF */
-	case VIRTCHNL2_OP_EVENT:
-	case VIRTCHNL2_OP_UNKNOWN:
-	default:
-		return VIRTCHNL2_STATUS_ERR_ESRCH;
-	}
-	/* Few more checks */
-	if (err_msg_format || valid_len != msglen)
-		return VIRTCHNL2_STATUS_ERR_EINVAL;
-
-	return 0;
-}
-
 #endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index c46ed50eb5..f00202f43c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -366,7 +366,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	int err = -1;
 
 	size = sizeof(*p2p_queue_grps_info) +
-	       (p2p_queue_grps_info->qg_info.num_queue_groups - 1) *
+	       (p2p_queue_grps_info->num_queue_groups - 1) *
 		   sizeof(struct virtchnl2_queue_group_info);
 
 	memset(&args, 0, sizeof(args));
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e718e9e19..e707043bf7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2393,18 +2393,18 @@ cpfl_p2p_q_grps_add(struct idpf_vport *vport,
 	int ret;
 
 	p2p_queue_grps_info->vport_id = vport->vport_id;
-	p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
-	p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
+	p2p_queue_grps_info->num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
+	p2p_queue_grps_info->groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
+	p2p_queue_grps_info->groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
+	p2p_queue_grps_info->groups[0].rx_q_grp_info.rss_lut_size = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.tx_tc = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.priority = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.is_sp = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.pir_weight = 0;
 
 	ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info);
 	if (ret != 0) {
@@ -2423,13 +2423,13 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
 	struct virtchnl2_queue_reg_chunks *vc_chunks_out;
 	int i, type;
 
-	if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
+	if (p2p_q_vc_out_info->groups[0].qg_id.queue_group_type !=
 	    VIRTCHNL2_QUEUE_GROUP_P2P) {
 		PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
 		return -EINVAL;
 	}
 
-	vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
+	vc_chunks_out = &p2p_q_vc_out_info->groups[0].chunks;
 
 	for (i = 0; i < vc_chunks_out->num_chunks; i++) {
 		type = vc_chunks_out->chunks[i].type;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 17/21] common/idpf: enable flow steer capability for vports
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (15 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 18/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
                               ` (4 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add the virtchnl2_flow_types enum to be used for flow steering.

Add flow steering capability flags, and advertise the supported
flow types and action types in the vport create message.
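As a sketch of how a consumer might combine the new fields: the bit
values below are taken from the enums in this patch, but the helper
itself is hypothetical and not part of the driver.

```c
#include <assert.h>
#include <stdint.h>

/* Bit values mirrored from the virtchnl2.h additions in this patch. */
#define VIRTCHNL2_FLOW_IPV4_TCP			(1ULL << 0)
#define VIRTCHNL2_ACTION_QUEUE			(1U << 2)
#define VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	(1U << 3)

/* Hypothetical check: a sideband flow-steering rule is usable only if
 * the vport flag is set AND both the flow type and the action are
 * advertised in the create-vport response. */
static int vport_supports_sideband(uint16_t vport_flags,
				   uint64_t sideband_flow_types,
				   uint32_t sideband_flow_actions,
				   uint64_t flow_type, uint32_t action)
{
	if (!(vport_flags & VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER))
		return 0;
	return (sideband_flow_types & flow_type) &&
	       (sideband_flow_actions & action) ? 1 : 0;
}
```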

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 60 ++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 317bd06c0f..c14a4e2c7d 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -269,6 +269,43 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
 };
 
+/**
+ * enum virtchnl2_action_types - Available actions for sideband flow steering
+ * @VIRTCHNL2_ACTION_DROP: Drop the packet
+ * @VIRTCHNL2_ACTION_PASSTHRU: Forward the packet to the next classifier/stage
+ * @VIRTCHNL2_ACTION_QUEUE: Forward the packet to a receive queue
+ * @VIRTCHNL2_ACTION_Q_GROUP: Forward the packet to a receive queue group
+ * @VIRTCHNL2_ACTION_MARK: Mark the packet with specific marker value
+ * @VIRTCHNL2_ACTION_COUNT: Increment the corresponding counter
+ */
+
+enum virtchnl2_action_types {
+	VIRTCHNL2_ACTION_DROP		= BIT(0),
+	VIRTCHNL2_ACTION_PASSTHRU	= BIT(1),
+	VIRTCHNL2_ACTION_QUEUE		= BIT(2),
+	VIRTCHNL2_ACTION_Q_GROUP	= BIT(3),
+	VIRTCHNL2_ACTION_MARK		= BIT(4),
+	VIRTCHNL2_ACTION_COUNT		= BIT(5),
+};
+
+/* Flow type capabilities for Flow Steering and Receive-Side Scaling */
+enum virtchnl2_flow_types {
+	VIRTCHNL2_FLOW_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_FLOW_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_FLOW_IPV4_SCTP	= BIT(2),
+	VIRTCHNL2_FLOW_IPV4_OTHER	= BIT(3),
+	VIRTCHNL2_FLOW_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_FLOW_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_FLOW_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_FLOW_IPV6_OTHER	= BIT(7),
+	VIRTCHNL2_FLOW_IPV4_AH		= BIT(8),
+	VIRTCHNL2_FLOW_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_FLOW_IPV4_AH_ESP	= BIT(10),
+	VIRTCHNL2_FLOW_IPV6_AH		= BIT(11),
+	VIRTCHNL2_FLOW_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_FLOW_IPV6_AH_ESP	= BIT(13),
+};
+
 /**
  * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
  * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
@@ -707,11 +744,16 @@ VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 /**
  * enum virtchnl2_vport_flags - Vport flags
  * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
- * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER: Inline flow steering enabled
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ: Inline flow steering enabled
+ * with explicit Rx queue action
+ * @VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER: Sideband flow steering enabled
  */
 enum virtchnl2_vport_flags {
 	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
-	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ	= BIT(2),
+	VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	= BIT(3),
 };
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
@@ -739,6 +781,14 @@ enum virtchnl2_vport_flags {
  * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
  * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
  * @reserved: Reserved bytes and cannot be used
+ * @inline_flow_types: Bit mask of supported inline-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_types: Bit mask of supported sideband-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_actions: Bit mask of supported action types
+ *  for sideband flow steering (See enum virtchnl2_action_types)
+ * @flow_steer_max_rules: Max rules allowed for inline and sideband
+ *  flow steering combined
  * @rss_algorithm: RSS algorithm
  * @rss_key_size: RSS key size
  * @rss_lut_size: RSS LUT size
@@ -768,7 +818,11 @@ struct virtchnl2_create_vport {
 	__le16 vport_flags;
 	__le64 rx_desc_ids;
 	__le64 tx_desc_ids;
-	u8 reserved[72];
+	u8 reserved[48];
+	__le64 inline_flow_types;
+	__le64 sideband_flow_types;
+	__le32 sideband_flow_actions;
+	__le32 flow_steer_max_rules;
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 18/21] common/idpf: add a new Tx context descriptor structure
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (16 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 17/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 19/21] common/idpf: remove idpf common file Soumyadeep Hore
                               ` (3 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a new Tx context descriptor structure that supports timesync
packets; it carries the index used for timestamping.
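A sketch of how the timestamp index could be split across the two
tsyn register fields of the new descriptor: the field layout and masks
come from the diff (tsyn_reg_l uses bits 15:14, tsyn_reg_h uses bits
15:0), but the exact low/high split of the index is an assumption.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed split: the low 2 bits of the latch index land in
 * tsyn_reg_l[15:14] (IDPF_TX_DESC_CTX_TSYN_L_M), the remaining bits
 * in tsyn_reg_h[15:0] (IDPF_TX_DESC_CTX_TSYN_H_M). */
#define IDPF_TX_DESC_CTX_TSYN_L_S	14

static uint16_t tsyn_reg_l_val(uint32_t idx)
{
	/* low two index bits shifted into bits 15:14 */
	return (uint16_t)((idx & 0x3) << IDPF_TX_DESC_CTX_TSYN_L_S);
}

static uint16_t tsyn_reg_h_val(uint32_t idx)
{
	/* remaining index bits occupy the full 16-bit high field */
	return (uint16_t)(idx >> 2);
}
```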

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_lan_txrx.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/idpf_lan_txrx.h b/drivers/common/idpf/base/idpf_lan_txrx.h
index c9eaeb5d3f..be27973a33 100644
--- a/drivers/common/idpf/base/idpf_lan_txrx.h
+++ b/drivers/common/idpf/base/idpf_lan_txrx.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_LAN_TXRX_H_
@@ -286,6 +286,24 @@ struct idpf_flex_tx_tso_ctx_qw {
 };
 
 union idpf_flex_tx_ctx_desc {
+		/* DTYPE = IDPF_TX_DESC_DTYPE_CTX (0x01) */
+	struct  {
+		struct {
+			u8 rsv[4];
+			__le16 l2tag2;
+			u8 rsv_2[2];
+		} qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 tsyn_reg_l;
+#define IDPF_TX_DESC_CTX_TSYN_L_M	GENMASK(15, 14)
+			__le16 tsyn_reg_h;
+#define IDPF_TX_DESC_CTX_TSYN_H_M	GENMASK(15, 0)
+			__le16 mss;
+#define IDPF_TX_DESC_CTX_MSS_M		GENMASK(14, 2)
+		} qw1;
+	} tsyn;
+
 	/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
 	struct {
 		struct idpf_flex_tx_tso_ctx_qw qw0;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 19/21] common/idpf: remove idpf common file
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (17 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 18/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-24  9:16             ` [PATCH v5 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
                               ` (2 subsequent siblings)
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

idpf_common.c is redundant in this implementation and is no longer
required, so remove it.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c | 382 -------------------------
 drivers/common/idpf/base/meson.build   |   1 -
 2 files changed, 383 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
deleted file mode 100644
index bb540345c2..0000000000
--- a/drivers/common/idpf/base/idpf_common.c
+++ /dev/null
@@ -1,382 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2024 Intel Corporation
- */
-
-#include "idpf_prototype.h"
-#include "idpf_type.h"
-#include <virtchnl.h>
-
-
-/**
- * idpf_set_mac_type - Sets MAC type
- * @hw: pointer to the HW structure
- *
- * This function sets the mac type of the adapter based on the
- * vendor ID and device ID stored in the hw structure.
- */
-int idpf_set_mac_type(struct idpf_hw *hw)
-{
-	int status = 0;
-
-	DEBUGFUNC("Set MAC type\n");
-
-	if (hw->vendor_id == IDPF_INTEL_VENDOR_ID) {
-		switch (hw->device_id) {
-		case IDPF_DEV_ID_PF:
-			hw->mac.type = IDPF_MAC_PF;
-			break;
-		case IDPF_DEV_ID_VF:
-			hw->mac.type = IDPF_MAC_VF;
-			break;
-		default:
-			hw->mac.type = IDPF_MAC_GENERIC;
-			break;
-		}
-	} else {
-		status = -ENODEV;
-	}
-
-	DEBUGOUT2("Setting MAC type found mac: %d, returns: %d\n",
-		  hw->mac.type, status);
-	return status;
-}
-
-/**
- *  idpf_init_hw - main initialization routine
- *  @hw: pointer to the hardware structure
- *  @ctlq_size: struct to pass ctlq size data
- */
-int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
-{
-	struct idpf_ctlq_create_info *q_info;
-	int status = 0;
-	struct idpf_ctlq_info *cq = NULL;
-
-	/* Setup initial control queues */
-	q_info = (struct idpf_ctlq_create_info *)
-		 idpf_calloc(hw, 2, sizeof(struct idpf_ctlq_create_info));
-	if (!q_info)
-		return -ENOMEM;
-
-	q_info[0].type             = IDPF_CTLQ_TYPE_MAILBOX_TX;
-	q_info[0].buf_size         = ctlq_size.asq_buf_size;
-	q_info[0].len              = ctlq_size.asq_ring_size;
-	q_info[0].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[0].reg.head         = PF_FW_ATQH;
-		q_info[0].reg.tail         = PF_FW_ATQT;
-		q_info[0].reg.len          = PF_FW_ATQLEN;
-		q_info[0].reg.bah          = PF_FW_ATQBAH;
-		q_info[0].reg.bal          = PF_FW_ATQBAL;
-		q_info[0].reg.len_mask     = PF_FW_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = PF_FW_ATQH_ATQH_M;
-	} else {
-		q_info[0].reg.head         = VF_ATQH;
-		q_info[0].reg.tail         = VF_ATQT;
-		q_info[0].reg.len          = VF_ATQLEN;
-		q_info[0].reg.bah          = VF_ATQBAH;
-		q_info[0].reg.bal          = VF_ATQBAL;
-		q_info[0].reg.len_mask     = VF_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = VF_ATQH_ATQH_M;
-	}
-
-	q_info[1].type             = IDPF_CTLQ_TYPE_MAILBOX_RX;
-	q_info[1].buf_size         = ctlq_size.arq_buf_size;
-	q_info[1].len              = ctlq_size.arq_ring_size;
-	q_info[1].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[1].reg.head         = PF_FW_ARQH;
-		q_info[1].reg.tail         = PF_FW_ARQT;
-		q_info[1].reg.len          = PF_FW_ARQLEN;
-		q_info[1].reg.bah          = PF_FW_ARQBAH;
-		q_info[1].reg.bal          = PF_FW_ARQBAL;
-		q_info[1].reg.len_mask     = PF_FW_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = PF_FW_ARQH_ARQH_M;
-	} else {
-		q_info[1].reg.head         = VF_ARQH;
-		q_info[1].reg.tail         = VF_ARQT;
-		q_info[1].reg.len          = VF_ARQLEN;
-		q_info[1].reg.bah          = VF_ARQBAH;
-		q_info[1].reg.bal          = VF_ARQBAL;
-		q_info[1].reg.len_mask     = VF_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = VF_ARQH_ARQH_M;
-	}
-
-	status = idpf_ctlq_init(hw, 2, q_info);
-	if (status) {
-		/* TODO return error */
-		idpf_free(hw, q_info);
-		return status;
-	}
-
-	LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, idpf_ctlq_info, cq_list) {
-		if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = cq;
-		else if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = cq;
-	}
-
-	/* TODO hardcode a mac addr for now */
-	hw->mac.addr[0] = 0x00;
-	hw->mac.addr[1] = 0x00;
-	hw->mac.addr[2] = 0x00;
-	hw->mac.addr[3] = 0x00;
-	hw->mac.addr[4] = 0x03;
-	hw->mac.addr[5] = 0x14;
-
-	idpf_free(hw, q_info);
-
-	return 0;
-}
-
-/**
- * idpf_send_msg_to_cp
- * @hw: pointer to the hardware structure
- * @v_opcode: opcodes for VF-PF communication
- * @v_retval: return error code
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- * @cmd_details: pointer to command details
- *
- * Send message to CP. By default, this message
- * is sent asynchronously, i.e. idpf_asq_send_command() does not wait for
- * completion before returning.
- */
-int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
-			int v_retval, u8 *msg, u16 msglen)
-{
-	struct idpf_ctlq_msg ctlq_msg = { 0 };
-	struct idpf_dma_mem dma_mem = { 0 };
-	int status;
-
-	ctlq_msg.opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg.func_id = 0;
-	ctlq_msg.data_len = msglen;
-	ctlq_msg.cookie.mbx.chnl_retval = v_retval;
-	ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
-
-	if (msglen > 0) {
-		dma_mem.va = (struct idpf_dma_mem *)
-			  idpf_alloc_dma_mem(hw, &dma_mem, msglen);
-		if (!dma_mem.va)
-			return -ENOMEM;
-
-		idpf_memcpy(dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
-		ctlq_msg.ctx.indirect.payload = &dma_mem;
-	}
-	status = idpf_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
-
-	if (dma_mem.va)
-		idpf_free_dma_mem(hw, &dma_mem);
-
-	return status;
-}
-
-/**
- *  idpf_asq_done - check if FW has processed the Admin Send Queue
- *  @hw: pointer to the hw struct
- *
- *  Returns true if the firmware has processed all descriptors on the
- *  admin send queue. Returns false if there are still requests pending.
- */
-bool idpf_asq_done(struct idpf_hw *hw)
-{
-	/* AQ designers suggest use of head for better
-	 * timing reliability than DD bit
-	 */
-	return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
-}
-
-/**
- * idpf_check_asq_alive
- * @hw: pointer to the hw struct
- *
- * Returns true if Queue is enabled else false.
- */
-bool idpf_check_asq_alive(struct idpf_hw *hw)
-{
-	if (hw->asq->reg.len)
-		return !!(rd32(hw, hw->asq->reg.len) &
-			  PF_FW_ATQLEN_ATQENABLE_M);
-
-	return false;
-}
-
-/**
- *  idpf_clean_arq_element
- *  @hw: pointer to the hw struct
- *  @e: event info from the receive descriptor, includes any buffers
- *  @pending: number of events that could be left to process
- *
- *  This function cleans one Admin Receive Queue element and returns
- *  the contents through e.  It can also return how many events are
- *  left to process through 'pending'
- */
-int idpf_clean_arq_element(struct idpf_hw *hw,
-			   struct idpf_arq_event_info *e, u16 *pending)
-{
-	struct idpf_dma_mem *dma_mem = NULL;
-	struct idpf_ctlq_msg msg = { 0 };
-	int status;
-	u16 msg_data_len;
-
-	*pending = 1;
-
-	status = idpf_ctlq_recv(hw->arq, pending, &msg);
-	if (status == -ENOMSG)
-		goto exit;
-
-	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
-	e->desc.opcode = msg.opcode;
-	e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
-	e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
-	e->desc.ret_val = msg.status;
-	e->desc.datalen = msg.data_len;
-	if (msg.data_len > 0) {
-		if (!msg.ctx.indirect.payload || !msg.ctx.indirect.payload->va ||
-		    !e->msg_buf) {
-			return -EFAULT;
-		}
-		e->buf_len = msg.data_len;
-		msg_data_len = msg.data_len;
-		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
-			    IDPF_DMA_TO_NONDMA);
-		dma_mem = msg.ctx.indirect.payload;
-	} else {
-		*pending = 0;
-	}
-
-	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
-
-exit:
-	return status;
-}
-
-/**
- *  idpf_deinit_hw - shutdown routine
- *  @hw: pointer to the hardware structure
- */
-void idpf_deinit_hw(struct idpf_hw *hw)
-{
-	hw->asq = NULL;
-	hw->arq = NULL;
-
-	idpf_ctlq_deinit(hw);
-}
-
-/**
- * idpf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a RESET message to the CPF. Does not wait for response from CPF
- * as none will be forthcoming. Immediately after calling this function,
- * the control queue should be shut down and (optionally) reinitialized.
- */
-int idpf_reset(struct idpf_hw *hw)
-{
-	return idpf_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
-				      0, NULL, 0);
-}
-
-/**
- * idpf_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- */
-STATIC int idpf_get_set_rss_lut(struct idpf_hw *hw, u16 vsi_id,
-				bool pf_lut, u8 *lut, u16 lut_size,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- */
-int idpf_get_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
-}
-
-/**
- * idpf_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- */
-int idpf_set_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * idpf_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- */
-STATIC int idpf_get_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-				struct idpf_get_set_rss_key_data *key,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- */
-int idpf_get_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * idpf_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-int idpf_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, true);
-}
-
-RTE_LOG_REGISTER_DEFAULT(idpf_common_logger, NOTICE);
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 96d7642209..649c44d0ae 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -2,7 +2,6 @@
 # Copyright(c) 2023 Intel Corporation
 
 sources += files(
-        'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
 )
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 20/21] drivers: adding type to idpf vc queue switch
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (18 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 19/21] common/idpf: remove idpf common file Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-28 14:53               ` Bruce Richardson
  2024-06-24  9:16             ` [PATCH v5 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
  2024-06-28 14:58             ` [PATCH v5 00/21] Update MEV TS Base Driver Bruce Richardson
  21 siblings, 1 reply; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add an argument named 'type' to specify the queue type in
idpf_vc_queue_switch(). This fixes the use of an improper queue
type in the virtchnl2 message.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/idpf_common_virtchnl.c |  8 ++------
 drivers/common/idpf/idpf_common_virtchnl.h |  2 +-
 drivers/net/cpfl/cpfl_ethdev.c             | 12 ++++++++----
 drivers/net/cpfl/cpfl_rxtx.c               | 12 ++++++++----
 drivers/net/idpf/idpf_rxtx.c               | 12 ++++++++----
 5 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f00202f43c..de511da788 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -769,15 +769,11 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
+		     bool rx, bool on, uint32_t type)
 {
-	uint32_t type;
 	int err, queue_id;
 
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+	if (rx)
 		queue_id = vport->chunks_info.rx_start_qid + qid;
 	else
 		queue_id = vport->chunks_info.tx_start_qid + qid;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 73446ded86..d6555978d5 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -31,7 +31,7 @@ int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
 int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-			 bool rx, bool on);
+			 bool rx, bool on, uint32_t type);
 __rte_internal
 int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e707043bf7..9e2a74371e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1907,7 +1907,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	int i, ret;
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
 			return ret;
@@ -1915,7 +1916,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
 			return ret;
@@ -1943,7 +1945,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
 			return ret;
@@ -1951,7 +1954,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
 			return ret;
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab8bec4645..47351ca102 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1200,7 +1200,8 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1252,7 +1253,8 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1283,7 +1285,8 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 						     rx_queue_id - cpfl_vport->nb_data_txq,
 						     true, false);
 	else
-		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+								VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1331,7 +1334,8 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 						     tx_queue_id - cpfl_vport->nb_data_txq,
 						     false, false);
 	else
-		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+								VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 64f2235580..858bbefe3b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -595,7 +595,8 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -646,7 +647,8 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -669,7 +671,8 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -701,7 +704,8 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v5 21/21] doc: updated the documentation for cpfl PMD
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (19 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
@ 2024-06-24  9:16             ` Soumyadeep Hore
  2024-06-28 14:58             ` [PATCH v5 00/21] Update MEV TS Base Driver Bruce Richardson
  21 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-06-24  9:16 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Update the cpfl PMD documentation to note support for MEV TS
firmware version 1.4.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 doc/guides/nics/cpfl.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 9b7a99c894..528c809819 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -35,6 +35,8 @@ Here is the suggested matching list which has been tested and verified.
    +------------+------------------+
    |    23.11   |       1.0        |
    +------------+------------------+
+   |    24.07   |       1.4        |
+   +------------+------------------+
 
 
 Configuration
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v5 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M
  2024-06-24  9:16             ` [PATCH v5 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
@ 2024-06-28 14:16               ` Bruce Richardson
  0 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-06-28 14:16 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

On Mon, Jun 24, 2024 at 09:16:32AM +0000, Soumyadeep Hore wrote:
> Mask for VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M was defined wrongly
> and this patch fixes it.
> 

Fixes: fb4ac04e9bfa ("common/idpf: introduce common library")

Will add on apply.


^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v5 13/21] common/idpf: update in PTP message validation
  2024-06-24  9:16             ` [PATCH v5 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
@ 2024-06-28 14:33               ` Bruce Richardson
  0 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-06-28 14:33 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

On Mon, Jun 24, 2024 at 09:16:36AM +0000, Soumyadeep Hore wrote:
> When the driver sends the message for getting timestamp latches, the
> number of latches is equal to 0. The current implementation of the
> message validation function incorrectly flags the length of this kind
> of message as invalid.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---

Fixes: 3f72f4472b57 ("common/idpf/base: initialize PTP support")

Will add on apply.

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v5 15/21] common/idpf: add wmb before tail
  2024-06-24  9:16             ` [PATCH v5 15/21] common/idpf: add wmb before tail Soumyadeep Hore
@ 2024-06-28 14:45               ` Bruce Richardson
  2024-07-01 10:05                 ` Hore, Soumyadeep
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
  1 sibling, 1 reply; 125+ messages in thread
From: Bruce Richardson @ 2024-06-28 14:45 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

On Mon, Jun 24, 2024 at 09:16:38AM +0000, Soumyadeep Hore wrote:
> Introduced through customer feedback while addressing some bugs, this
> patch adds a memory barrier before posting the ctlq tail. This makes
> sure memory writes have completed before HW starts processing the
> descriptors.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---

From the description, it seems that this may be a bugfix patch. Can you
confirm this, and whether it should be backported or not? Also, please
provide a Fixes tag for it.

Thanks,
/Bruce

>  drivers/common/idpf/base/idpf_controlq.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
> index 65e5599614..4f47759a4f 100644
> --- a/drivers/common/idpf/base/idpf_controlq.c
> +++ b/drivers/common/idpf/base/idpf_controlq.c
> @@ -604,6 +604,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
>  			/* Wrap to end of end ring since current ntp is 0 */
>  			cq->next_to_post = cq->ring_size - 1;
>  
> +		idpf_wmb();
> +
>  		wr32(hw, cq->reg.tail, cq->next_to_post);
>  	}
>  
> -- 
> 2.43.0
> 

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v5 16/21] drivers: add flex array support and fix issues
  2024-06-24  9:16             ` [PATCH v5 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
@ 2024-06-28 14:50               ` Bruce Richardson
  2024-07-01 10:09                 ` Hore, Soumyadeep
  0 siblings, 1 reply; 125+ messages in thread
From: Bruce Richardson @ 2024-06-28 14:50 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

On Mon, Jun 24, 2024 at 09:16:39AM +0000, Soumyadeep Hore wrote:
> Based on internal feedback received upstream on the Linux IDPF driver,
> and on some references available online, 1-sized array fields in
> structures are discouraged, especially in new Linux drivers that are
> going to be upstreamed. Flexible array fields are recommended instead
> for dynamically sized structures.
> 
> Some fixes based on the code change are included so that DPDK compiles.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>  drivers/common/idpf/base/virtchnl2.h       | 466 ++++-----------------
>  drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
>  drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
>  3 files changed, 86 insertions(+), 410 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
> index 9dd5191c0e..317bd06c0f 100644
> --- a/drivers/common/idpf/base/virtchnl2.h
> +++ b/drivers/common/idpf/base/virtchnl2.h
> @@ -63,6 +63,10 @@ enum virtchnl2_status {
>  #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
>  	static_assert((n) == sizeof(struct X),	\
>  		      "Structure length does not match with the expected value")
> +#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
> +	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
> +
> +#define STRUCT_VAR_LEN		1
>  
>  /**
>   * New major set of opcodes introduced and so leaving room for
> @@ -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
>  struct virtchnl2_queue_reg_chunks {
>  	__le16 num_chunks;
>  	u8 pad[6];
> -	struct virtchnl2_queue_reg_chunk chunks[1];
> +	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
>  };

This patch doesn't actually seem to be using flexible array members.
Instead I see a macro with value "1" being used in place of a hard-coded
"1". Can you please check that the commit message matches what's
actually happening, and that the changes in the patch are correct.

Thanks,
/Bruce

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v5 20/21] drivers: adding type to idpf vc queue switch
  2024-06-24  9:16             ` [PATCH v5 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
@ 2024-06-28 14:53               ` Bruce Richardson
  0 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-06-28 14:53 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

Hi,

When putting commit titles, checkpatch will complain if you use function
names, or macro/variable names. It does this by checking for underscores.
When this gets flagged, please rewrite the commit title to be more generic,
don't just remove the underscores from the function names!  I've fixed this
on the patches already applied to next-net-intel.

Thanks,
/Bruce

On Mon, Jun 24, 2024 at 09:16:43AM +0000, Soumyadeep Hore wrote:
> Add an argument named 'type' to specify the queue type in
> idpf_vc_queue_switch(). This fixes the use of an improper queue
> type in the virtchnl2 message.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
> [...]

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v5 00/21] Update MEV TS Base Driver
  2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
                               ` (20 preceding siblings ...)
  2024-06-24  9:16             ` [PATCH v5 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
@ 2024-06-28 14:58             ` Bruce Richardson
  21 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-06-28 14:58 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

On Mon, Jun 24, 2024 at 09:16:23AM +0000, Soumyadeep Hore wrote:
> ---
> v5:
> - Removed warning from patch 6
> ---

Patches 1-14 applied dpdk-next-net-intel. There are some open
questions/comments on later patches.

For v6, please rebase on dpdk-next-net-intel tree and submit only new
versions of the unmerged patches.

Thanks,
/Bruce

^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 0/7] Update MEV TS Base Driver
  2024-06-24  9:16             ` [PATCH v5 15/21] common/idpf: add wmb before tail Soumyadeep Hore
  2024-06-28 14:45               ` Bruce Richardson
@ 2024-07-01  9:13               ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 1/7] common/idpf: add wmb before tail Soumyadeep Hore
                                   ` (7 more replies)
  1 sibling, 8 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

These patches integrate the latest changes in the MEV TS IDPF base driver.

---
v6:
- Addressed comments
---
v5:
- Removed warning from patch 6
---
v4:
- Removed 1st patch as we are not using NVME_CPF flag
- Addressed comments
---
v3:
- Removed additional whitespace changes
- Fixed warnings of CI
- Updated documentation relating to MEV TS FW release
---
v2:
- Changed implementation based on review comments
- Fixed compilation errors for Windows, Alpine and FreeBSD
---

Soumyadeep Hore (7):
  common/idpf: add wmb before tail
  drivers: adding macros for dynamic data structures
  common/idpf: enable flow steer capability for vports
  common/idpf: add a new Tx context descriptor structure
  common/idpf: remove idpf common file
  drivers: adding config queue types for virtchnl2 message
  doc: updated the documentation for cpfl PMD

 doc/guides/nics/cpfl.rst                   |   2 +
 drivers/common/idpf/base/idpf_common.c     | 382 ---------------
 drivers/common/idpf/base/idpf_controlq.c   |   2 +
 drivers/common/idpf/base/idpf_lan_txrx.h   |  20 +-
 drivers/common/idpf/base/meson.build       |   1 -
 drivers/common/idpf/base/virtchnl2.h       | 517 +++++----------------
 drivers/common/idpf/idpf_common_virtchnl.c |  10 +-
 drivers/common/idpf/idpf_common_virtchnl.h |   2 +-
 drivers/net/cpfl/cpfl_ethdev.c             |  40 +-
 drivers/net/cpfl/cpfl_rxtx.c               |  12 +-
 drivers/net/idpf/idpf_rxtx.c               |  12 +-
 11 files changed, 184 insertions(+), 816 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 1/7] common/idpf: add wmb before tail
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 2/7] drivers: adding macros for dynamic data structures Soumyadeep Hore
                                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a memory barrier before posting the ctlq tail. This makes sure
memory writes have completed before HW starts processing the
descriptors.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_controlq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_controlq.c b/drivers/common/idpf/base/idpf_controlq.c
index 65e5599614..4f47759a4f 100644
--- a/drivers/common/idpf/base/idpf_controlq.c
+++ b/drivers/common/idpf/base/idpf_controlq.c
@@ -604,6 +604,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
 			/* Wrap to end of end ring since current ntp is 0 */
 			cq->next_to_post = cq->ring_size - 1;
 
+		idpf_wmb();
+
 		wr32(hw, cq->reg.tail, cq->next_to_post);
 	}
 
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 2/7] drivers: adding macros for dynamic data structures
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 1/7] common/idpf: add wmb before tail Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 3/7] common/idpf: enable flow steer capability for vports Soumyadeep Hore
                                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Introduce a new macro STRUCT_VAR_LEN to express the length of
dynamically sized array members. It is currently set to 1.

Introduce another macro, VIRTCHNL2_CHECK_STRUCT_VAR_LEN, to check
the length of structures with such members.

Some fixes based on these code changes are included so that DPDK
compiles.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h       | 457 +++------------------
 drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
 drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
 3 files changed, 77 insertions(+), 410 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index 9dd5191c0e..d3104adecf 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -63,6 +63,10 @@ enum virtchnl2_status {
 #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
 	static_assert((n) == sizeof(struct X),	\
 		      "Structure length does not match with the expected value")
+#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
+	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
+
+#define STRUCT_VAR_LEN		1
 
 /**
  * New major set of opcodes introduced and so leaving room for
@@ -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
 struct virtchnl2_queue_reg_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_reg_chunk chunks[1];
+	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_reg_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 
 /**
  * enum virtchnl2_vport_flags - Vport flags
@@ -773,7 +776,7 @@ struct virtchnl2_create_vport {
 	u8 pad[20];
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-VIRTCHNL2_CHECK_STRUCT_LEN(192, virtchnl2_create_vport);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(192, virtchnl2_create_vport, chunks.chunks);
 
 /**
  * struct virtchnl2_vport - Vport identifier information
@@ -860,10 +863,9 @@ struct virtchnl2_config_tx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[10];
-	struct virtchnl2_txq_info qinfo[1];
+	struct virtchnl2_txq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(72, virtchnl2_config_tx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(72, virtchnl2_config_tx_queues, qinfo);
 
 /**
  * struct virtchnl2_rxq_info - Receive queue config info
@@ -942,10 +944,9 @@ struct virtchnl2_config_rx_queues {
 	__le16 num_qinfo;
 
 	u8 pad[18];
-	struct virtchnl2_rxq_info qinfo[1];
+	struct virtchnl2_rxq_info qinfo[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(112, virtchnl2_config_rx_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(112, virtchnl2_config_rx_queues, qinfo);
 
 /**
  * struct virtchnl2_add_queues - Data for VIRTCHNL2_OP_ADD_QUEUES
@@ -975,16 +976,15 @@ struct virtchnl2_add_queues {
 
 	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_add_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(56, virtchnl2_add_queues, chunks.chunks);
 
 /* Queue Groups Extension */
 /**
  * struct virtchnl2_rx_queue_group_info - RX queue group info
- * @rss_lut_size: IN/OUT, user can ask to update rss_lut size originally
- *		  allocated by CreateVport command. New size will be returned
- *		  if allocation succeeded, otherwise original rss_size from
- *		  CreateVport will be returned.
+ * @rss_lut_size: User can ask to update rss_lut size originally allocated by
+ *		  CreateVport command. New size will be returned if allocation
+ *		  succeeded, otherwise original rss_size from CreateVport
+ *		  will be returned.
  * @pad: Padding for future extensions
  */
 struct virtchnl2_rx_queue_group_info {
@@ -1012,7 +1012,7 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rx_queue_group_info);
  * @cir_pad: Future extension purpose for CIR only
  * @pad2: Padding for future extensions
  */
-struct virtchnl2_tx_queue_group_info { /* IN */
+struct virtchnl2_tx_queue_group_info {
 	u8 tx_tc;
 	u8 priority;
 	u8 is_sp;
@@ -1045,19 +1045,17 @@ VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_group_id);
 /**
  * struct virtchnl2_queue_group_info - Queue group info
  * @qg_id: Queue group ID
- * @num_tx_q: Number of TX queues
- * @num_tx_complq: Number of completion queues
- * @num_rx_q: Number of RX queues
- * @num_rx_bufq: Number of RX buffer queues
+ * @num_tx_q: Number of TX queues requested
+ * @num_tx_complq: Number of completion queues requested
+ * @num_rx_q: Number of RX queues requested
+ * @num_rx_bufq: Number of RX buffer queues requested
  * @tx_q_grp_info: TX queue group info
  * @rx_q_grp_info: RX queue group info
  * @pad: Padding for future extensions
- * @chunks: Queue register chunks
+ * @chunks: Queue register chunks from CP
  */
 struct virtchnl2_queue_group_info {
-	/* IN */
 	struct virtchnl2_queue_group_id qg_id;
-	/* IN, Number of queue of different types in the group. */
 	__le16 num_tx_q;
 	__le16 num_tx_complq;
 	__le16 num_rx_q;
@@ -1066,56 +1064,43 @@ struct virtchnl2_queue_group_info {
 	struct virtchnl2_tx_queue_group_info tx_q_grp_info;
 	struct virtchnl2_rx_queue_group_info rx_q_grp_info;
 	u8 pad[40];
-	struct virtchnl2_queue_reg_chunks chunks; /* OUT */
-};
-
-VIRTCHNL2_CHECK_STRUCT_LEN(120, virtchnl2_queue_group_info);
-
-/**
- * struct virtchnl2_queue_groups - Queue groups list
- * @num_queue_groups: Total number of queue groups
- * @pad: Padding for future extensions
- * @groups: Array of queue group info
- */
-struct virtchnl2_queue_groups {
-	__le16 num_queue_groups;
-	u8 pad[6];
-	struct virtchnl2_queue_group_info groups[1];
+	struct virtchnl2_queue_reg_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(120, virtchnl2_queue_group_info, chunks.chunks);
 
 /**
  * struct virtchnl2_add_queue_groups - Add queue groups
- * @vport_id: IN, vport_id to add queue group to, same as allocated by
+ * @vport_id: Vport_id to add queue group to, same as allocated by
  *	      CreateVport. NA for mailbox and other types not assigned to vport.
+ * @num_queue_groups: Total number of queue groups
  * @pad: Padding for future extensions
- * @qg_info: IN/OUT. List of all the queue groups
  *
  * PF sends this message to request additional transmit/receive queue groups
  * beyond the ones that were assigned via CREATE_VPORT request.
  * virtchnl2_add_queue_groups structure is used to specify the number of each
  * type of queues. CP responds with the same structure with the actual number of
- * groups and queues assigned followed by num_queue_groups and num_chunks of
- * virtchnl2_queue_groups and virtchnl2_queue_chunk structures.
+ * groups and queues assigned followed by num_queue_groups and groups of
+ * virtchnl2_queue_group_info and virtchnl2_queue_chunk structures.
  *
  * Associated with VIRTCHNL2_OP_ADD_QUEUE_GROUPS.
  */
 struct virtchnl2_add_queue_groups {
 	__le32 vport_id;
-	u8 pad[4];
-	struct virtchnl2_queue_groups qg_info;
+	__le16 num_queue_groups;
+	u8 pad[10];
+	struct virtchnl2_queue_group_info groups[STRUCT_VAR_LEN];
+
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(136, virtchnl2_add_queue_groups);
 
 /**
  * struct virtchnl2_delete_queue_groups - Delete queue groups
- * @vport_id: IN, vport_id to delete queue group from, same as allocated by
+ * @vport_id: Vport ID to delete queue group from, same as allocated by
  *	      CreateVport.
- * @num_queue_groups: IN/OUT, Defines number of groups provided
+ * @num_queue_groups: Defines number of groups provided
  * @pad: Padding
- * @qg_ids: IN, IDs & types of Queue Groups to delete
+ * @qg_ids: IDs & types of Queue Groups to delete
  *
  * PF sends this message to delete queue groups.
  * PF sends virtchnl2_delete_queue_groups struct to specify the queue groups
@@ -1129,10 +1114,9 @@ struct virtchnl2_delete_queue_groups {
 	__le16 num_queue_groups;
 	u8 pad[2];
 
-	struct virtchnl2_queue_group_id qg_ids[1];
+	struct virtchnl2_queue_group_id qg_ids[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_delete_queue_groups);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_delete_queue_groups, qg_ids);
 
 /**
  * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
@@ -1190,10 +1174,9 @@ struct virtchnl2_vector_chunks {
 	__le16 num_vchunks;
 	u8 pad[14];
 
-	struct virtchnl2_vector_chunk vchunks[1];
+	struct virtchnl2_vector_chunk vchunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(48, virtchnl2_vector_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(48, virtchnl2_vector_chunks, vchunks);
 
 /**
  * struct virtchnl2_alloc_vectors - Vector allocation info
@@ -1215,8 +1198,7 @@ struct virtchnl2_alloc_vectors {
 
 	struct virtchnl2_vector_chunks vchunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(64, virtchnl2_alloc_vectors);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(64, virtchnl2_alloc_vectors, vchunks.vchunks);
 
 /**
  * struct virtchnl2_rss_lut - RSS LUT info
@@ -1237,10 +1219,9 @@ struct virtchnl2_rss_lut {
 	__le16 lut_entries_start;
 	__le16 lut_entries;
 	u8 pad[4];
-	__le32 lut[1];
+	__le32 lut[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_lut);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_rss_lut, lut);
 
 /**
  * struct virtchnl2_rss_hash - RSS hash info
@@ -1389,10 +1370,9 @@ struct virtchnl2_ptype {
 	u8 ptype_id_8;
 	u8 proto_id_count;
 	__le16 pad;
-	__le16 proto_id[1];
+	__le16 proto_id[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_ptype);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(8, virtchnl2_ptype, proto_id);
 
 /**
  * struct virtchnl2_get_ptype_info - Packet type info
@@ -1428,7 +1408,7 @@ struct virtchnl2_get_ptype_info {
 	__le16 start_ptype_id;
 	__le16 num_ptypes;
 	__le32 pad;
-	struct virtchnl2_ptype ptype[1];
+	struct virtchnl2_ptype ptype[STRUCT_VAR_LEN];
 };
 
 VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_get_ptype_info);
@@ -1629,10 +1609,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
 struct virtchnl2_queue_chunks {
 	__le16 num_chunks;
 	u8 pad[6];
-	struct virtchnl2_queue_chunk chunks[1];
+	struct virtchnl2_queue_chunk chunks[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_chunks);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_queue_chunks, chunks);
 
 /**
  * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info
@@ -1654,8 +1633,7 @@ struct virtchnl2_del_ena_dis_queues {
 
 	struct virtchnl2_queue_chunks chunks;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_del_ena_dis_queues);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(32, virtchnl2_del_ena_dis_queues, chunks.chunks);
 
 /**
  * struct virtchnl2_queue_vector - Queue to vector mapping
@@ -1699,10 +1677,10 @@ struct virtchnl2_queue_vector_maps {
 	__le32 vport_id;
 	__le16 num_qv_maps;
 	u8 pad[10];
-	struct virtchnl2_queue_vector qv_maps[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(40, virtchnl2_queue_vector_maps);
+	struct virtchnl2_queue_vector qv_maps[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_vector_maps, qv_maps);
 
 /**
  * struct virtchnl2_loopback - Loopback info
@@ -1754,10 +1732,10 @@ struct virtchnl2_mac_addr_list {
 	__le32 vport_id;
 	__le16 num_mac_addr;
 	u8 pad[2];
-	struct virtchnl2_mac_addr mac_addr_list[1];
-};
 
-VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_mac_addr_list);
+	struct virtchnl2_mac_addr mac_addr_list[STRUCT_VAR_LEN];
+};
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(16, virtchnl2_mac_addr_list, mac_addr_list);
 
 /**
  * struct virtchnl2_promisc_info - Promiscuous type information
@@ -1856,10 +1834,10 @@ struct virtchnl2_ptp_tx_tstamp {
 	__le16 num_latches;
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[1];
+	struct virtchnl2_ptp_tx_tstamp_entry ptp_tx_tstamp_entries[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp,
+			       ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_get_ptp_caps - Get PTP capabilities
@@ -1884,8 +1862,8 @@ struct virtchnl2_get_ptp_caps {
 	struct virtchnl2_ptp_device_clock_control device_clock_control;
 	struct virtchnl2_ptp_tx_tstamp tx_tstamp;
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_get_ptp_caps);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(88, virtchnl2_get_ptp_caps,
+			       tx_tstamp.ptp_tx_tstamp_entries);
 
 /**
  * struct virtchnl2_ptp_tx_tstamp_latch - Structure that describes tx tstamp
@@ -1920,13 +1898,12 @@ VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_ptp_tx_tstamp_latch);
  */
 struct virtchnl2_ptp_tx_tstamp_latches {
 	__le16 num_latches;
-	/* latch size expressed in bits */
 	__le16 latch_size;
 	u8 pad[4];
-	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[1];
+	struct virtchnl2_ptp_tx_tstamp_latch tstamp_latches[STRUCT_VAR_LEN];
 };
-
-VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_ptp_tx_tstamp_latches);
+VIRTCHNL2_CHECK_STRUCT_VAR_LEN(24, virtchnl2_ptp_tx_tstamp_latches,
+			       tstamp_latches);
 
 static inline const char *virtchnl2_op_str(__le32 v_opcode)
 {
@@ -2004,314 +1981,4 @@ static inline const char *virtchnl2_op_str(__le32 v_opcode)
 	}
 }
 
-/**
- * virtchnl2_vc_validate_vf_msg
- * @ver: Virtchnl2 version info
- * @v_opcode: Opcode for the message
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- *
- * Validate msg format against struct for each opcode.
- */
-static inline int
-virtchnl2_vc_validate_vf_msg(__rte_unused struct virtchnl2_version_info *ver, u32 v_opcode,
-			     u8 *msg, __le16 msglen)
-{
-	bool err_msg_format = false;
-	__le32 valid_len = 0;
-
-	/* Validate message length */
-	switch (v_opcode) {
-	case VIRTCHNL2_OP_VERSION:
-		valid_len = sizeof(struct virtchnl2_version_info);
-		break;
-	case VIRTCHNL2_OP_GET_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_capabilities);
-		break;
-	case VIRTCHNL2_OP_CREATE_VPORT:
-		valid_len = sizeof(struct virtchnl2_create_vport);
-		if (msglen >= valid_len) {
-			struct virtchnl2_create_vport *cvport =
-				(struct virtchnl2_create_vport *)msg;
-
-			if (cvport->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (cvport->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_CREATE_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_create_adi);
-		if (msglen >= valid_len) {
-			struct virtchnl2_non_flex_create_adi *cadi =
-				(struct virtchnl2_non_flex_create_adi *)msg;
-
-			if (cadi->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			if (cadi->vchunks.num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (cadi->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-			valid_len += (cadi->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_NON_FLEX_DESTROY_ADI:
-		valid_len = sizeof(struct virtchnl2_non_flex_destroy_adi);
-		break;
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-		valid_len = sizeof(struct virtchnl2_vport);
-		break;
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_tx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_tx_queues *ctq =
-				(struct virtchnl2_config_tx_queues *)msg;
-			if (ctq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (ctq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_txq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-		valid_len = sizeof(struct virtchnl2_config_rx_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_config_rx_queues *crq =
-				(struct virtchnl2_config_rx_queues *)msg;
-			if (crq->num_qinfo == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (crq->num_qinfo - 1) *
-				     sizeof(struct virtchnl2_rxq_info);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUES:
-		valid_len = sizeof(struct virtchnl2_add_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_add_queues *add_q =
-				(struct virtchnl2_add_queues *)msg;
-
-			if (add_q->chunks.num_chunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (add_q->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_reg_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_DEL_QUEUES:
-		valid_len = sizeof(struct virtchnl2_del_ena_dis_queues);
-		if (msglen >= valid_len) {
-			struct virtchnl2_del_ena_dis_queues *qs =
-				(struct virtchnl2_del_ena_dis_queues *)msg;
-			if (qs->chunks.num_chunks == 0 ||
-			    qs->chunks.num_chunks > VIRTCHNL2_OP_DEL_ENABLE_DISABLE_QUEUES_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (qs->chunks.num_chunks - 1) *
-				      sizeof(struct virtchnl2_queue_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_ADD_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_add_queue_groups);
-		if (msglen != valid_len) {
-			__le64 offset;
-			__le32 i;
-			struct virtchnl2_add_queue_groups *add_queue_grp =
-				(struct virtchnl2_add_queue_groups *)msg;
-			struct virtchnl2_queue_groups *groups = &(add_queue_grp->qg_info);
-			struct virtchnl2_queue_group_info *grp_info;
-			__le32 chunk_size = sizeof(struct virtchnl2_queue_reg_chunk);
-			__le32 group_size = sizeof(struct virtchnl2_queue_group_info);
-			__le32 total_chunks_size;
-
-			if (groups->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (groups->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_info);
-			offset = (u8 *)(&groups->groups[0]) - (u8 *)groups;
-
-			for (i = 0; i < groups->num_queue_groups; i++) {
-				grp_info = (struct virtchnl2_queue_group_info *)
-						   ((u8 *)groups + offset);
-				if (grp_info->chunks.num_chunks == 0) {
-					offset += group_size;
-					continue;
-				}
-				total_chunks_size = (grp_info->chunks.num_chunks - 1) * chunk_size;
-				offset += group_size + total_chunks_size;
-				valid_len += total_chunks_size;
-			}
-		}
-		break;
-	case VIRTCHNL2_OP_DEL_QUEUE_GROUPS:
-		valid_len = sizeof(struct virtchnl2_delete_queue_groups);
-		if (msglen != valid_len) {
-			struct virtchnl2_delete_queue_groups *del_queue_grp =
-				(struct virtchnl2_delete_queue_groups *)msg;
-
-			if (del_queue_grp->num_queue_groups == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += (del_queue_grp->num_queue_groups - 1) *
-				      sizeof(struct virtchnl2_queue_group_id);
-		}
-		break;
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-		valid_len = sizeof(struct virtchnl2_queue_vector_maps);
-		if (msglen >= valid_len) {
-			struct virtchnl2_queue_vector_maps *v_qp =
-				(struct virtchnl2_queue_vector_maps *)msg;
-			if (v_qp->num_qv_maps == 0 ||
-			    v_qp->num_qv_maps > VIRTCHNL2_OP_MAP_UNMAP_QUEUE_VECTOR_MAX) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_qp->num_qv_maps - 1) *
-				      sizeof(struct virtchnl2_queue_vector);
-		}
-		break;
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_alloc_vectors);
-		if (msglen >= valid_len) {
-			struct virtchnl2_alloc_vectors *v_av =
-				(struct virtchnl2_alloc_vectors *)msg;
-
-			if (v_av->vchunks.num_vchunks == 0) {
-				/* Zero chunks is allowed as input */
-				break;
-			}
-
-			valid_len += (v_av->vchunks.num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-		valid_len = sizeof(struct virtchnl2_vector_chunks);
-		if (msglen >= valid_len) {
-			struct virtchnl2_vector_chunks *v_chunks =
-				(struct virtchnl2_vector_chunks *)msg;
-			if (v_chunks->num_vchunks == 0) {
-				err_msg_format = true;
-				break;
-			}
-			valid_len += (v_chunks->num_vchunks - 1) *
-				      sizeof(struct virtchnl2_vector_chunk);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-		valid_len = sizeof(struct virtchnl2_rss_key);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_key *vrk =
-				(struct virtchnl2_rss_key *)msg;
-
-			if (vrk->key_len == 0) {
-				/* Zero length is allowed as input */
-				break;
-			}
-
-			valid_len += vrk->key_len - 1;
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-		valid_len = sizeof(struct virtchnl2_rss_lut);
-		if (msglen >= valid_len) {
-			struct virtchnl2_rss_lut *vrl =
-				(struct virtchnl2_rss_lut *)msg;
-
-			if (vrl->lut_entries == 0) {
-				/* Zero entries is allowed as input */
-				break;
-			}
-
-			valid_len += (vrl->lut_entries - 1) * sizeof(vrl->lut);
-		}
-		break;
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-		valid_len = sizeof(struct virtchnl2_rss_hash);
-		break;
-	case VIRTCHNL2_OP_SET_SRIOV_VFS:
-		valid_len = sizeof(struct virtchnl2_sriov_vfs_info);
-		break;
-	case VIRTCHNL2_OP_GET_PTYPE_INFO:
-		valid_len = sizeof(struct virtchnl2_get_ptype_info);
-		break;
-	case VIRTCHNL2_OP_GET_STATS:
-		valid_len = sizeof(struct virtchnl2_vport_stats);
-		break;
-	case VIRTCHNL2_OP_GET_PORT_STATS:
-		valid_len = sizeof(struct virtchnl2_port_stats);
-		break;
-	case VIRTCHNL2_OP_RESET_VF:
-		break;
-	case VIRTCHNL2_OP_GET_PTP_CAPS:
-		valid_len = sizeof(struct virtchnl2_get_ptp_caps);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_get_ptp_caps *ptp_caps =
-			(struct virtchnl2_get_ptp_caps *)msg;
-
-			if (ptp_caps->tx_tstamp.num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((ptp_caps->tx_tstamp.num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_entry));
-		}
-		break;
-	case VIRTCHNL2_OP_GET_PTP_TX_TSTAMP_LATCHES:
-		valid_len = sizeof(struct virtchnl2_ptp_tx_tstamp_latches);
-
-		if (msglen > valid_len) {
-			struct virtchnl2_ptp_tx_tstamp_latches *tx_tstamp_latches =
-			(struct virtchnl2_ptp_tx_tstamp_latches *)msg;
-
-			if (tx_tstamp_latches->num_latches == 0) {
-				err_msg_format = true;
-				break;
-			}
-
-			valid_len += ((tx_tstamp_latches->num_latches - 1) *
-				      sizeof(struct virtchnl2_ptp_tx_tstamp_latch));
-		}
-		break;
-	/* These are always errors coming from the VF */
-	case VIRTCHNL2_OP_EVENT:
-	case VIRTCHNL2_OP_UNKNOWN:
-	default:
-		return VIRTCHNL2_STATUS_ERR_ESRCH;
-	}
-	/* Few more checks */
-	if (err_msg_format || valid_len != msglen)
-		return VIRTCHNL2_STATUS_ERR_EINVAL;
-
-	return 0;
-}
-
 #endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index c46ed50eb5..f00202f43c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -366,7 +366,7 @@ idpf_vc_queue_grps_add(struct idpf_vport *vport,
 	int err = -1;
 
 	size = sizeof(*p2p_queue_grps_info) +
-	       (p2p_queue_grps_info->qg_info.num_queue_groups - 1) *
+	       (p2p_queue_grps_info->num_queue_groups - 1) *
 		   sizeof(struct virtchnl2_queue_group_info);
 
 	memset(&args, 0, sizeof(args));
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e718e9e19..e707043bf7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -2393,18 +2393,18 @@ cpfl_p2p_q_grps_add(struct idpf_vport *vport,
 	int ret;
 
 	p2p_queue_grps_info->vport_id = vport->vport_id;
-	p2p_queue_grps_info->qg_info.num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
-	p2p_queue_grps_info->qg_info.groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
-	p2p_queue_grps_info->qg_info.groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
-	p2p_queue_grps_info->qg_info.groups[0].rx_q_grp_info.rss_lut_size = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.tx_tc = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.priority = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.is_sp = 0;
-	p2p_queue_grps_info->qg_info.groups[0].tx_q_grp_info.pir_weight = 0;
+	p2p_queue_grps_info->num_queue_groups = CPFL_P2P_NB_QUEUE_GRPS;
+	p2p_queue_grps_info->groups[0].num_rx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_rx_bufq = CPFL_P2P_NB_RX_BUFQ;
+	p2p_queue_grps_info->groups[0].num_tx_q = CPFL_MAX_P2P_NB_QUEUES;
+	p2p_queue_grps_info->groups[0].num_tx_complq = CPFL_P2P_NB_TX_COMPLQ;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_id = CPFL_P2P_QUEUE_GRP_ID;
+	p2p_queue_grps_info->groups[0].qg_id.queue_group_type = VIRTCHNL2_QUEUE_GROUP_P2P;
+	p2p_queue_grps_info->groups[0].rx_q_grp_info.rss_lut_size = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.tx_tc = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.priority = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.is_sp = 0;
+	p2p_queue_grps_info->groups[0].tx_q_grp_info.pir_weight = 0;
 
 	ret = idpf_vc_queue_grps_add(vport, p2p_queue_grps_info, p2p_q_vc_out_info);
 	if (ret != 0) {
@@ -2423,13 +2423,13 @@ cpfl_p2p_queue_info_init(struct cpfl_vport *cpfl_vport,
 	struct virtchnl2_queue_reg_chunks *vc_chunks_out;
 	int i, type;
 
-	if (p2p_q_vc_out_info->qg_info.groups[0].qg_id.queue_group_type !=
+	if (p2p_q_vc_out_info->groups[0].qg_id.queue_group_type !=
 	    VIRTCHNL2_QUEUE_GROUP_P2P) {
 		PMD_DRV_LOG(ERR, "Add queue group response mismatch.");
 		return -EINVAL;
 	}
 
-	vc_chunks_out = &p2p_q_vc_out_info->qg_info.groups[0].chunks;
+	vc_chunks_out = &p2p_q_vc_out_info->groups[0].chunks;
 
 	for (i = 0; i < vc_chunks_out->num_chunks; i++) {
 		type = vc_chunks_out->chunks[i].type;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 3/7] common/idpf: enable flow steer capability for vports
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 1/7] common/idpf: add wmb before tail Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 2/7] drivers: adding macros for dynamic data structures Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 4/7] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
                                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add virtchnl2_flow_types to be used for flow steering.

Add flow steering capability flags, flow types and action types
for vport create.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/virtchnl2.h | 60 ++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/base/virtchnl2.h b/drivers/common/idpf/base/virtchnl2.h
index d3104adecf..3285a2b674 100644
--- a/drivers/common/idpf/base/virtchnl2.h
+++ b/drivers/common/idpf/base/virtchnl2.h
@@ -269,6 +269,43 @@ enum virtchnl2_cap_other {
 	VIRTCHNL2_CAP_OEM			= BIT_ULL(63),
 };
 
+/**
+ * enum virtchnl2_action_types - Available actions for sideband flow steering
+ * @VIRTCHNL2_ACTION_DROP: Drop the packet
+ * @VIRTCHNL2_ACTION_PASSTHRU: Forward the packet to the next classifier/stage
+ * @VIRTCHNL2_ACTION_QUEUE: Forward the packet to a receive queue
+ * @VIRTCHNL2_ACTION_Q_GROUP: Forward the packet to a receive queue group
+ * @VIRTCHNL2_ACTION_MARK: Mark the packet with a specific marker value
+ * @VIRTCHNL2_ACTION_COUNT: Increment the corresponding counter
+ */
+
+enum virtchnl2_action_types {
+	VIRTCHNL2_ACTION_DROP		= BIT(0),
+	VIRTCHNL2_ACTION_PASSTHRU	= BIT(1),
+	VIRTCHNL2_ACTION_QUEUE		= BIT(2),
+	VIRTCHNL2_ACTION_Q_GROUP	= BIT(3),
+	VIRTCHNL2_ACTION_MARK		= BIT(4),
+	VIRTCHNL2_ACTION_COUNT		= BIT(5),
+};
+
+/* Flow type capabilities for Flow Steering and Receive-Side Scaling */
+enum virtchnl2_flow_types {
+	VIRTCHNL2_FLOW_IPV4_TCP		= BIT(0),
+	VIRTCHNL2_FLOW_IPV4_UDP		= BIT(1),
+	VIRTCHNL2_FLOW_IPV4_SCTP	= BIT(2),
+	VIRTCHNL2_FLOW_IPV4_OTHER	= BIT(3),
+	VIRTCHNL2_FLOW_IPV6_TCP		= BIT(4),
+	VIRTCHNL2_FLOW_IPV6_UDP		= BIT(5),
+	VIRTCHNL2_FLOW_IPV6_SCTP	= BIT(6),
+	VIRTCHNL2_FLOW_IPV6_OTHER	= BIT(7),
+	VIRTCHNL2_FLOW_IPV4_AH		= BIT(8),
+	VIRTCHNL2_FLOW_IPV4_ESP		= BIT(9),
+	VIRTCHNL2_FLOW_IPV4_AH_ESP	= BIT(10),
+	VIRTCHNL2_FLOW_IPV6_AH		= BIT(11),
+	VIRTCHNL2_FLOW_IPV6_ESP		= BIT(12),
+	VIRTCHNL2_FLOW_IPV6_AH_ESP	= BIT(13),
+};
+
 /**
  * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes
  * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode i.e. inorder
@@ -707,11 +744,16 @@ VIRTCHNL2_CHECK_STRUCT_VAR_LEN(40, virtchnl2_queue_reg_chunks, chunks);
 /**
  * enum virtchnl2_vport_flags - Vport flags
  * @VIRTCHNL2_VPORT_UPLINK_PORT: Uplink port flag
- * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA: Inline flow steering enable flag
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER: Inline flow steering enabled
+ * @VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ: Inline flow steering enabled
+ * with explicit Rx queue action
+ * @VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER: Sideband flow steering enabled
  */
 enum virtchnl2_vport_flags {
 	VIRTCHNL2_VPORT_UPLINK_PORT		= BIT(0),
-	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_ENA	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER	= BIT(1),
+	VIRTCHNL2_VPORT_INLINE_FLOW_STEER_RXQ	= BIT(2),
+	VIRTCHNL2_VPORT_SIDEBAND_FLOW_STEER	= BIT(3),
 };
 
 #define VIRTCHNL2_ETH_LENGTH_OF_ADDRESS  6
@@ -739,6 +781,14 @@ enum virtchnl2_vport_flags {
  * @rx_desc_ids: See enum virtchnl2_rx_desc_id_bitmasks
  * @tx_desc_ids: See enum virtchnl2_tx_desc_ids
  * @reserved: Reserved bytes and cannot be used
+ * @inline_flow_types: Bit mask of supported inline-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_types: Bit mask of supported sideband-flow-steering
+ *  flow types (See enum virtchnl2_flow_types)
+ * @sideband_flow_actions: Bit mask of supported action types
+ *  for sideband flow steering (See enum virtchnl2_action_types)
+ * @flow_steer_max_rules: Max rules allowed for inline and sideband
+ *  flow steering combined
  * @rss_algorithm: RSS algorithm
  * @rss_key_size: RSS key size
  * @rss_lut_size: RSS LUT size
@@ -768,7 +818,11 @@ struct virtchnl2_create_vport {
 	__le16 vport_flags;
 	__le64 rx_desc_ids;
 	__le64 tx_desc_ids;
-	u8 reserved[72];
+	u8 reserved[48];
+	__le64 inline_flow_types;
+	__le64 sideband_flow_types;
+	__le32 sideband_flow_actions;
+	__le32 flow_steer_max_rules;
 	__le32 rss_algorithm;
 	__le16 rss_key_size;
 	__le16 rss_lut_size;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 4/7] common/idpf: add a new Tx context descriptor structure
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
                                   ` (2 preceding siblings ...)
  2024-07-01  9:13                 ` [PATCH v6 3/7] common/idpf: enable flow steer capability for vports Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 5/7] common/idpf: remove idpf common file Soumyadeep Hore
                                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add a new Tx context descriptor structure that supports timesync
packets, carrying the index used for timestamping.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_lan_txrx.h | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/base/idpf_lan_txrx.h b/drivers/common/idpf/base/idpf_lan_txrx.h
index c9eaeb5d3f..be27973a33 100644
--- a/drivers/common/idpf/base/idpf_lan_txrx.h
+++ b/drivers/common/idpf/base/idpf_lan_txrx.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2023 Intel Corporation
+ * Copyright(c) 2001-2024 Intel Corporation
  */
 
 #ifndef _IDPF_LAN_TXRX_H_
@@ -286,6 +286,24 @@ struct idpf_flex_tx_tso_ctx_qw {
 };
 
 union idpf_flex_tx_ctx_desc {
+	/* DTYPE = IDPF_TX_DESC_DTYPE_CTX (0x01) */
+	struct {
+		struct {
+			u8 rsv[4];
+			__le16 l2tag2;
+			u8 rsv_2[2];
+		} qw0;
+		struct {
+			__le16 cmd_dtype;
+			__le16 tsyn_reg_l;
+#define IDPF_TX_DESC_CTX_TSYN_L_M	GENMASK(15, 14)
+			__le16 tsyn_reg_h;
+#define IDPF_TX_DESC_CTX_TSYN_H_M	GENMASK(15, 0)
+			__le16 mss;
+#define IDPF_TX_DESC_CTX_MSS_M		GENMASK(14, 2)
+		} qw1;
+	} tsyn;
+
 	/* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
 	struct {
 		struct idpf_flex_tx_tso_ctx_qw qw0;
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 5/7] common/idpf: remove idpf common file
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
                                   ` (3 preceding siblings ...)
  2024-07-01  9:13                 ` [PATCH v6 4/7] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 6/7] drivers: adding config queue types for virtchnl2 message Soumyadeep Hore
                                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

The idpf_common.c file is redundant in this implementation and is
no longer required, so remove it.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 drivers/common/idpf/base/idpf_common.c | 382 -------------------------
 drivers/common/idpf/base/meson.build   |   1 -
 2 files changed, 383 deletions(-)
 delete mode 100644 drivers/common/idpf/base/idpf_common.c

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
deleted file mode 100644
index bb540345c2..0000000000
--- a/drivers/common/idpf/base/idpf_common.c
+++ /dev/null
@@ -1,382 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2001-2024 Intel Corporation
- */
-
-#include "idpf_prototype.h"
-#include "idpf_type.h"
-#include <virtchnl.h>
-
-
-/**
- * idpf_set_mac_type - Sets MAC type
- * @hw: pointer to the HW structure
- *
- * This function sets the mac type of the adapter based on the
- * vendor ID and device ID stored in the hw structure.
- */
-int idpf_set_mac_type(struct idpf_hw *hw)
-{
-	int status = 0;
-
-	DEBUGFUNC("Set MAC type\n");
-
-	if (hw->vendor_id == IDPF_INTEL_VENDOR_ID) {
-		switch (hw->device_id) {
-		case IDPF_DEV_ID_PF:
-			hw->mac.type = IDPF_MAC_PF;
-			break;
-		case IDPF_DEV_ID_VF:
-			hw->mac.type = IDPF_MAC_VF;
-			break;
-		default:
-			hw->mac.type = IDPF_MAC_GENERIC;
-			break;
-		}
-	} else {
-		status = -ENODEV;
-	}
-
-	DEBUGOUT2("Setting MAC type found mac: %d, returns: %d\n",
-		  hw->mac.type, status);
-	return status;
-}
-
-/**
- *  idpf_init_hw - main initialization routine
- *  @hw: pointer to the hardware structure
- *  @ctlq_size: struct to pass ctlq size data
- */
-int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
-{
-	struct idpf_ctlq_create_info *q_info;
-	int status = 0;
-	struct idpf_ctlq_info *cq = NULL;
-
-	/* Setup initial control queues */
-	q_info = (struct idpf_ctlq_create_info *)
-		 idpf_calloc(hw, 2, sizeof(struct idpf_ctlq_create_info));
-	if (!q_info)
-		return -ENOMEM;
-
-	q_info[0].type             = IDPF_CTLQ_TYPE_MAILBOX_TX;
-	q_info[0].buf_size         = ctlq_size.asq_buf_size;
-	q_info[0].len              = ctlq_size.asq_ring_size;
-	q_info[0].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[0].reg.head         = PF_FW_ATQH;
-		q_info[0].reg.tail         = PF_FW_ATQT;
-		q_info[0].reg.len          = PF_FW_ATQLEN;
-		q_info[0].reg.bah          = PF_FW_ATQBAH;
-		q_info[0].reg.bal          = PF_FW_ATQBAL;
-		q_info[0].reg.len_mask     = PF_FW_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = PF_FW_ATQH_ATQH_M;
-	} else {
-		q_info[0].reg.head         = VF_ATQH;
-		q_info[0].reg.tail         = VF_ATQT;
-		q_info[0].reg.len          = VF_ATQLEN;
-		q_info[0].reg.bah          = VF_ATQBAH;
-		q_info[0].reg.bal          = VF_ATQBAL;
-		q_info[0].reg.len_mask     = VF_ATQLEN_ATQLEN_M;
-		q_info[0].reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
-		q_info[0].reg.head_mask    = VF_ATQH_ATQH_M;
-	}
-
-	q_info[1].type             = IDPF_CTLQ_TYPE_MAILBOX_RX;
-	q_info[1].buf_size         = ctlq_size.arq_buf_size;
-	q_info[1].len              = ctlq_size.arq_ring_size;
-	q_info[1].id               = -1; /* default queue */
-
-	if (hw->mac.type == IDPF_MAC_PF) {
-		q_info[1].reg.head         = PF_FW_ARQH;
-		q_info[1].reg.tail         = PF_FW_ARQT;
-		q_info[1].reg.len          = PF_FW_ARQLEN;
-		q_info[1].reg.bah          = PF_FW_ARQBAH;
-		q_info[1].reg.bal          = PF_FW_ARQBAL;
-		q_info[1].reg.len_mask     = PF_FW_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = PF_FW_ARQH_ARQH_M;
-	} else {
-		q_info[1].reg.head         = VF_ARQH;
-		q_info[1].reg.tail         = VF_ARQT;
-		q_info[1].reg.len          = VF_ARQLEN;
-		q_info[1].reg.bah          = VF_ARQBAH;
-		q_info[1].reg.bal          = VF_ARQBAL;
-		q_info[1].reg.len_mask     = VF_ARQLEN_ARQLEN_M;
-		q_info[1].reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
-		q_info[1].reg.head_mask    = VF_ARQH_ARQH_M;
-	}
-
-	status = idpf_ctlq_init(hw, 2, q_info);
-	if (status) {
-		/* TODO return error */
-		idpf_free(hw, q_info);
-		return status;
-	}
-
-	LIST_FOR_EACH_ENTRY(cq, &hw->cq_list_head, idpf_ctlq_info, cq_list) {
-		if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
-			hw->asq = cq;
-		else if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX)
-			hw->arq = cq;
-	}
-
-	/* TODO hardcode a mac addr for now */
-	hw->mac.addr[0] = 0x00;
-	hw->mac.addr[1] = 0x00;
-	hw->mac.addr[2] = 0x00;
-	hw->mac.addr[3] = 0x00;
-	hw->mac.addr[4] = 0x03;
-	hw->mac.addr[5] = 0x14;
-
-	idpf_free(hw, q_info);
-
-	return 0;
-}
-
-/**
- * idpf_send_msg_to_cp
- * @hw: pointer to the hardware structure
- * @v_opcode: opcodes for VF-PF communication
- * @v_retval: return error code
- * @msg: pointer to the msg buffer
- * @msglen: msg length
- * @cmd_details: pointer to command details
- *
- * Send message to CP. By default, this message
- * is sent asynchronously, i.e. idpf_asq_send_command() does not wait for
- * completion before returning.
- */
-int idpf_send_msg_to_cp(struct idpf_hw *hw, int v_opcode,
-			int v_retval, u8 *msg, u16 msglen)
-{
-	struct idpf_ctlq_msg ctlq_msg = { 0 };
-	struct idpf_dma_mem dma_mem = { 0 };
-	int status;
-
-	ctlq_msg.opcode = idpf_mbq_opc_send_msg_to_pf;
-	ctlq_msg.func_id = 0;
-	ctlq_msg.data_len = msglen;
-	ctlq_msg.cookie.mbx.chnl_retval = v_retval;
-	ctlq_msg.cookie.mbx.chnl_opcode = v_opcode;
-
-	if (msglen > 0) {
-		dma_mem.va = (struct idpf_dma_mem *)
-			  idpf_alloc_dma_mem(hw, &dma_mem, msglen);
-		if (!dma_mem.va)
-			return -ENOMEM;
-
-		idpf_memcpy(dma_mem.va, msg, msglen, IDPF_NONDMA_TO_DMA);
-		ctlq_msg.ctx.indirect.payload = &dma_mem;
-	}
-	status = idpf_ctlq_send(hw, hw->asq, 1, &ctlq_msg);
-
-	if (dma_mem.va)
-		idpf_free_dma_mem(hw, &dma_mem);
-
-	return status;
-}
-
-/**
- *  idpf_asq_done - check if FW has processed the Admin Send Queue
- *  @hw: pointer to the hw struct
- *
- *  Returns true if the firmware has processed all descriptors on the
- *  admin send queue. Returns false if there are still requests pending.
- */
-bool idpf_asq_done(struct idpf_hw *hw)
-{
-	/* AQ designers suggest use of head for better
-	 * timing reliability than DD bit
-	 */
-	return rd32(hw, hw->asq->reg.head) == hw->asq->next_to_use;
-}
-
-/**
- * idpf_check_asq_alive
- * @hw: pointer to the hw struct
- *
- * Returns true if Queue is enabled else false.
- */
-bool idpf_check_asq_alive(struct idpf_hw *hw)
-{
-	if (hw->asq->reg.len)
-		return !!(rd32(hw, hw->asq->reg.len) &
-			  PF_FW_ATQLEN_ATQENABLE_M);
-
-	return false;
-}
-
-/**
- *  idpf_clean_arq_element
- *  @hw: pointer to the hw struct
- *  @e: event info from the receive descriptor, includes any buffers
- *  @pending: number of events that could be left to process
- *
- *  This function cleans one Admin Receive Queue element and returns
- *  the contents through e.  It can also return how many events are
- *  left to process through 'pending'
- */
-int idpf_clean_arq_element(struct idpf_hw *hw,
-			   struct idpf_arq_event_info *e, u16 *pending)
-{
-	struct idpf_dma_mem *dma_mem = NULL;
-	struct idpf_ctlq_msg msg = { 0 };
-	int status;
-	u16 msg_data_len;
-
-	*pending = 1;
-
-	status = idpf_ctlq_recv(hw->arq, pending, &msg);
-	if (status == -ENOMSG)
-		goto exit;
-
-	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
-	e->desc.opcode = msg.opcode;
-	e->desc.cookie_high = msg.cookie.mbx.chnl_opcode;
-	e->desc.cookie_low = msg.cookie.mbx.chnl_retval;
-	e->desc.ret_val = msg.status;
-	e->desc.datalen = msg.data_len;
-	if (msg.data_len > 0) {
-		if (!msg.ctx.indirect.payload || !msg.ctx.indirect.payload->va ||
-		    !e->msg_buf) {
-			return -EFAULT;
-		}
-		e->buf_len = msg.data_len;
-		msg_data_len = msg.data_len;
-		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
-			    IDPF_DMA_TO_NONDMA);
-		dma_mem = msg.ctx.indirect.payload;
-	} else {
-		*pending = 0;
-	}
-
-	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
-
-exit:
-	return status;
-}
-
-/**
- *  idpf_deinit_hw - shutdown routine
- *  @hw: pointer to the hardware structure
- */
-void idpf_deinit_hw(struct idpf_hw *hw)
-{
-	hw->asq = NULL;
-	hw->arq = NULL;
-
-	idpf_ctlq_deinit(hw);
-}
-
-/**
- * idpf_reset
- * @hw: pointer to the hardware structure
- *
- * Send a RESET message to the CPF. Does not wait for response from CPF
- * as none will be forthcoming. Immediately after calling this function,
- * the control queue should be shut down and (optionally) reinitialized.
- */
-int idpf_reset(struct idpf_hw *hw)
-{
-	return idpf_send_msg_to_cp(hw, VIRTCHNL_OP_RESET_VF,
-				      0, NULL, 0);
-}
-
-/**
- * idpf_get_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- * @set: set true to set the table, false to get the table
- *
- * Internal function to get or set RSS look up table
- */
-STATIC int idpf_get_set_rss_lut(struct idpf_hw *hw, u16 vsi_id,
-				bool pf_lut, u8 *lut, u16 lut_size,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * get the RSS lookup table, PF or VSI type
- */
-int idpf_get_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, false);
-}
-
-/**
- * idpf_set_rss_lut
- * @hw: pointer to the hardware structure
- * @vsi_id: vsi fw index
- * @pf_lut: for PF table set true, for VSI table set false
- * @lut: pointer to the lut buffer provided by the caller
- * @lut_size: size of the lut buffer
- *
- * set the RSS lookup table, PF or VSI type
- */
-int idpf_set_rss_lut(struct idpf_hw *hw, u16 vsi_id, bool pf_lut,
-		     u8 *lut, u16 lut_size)
-{
-	return idpf_get_set_rss_lut(hw, vsi_id, pf_lut, lut, lut_size, true);
-}
-
-/**
- * idpf_get_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- * @set: set true to set the key, false to get the key
- *
- * get the RSS key per VSI
- */
-STATIC int idpf_get_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-				struct idpf_get_set_rss_key_data *key,
-				bool set)
-{
-	/* TODO fill out command */
-	return 0;
-}
-
-/**
- * idpf_get_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- */
-int idpf_get_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, false);
-}
-
-/**
- * idpf_set_rss_key
- * @hw: pointer to the hw struct
- * @vsi_id: vsi fw index
- * @key: pointer to key info struct
- *
- * set the RSS key per VSI
- */
-int idpf_set_rss_key(struct idpf_hw *hw, u16 vsi_id,
-		     struct idpf_get_set_rss_key_data *key)
-{
-	return idpf_get_set_rss_key(hw, vsi_id, key, true);
-}
-
-RTE_LOG_REGISTER_DEFAULT(idpf_common_logger, NOTICE);
diff --git a/drivers/common/idpf/base/meson.build b/drivers/common/idpf/base/meson.build
index 96d7642209..649c44d0ae 100644
--- a/drivers/common/idpf/base/meson.build
+++ b/drivers/common/idpf/base/meson.build
@@ -2,7 +2,6 @@
 # Copyright(c) 2023 Intel Corporation
 
 sources += files(
-        'idpf_common.c',
         'idpf_controlq.c',
         'idpf_controlq_setup.c',
 )
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 6/7] drivers: adding config queue types for virtchnl2 message
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
                                   ` (4 preceding siblings ...)
  2024-07-01  9:13                 ` [PATCH v6 5/7] common/idpf: remove idpf common file Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01  9:13                 ` [PATCH v6 7/7] doc: updated the documentation for cpfl PMD Soumyadeep Hore
  2024-07-01 11:23                 ` [PATCH v6 0/7] Update MEV TS Base Driver Bruce Richardson
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Add an argument named 'type' to idpf_vc_queue_switch() so callers
can specify the queue type explicitly. This fixes the use of an
improper queue type in the virtchnl2 message.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
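
To illustrate the problem being fixed, here is a minimal sketch (not driver code; the enum values and names are illustrative stand-ins for the VIRTCHNL2_QUEUE_TYPE_* definitions) of why deriving the queue type from the rx flag alone is insufficient once config queues exist:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the VIRTCHNL2_QUEUE_TYPE_* values. */
enum qtype {
	QTYPE_TX = 0,
	QTYPE_RX = 1,
	QTYPE_CONFIG_TX = 8,
	QTYPE_CONFIG_RX = 9,
};

/* Old behaviour: the type was derived from the rx flag alone, so a
 * config queue was always reported to the CP as a plain RX/TX queue. */
static uint32_t derive_type_from_rx_flag(bool rx)
{
	return rx ? QTYPE_RX : QTYPE_TX;
}
```

With the explicit type argument, callers such as cpfl_start_cfgqs() can pass the CONFIG_TX/CONFIG_RX types directly, which the two-valued rx flag alone could never express.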
 drivers/common/idpf/idpf_common_virtchnl.c |  8 ++------
 drivers/common/idpf/idpf_common_virtchnl.h |  2 +-
 drivers/net/cpfl/cpfl_ethdev.c             | 12 ++++++++----
 drivers/net/cpfl/cpfl_rxtx.c               | 12 ++++++++----
 drivers/net/idpf/idpf_rxtx.c               | 12 ++++++++----
 5 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index f00202f43c..de511da788 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -769,15 +769,11 @@ idpf_vc_ena_dis_one_queue(struct idpf_vport *vport, uint16_t qid,
 
 int
 idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-		     bool rx, bool on)
+		     bool rx, bool on, uint32_t type)
 {
-	uint32_t type;
 	int err, queue_id;
 
-	/* switch txq/rxq */
-	type = rx ? VIRTCHNL2_QUEUE_TYPE_RX : VIRTCHNL2_QUEUE_TYPE_TX;
-
-	if (type == VIRTCHNL2_QUEUE_TYPE_RX)
+	if (rx)
 		queue_id = vport->chunks_info.rx_start_qid + qid;
 	else
 		queue_id = vport->chunks_info.tx_start_qid + qid;
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 73446ded86..d6555978d5 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -31,7 +31,7 @@ int idpf_vc_cmd_execute(struct idpf_adapter *adapter,
 			struct idpf_cmd_info *args);
 __rte_internal
 int idpf_vc_queue_switch(struct idpf_vport *vport, uint16_t qid,
-			 bool rx, bool on);
+			 bool rx, bool on, uint32_t type);
 __rte_internal
 int idpf_vc_queues_ena_dis(struct idpf_vport *vport, bool enable);
 __rte_internal
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e707043bf7..9e2a74371e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -1907,7 +1907,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	int i, ret;
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Tx config queue.");
 			return ret;
@@ -1915,7 +1916,8 @@ cpfl_stop_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, false,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to disable Rx config queue.");
 			return ret;
@@ -1943,7 +1945,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_TX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, false, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_TX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Tx config queue.");
 			return ret;
@@ -1951,7 +1954,8 @@ cpfl_start_cfgqs(struct cpfl_adapter_ext *adapter)
 	}
 
 	for (i = 0; i < CPFL_RX_CFGQ_NUM; i++) {
-		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true);
+		ret = idpf_vc_queue_switch(&adapter->ctrl_vport.base, i, true, true,
+								VIRTCHNL2_QUEUE_TYPE_CONFIG_RX);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Fail to enable Rx config queue.");
 			return ret;
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab8bec4645..47351ca102 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -1200,7 +1200,8 @@ cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -1252,7 +1253,8 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -1283,7 +1285,8 @@ cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 						     rx_queue_id - cpfl_vport->nb_data_txq,
 						     true, false);
 	else
-		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+		err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+								VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -1331,7 +1334,8 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 						     tx_queue_id - cpfl_vport->nb_data_txq,
 						     false, false);
 	else
-		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+		err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+								VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 64f2235580..858bbefe3b 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -595,7 +595,8 @@ idpf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
 			    rx_queue_id);
@@ -646,7 +647,8 @@ idpf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	}
 
 	/* Ready to switch the queue on */
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
 			    tx_queue_id);
@@ -669,7 +671,8 @@ idpf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (rx_queue_id >= dev->data->nb_rx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false,
+							VIRTCHNL2_QUEUE_TYPE_RX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
 			    rx_queue_id);
@@ -701,7 +704,8 @@ idpf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (tx_queue_id >= dev->data->nb_tx_queues)
 		return -EINVAL;
 
-	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false,
+							VIRTCHNL2_QUEUE_TYPE_TX);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
 			    tx_queue_id);
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* [PATCH v6 7/7] doc: updated the documentation for cpfl PMD
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
                                   ` (5 preceding siblings ...)
  2024-07-01  9:13                 ` [PATCH v6 6/7] drivers: adding config queue types for virtchnl2 message Soumyadeep Hore
@ 2024-07-01  9:13                 ` Soumyadeep Hore
  2024-07-01 11:23                 ` [PATCH v6 0/7] Update MEV TS Base Driver Bruce Richardson
  7 siblings, 0 replies; 125+ messages in thread
From: Soumyadeep Hore @ 2024-07-01  9:13 UTC (permalink / raw)
  To: bruce.richardson, anatoly.burakov; +Cc: dev

Updated the cpfl PMD documentation to reflect the latest supported
MEV TS firmware version, which is 1.4.

Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
---
 doc/guides/nics/cpfl.rst | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 9b7a99c894..528c809819 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -35,6 +35,8 @@ Here is the suggested matching list which has been tested and verified.
    +------------+------------------+
    |    23.11   |       1.0        |
    +------------+------------------+
+   |    24.07   |       1.4        |
+   +------------+------------------+
 
 
 Configuration
-- 
2.43.0


^ permalink raw reply	[flat|nested] 125+ messages in thread

* RE: [PATCH v5 15/21] common/idpf: add wmb before tail
  2024-06-28 14:45               ` Bruce Richardson
@ 2024-07-01 10:05                 ` Hore, Soumyadeep
  0 siblings, 0 replies; 125+ messages in thread
From: Hore, Soumyadeep @ 2024-07-01 10:05 UTC (permalink / raw)
  To: Richardson, Bruce, dev; +Cc: Burakov, Anatoly

> Introduced in response to customer feedback while addressing some
> bugs, this patch adds a memory barrier before posting the ctlq tail.
> This ensures that memory writes have completed before the HW starts
> processing the descriptors.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---

From the description, it seems that this may be a bugfix patch. Can you confirm this, and whether it should be backported or not? Also, please provide a Fixes tag for this.

Thanks,
/Bruce

This is a new feature supported in MEV TS 1.4, which is only compatible with DPDK 24.07.
Previous DPDK versions are not affected, hence backporting is not required.

Regards,
Soumya

>  drivers/common/idpf/base/idpf_controlq.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/common/idpf/base/idpf_controlq.c 
> b/drivers/common/idpf/base/idpf_controlq.c
> index 65e5599614..4f47759a4f 100644
> --- a/drivers/common/idpf/base/idpf_controlq.c
> +++ b/drivers/common/idpf/base/idpf_controlq.c
> @@ -604,6 +604,8 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
>  			/* Wrap to end of end ring since current ntp is 0 */
>  			cq->next_to_post = cq->ring_size - 1;
>  
> +		idpf_wmb();
> +
>  		wr32(hw, cq->reg.tail, cq->next_to_post);
>  	}
>  
> --
> 2.43.0
> 
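
The pattern this patch adds can be sketched as follows (a simplified model, not the driver code; idpf_wmb() is approximated with a release fence and the tail register with a plain volatile store):

```c
#include <assert.h>
#include <stdint.h>

/* Approximation of idpf_wmb(): make all prior stores visible
 * before any later store (here, the tail doorbell write). */
#define wmb()	__atomic_thread_fence(__ATOMIC_RELEASE)

struct rx_desc {
	uint64_t addr;
	uint16_t len;
};

/* Publish 'count' descriptors starting at next-to-post, then ring
 * the tail doorbell. Without the barrier, HW could observe the new
 * tail before the descriptor contents have landed in memory. */
static void post_rx_bufs(struct rx_desc *ring, volatile uint32_t *tail,
			 uint32_t ntp, uint32_t count)
{
	for (uint32_t i = 0; i < count; i++) {
		ring[ntp + i].addr = 0x1000u + i;	/* buffer DMA address */
		ring[ntp + i].len = 2048;		/* buffer length */
	}
	wmb();			/* descriptor writes ordered before tail */
	*tail = ntp + count;	/* doorbell: HW may now fetch descriptors */
}
```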

^ permalink raw reply	[flat|nested] 125+ messages in thread

* RE: [PATCH v5 16/21] drivers: add flex array support and fix issues
  2024-06-28 14:50               ` Bruce Richardson
@ 2024-07-01 10:09                 ` Hore, Soumyadeep
  0 siblings, 0 replies; 125+ messages in thread
From: Hore, Soumyadeep @ 2024-07-01 10:09 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: Burakov, Anatoly, dev

> Based on internal Linux upstream feedback received on the IDPF
> driver, as well as some references available online, the use of
> 1-sized array fields in structures is discouraged, especially in
> new Linux drivers that are going to be upstreamed. Instead, it is
> recommended to use flexible array fields for dynamically sized
> structures.
> 
> Some fixes are introduced alongside this code change to keep DPDK compiling.
> 
> Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
> ---
>  drivers/common/idpf/base/virtchnl2.h       | 466 ++++-----------------
>  drivers/common/idpf/idpf_common_virtchnl.c |   2 +-
>  drivers/net/cpfl/cpfl_ethdev.c             |  28 +-
>  3 files changed, 86 insertions(+), 410 deletions(-)
> 
> diff --git a/drivers/common/idpf/base/virtchnl2.h 
> b/drivers/common/idpf/base/virtchnl2.h
> index 9dd5191c0e..317bd06c0f 100644
> --- a/drivers/common/idpf/base/virtchnl2.h
> +++ b/drivers/common/idpf/base/virtchnl2.h
> @@ -63,6 +63,10 @@ enum virtchnl2_status {
>  #define VIRTCHNL2_CHECK_STRUCT_LEN(n, X)	\
>  	static_assert((n) == sizeof(struct X),	\
>  		      "Structure length does not match with the expected value")
> +#define VIRTCHNL2_CHECK_STRUCT_VAR_LEN(n, X, T)		\
> +	VIRTCHNL2_CHECK_STRUCT_LEN(n, X)
> +
> +#define STRUCT_VAR_LEN		1
>  
>  /**
>   * New major set of opcodes introduced and so leaving room for @@ 
> -696,10 +700,9 @@ VIRTCHNL2_CHECK_STRUCT_LEN(32, 
> virtchnl2_queue_reg_chunk);  struct virtchnl2_queue_reg_chunks {
>  	__le16 num_chunks;
>  	u8 pad[6];
> -	struct virtchnl2_queue_reg_chunk chunks[1];
> +	struct virtchnl2_queue_reg_chunk chunks[STRUCT_VAR_LEN];
>  };

This patch doesn't actually seem to be using flexible array members.
Instead I see a macro with value "1" being used in place of a hard-coded "1". Can you please check that commit message matches what's actually happening, and that changes in the patch are correct.

Thanks,
/Bruce

Addressed in v6 by adding the corresponding commit message.

Regards,
Soumya
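
For background on the distinction raised here, a minimal sketch (illustrative types, not the actual virtchnl2 structures): a 1-sized trailing array counts one element toward sizeof(), whereas a C99 flexible array member contributes nothing, so allocation sizes are computed from the header alone:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Old style: 1-sized trailing array, as in 'chunks[1]'. */
struct chunks_old {
	uint16_t num_chunks;
	uint8_t pad[6];
	uint32_t chunks[1];	/* sizeof() includes this one element */
};

/* Flexible array member: what the upstream feedback recommends. */
struct chunks_fam {
	uint16_t num_chunks;
	uint8_t pad[6];
	uint32_t chunks[];	/* contributes nothing to sizeof() */
};

/* Allocation size for n trailing chunks with a flexible array member. */
static size_t chunks_fam_alloc_size(size_t n)
{
	return sizeof(struct chunks_fam) + n * sizeof(uint32_t);
}
```

Replacing the literal 1 with a STRUCT_VAR_LEN macro equal to 1, as the v5 patch does, keeps the old sizeof() semantics rather than switching to flexible array members — hence the request above to align the commit message with what the code actually does.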

^ permalink raw reply	[flat|nested] 125+ messages in thread

* Re: [PATCH v6 0/7] Update MEV TS Base Driver
  2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
                                   ` (6 preceding siblings ...)
  2024-07-01  9:13                 ` [PATCH v6 7/7] doc: updated the documentation for cpfl PMD Soumyadeep Hore
@ 2024-07-01 11:23                 ` Bruce Richardson
  7 siblings, 0 replies; 125+ messages in thread
From: Bruce Richardson @ 2024-07-01 11:23 UTC (permalink / raw)
  To: Soumyadeep Hore; +Cc: anatoly.burakov, dev

On Mon, Jul 01, 2024 at 09:13:44AM +0000, Soumyadeep Hore wrote:
> These patches integrate the latest changes in MEV TS IDPF Base driver.
> 
> ---
> v6:
> - Addressed comments
> ---
> v5:
> - Removed warning from patch 6
> ---
> v4:
> - Removed 1st patch as we are not using NVME_CPF flag
> - Addressed comments
> ---
> v3:
> - Removed additional whitespace changes
> - Fixed warnings of CI
> - Updated documentation relating to MEV TS FW release
> ---
> v2:
> - Changed implementation based on review comments
> - Fixed compilation errors for Windows, Alpine and FreeBSD
> ---
> 
> Soumyadeep Hore (7):
>   common/idpf: add wmb before tail
>   drivers: adding macros for dynamic data structures
>   common/idpf: enable flow steer capability for vports
>   common/idpf: add a new Tx context descriptor structure
>   common/idpf: remove idpf common file
>   drivers: adding config queue types for virtchnl2 message
>   doc: updated the documentation for cpfl PMD
> 
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

Series applied to dpdk-next-net-intel.

Thanks,
/Bruce

^ permalink raw reply	[flat|nested] 125+ messages in thread

end of thread, other threads:[~2024-07-01 11:23 UTC | newest]

Thread overview: 125+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-28  7:28 [PATCH 00/25] Update IDPF Base Driver Soumyadeep Hore
2024-05-28  7:28 ` [PATCH 01/25] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
2024-05-29 12:32   ` Bruce Richardson
2024-05-28  7:28 ` [PATCH 02/25] common/idpf: updated IDPF VF device ID Soumyadeep Hore
2024-05-28  7:28 ` [PATCH 03/25] common/idpf: update ADD QUEUE GROUPS offset Soumyadeep Hore
2024-05-29 12:38   ` Bruce Richardson
2024-05-28  7:28 ` [PATCH 04/25] common/idpf: update in PTP message validation Soumyadeep Hore
2024-05-29 13:03   ` Bruce Richardson
2024-05-28  7:28 ` [PATCH 05/25] common/idpf: added FLOW STEER capability and a vport flag Soumyadeep Hore
2024-05-28  7:28 ` [PATCH 06/25] common/idpf: moved the IDPF HW into API header file Soumyadeep Hore
2024-05-28  7:28 ` [PATCH 07/25] common/idpf: avoid defensive programming Soumyadeep Hore
2024-05-28  7:28 ` [PATCH 08/25] common/idpf: move related defines into enums Soumyadeep Hore
2024-05-28  7:28 ` [PATCH 09/25] common/idpf: add flex array support to virtchnl2 structures Soumyadeep Hore
2024-06-04  8:05 ` [PATCH v2 00/21] Update MEV TS Base Driver Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 01/21] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 02/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 03/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 04/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 05/21] common/idpf: avoid defensive programming Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 06/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 07/21] common/idpf: convert data type to 'le' Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 08/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
2024-06-04  8:05   ` [PATCH v2 09/21] common/idpf: refactor size check macro Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 10/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 11/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 12/21] common/idpf: move related defines into enums Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 13/21] common/idpf: avoid variable 0-init Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 14/21] common/idpf: update in PTP message validation Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 15/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 16/21] common/idpf: add wmb before tail Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 17/21] drivers: add flex array support and fix issues Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 18/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 19/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 20/21] common/idpf: remove idpf common file Soumyadeep Hore
2024-06-04  8:06   ` [PATCH v2 21/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
2024-06-12  3:52   ` [PATCH v3 00/22] Update MEV TS Base Driver Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 01/22] common/idpf: added NVME CPF specific code with defines Soumyadeep Hore
2024-06-14 10:33       ` Burakov, Anatoly
2024-06-12  3:52     ` [PATCH v3 02/22] common/idpf: updated IDPF VF device ID Soumyadeep Hore
2024-06-14 10:36       ` Burakov, Anatoly
2024-06-12  3:52     ` [PATCH v3 03/22] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 04/22] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 05/22] common/idpf: avoid defensive programming Soumyadeep Hore
2024-06-14 12:16       ` Burakov, Anatoly
2024-06-12  3:52     ` [PATCH v3 06/22] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 07/22] common/idpf: convert data type to 'le' Soumyadeep Hore
2024-06-14 12:19       ` Burakov, Anatoly
2024-06-12  3:52     ` [PATCH v3 08/22] common/idpf: compress RXDID mask definitions Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 09/22] common/idpf: refactor size check macro Soumyadeep Hore
2024-06-14 12:21       ` Burakov, Anatoly
2024-06-12  3:52     ` [PATCH v3 10/22] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
2024-06-18 10:57       ` [PATCH v4 00/21] Update MEV TS Base Driver Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 02/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 03/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 04/21] common/idpf: avoid defensive programming Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 05/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 06/21] common/idpf: convert data type to 'le' Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 07/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 08/21] common/idpf: refactor size check macro Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 11/21] common/idpf: move related defines into enums Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 12/21] common/idpf: avoid variable 0-init Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 15/21] common/idpf: add wmb before tail Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 17/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 18/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 19/21] common/idpf: remove idpf common file Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
2024-06-18 10:57         ` [PATCH v4 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
2024-06-24  9:16           ` [PATCH v5 00/21] Update MEV TS Base Driver Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 01/21] common/idpf: updated IDPF VF device ID Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 02/21] common/idpf: added new virtchnl2 capability and vport flag Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 03/21] common/idpf: moved the idpf HW into API header file Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 04/21] common/idpf: avoid defensive programming Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 05/21] common/idpf: use BIT ULL for large bitmaps Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 06/21] common/idpf: convert data type to 'le' Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 07/21] common/idpf: compress RXDID mask definitions Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 08/21] common/idpf: refactor size check macro Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 09/21] common/idpf: update mask of Rx FLEX DESC ADV FF1 M Soumyadeep Hore
2024-06-28 14:16               ` Bruce Richardson
2024-06-24  9:16             ` [PATCH v5 10/21] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 11/21] common/idpf: move related defines into enums Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 12/21] common/idpf: avoid variable 0-init Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 13/21] common/idpf: update in PTP message validation Soumyadeep Hore
2024-06-28 14:33               ` Bruce Richardson
2024-06-24  9:16             ` [PATCH v5 14/21] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 15/21] common/idpf: add wmb before tail Soumyadeep Hore
2024-06-28 14:45               ` Bruce Richardson
2024-07-01 10:05                 ` Hore, Soumyadeep
2024-07-01  9:13               ` [PATCH v6 0/7] Update MEV TS Base Driver Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 1/7] common/idpf: add wmb before tail Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 2/7] drivers: adding macros for dynamic data structures Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 3/7] common/idpf: enable flow steer capability for vports Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 4/7] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 5/7] common/idpf: remove idpf common file Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 6/7] drivers: adding config queue types for virtchnl2 message Soumyadeep Hore
2024-07-01  9:13                 ` [PATCH v6 7/7] doc: updated the documentation for cpfl PMD Soumyadeep Hore
2024-07-01 11:23                 ` [PATCH v6 0/7] Update MEV TS Base Driver Bruce Richardson
2024-06-24  9:16             ` [PATCH v5 16/21] drivers: add flex array support and fix issues Soumyadeep Hore
2024-06-28 14:50               ` Bruce Richardson
2024-07-01 10:09                 ` Hore, Soumyadeep
2024-06-24  9:16             ` [PATCH v5 17/21] common/idpf: enable flow steer capability for vports Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 18/21] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 19/21] common/idpf: remove idpf common file Soumyadeep Hore
2024-06-24  9:16             ` [PATCH v5 20/21] drivers: adding type to idpf vc queue switch Soumyadeep Hore
2024-06-28 14:53               ` Bruce Richardson
2024-06-24  9:16             ` [PATCH v5 21/21] doc: updated the documentation for cpfl PMD Soumyadeep Hore
2024-06-28 14:58             ` [PATCH v5 00/21] Update MEV TS Base Driver Bruce Richardson
2024-06-12  3:52     ` [PATCH v3 11/22] common/idpf: use 'pad' and 'reserved' fields appropriately Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 12/22] common/idpf: move related defines into enums Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 13/22] common/idpf: avoid variable 0-init Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 14/22] common/idpf: update in PTP message validation Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 15/22] common/idpf: rename INLINE FLOW STEER to FLOW STEER Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 16/22] common/idpf: add wmb before tail Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 17/22] drivers: add flex array support and fix issues Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 18/22] common/idpf: enable flow steer capability for vports Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 19/22] common/idpf: add a new Tx context descriptor structure Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 20/22] common/idpf: remove idpf common file Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 21/22] drivers: adding type to idpf vc queue switch Soumyadeep Hore
2024-06-12  3:52     ` [PATCH v3 22/22] doc: updated the documentation for cpfl PMD Soumyadeep Hore
2024-06-14 12:48     ` [PATCH v3 00/22] Update MEV TS Base Driver Burakov, Anatoly